
Feat: Real-time Blockchain-Database Synchronization System#114

Merged
respp merged 14 commits into Stellar-Rent:main from MPSxDev:feat-sync-service
Aug 19, 2025
Conversation

@MPSxDev
Contributor

@MPSxDev MPSxDev commented Aug 4, 2025


Pull Request | StellarRent

📝 Summary

This PR implements a comprehensive real-time blockchain synchronization system between Stellar blockchain and Supabase database, ensuring data consistency across the StellarRent platform.

🔗 Related Issues

Closes #102.

🔄 Changes Made

Implemented a complete blockchain synchronization architecture including:

Core Services:

  • SyncService: Main orchestration service for blockchain event polling and processing
  • BlockchainEventListener: Real-time event monitoring for Stellar contract events
  • SyncController: Admin API endpoints for sync management and monitoring
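The per-event-type callback pattern the BlockchainEventListener description implies could be sketched as follows. This is an illustrative minimal registry, not the actual implementation; the `SyncEvent` shape and all names here are assumptions.

```typescript
// Illustrative sketch of a per-event-type async callback registry,
// in the spirit of the BlockchainEventListener described above.
// All names and shapes are hypothetical.
type SyncEvent = { type: string; data: Record<string, unknown> };
type EventCallback = (event: SyncEvent) => Promise<void>;

class CallbackRegistry {
  private callbacks = new Map<string, EventCallback[]>();

  // Register an async handler for one event type.
  on(eventType: string, cb: EventCallback): void {
    const list = this.callbacks.get(eventType) ?? [];
    list.push(cb);
    this.callbacks.set(eventType, list);
  }

  // Dispatch an event to every handler registered for its type.
  async dispatch(event: SyncEvent): Promise<void> {
    for (const cb of this.callbacks.get(event.type) ?? []) {
      await cb(event);
    }
  }
}
```

Awaiting each handler in sequence keeps event processing ordered; a real listener would also need error isolation so one failing callback does not block the rest.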

Database Schema:

  • sync_state: Tracks synchronization progress and state
  • sync_events: Stores all blockchain events for tracking and debugging
  • sync_logs: Detailed logging for all sync operations

Event Processing:

  • Support for booking creation, updates, cancellations, and payment confirmations
  • Automatic retry mechanisms and error recovery
  • Type-safe event processing with comprehensive error handling
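The "automatic retry mechanisms" bullet could be realized by a small generic helper like the following. This is a hedged sketch: the function name, defaults, and fixed-delay strategy are illustrative, not the PR's actual code.

```typescript
// Hypothetical retry helper illustrating the "automatic retry
// mechanisms" bullet above; names and defaults are assumptions.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries = 3,
  retryDelayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxRetries) {
        // Simple fixed delay; exponential backoff is a common refinement.
        await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
      }
    }
  }
  throw lastError;
}
```

A caller would wrap any flaky blockchain or database call, e.g. `await withRetries(() => fetchLatestLedger())`.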

Admin Tools:

  • REST API endpoints for sync management
  • Real-time status monitoring and dashboard
  • Manual sync triggers and failed event retry functionality

Testing & Type Safety:

  • Comprehensive integration tests with proper mocking
  • Complete type safety with no `any` types
  • Mock types and utilities for testing

🖥️ Current Output

No visual changes - backend service implementation with comprehensive API endpoints for sync management.

🧪 Testing

Comprehensive testing implemented including:

  • Unit tests for all sync service methods
  • Integration tests for event processing flows
  • Type safety validation across all components
  • Mock blockchain interactions and database operations
  • Error handling and recovery scenarios

✅ Testing Checklist

  • Unit tests added/modified
  • Integration tests performed
  • Manual tests executed
  • All tests pass in CI/CD

⚠️ Potential Risks

  • Network interruptions may temporarily affect sync operations
  • High-volume event processing may require performance tuning
  • Database connectivity issues could impact sync state persistence

🚀 Next Steps & Improvements

This change lays a solid foundation for further optimizations. Some areas that could benefit from future improvements include:

  • 🔹 Performance optimization - Event batching and parallel processing
  • 🔹 Webhook support - Real-time event notifications
  • 🔹 Multi-contract support - Monitoring multiple smart contracts
  • 🔹 Enhanced monitoring - Real-time dashboards and alerting
  • 🔹 Advanced filtering - Event type and property-specific monitoring
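The event-batching item above could start from a simple chunking helper; a minimal, hypothetical sketch:

```typescript
// Illustrative chunking helper for the proposed event-batching
// optimization; not part of the PR itself.
function batch<T>(items: T[], size: number): T[][] {
  if (size <= 0) throw new Error('batch size must be positive');
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}
```

Batches could then be processed with bounded concurrency (e.g. `Promise.all` per batch) to trade throughput against load on Supabase.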

💬 Comments

This implementation provides a robust, type-safe foundation for blockchain synchronization that can scale with the platform's growth. The comprehensive testing ensures reliability, while the admin tools provide full visibility and control over the sync process.

Summary by CodeRabbit

  • New Features

    • Blockchain sync service with start/stop, status, manual sync, retry and clear-old-data actions; service now auto-starts on server boot.
  • API

    • Admin-only sync endpoints with pagination, dashboard, events and logs retrieval, plus retry/clear controls.
  • Database

    • New sync tables, indexes, dashboard view, automated timestamps, and admin-only access policies.
  • Configuration

    • New backend environment example covering blockchain, sync, server and rate-limit settings.
  • Logging

    • Asynchronous persistence of sync logs to the database with structured sanitization.
  • Tests

    • Unit tests covering lifecycle, event processing, status mapping and error handling.

@coderabbitai
Contributor

coderabbitai bot commented Aug 4, 2025

Walkthrough

Adds a blockchain synchronization subsystem: DB migration (sync tables, view, RLS), an event listener and SyncService for Soroban, persisted async logging to sync_logs, admin controller/routes and env template, service startup integration, and unit tests for sync behavior and internals.

Changes

Cohort / File(s) Change Summary
Database Migration & Schema
apps/backend/database/migrations/001_create_sync_tables.sql
Creates sync_state, sync_events, sync_logs tables, indexes, triggers, RLS admin-only policies, sync_dashboard view, grants, and inserts initial sync_state.
Environment Configuration
apps/backend/env.example
Adds backend environment variables for Supabase, JWT, Soroban RPC/contract, Stellar network settings, sync parameters (polling, retries, delays), server settings, testing flags, and rate-limiting.
Blockchain Event Listener
apps/backend/src/blockchain/eventListener.ts
New listener class to poll Soroban RPC, parse contract events, register async callbacks per event type, persist events to sync_events, handle retries/errors, and exported factory.
Sync Service
apps/backend/src/services/sync.service.ts
New SyncService singleton: polling loop, state persistence in sync_state, event processing/storage/upserts, handlers for booking/payment events, manual trigger, status APIs, and logging integration (event fetch currently stubbed).
Logging Integration
apps/backend/src/services/logging.service.ts
Rewrites LoggingService to async DB persistence into sync_logs, adds error serialization/sanitization helpers, changes public signatures to async, and exports singleton loggingService.
Sync Controller
apps/backend/src/controllers/sync.controller.ts
Adds SyncController exposing start/stop/status/manual trigger, paginated events/logs, dashboard, retry failed events, clear old data; uses Supabase and loggingService; exports singleton instance.
Sync Routes
apps/backend/src/routes/sync.routes.ts
New Express router with admin authentication middleware (Supabase token + env admin lists), pagination/security validation, and route bindings for sync controller endpoints.
Server Initialization
apps/backend/src/index.ts
Renames initializer to initializeServices, wires /api/sync routes, starts sync service during startup with logging and error handling.
Testing Types
apps/backend/src/tests/mockTypes.ts
Adds mock interfaces for Supabase query builder and SyncServiceTestAccess to expose private methods for testing.
Sync Service Tests
apps/backend/src/tests/sync.service.test.ts
Adds Jest tests covering SyncService init, lifecycle (start/stop), event processing, manual sync, status mapping, error resilience, and DB interactions.
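The logging rewrite summarized above mentions "error serialization/sanitization helpers". A minimal sketch of what such a helper might look like (field names are assumptions, not the actual LoggingService API):

```typescript
// Illustrative error-serialization helper in the spirit of the
// logging.service.ts rewrite described above; the returned field
// names are hypothetical.
function serializeError(err: unknown): Record<string, unknown> {
  if (err instanceof Error) {
    return { name: err.name, message: err.message, stack: err.stack };
  }
  // Non-Error throwables are stringified so they survive JSON persistence.
  return { message: String(err) };
}
```

Serializing to a plain object matters here because raw `Error` instances lose their `message` and `stack` when passed through `JSON.stringify` on their way into a `sync_logs` JSONB column.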

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Controller
    participant SyncService
    participant EventListener
    participant SorobanRPC
    participant SupabaseDB
    participant LoggingService

    Client->>Controller: POST /api/sync/start|/stop|/trigger|/status
    Controller->>SyncService: start()/stop()/triggerManualSync()/getStatus()
    SyncService->>EventListener: startPolling()
    loop every poll interval
        EventListener->>SorobanRPC: fetch latest ledger / events
        SorobanRPC-->>EventListener: events[]
        EventListener->>SyncService: parsed events
        SyncService->>SupabaseDB: insert `sync_events`, upsert `sync_state`, update domain tables
        SyncService->>LoggingService: log operation/result/error
        LoggingService->>SupabaseDB: insert `sync_logs`
    end
    Controller->>SupabaseDB: query `sync_events`/`sync_logs`/`sync_dashboard`
    SupabaseDB-->>Controller: results
    Controller-->>Client: HTTP response

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


Suggested reviewers

  • respp

Poem

Hopping through ledgers with twitchy nose and ear,
I capture each event and tuck it safe and clear.
Tables, views, and logs in tidy little stacks,
Controllers and routes keep the sync on track.
A rabbit cheers the code — hop on, sync, and steer! 🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 8bec58e and 5acc8d0.

📒 Files selected for processing (2)
  • apps/backend/src/controllers/sync.controller.ts (1 hunks)
  • apps/backend/src/tests/sync.service.test.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • apps/backend/src/controllers/sync.controller.ts
  • apps/backend/src/tests/sync.service.test.ts


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 25

🧹 Nitpick comments (10)
apps/backend/env.example (2)

2-4: Add security warning for sensitive data.

Consider adding comments to warn developers about securing these sensitive values in production environments.

 # Database Configuration
+# WARNING: Replace with actual values and keep secure in production
 SUPABASE_URL=your_supabase_url
 SUPABASE_ANON_KEY=your_supabase_anon_key
 SUPABASE_SERVICE_ROLE_KEY=your_supabase_service_role_key

22-24: Consider more granular sync configuration.

The sync service configuration might benefit from additional tuning parameters for production environments.

 # Sync Service Configuration
 SYNC_POLL_INTERVAL=5000
 SYNC_MAX_RETRIES=3
 SYNC_RETRY_DELAY=1000
+# Maximum events to process per batch
+SYNC_BATCH_SIZE=100
+# Timeout for blockchain requests (ms)
+SYNC_REQUEST_TIMEOUT=30000
apps/backend/database/migrations/001_create_sync_tables.sql (1)

6-6: Consider removing hardcoded primary key default.

Using a hardcoded default value for the primary key could cause issues if multiple sync states need to be tracked in the future.

 CREATE TABLE IF NOT EXISTS public.sync_state (
-    id INTEGER PRIMARY KEY DEFAULT 1,
+    id SERIAL PRIMARY KEY,
     last_processed_block BIGINT NOT NULL DEFAULT 0,
apps/backend/src/routes/sync.routes.ts (1)

15-18: Consider adding input validation middleware.

Sync operations could benefit from request validation to prevent malformed requests from causing issues.

Add validation middleware for sensitive operations:

+import { body, query } from 'express-validator';
+import { validationMiddleware } from '../middleware/validation.middleware';

+// Validation for manual sync trigger
+const validateSyncTrigger = [
+  body('fromBlock').optional().isNumeric(),
+  body('toBlock').optional().isNumeric(),
+  validationMiddleware
+];

 // Sync service management
 router.post('/start', requireAdmin, syncController.startSync.bind(syncController));
 router.post('/stop', requireAdmin, syncController.stopSync.bind(syncController));
 router.get('/status', requireAdmin, syncController.getStatus.bind(syncController));
-router.post('/trigger', requireAdmin, syncController.triggerManualSync.bind(syncController));
+router.post('/trigger', requireAdmin, validateSyncTrigger, syncController.triggerManualSync.bind(syncController));
apps/backend/src/tests/mockTypes.ts (2)

2-14: Consider adding return type information to mock methods.

The current mock interface doesn't specify return types, which could lead to less precise type checking in tests.

 export interface MockSupabaseQueryBuilder {
-  select: jest.Mock;
-  insert: jest.Mock;
-  update: jest.Mock;
-  delete: jest.Mock;
-  upsert: jest.Mock;
-  eq: jest.Mock;
-  not: jest.Mock;
-  order: jest.Mock;
-  limit: jest.Mock;
-  range: jest.Mock;
-  single: jest.Mock;
+  select: jest.Mock<MockSupabaseQueryBuilder, [string?]>;
+  insert: jest.Mock<MockSupabaseQueryBuilder, [Record<string, unknown> | Record<string, unknown>[]]>;
+  update: jest.Mock<MockSupabaseQueryBuilder, [Record<string, unknown>]>;
+  delete: jest.Mock<MockSupabaseQueryBuilder, []>;
+  upsert: jest.Mock<MockSupabaseQueryBuilder, [Record<string, unknown> | Record<string, unknown>[]]>;
+  eq: jest.Mock<MockSupabaseQueryBuilder, [string, unknown]>;
+  not: jest.Mock<MockSupabaseQueryBuilder, [string, unknown]>;
+  order: jest.Mock<MockSupabaseQueryBuilder, [string, { ascending?: boolean }?]>;
+  limit: jest.Mock<MockSupabaseQueryBuilder, [number]>;
+  range: jest.Mock<MockSupabaseQueryBuilder, [number, number]>;
+  single: jest.Mock<Promise<{ data: unknown; error: unknown }>, []>;
 }

16-18: Add method chaining support to mock client.

Consider whether the mock client should support method chaining patterns commonly used with Supabase.

 export interface MockSupabaseClient {
   from: jest.Mock<MockSupabaseQueryBuilder>;
+  // Add other client methods if needed for comprehensive mocking
+  // auth?: jest.Mock;
+  // storage?: jest.Mock;
 }
apps/backend/src/tests/sync.service.test.ts (1)

66-111: Extract repeated mock setup to reduce duplication

The Supabase mock setup is repeated across multiple tests. Consider extracting it to a helper function.

Add a helper function at the top of the describe block:

describe('Service Lifecycle', () => {
  const setupSupabaseMock = () => {
    const mockSupabase = supabase as jest.Mocked<typeof supabase>;
    mockSupabase.from.mockReturnValue({
      select: jest.fn(() => ({
        single: jest.fn(() => Promise.resolve({ data: null, error: null })),
      })),
    } as unknown as ReturnType<typeof supabase.from>);
    return mockSupabase;
  };

  it('should start and stop service correctly', async () => {
    setupSupabaseMock();
    // ... rest of the test
  });
  
  // Apply to other tests as well
});
apps/backend/src/services/sync.service.ts (1)

223-242: Missing critical implementation for event fetching

The getContractEvents method is not implemented and always returns an empty array, making the sync service non-functional.

This is a critical missing piece. Would you like me to help implement the actual event querying from the Stellar network or create an issue to track this implementation?

apps/backend/src/blockchain/eventListener.ts (2)

1-1: Remove unused import

The xdr import from Stellar SDK is not used in the code.

Apply this diff:

-import { Contract, Networks, xdr } from '@stellar/stellar-sdk';
+import { Contract, Networks } from '@stellar/stellar-sdk';

154-165: Consider implementing exponential backoff for retries

The current retry mechanism uses a fixed delay. For better resilience, consider implementing exponential backoff.

Apply this diff to implement exponential backoff:

       } else {
         // Wait before retrying
-        await new Promise((resolve) => setTimeout(resolve, this.config.retryDelay));
+        const backoffDelay = this.config.retryDelay * Math.pow(2, this.retryCount - 1);
+        await new Promise((resolve) => setTimeout(resolve, Math.min(backoffDelay, 60000))); // Cap at 60 seconds
       }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c3fe02b and 8ad499b.

📒 Files selected for processing (10)
  • apps/backend/database/migrations/001_create_sync_tables.sql (1 hunks)
  • apps/backend/env.example (1 hunks)
  • apps/backend/src/blockchain/eventListener.ts (1 hunks)
  • apps/backend/src/controllers/sync.controller.ts (1 hunks)
  • apps/backend/src/index.ts (3 hunks)
  • apps/backend/src/routes/sync.routes.ts (1 hunks)
  • apps/backend/src/services/logging.service.ts (1 hunks)
  • apps/backend/src/services/sync.service.ts (1 hunks)
  • apps/backend/src/tests/mockTypes.ts (1 hunks)
  • apps/backend/src/tests/sync.service.test.ts (1 hunks)
🔇 Additional comments (5)
apps/backend/env.example (1)

34-35: Verify rate limiting defaults for production readiness.

The current rate limiting configuration allows 100 requests per 15 minutes, which might be restrictive for a production blockchain synchronization service.

Please verify these rate limiting values are appropriate for the expected API usage patterns, particularly for admin sync operations that might require higher throughput.

apps/backend/database/migrations/001_create_sync_tables.sql (1)

81-88: Ensure auth.role() is available

The RLS policies in your migrations reference auth.role(), but we don’t define that function anywhere in our SQL. Supabase provides auth.role() out of the box, so please confirm it’s present in your target database. If you run these scripts locally or outside of Supabase, you may need to create a stub or include the auth schema and role() function yourself.

Affected files:

  • apps/backend/database/setup.sql
  • apps/backend/database/migrations/00002_storage_and_rls.sql
  • apps/backend/database/migrations/00004_create_wallet_auth_tables.sql
  • apps/backend/database/migrations/00005_create_profile_table.sql
  • apps/backend/database/migrations/001_create_sync_tables.sql
apps/backend/src/index.ts (1)

13-13: LGTM: Clean integration of sync functionality.

The integration of sync routes and service initialization follows established patterns and maintains consistency with the existing codebase structure.

Also applies to: 15-15, 63-63

apps/backend/src/tests/mockTypes.ts (1)

21-25: LGTM: Clean test access interface design.

The SyncServiceTestAccess interface provides a clean way to test private methods while maintaining type safety. This is a good pattern for comprehensive unit testing.

apps/backend/src/controllers/sync.controller.ts (1)

129-131: Fix incorrect undefined check for processed parameter

The condition processed !== undefined will always be true because req.query.processed returns a string or undefined. The check should be for the string value.

Apply this diff:

-      if (processed !== undefined) {
+      if (processed) {
         query = query.eq('processed', processed === 'true');
       }

Likely an incorrect or invalid review comment.
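Whichever check is used, parsing boolean-ish query strings explicitly removes the ambiguity the comment is arguing about. A small helper (hypothetical, not from the PR) makes the intent clear:

```typescript
// Hypothetical helper: Express query params arrive as strings (or
// undefined), so convert explicitly rather than relying on truthiness.
function parseBooleanParam(value: unknown): boolean | undefined {
  if (value === 'true') return true;
  if (value === 'false') return false;
  return undefined; // absent or unrecognized: apply no filter
}
```

The controller could then write `const processed = parseBooleanParam(req.query.processed)` and only call `.eq('processed', processed)` when the result is not `undefined`.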

CREATE TABLE IF NOT EXISTS public.sync_events (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
event_id TEXT NOT NULL UNIQUE,
event_type TEXT NOT NULL CHECK (event_type IN ('booking_created', 'booking_updated', 'booking_cancelled', 'payment_confirmed')),
Contributor


⚠️ Potential issue

Remove duplicate constraint definition.

The event_type constraint is defined twice - once inline and once as a separate constraint.

     event_type TEXT NOT NULL CHECK (event_type IN ('booking_created', 'booking_updated', 'booking_cancelled', 'payment_confirmed')),
     booking_id TEXT,
     property_id TEXT,
     user_id TEXT,
     event_data JSONB NOT NULL,
     processed BOOLEAN NOT NULL DEFAULT FALSE,
     error TEXT,
     created_at TIMESTAMPTZ NOT NULL DEFAULT timezone('utc'::text, now()),
     processed_at TIMESTAMPTZ,
     
-    CONSTRAINT valid_event_type CHECK (event_type IN ('booking_created', 'booking_updated', 'booking_cancelled', 'payment_confirmed'))

Also applies to: 29-29

🤖 Prompt for AI Agents
In apps/backend/database/migrations/001_create_sync_tables.sql at lines 19 and
29, the CHECK constraint for the event_type column is defined twice, once inline
and once as a separate constraint. Remove one of these duplicate constraint
definitions to avoid redundancy and potential conflicts, keeping only a single
CHECK constraint for event_type.

COUNT(CASE WHEN se.processed = false AND se.error IS NOT NULL THEN 1 END) as failed_events_count,
MAX(se.created_at) as last_event_time
FROM public.sync_state s
LEFT JOIN public.sync_events se ON true
Contributor


🛠️ Refactor suggestion

Optimize dashboard view join condition.

The LEFT JOIN with ON true creates a Cartesian product, which could be inefficient for large datasets.

 FROM public.sync_state s
-LEFT JOIN public.sync_events se ON true
+LEFT JOIN public.sync_events se ON s.id IS NOT NULL

Or consider restructuring the view to use subqueries for better performance:

CREATE OR REPLACE VIEW public.sync_dashboard AS
SELECT 
    s.last_processed_block,
    s.total_events_processed,
    s.failed_events,
    s.last_sync_time,
    s.updated_at as last_updated,
    (SELECT COUNT(*) FROM public.sync_events) as total_events,
    (SELECT COUNT(*) FROM public.sync_events WHERE processed = false) as pending_events,
    (SELECT COUNT(*) FROM public.sync_events WHERE processed = false AND error IS NOT NULL) as failed_events_count,
    (SELECT MAX(created_at) FROM public.sync_events) as last_event_time
FROM public.sync_state s;
🤖 Prompt for AI Agents
In apps/backend/database/migrations/001_create_sync_tables.sql at line 103, the
LEFT JOIN with ON true causes a Cartesian product, leading to inefficiency.
Replace the join with subqueries in the SELECT clause to aggregate the needed
sync_events data, as shown in the suggested view definition, to optimize
performance and avoid unnecessary row multiplication.

Comment on lines +107 to +110
GRANT SELECT ON public.sync_dashboard TO authenticated;
GRANT ALL ON public.sync_state TO authenticated;
GRANT ALL ON public.sync_events TO authenticated;
GRANT ALL ON public.sync_logs TO authenticated; No newline at end of file
Contributor


🛠️ Refactor suggestion

Reconsider broad permissions for authenticated users.

Granting ALL permissions to all authenticated users on sync tables might be too permissive for a production environment.

 -- Grant permissions
 GRANT SELECT ON public.sync_dashboard TO authenticated;
-GRANT ALL ON public.sync_state TO authenticated;
-GRANT ALL ON public.sync_events TO authenticated;
-GRANT ALL ON public.sync_logs TO authenticated;
+-- Grant specific permissions based on actual service needs
+GRANT SELECT, INSERT, UPDATE ON public.sync_state TO service_role;
+GRANT SELECT, INSERT, UPDATE ON public.sync_events TO service_role;
+GRANT SELECT, INSERT ON public.sync_logs TO service_role;
🤖 Prompt for AI Agents
In apps/backend/database/migrations/001_create_sync_tables.sql around lines 107
to 110, the GRANT statements give ALL permissions on sync_state, sync_events,
and sync_logs tables to all authenticated users, which is too broad. Modify
these statements to grant only the necessary minimal permissions (e.g., SELECT,
INSERT, UPDATE) instead of ALL, based on the actual access requirements for
authenticated users in production.

Comment on lines +12 to +13
SOROBAN_NETWORK_PASSPHRASE=Test SDF Network ; September 2015
STELLAR_SECRET_KEY=your_stellar_secret_key
Contributor


⚠️ Potential issue

Fix duplicate network passphrase configuration.

The STELLAR_NETWORK_PASSPHRASE appears in both Soroban and legacy configurations with the same value, which could lead to confusion and maintenance issues.

 SOROBAN_NETWORK_PASSPHRASE=Test SDF Network ; September 2015
 STELLAR_SECRET_KEY=your_stellar_secret_key

 # Legacy Stellar Configuration (for backward compatibility)
 STELLAR_RPC_URL=https://horizon-testnet.stellar.org
 BOOKING_CONTRACT_ADDRESS=your_booking_contract_address
 STELLAR_SOURCE_ACCOUNT=your_stellar_source_account
-STELLAR_NETWORK_PASSPHRASE=Test SDF Network ; September 2015
+# STELLAR_NETWORK_PASSPHRASE uses SOROBAN_NETWORK_PASSPHRASE for consistency

Also applies to: 19-19

🤖 Prompt for AI Agents
In apps/backend/env.example around lines 12 to 13 and line 19, there is a
duplicate configuration for the network passphrase under different names
(SOROBAN_NETWORK_PASSPHRASE and STELLAR_NETWORK_PASSPHRASE) with the same value.
Remove one of these duplicate entries to avoid confusion and maintain a single
source of truth for the network passphrase configuration.

Comment on lines +286 to +319
private async handleBookingCreated(event: Record<string, unknown>): Promise<void> {
const eventData = event.data as BlockchainEventData;
const { data: existingBooking } = await supabase
.from('bookings')
.select('*')
.eq('escrow_address', eventData.escrow_id || '')
.single();

if (existingBooking) {
// Update existing booking with blockchain data
await supabase
.from('bookings')
.update({
status: this.mapBlockchainStatus(eventData.status || ''),
updated_at: new Date().toISOString(),
})
.eq('id', existingBooking.id);
} else {
// Create new booking record
await supabase.from('bookings').insert({
property_id: eventData.property_id || '',
user_id: eventData.user_id || '',
dates: {
from: new Date((eventData.start_date || 0) * 1000).toISOString(),
to: new Date((eventData.end_date || 0) * 1000).toISOString(),
},
guests: eventData.guests || 1,
total: eventData.total_price || 0,
deposit: eventData.deposit || 0,
escrow_address: eventData.escrow_id || '',
status: this.mapBlockchainStatus(eventData.status || ''),
});
}
}
Contributor


🛠️ Refactor suggestion

Add data validation and improve date handling

The method lacks validation for required fields and the Unix timestamp conversion doesn't specify timezone handling.

Apply this diff to add validation:

 private async handleBookingCreated(event: Record<string, unknown>): Promise<void> {
   const eventData = event.data as BlockchainEventData;
+  
+  // Validate required fields
+  if (!eventData.escrow_id || !eventData.property_id || !eventData.user_id) {
+    throw new Error('Missing required fields in booking created event');
+  }
+  
   const { data: existingBooking } = await supabase
     .from('bookings')
     .select('*')
-    .eq('escrow_address', eventData.escrow_id || '')
+    .eq('escrow_address', eventData.escrow_id)
     .single();
📝 Committable suggestion (the full method after applying the diff above):
private async handleBookingCreated(event: Record<string, unknown>): Promise<void> {
const eventData = event.data as BlockchainEventData;
// Validate required fields
if (!eventData.escrow_id || !eventData.property_id || !eventData.user_id) {
throw new Error('Missing required fields in booking created event');
}
const { data: existingBooking } = await supabase
.from('bookings')
.select('*')
.eq('escrow_address', eventData.escrow_id)
.single();
if (existingBooking) {
// Update existing booking with blockchain data
await supabase
.from('bookings')
.update({
status: this.mapBlockchainStatus(eventData.status || ''),
updated_at: new Date().toISOString(),
})
.eq('id', existingBooking.id);
} else {
// Create new booking record
await supabase.from('bookings').insert({
property_id: eventData.property_id || '',
user_id: eventData.user_id || '',
dates: {
from: new Date((eventData.start_date || 0) * 1000).toISOString(),
to: new Date((eventData.end_date || 0) * 1000).toISOString(),
},
guests: eventData.guests || 1,
total: eventData.total_price || 0,
deposit: eventData.deposit || 0,
escrow_address: eventData.escrow_id || '',
status: this.mapBlockchainStatus(eventData.status || ''),
});
}
}
🤖 Prompt for AI Agents
In apps/backend/src/services/sync.service.ts around lines 286 to 319, the
handleBookingCreated method lacks validation for required fields and does not
explicitly handle timezone when converting Unix timestamps to ISO strings. Add
checks to ensure essential fields like escrow_id, property_id, and user_id are
present before proceeding. For date conversion, explicitly convert Unix
timestamps to UTC ISO strings by using appropriate Date methods or libraries
that handle timezone correctly. This will prevent invalid data insertion and
ensure consistent date handling.
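Worth noting: `Date.prototype.toISOString()` already emits UTC (trailing `Z`) regardless of the server's local timezone, so the main remaining risk is a missing timestamp silently becoming the 1970 epoch via `|| 0`. A minimal sketch of a stricter conversion (the helper name `unixToUtcIso` is illustrative, not from the PR):

```typescript
// Convert a Unix timestamp in seconds to a UTC ISO-8601 string.
// Throws instead of silently defaulting to the 1970 epoch when the
// value is missing or non-positive, preventing invalid booking dates.
function unixToUtcIso(seconds: number | undefined): string {
  if (seconds === undefined || !Number.isFinite(seconds) || seconds <= 0) {
    throw new Error(`Invalid Unix timestamp: ${seconds}`);
  }
  // toISOString always formats in UTC (trailing "Z"), regardless of
  // the server's local timezone.
  return new Date(seconds * 1000).toISOString();
}
```

With a helper like this, `start_date`/`end_date` validation failures surface as a rejected event rather than a booking with epoch dates.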

Comment on lines +290 to +309
describe('Error Handling', () => {
  it('should handle network errors gracefully', async () => {
    // Mock network error
    global.fetch = jest.fn(() =>
      Promise.reject(new Error('Network error'))
    ) as unknown as typeof fetch;

    const mockSupabase = supabase as jest.Mocked<typeof supabase>;
    mockSupabase.from.mockReturnValue({
      select: jest.fn(() => ({
        single: jest.fn(() => Promise.resolve({ data: null, error: null })),
      })),
      upsert: jest.fn(() => Promise.resolve({ data: null, error: null })),
    } as unknown as ReturnType<typeof supabase.from>);

    await syncService.start();

    // Should not crash the service
    expect(syncService.getStatus().isRunning).toBe(true);
  });

🛠️ Refactor suggestion

Clean up global fetch mock to prevent test pollution

Mocking global fetch without restoring it can affect other tests. Ensure proper cleanup.

Apply this diff:

 it('should handle network errors gracefully', async () => {
+  const originalFetch = global.fetch;
   // Mock network error
   global.fetch = jest.fn(() =>
     Promise.reject(new Error('Network error'))
   ) as unknown as typeof fetch;

   const mockSupabase = supabase as jest.Mocked<typeof supabase>;
   // ... rest of the test

   // Should not crash the service
   expect(syncService.getStatus().isRunning).toBe(true);
+  
+  // Restore original fetch
+  global.fetch = originalFetch;
 });
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

describe('Error Handling', () => {
  it('should handle network errors gracefully', async () => {
    // Mock network error
    global.fetch = jest.fn(() =>
      Promise.reject(new Error('Network error'))
    ) as unknown as typeof fetch;
    const mockSupabase = supabase as jest.Mocked<typeof supabase>;
    mockSupabase.from.mockReturnValue({
      select: jest.fn(() => ({
        single: jest.fn(() => Promise.resolve({ data: null, error: null })),
      })),
      upsert: jest.fn(() => Promise.resolve({ data: null, error: null })),
    } as unknown as ReturnType<typeof supabase.from>);
    await syncService.start();
    // Should not crash the service
    expect(syncService.getStatus().isRunning).toBe(true);
  });

describe('Error Handling', () => {
  it('should handle network errors gracefully', async () => {
    // Save and mock fetch
    const originalFetch = global.fetch;
    global.fetch = jest.fn(() =>
      Promise.reject(new Error('Network error'))
    ) as unknown as typeof fetch;
    const mockSupabase = supabase as jest.Mocked<typeof supabase>;
    mockSupabase.from.mockReturnValue({
      select: jest.fn(() => ({
        single: jest.fn(() => Promise.resolve({ data: null, error: null })),
      })),
      upsert: jest.fn(() => Promise.resolve({ data: null, error: null })),
    } as unknown as ReturnType<typeof supabase.from>);
    await syncService.start();
    // Should not crash the service
    expect(syncService.getStatus().isRunning).toBe(true);
    // Restore original fetch
    global.fetch = originalFetch;
  });
});
🤖 Prompt for AI Agents
In apps/backend/src/tests/sync.service.test.ts around lines 290 to 309, the
global fetch is mocked but not restored, which can cause side effects in other
tests. Fix this by saving the original global fetch before mocking it and
restoring it after the test completes, using afterEach or finally block to
ensure cleanup regardless of test outcome.
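A framework-agnostic way to guarantee restoration is a try/finally wrapper. The helper below (`withPatchedGlobal` is a hypothetical name, not part of the PR) restores the original value even when the test body throws:

```typescript
// Temporarily replace globalThis[key] for the duration of `fn`,
// restoring the original value even if `fn` throws or rejects.
async function withPatchedGlobal(
  key: string,
  replacement: unknown,
  fn: () => void | Promise<void>
): Promise<void> {
  const globals = globalThis as Record<string, unknown>;
  const original = globals[key];
  globals[key] = replacement;
  try {
    await fn();
  } finally {
    globals[key] = original; // cleanup runs on success and failure alike
  }
}
```

In Jest specifically, `jest.spyOn(global, 'fetch')` combined with `jest.restoreAllMocks()` in an `afterEach` hook achieves the same cleanup without a custom helper.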

@respp respp left a comment


Hi @MPSxDev, great implementation! Just a few adjustments left: you only need to address the valid suggestions from CodeRabbit, which essentially come down to this: when mocking global.fetch, make sure to restore it at the end of the test to avoid test contamination. All of this aligns well with the current structure of the SyncService, its endpoints, and the tests included in the PR.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (2)
apps/backend/src/index.ts (2)

25-31: Don’t swallow blockchain sync startup failures — fail fast or rethrow.

If the sync service fails to start, the app continues running in a degraded state. This mirrors prior feedback.

Apply this diff to escalate the failure (exit in production, rethrow otherwise):

   // Initialize blockchain sync service
   try {
     await syncService.start();
     console.log('✅ Blockchain synchronization service started');
   } catch (error) {
     console.error('❌ Failed to start blockchain sync service:', error);
-  }
+    if (process.env.NODE_ENV === 'production') {
+      process.exit(1);
+    }
+    throw error;
+  }

75-76: Await initializeServices; only log success on resolution.

Currently the success log prints immediately, even if initialization fails. This was previously flagged; please handle the promise and exit in production on failure.

-  initializeServices();
-  console.log('✅ All services initialized successfully');
+  initializeServices()
+    .then(() => {
+      console.log('✅ All services initialized successfully');
+    })
+    .catch((error) => {
+      console.error('❌ Failed to initialize services:', error);
+      if (process.env.NODE_ENV === 'production') process.exit(1);
+    });
🧹 Nitpick comments (4)
apps/backend/src/index.ts (4)

15-15: Ensure syncService lifecycle is well-managed (idempotent start, graceful shutdown).

The singleton import is fine (see services/sync.service.ts export). Please verify:

  • syncService.start() is idempotent if called more than once (e.g., hot reloads).
  • A corresponding stop()/dispose() is available and called on process termination.

If missing, I can help add a stop method and signal handlers.


20-24: Init sequence is reasonable; consider initializing services before accepting requests.

Starting cleanup and other services after the server begins accepting requests can expose endpoints before dependencies are ready. Prefer initializing services first, then calling app.listen, or gate readiness via a health/readiness endpoint until initialization completes.


63-63: Secure /api/sync for admin-only access.

These endpoints manage synchronization. Ensure the route file enforces appropriate auth/role guards (e.g., admin-only) in addition to the global rate limiter.


73-81: Add graceful shutdown to cleanly stop sync and schedulers.

To avoid orphaned jobs or dangling connections on SIGINT/SIGTERM, consider adding:

// Place near startup code
function setupGracefulShutdown() {
  const shutdown = async (signal: string) => {
    try {
      console.log(`\nReceived ${signal}, shutting down gracefully...`);
      // If available:
      // await syncService.stop();
      // await stopCleanupScheduler();
    } catch (e) {
      console.error('Error during shutdown:', e);
    } finally {
      process.exit(0);
    }
  };
  ['SIGINT', 'SIGTERM'].forEach(sig => process.on(sig as NodeJS.Signals, () => void shutdown(sig)));
}
setupGracefulShutdown();

If stopCleanupScheduler() isn’t exported from cleanup-schedular, I can add it.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 8ad499b and db1c97b.

📒 Files selected for processing (1)
  • apps/backend/src/index.ts (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
apps/backend/src/index.ts (2)
apps/backend/src/services/cleanup-schedular.ts (2)
  • runInitialCleanup (29-37)
  • startCleanupScheduler (5-18)
apps/backend/src/services/sync.service.ts (1)
  • syncService (471-471)
🔇 Additional comments (1)
apps/backend/src/index.ts (1)

13-13: Wire up /api/sync routes — LGTM

The route registration for the sync admin endpoints is correctly namespaced under /api/sync.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

♻️ Duplicate comments (11)
apps/backend/src/services/logging.service.ts (1)

245-253: Fix inconsistent logBlockchainError signature

The logBlockchainError method signature changed to accept operation and error parameters directly, but this breaks consistency with logBlockchainSuccess which accepts a TransactionLog parameter. This makes the API inconsistent and potentially breaks the intended workflow of updating an existing log entry to failed status.
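For illustration, a consistent pair could follow the sketch below, where both outcome methods take the pending entry's log id so failure uses the same update-an-existing-entry workflow as success. All names and the in-memory store here are hypothetical; the PR's real LoggingService persists to Supabase.

```typescript
// Hypothetical sketch: success and error logging share one shape,
// updating a previously created pending entry by its id.
interface LogDetails {
  [key: string]: unknown;
}

class LoggingServiceSketch {
  private entries = new Map<string, { status: string; details: LogDetails }>();
  private nextId = 0;

  logBlockchainOperation(operation: string, details: LogDetails): string {
    const id = String(this.nextId++);
    this.entries.set(id, { status: 'pending', details: { operation, ...details } });
    return id;
  }

  logBlockchainSuccess(logId: string, details: LogDetails): void {
    this.update(logId, 'success', details);
  }

  // Same shape as logBlockchainSuccess: update an existing entry by id.
  logBlockchainError(logId: string, error: Error): void {
    this.update(logId, 'failed', { message: error.message });
  }

  getStatus(logId: string): string | undefined {
    return this.entries.get(logId)?.status;
  }

  private update(logId: string, status: string, details: LogDetails): void {
    const entry = this.entries.get(logId);
    if (!entry) return;
    entry.status = status;
    entry.details = { ...entry.details, ...details };
  }
}
```

This keeps the intended workflow intact: a pending entry is created once, then transitioned to either success or failed.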

apps/backend/src/services/sync.service.ts (3)

334-336: Fix incorrect RPC endpoint URL construction

The URL construction for fetching the latest ledger is incorrect. Stellar RPC uses JSON-RPC protocol, not REST endpoints.


416-419: Reconsider error propagation strategy

The method marks the event as failed but then re-throws the error, which could halt the entire sync process for a single failed event.

Apply this diff to allow the sync process to continue with other events:

   } catch (error) {
     console.error(`Error processing event ${event.id as string}:`, error);
     await this.markEventFailed(event.id as string, error as Error);
-    throw error;
+    await loggingService.logBlockchainError('processEvent', error);
+    // Don't throw - allow sync to continue with other events
   }

426-459: Add data validation for required fields

The method should validate that essential fields are present before processing to prevent invalid data insertion.

Apply this diff to add validation:

 private async handleBookingCreated(event: Record<string, unknown>): Promise<void> {
   const eventData = event.data as BlockchainEventData;
+  
+  // Validate required fields
+  if (!eventData.escrow_id || !eventData.property_id || !eventData.user_id) {
+    throw new Error('Missing required fields in booking created event');
+  }
+  
   const { data: existingBooking } = await supabase
     .from('bookings')
     .select('*')
-    .eq('escrow_address', eventData.escrow_id || '')
+    .eq('escrow_address', eventData.escrow_id)
     .single();
apps/backend/src/tests/sync.service.test.ts (3)

458-467: Test relies on private method access

Accessing private methods through type casting breaks encapsulation and makes tests fragile. The status mapping logic should be tested through the public API.

Consider extracting the status mapping logic to a separate utility function that can be tested independently, or test it indirectly through the event processing flow.
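One way to do that, sketched with illustrative status values (the contract's real statuses in the PR's mapBlockchainStatus may differ):

```typescript
// Illustrative status values; the contract's real statuses may differ.
// A pure, module-level function can be unit-tested directly instead of
// casting the service to reach a private method.
type BookingStatus = 'pending' | 'confirmed' | 'completed' | 'cancelled';

function mapBlockchainStatus(raw: string): BookingStatus {
  switch (raw.toLowerCase()) {
    case 'confirmed':
      return 'confirmed';
    case 'completed':
      return 'completed';
    case 'cancelled':
      return 'cancelled';
    default:
      // Unknown or empty statuses fall back to a safe default.
      return 'pending';
  }
}
```

The service method then becomes a thin wrapper around this function, and tests exercise the mapping without any private-member casts.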


234-241: Test the actual implementation instead of mocking the method under test

The test uses type casting to access a private method directly, which breaks encapsulation and makes tests brittle. Private methods should be tested through the public API.

Consider refactoring to test through the public API that internally uses this method. If direct testing is needed, consider making the method protected or extracting the logic to a separate testable unit.


471-489: Clean up global fetch mock after test

The test modifies global.fetch but doesn't restore it, which could affect other tests.

Apply this diff to restore the original fetch:

 it('should handle network errors gracefully', async () => {
+  const originalFetch = global.fetch;
   // Mock network error
   global.fetch = jest.fn(() =>
     Promise.reject(new Error('Network error'))
   ) as unknown as typeof fetch;

-  const mockSupabase = supabase as jest.Mocked<typeof supabase>;
-  mockSupabase.from.mockReturnValue({
-    select: jest.fn(() => ({
-      single: jest.fn(() => Promise.resolve({ data: null, error: null })),
-    })),
-    upsert: jest.fn(() => Promise.resolve({ data: null, error: null })),
-  } as unknown as ReturnType<typeof supabase.from>);
+  try {
+    const mockSupabase = supabase as jest.Mocked<typeof supabase>;
+    mockSupabase.from.mockReturnValue({
+      select: jest.fn(() => ({
+        single: jest.fn(() => Promise.resolve({ data: null, error: null })),
+      })),
+      upsert: jest.fn(() => Promise.resolve({ data: null, error: null })),
+    } as unknown as ReturnType<typeof supabase.from>);

-  await syncService.start();
+    await syncService.start();

-  // Should not crash the service
-  expect(syncService.getStatus().isRunning).toBe(true);
+    // Should not crash the service
+    expect(syncService.getStatus().isRunning).toBe(true);
+  } finally {
+    // Restore original fetch
+    global.fetch = originalFetch;
+  }
 });
apps/backend/src/index.ts (1)

20-32: Add production exit strategy for critical service failures

The error handling logs failures but allows the application to continue running, which could lead to an incomplete system state in production.

Apply this diff to add proper production error handling:

 async function initializeServices() {
   // Initialize cleanup scheduler
   await runInitialCleanup();
   startCleanupScheduler();

   // Initialize blockchain sync service
   try {
     await syncService.start();
     console.log('✅ Blockchain synchronization service started');
   } catch (error) {
     console.error('❌ Failed to start blockchain sync service:', error);
+    // Exit on critical service failure in production
+    if (process.env.NODE_ENV === 'production') {
+      console.error('💀 Exiting due to critical service failure');
+      process.exit(1);
+    }
   }
 }
apps/backend/src/controllers/sync.controller.ts (3)

12-17: Good fix: awaited async logging calls across endpoints

Thanks for addressing the earlier feedback by awaiting logBlockchainSuccess/logBlockchainOperation where appropriate. This prevents lost logs and unhandled rejections.

Also applies to: 38-43, 87-92, 308-313, 425-433


282-306: Batch failed-event retry and trigger a single manual sync

Current loop triggers a manual sync per event, which is inefficient and risks overload. Batch the updates and call syncService.triggerManualSync() once.

Apply this diff:

-      let retriedCount = 0;
-      let successCount = 0;
-
-      for (const event of failedEvents || []) {
-        try {
-          retriedCount++;
-
-          // Clear error and mark as unprocessed
-          await supabase
-            .from('sync_events')
-            .update({
-              error: null,
-              processed: false,
-              processed_at: null,
-            })
-            .eq('id', event.id);
-
-          // Trigger manual sync to reprocess
-          await syncService.triggerManualSync();
-
-          successCount++;
-        } catch (retryError) {
-          console.error(`Failed to retry event ${event.id}:`, retryError);
-        }
-      }
+      const ids = (failedEvents ?? []).map((e) => e.id);
+      let retriedCount = ids.length;
+      let successCount = 0;
+      if (ids.length > 0) {
+        const { data: updated, error: updateError } = await supabase
+          .from('sync_events')
+          .update({
+            error: null,
+            processed: false,
+            processed_at: null,
+          })
+          .in('id', ids)
+          .select('id');
+        if (updateError) {
+          throw updateError;
+        }
+        successCount = updated?.length || 0;
+        // Trigger manual sync once after updating all events
+        await syncService.triggerManualSync();
+      }

342-433: Make clearOldData atomic via a Postgres RPC (transaction); current code is only “best-effort”

The comment suggests transactionality, but Supabase JS doesn’t support multi-statement transactions; the current two deletes can partially succeed. Move deletion into a SQL function and call it via supabase.rpc to guarantee atomicity.

Apply this diff to the controller:

-      // Use Supabase's transaction support to ensure atomicity
-      // First, get the count of records to be deleted for validation
-      const { count: eventsCount, error: countError } = await supabase
-        .from('sync_events')
-        .select('*', { count: 'exact', head: true })
-        .lt('created_at', cutoffDate.toISOString());
-
-      if (countError) {
-        throw new Error(`Failed to count old events: ${countError.message}`);
-      }
-
-      const { count: logsCount, error: logsCountError } = await supabase
-        .from('sync_logs')
-        .select('*', { count: 'exact', head: true })
-        .lt('created_at', cutoffDate.toISOString());
-
-      if (logsCountError) {
-        throw new Error(`Failed to count old logs: ${logsCountError.message}`);
-      }
-
-      // Perform both deletions in sequence, but with proper error handling
-      // If either fails, we'll handle it gracefully
-      let eventsDeleted = 0;
-      let logsDeleted = 0;
-      let partialFailure = false;
-      let failureDetails = '';
-
-      try {
-        // Delete old events
-        const { data: eventsResult, error: eventsError } = await supabase
-          .from('sync_events')
-          .delete()
-          .lt('created_at', cutoffDate.toISOString())
-          .select('id');
-
-        if (eventsError) {
-          throw new Error(`Failed to delete old events: ${eventsError.message}`);
-        }
-        eventsDeleted = eventsResult?.length || 0;
-
-        // Delete old logs
-        const { data: logsResult, error: logsError } = await supabase
-          .from('sync_logs')
-          .delete()
-          .lt('created_at', cutoffDate.toISOString())
-          .select('id');
-
-        if (logsError) {
-          throw new Error(`Failed to delete old logs: ${logsError.message}`);
-        }
-        logsDeleted = logsResult?.length || 0;
-      } catch (deletionError) {
-        // If deletion fails, we have a partial failure scenario
-        partialFailure = true;
-        failureDetails =
-          deletionError instanceof Error ? deletionError.message : 'Unknown deletion error';
-
-        // Log the partial failure
-        await loggingService.logBlockchainOperation('clearOldData_partial_failure', {
-          days,
-          cutoffDate: cutoffDate.toISOString(),
-          eventsDeleted,
-          logsDeleted,
-          failureDetails,
-          expectedEventsCount: eventsCount || 0,
-          expectedLogsCount: logsCount || 0,
-        });
-
-        // Return partial success response
-        res.json({
-          success: true,
-          message: `Partially cleared sync data older than ${days} days. Events deleted: ${eventsDeleted}, Logs deleted: ${logsDeleted}. Some deletions failed.`,
-          cutoffDate: cutoffDate.toISOString(),
-          eventsDeleted,
-          logsDeleted,
-          partialFailure: true,
-          failureDetails,
-          expectedEventsCount: eventsCount || 0,
-          expectedLogsCount: logsCount || 0,
-        });
-        return;
-      }
-
-      // Both operations succeeded
-      await loggingService.logBlockchainSuccess(logId, {
-        action: 'old_data_cleared',
-        cutoffDate: cutoffDate.toISOString(),
-        eventsDeleted,
-        logsDeleted,
-        expectedEventsCount: eventsCount || 0,
-        expectedLogsCount: logsCount || 0,
-      });
-
-      res.json({
-        success: true,
-        message: `Cleared sync data older than ${days} days`,
-        cutoffDate: cutoffDate.toISOString(),
-        eventsDeleted,
-        logsDeleted,
-        partialFailure: false,
-        expectedEventsCount: eventsCount || 0,
-        expectedLogsCount: logsCount || 0,
-      });
+      // Perform both deletions atomically via a Postgres function (transaction)
+      const { data: cleared, error: rpcError } = await supabase.rpc('clear_old_sync_data', {
+        p_cutoff: cutoffDate.toISOString(),
+      });
+      if (rpcError) {
+        throw rpcError;
+      }
+      const eventsDeleted = cleared?.events_deleted ?? 0;
+      const logsDeleted = cleared?.logs_deleted ?? 0;
+
+      await loggingService.logBlockchainSuccess(logId, {
+        action: 'old_data_cleared',
+        cutoffDate: cutoffDate.toISOString(),
+        eventsDeleted,
+        logsDeleted,
+      });
+
+      res.json({
+        success: true,
+        message: `Cleared sync data older than ${days} days`,
+        cutoffDate: cutoffDate.toISOString(),
+        eventsDeleted,
+        logsDeleted,
+        partialFailure: false,
+      });

Add this SQL function in a migration to ensure transactionality:

-- apps/backend/database/migrations/00x_clear_old_sync_data_function.sql
create or replace function public.clear_old_sync_data(p_cutoff timestamptz)
returns table (events_deleted bigint, logs_deleted bigint)
language plpgsql
security definer
set search_path = public
as $$
declare
begin
  -- run both deletes in a single transaction
  with deleted_events as (
    delete from public.sync_events
    where created_at < p_cutoff
    returning 1
  ),
  deleted_logs as (
    delete from public.sync_logs
    where created_at < p_cutoff
    returning 1
  )
  select
    (select count(*) from deleted_events) as events_deleted,
    (select count(*) from deleted_logs) as logs_deleted
  into events_deleted, logs_deleted;

  return next;
end;
$$;

Ensure the service role key is used (already configured) and permissions allow executing this function.

🧹 Nitpick comments (4)
apps/backend/src/tests/sync.service.test.ts (3)

308-314: Mock should return more realistic responses

The mock fetch response returns an empty events array, but the comment indicates it should simulate actual events. This inconsistency could hide issues with event processing.

Consider making the mock return actual test events to better validate the sync process:

       .mockResolvedValueOnce({
-        json: () => Promise.resolve({ events: [] }),
-      } as Response); // getContractEvents response (empty for this test)
+        json: () => Promise.resolve({ 
+          events: [
+            {
+              id: 'test-event-1',
+              type: 'booking_created',
+              // ... other event properties
+            }
+          ] 
+        }),
+      } as Response); // getContractEvents response with test events

508-531: Remove duplicate test case

This test is identical to the test already defined at lines 81-104. Having duplicate tests adds no value and increases maintenance burden.

Apply this diff to remove the duplicate test:

-  it('should use custom polling interval from environment variable', () => {
-    // Save original environment variables
-    const originalPollingInterval = process.env.SYNC_POLL_INTERVAL;
-    const originalRpcUrl = process.env.SOROBAN_RPC_URL;
-    const originalContractId = process.env.SOROBAN_CONTRACT_ID;
-    const originalNetworkPassphrase = process.env.SOROBAN_NETWORK_PASSPHRASE;
-
-    try {
-      // Set custom polling interval
-      process.env.SYNC_POLL_INTERVAL = '10000';
-      process.env.SOROBAN_RPC_URL = 'https://test-rpc.stellar.org';
-      process.env.SOROBAN_CONTRACT_ID = 'test-contract-id';
-      process.env.SOROBAN_NETWORK_PASSPHRASE = 'Test SDF Network ; September 2015';
-
-      const customSyncService = new SyncService();
-      expect(customSyncService.getPollingIntervalMs()).toBe(10000);
-    } finally {
-      // Restore original environment variables
-      process.env.SYNC_POLL_INTERVAL = originalPollingInterval;
-      process.env.SOROBAN_RPC_URL = originalRpcUrl;
-      process.env.SOROBAN_CONTRACT_ID = originalContractId;
-      process.env.SOROBAN_NETWORK_PASSPHRASE = originalNetworkPassphrase;
-    }
-  });

533-556: Remove second duplicate test case

This test is identical to the test already defined at lines 106-129. Having duplicate tests adds no value.

Apply this diff to remove the duplicate test:

-  it('should fallback to default polling interval for invalid environment value', () => {
-    // Save original environment variables
-    const originalPollingInterval = process.env.SYNC_POLL_INTERVAL;
-    const originalRpcUrl = process.env.SOROBAN_RPC_URL;
-    const originalContractId = process.env.SOROBAN_CONTRACT_ID;
-    const originalNetworkPassphrase = process.env.SOROBAN_NETWORK_PASSPHRASE;
-
-    try {
-      // Set invalid polling interval
-      process.env.SYNC_POLL_INTERVAL = 'invalid';
-      process.env.SOROBAN_RPC_URL = 'https://test-rpc.stellar.org';
-      process.env.SOROBAN_CONTRACT_ID = 'test-contract-id';
-      process.env.SOROBAN_NETWORK_PASSPHRASE = 'Test SDF Network ; September 2015';
-
-      const fallbackSyncService = new SyncService();
-      expect(fallbackSyncService.getPollingIntervalMs()).toBe(5000); // Default fallback
-    } finally {
-      // Restore original environment variables
-      process.env.SYNC_POLL_INTERVAL = originalPollingInterval;
-      process.env.SOROBAN_RPC_URL = originalRpcUrl;
-      process.env.SOROBAN_CONTRACT_ID = originalContractId;
-      process.env.SOROBAN_NETWORK_PASSPHRASE = originalNetworkPassphrase;
-    }
-  });
apps/backend/src/routes/sync.routes.ts (1)

292-312: Consider using a more robust SQL injection detection approach

While the current SQL injection detection is good, consider using a proper input sanitization library or parameterized queries instead of pattern matching, which can be bypassed.

Consider using a library like validator or express-validator for more robust input validation:

+import validator from 'validator';

     // Security: Check for potential SQL injection patterns in string parameters
     const stringParams = ['eventType', 'status', 'operation'];
     for (const param of stringParams) {
       if (otherParams[param] && typeof otherParams[param] === 'string') {
         const value = otherParams[param] as string;
-        // Check for common SQL injection patterns
-        if (
-          value.includes(';') ||
-          value.includes('--') ||
-          value.includes('/*') ||
-          value.includes('*/') ||
-          value.includes('union') ||
-          value.includes('select')
-        ) {
+        // Use validator for more robust SQL injection detection
+        if (!validator.isAlphanumeric(value.replace(/_/g, ''))) {
           console.warn(
             `⚠️  Potential SQL injection attempt detected in parameter ${param}: ${value} by user ${req.user?.id}`
           );
           validationErrors.push(`Invalid characters detected in ${param} parameter`);
         }
       }
     }
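An allowlist is generally more robust than either approach: accept only the values the API actually supports instead of trying to enumerate dangerous patterns. A sketch with illustrative event types (not necessarily the PR's real ones):

```typescript
// Allowlist validation: accept only known-valid values rather than
// blocklisting injection patterns, which can be bypassed.
// The permitted values below are illustrative, not the PR's real ones.
const ALLOWED_EVENT_TYPES = new Set([
  'booking_created',
  'booking_updated',
  'booking_cancelled',
  'payment_confirmed',
]);

function validateEventType(value: unknown): string | null {
  if (typeof value !== 'string') return null;
  const normalized = value.trim().toLowerCase();
  // Returns the normalized value when allowed, null otherwise.
  return ALLOWED_EVENT_TYPES.has(normalized) ? normalized : null;
}
```

Note that the Supabase client already parameterizes filter values, so the allowlist primarily protects downstream logging and prevents garbage queries, rather than being the only SQL-injection defense.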
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro


📥 Commits

Reviewing files that changed from the base of the PR and between db1c97b and 8bec58e.

📒 Files selected for processing (7)
  • apps/backend/src/blockchain/eventListener.ts (1 hunks)
  • apps/backend/src/controllers/sync.controller.ts (1 hunks)
  • apps/backend/src/index.ts (3 hunks)
  • apps/backend/src/routes/sync.routes.ts (1 hunks)
  • apps/backend/src/services/logging.service.ts (1 hunks)
  • apps/backend/src/services/sync.service.ts (1 hunks)
  • apps/backend/src/tests/sync.service.test.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/backend/src/blockchain/eventListener.ts
🧰 Additional context used
🧬 Code Graph Analysis (6)
apps/backend/src/routes/sync.routes.ts (3)
apps/backend/src/types/auth.types.ts (1)
  • AuthRequest (5-7)
apps/backend/src/config/supabase.ts (1)
  • supabase (205-217)
apps/backend/src/controllers/sync.controller.ts (1)
  • syncController (456-456)
apps/backend/src/services/logging.service.ts (2)
apps/backend/src/types/common.types.ts (1)
  • TransactionLog (31-37)
apps/backend/src/config/supabase.ts (1)
  • supabase (205-217)
apps/backend/src/tests/sync.service.test.ts (2)
apps/backend/src/services/sync.service.ts (3)
  • syncService (611-611)
  • SyncService (59-608)
  • mapBlockchainStatus (506-514)
apps/backend/src/config/supabase.ts (1)
  • supabase (205-217)
apps/backend/src/controllers/sync.controller.ts (3)
apps/backend/src/services/logging.service.ts (1)
  • loggingService (256-256)
apps/backend/src/services/sync.service.ts (1)
  • syncService (611-611)
apps/backend/src/config/supabase.ts (1)
  • supabase (205-217)
apps/backend/src/index.ts (2)
apps/backend/src/services/cleanup-schedular.ts (2)
  • runInitialCleanup (29-37)
  • startCleanupScheduler (5-18)
apps/backend/src/services/sync.service.ts (1)
  • syncService (611-611)
apps/backend/src/services/sync.service.ts (2)
apps/backend/src/config/supabase.ts (1)
  • supabase (205-217)
apps/backend/src/services/logging.service.ts (1)
  • loggingService (256-256)
🔇 Additional comments (4)
apps/backend/src/index.ts (1)

74-84: LGTM! Service initialization properly handled

The async initialization is now properly awaited with comprehensive error handling that logs failures while allowing the server to continue running with degraded functionality.

apps/backend/src/routes/sync.routes.ts (3)

38-96: LGTM! Comprehensive admin verification implementation

The admin check implementation is well-structured with multiple authorization methods, clear logging, and proper environment-based configuration. The TODO comment for production user_roles table implementation is appropriately placed.


98-184: LGTM! Robust admin authentication middleware

The middleware implementation includes proper JWT validation via Supabase, comprehensive error handling, security headers, and detailed logging. The integration with the checkUserAdminStatus function provides flexible admin verification.


186-366: LGTM! Excellent pagination validation with security considerations

The pagination validation middleware is exceptionally well-implemented with:

  • Comprehensive parameter validation
  • SQL injection pattern detection
  • Abuse pattern monitoring
  • Detailed error responses
  • Security logging

Comment on lines +10 to +22
async startSync(req: Request, res: Response): Promise<void> {
  try {
    const logId = await loggingService.logBlockchainOperation('startSync', {});

    await syncService.start();

    await loggingService.logBlockchainSuccess(logId, { action: 'sync_started' });

    res.json({
      success: true,
      message: 'Synchronization service started successfully',
      status: syncService.getStatus(),
    });

💡 Verification agent

🧩 Analysis chain

Verify idempotency and request coalescing for start/stop/manual sync

To avoid races and duplicate work under concurrent admin requests:

  • Ensure start/stop are idempotent and protected by a lock/mutex.
  • Coalesce manual sync triggers so only one runs at a time.

If not already enforced in SyncService, consider guarding with an in-memory mutex or queue.

Run this script to check for lock/guard patterns in SyncService:

Also applies to: 36-48, 85-97, 267-279, 334-341


🏁 Script executed:

#!/bin/bash
# Look for state flags, locks, and idempotent guards in SyncService
rg -n -C3 -g 'apps/backend/src/**/*.ts' -P '(?s)class\s+SyncService\b.*?\{.*?\}' \
  | sed -n '1,200p'

echo -e "\n=== Look for potential guard variables or mutex usage ==="
rg -n -C2 -g 'apps/backend/src/**/*.ts' -P '\b(isRunning|isSyncing|inProgress|locked|mutex|semaphore)\b'

echo -e "\n=== start/stop/manual trigger implementations ==="
rg -n -C3 -g 'apps/backend/src/**/*.ts' -P '\b(start|stop|triggerManualSync)\s*\('

Length of output: 25150


Fix required: add a concurrency guard for polling/manual sync (start/stop are already idempotent)

Brief: The repo already uses an isRunning flag to make start()/stop() idempotent, but there is no in-flight guard for pollForEvents — manual triggers call pollForEvents() directly and can overlap with the scheduled polls.

Files/locations to update:

  • apps/backend/src/services/sync.service.ts
    • private isRunning = false (declaration around line ~64)
    • async start() — idempotent check present (lines ~196–210)
    • async stop() — idempotent check present (lines ~226–236)
    • private async pollForEvents() — currently only checks this.isRunning; no "in-progress" guard (lines ~296–300)
    • async triggerManualSync() — simply calls pollForEvents(), allowing concurrent runs (lines ~580–583)
  • apps/backend/src/controllers/sync.controller.ts — controllers call start/stop/triggerManualSync (lines ~10–22, ~82–92); safe to rely on service-level fixes.

Actionable recommendation (minimal):

  • Add an in-flight guard or async mutex around pollForEvents to prevent reentrancy/overlap. Example pattern:
    • private pollingInProgress = false;
    • at start of pollForEvents: if (!this.isRunning || this.pollingInProgress) return;
    • set pollingInProgress = true before work and clear it in finally.
  • Alternatively, serialize manual triggers with a queue or use an async-mutex library if you need queued behavior instead of simple skip.
  • Add unit tests that assert:
    • calling triggerManualSync() while a poll is running does not start a concurrent poll,
    • start()/stop() remain idempotent.
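The guard pattern recommended above can be sketched as follows. This is a minimal sketch of the suggested flags, not the real `SyncService` (the actual class has more state and does real blockchain polling; the 50 ms timer stands in for that work):

```typescript
// Sketch of the in-flight guard: isRunning makes start/stop idempotent,
// pollingInProgress prevents overlapping polls (manual or scheduled).
class SyncService {
  private isRunning = false;
  private pollingInProgress = false;
  pollCount = 0; // exposed only so the sketch is observable

  async start(): Promise<void> {
    if (this.isRunning) return; // idempotent start
    this.isRunning = true;
  }

  async stop(): Promise<void> {
    if (!this.isRunning) return; // idempotent stop
    this.isRunning = false;
  }

  private async pollForEvents(): Promise<boolean> {
    // Skip if the service is stopped or a poll is already in flight.
    if (!this.isRunning || this.pollingInProgress) return false;
    this.pollingInProgress = true;
    try {
      // Placeholder for the real blockchain polling work.
      await new Promise((resolve) => setTimeout(resolve, 50));
      this.pollCount += 1;
      return true;
    } finally {
      // Always clear the flag, even if polling throws.
      this.pollingInProgress = false;
    }
  }

  async triggerManualSync(): Promise<boolean> {
    return this.pollForEvents();
  }
}
```

With this guard, two concurrent `triggerManualSync()` calls result in exactly one poll: the second call sees `pollingInProgress` set and returns `false` (skip semantics); queued semantics would need a mutex or queue instead.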
🤖 Prompt for AI Agents
In apps/backend/src/services/sync.service.ts (declare near line ~64) add a
private pollingInProgress = false flag and update private async pollForEvents()
(around lines ~296–300) to return immediately if !this.isRunning ||
this.pollingInProgress, set pollingInProgress = true before doing work and clear
it in a finally block to guarantee reset on error; ensure triggerManualSync()
(around lines ~580–583) awaits pollForEvents() (or can remain unchanged if it
already awaits) so manual triggers respect the in-flight guard; controllers in
apps/backend/src/controllers/sync.controller.ts (lines ~10–22 and ~82–92) need
no change; add unit tests asserting that concurrent triggerManualSync() calls do
not run overlapping polls and start()/stop() remain idempotent.

Comment on lines +226 to +238
const { data: recentEvents } = await supabase
.from('sync_events')
.select('*')
.order('created_at', { ascending: false })
.limit(10);

// Get recent logs
const { data: recentLogs } = await supabase
.from('sync_logs')
.select('*')
.order('created_at', { ascending: false })
.limit(10);


🛠️ Refactor suggestion

Harden error handling in getDashboard for recent lists and failed count

Errors from the recent events/logs queries and the failed-events count are ignored, which can mask failures and return partial/misleading data.

Apply this diff:

-      const { data: recentEvents } = await supabase
+      const { data: recentEvents, error: recentEventsError } = await supabase
         .from('sync_events')
         .select('*')
         .order('created_at', { ascending: false })
         .limit(10);
+      if (recentEventsError) {
+        throw recentEventsError;
+      }
-      const { data: recentLogs } = await supabase
+      const { data: recentLogs, error: recentLogsError } = await supabase
         .from('sync_logs')
         .select('*')
         .order('created_at', { ascending: false })
         .limit(10);
+      if (recentLogsError) {
+        throw recentLogsError;
+      }
-      const { count: failedEventsCount } = await supabase
+      const { count: failedEventsCount, error: failedEventsError } = await supabase
         .from('sync_events')
         .select('*', { count: 'exact', head: true })
         .eq('processed', false)
         .not('error', 'is', null);
+      if (failedEventsError) {
+        throw failedEventsError;
+      }

Also applies to: 240-245
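The "check the error field" pattern in the diff can be factored into a small helper so each query site stays short. This is a sketch: the result shape mirrors supabase-js responses, but the helper name `unwrapOrThrow` and its context string are assumptions, not code from the PR.

```typescript
// Hypothetical helper that surfaces Supabase query failures instead of
// silently returning partial data (result shape mirrors supabase-js).
interface QueryResult<T> {
  data: T | null;
  error: { message: string } | null;
  count?: number | null;
}

function unwrapOrThrow<T>(result: QueryResult<T>, context: string): T {
  if (result.error) {
    // Propagate the failure so getDashboard's catch block can log it
    // and return a 500 rather than misleading partial data.
    throw new Error(`${context}: ${result.error.message}`);
  }
  if (result.data === null) {
    throw new Error(`${context}: query returned no data`);
  }
  return result.data;
}
```

Usage would look like `const recentEvents = unwrapOrThrow(await supabase.from('sync_events').select('*').order('created_at', { ascending: false }).limit(10), 'recent sync_events');`.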

🤖 Prompt for AI Agents
In apps/backend/src/controllers/sync.controller.ts around lines 226-238 (and
also where similar logic exists at 240-245), the Supabase queries for
recentEvents, recentLogs and the failed-events count ignore the returned error,
which can hide failures and return misleading data; update the code to check
each query result for an error (or wrap the calls in try/catch), call
processLogger.error with the error details, and return/throw an appropriate
500-level response (or set safe default values) instead of proceeding with
undefined/partial data so callers receive a clear failure signal.

@respp respp merged commit 06287e6 into Stellar-Rent:main Aug 19, 2025
5 of 8 checks passed
@coderabbitai coderabbitai bot mentioned this pull request Sep 27, 2025
4 tasks