Feat: Real-time Blockchain-Database Synchronization System #114

respp merged 14 commits into Stellar-Rent:main
Conversation
Walkthrough: Adds a blockchain synchronization subsystem: a DB migration (sync tables, view, RLS), an event listener and SyncService for Soroban, and persisted async logging.
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Client
participant Controller
participant SyncService
participant EventListener
participant SorobanRPC
participant SupabaseDB
participant LoggingService
Client->>Controller: POST /api/sync/start|/stop|/trigger|/status
Controller->>SyncService: start()/stop()/triggerManualSync()/getStatus()
SyncService->>EventListener: startPolling()
loop every poll interval
EventListener->>SorobanRPC: fetch latest ledger / events
SorobanRPC-->>EventListener: events[]
EventListener->>SyncService: parsed events
SyncService->>SupabaseDB: insert `sync_events`, upsert `sync_state`, update domain tables
SyncService->>LoggingService: log operation/result/error
LoggingService->>SupabaseDB: insert `sync_logs`
end
Controller->>SupabaseDB: query `sync_events`/`sync_logs`/`sync_dashboard`
SupabaseDB-->>Controller: results
Controller-->>Client: HTTP response
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related issues
Possibly related PRs
Suggested reviewers
Actionable comments posted: 25
🧹 Nitpick comments (10)
apps/backend/env.example (2)
2-4: Add security warning for sensitive data. Consider adding comments to warn developers about securing these sensitive values in production environments.

```diff
 # Database Configuration
+# WARNING: Replace with actual values and keep secure in production
 SUPABASE_URL=your_supabase_url
 SUPABASE_ANON_KEY=your_supabase_anon_key
 SUPABASE_SERVICE_ROLE_KEY=your_supabase_service_role_key
```
22-24: Consider more granular sync configuration. The sync service configuration might benefit from additional tuning parameters for production environments.

```diff
 # Sync Service Configuration
 SYNC_POLL_INTERVAL=5000
 SYNC_MAX_RETRIES=3
 SYNC_RETRY_DELAY=1000
+# Maximum events to process per batch
+SYNC_BATCH_SIZE=100
+# Timeout for blockchain requests (ms)
+SYNC_REQUEST_TIMEOUT=30000
```

apps/backend/database/migrations/001_create_sync_tables.sql (1)
6-6: Consider removing hardcoded primary key default. Using a hardcoded default value for the primary key could cause issues if multiple sync states need to be tracked in the future.

```diff
 CREATE TABLE IF NOT EXISTS public.sync_state (
-  id INTEGER PRIMARY KEY DEFAULT 1,
+  id SERIAL PRIMARY KEY,
   last_processed_block BIGINT NOT NULL DEFAULT 0,
```

apps/backend/src/routes/sync.routes.ts (1)
15-18: Consider adding input validation middleware. Sync operations could benefit from request validation to prevent malformed requests from causing issues.

Add validation middleware for sensitive operations:

```diff
+import { body, query } from 'express-validator';
+import { validationMiddleware } from '../middleware/validation.middleware';
+
+// Validation for manual sync trigger
+const validateSyncTrigger = [
+  body('fromBlock').optional().isNumeric(),
+  body('toBlock').optional().isNumeric(),
+  validationMiddleware
+];
+
 // Sync service management
 router.post('/start', requireAdmin, syncController.startSync.bind(syncController));
 router.post('/stop', requireAdmin, syncController.stopSync.bind(syncController));
 router.get('/status', requireAdmin, syncController.getStatus.bind(syncController));
-router.post('/trigger', requireAdmin, syncController.triggerManualSync.bind(syncController));
+router.post('/trigger', requireAdmin, validateSyncTrigger, syncController.triggerManualSync.bind(syncController));
```

apps/backend/src/tests/mockTypes.ts (2)
2-14: Consider adding return type information to mock methods. The current mock interface doesn't specify return types, which could lead to less precise type checking in tests.

```diff
 export interface MockSupabaseQueryBuilder {
-  select: jest.Mock;
-  insert: jest.Mock;
-  update: jest.Mock;
-  delete: jest.Mock;
-  upsert: jest.Mock;
-  eq: jest.Mock;
-  not: jest.Mock;
-  order: jest.Mock;
-  limit: jest.Mock;
-  range: jest.Mock;
-  single: jest.Mock;
+  select: jest.Mock<MockSupabaseQueryBuilder, [string?]>;
+  insert: jest.Mock<MockSupabaseQueryBuilder, [Record<string, unknown> | Record<string, unknown>[]]>;
+  update: jest.Mock<MockSupabaseQueryBuilder, [Record<string, unknown>]>;
+  delete: jest.Mock<MockSupabaseQueryBuilder, []>;
+  upsert: jest.Mock<MockSupabaseQueryBuilder, [Record<string, unknown> | Record<string, unknown>[]]>;
+  eq: jest.Mock<MockSupabaseQueryBuilder, [string, unknown]>;
+  not: jest.Mock<MockSupabaseQueryBuilder, [string, unknown]>;
+  order: jest.Mock<MockSupabaseQueryBuilder, [string, { ascending?: boolean }?]>;
+  limit: jest.Mock<MockSupabaseQueryBuilder, [number]>;
+  range: jest.Mock<MockSupabaseQueryBuilder, [number, number]>;
+  single: jest.Mock<Promise<{ data: unknown; error: unknown }>, []>;
 }
```
16-18: Add method chaining support to mock client. Consider whether the mock client should support method chaining patterns commonly used with Supabase.

```diff
 export interface MockSupabaseClient {
   from: jest.Mock<MockSupabaseQueryBuilder>;
+  // Add other client methods if needed for comprehensive mocking
+  // auth?: jest.Mock;
+  // storage?: jest.Mock;
 }
```

apps/backend/src/tests/sync.service.test.ts (1)
66-111: Extract repeated mock setup to reduce duplication. The Supabase mock setup is repeated across multiple tests. Consider extracting it to a helper function.

Add a helper function at the top of the describe block:

```typescript
describe('Service Lifecycle', () => {
  const setupSupabaseMock = () => {
    const mockSupabase = supabase as jest.Mocked<typeof supabase>;
    mockSupabase.from.mockReturnValue({
      select: jest.fn(() => ({
        single: jest.fn(() => Promise.resolve({ data: null, error: null })),
      })),
    } as unknown as ReturnType<typeof supabase.from>);
    return mockSupabase;
  };

  it('should start and stop service correctly', async () => {
    setupSupabaseMock();
    // ... rest of the test
  });

  // Apply to other tests as well
});
```

apps/backend/src/services/sync.service.ts (1)
223-242: Missing critical implementation for event fetching. The `getContractEvents` method is not implemented and always returns an empty array, making the sync service non-functional.

This is a critical missing piece. Would you like me to help implement the actual event querying from the Stellar network or create an issue to track this implementation?
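As a starting point, here is a minimal sketch of what that query could look like, assuming the standard Soroban RPC `getEvents` JSON-RPC method and plain `fetch`; the exact response shape should be verified against the RPC version in use, and this is not the PR's implementation:

```typescript
// Sketch only: fetch contract events via Soroban RPC's getEvents JSON-RPC method,
// filtered to a single contract. Field names on result.events may need adjusting.
async function fetchContractEvents(
  rpcUrl: string,
  contractId: string,
  startLedger: number
): Promise<unknown[]> {
  const response = await fetch(rpcUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      jsonrpc: '2.0',
      id: 1,
      method: 'getEvents',
      params: {
        startLedger,
        filters: [{ type: 'contract', contractIds: [contractId] }],
        pagination: { limit: 100 },
      },
    }),
  });
  const { result, error } = await response.json();
  if (error) {
    throw new Error(`getEvents failed: ${JSON.stringify(error)}`);
  }
  // Entries typically carry id, ledger, contractId and XDR-encoded topics/value.
  return result?.events ?? [];
}
```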
apps/backend/src/blockchain/eventListener.ts (2)
1-1: Remove unused import. The `xdr` import from Stellar SDK is not used in the code.

Apply this diff:

```diff
-import { Contract, Networks, xdr } from '@stellar/stellar-sdk';
+import { Contract, Networks } from '@stellar/stellar-sdk';
```
154-165: Consider implementing exponential backoff for retries. The current retry mechanism uses a fixed delay. For better resilience, consider implementing exponential backoff.

Apply this diff to implement exponential backoff:

```diff
 } else {
   // Wait before retrying
-  await new Promise((resolve) => setTimeout(resolve, this.config.retryDelay));
+  const backoffDelay = this.config.retryDelay * Math.pow(2, this.retryCount - 1);
+  await new Promise((resolve) => setTimeout(resolve, Math.min(backoffDelay, 60000))); // Cap at 60 seconds
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- apps/backend/database/migrations/001_create_sync_tables.sql (1 hunks)
- apps/backend/env.example (1 hunks)
- apps/backend/src/blockchain/eventListener.ts (1 hunks)
- apps/backend/src/controllers/sync.controller.ts (1 hunks)
- apps/backend/src/index.ts (3 hunks)
- apps/backend/src/routes/sync.routes.ts (1 hunks)
- apps/backend/src/services/logging.service.ts (1 hunks)
- apps/backend/src/services/sync.service.ts (1 hunks)
- apps/backend/src/tests/mockTypes.ts (1 hunks)
- apps/backend/src/tests/sync.service.test.ts (1 hunks)
🔇 Additional comments (5)
apps/backend/env.example (1)
34-35: Verify rate limiting defaults for production readiness. The current rate limiting configuration allows 100 requests per 15 minutes, which might be restrictive for a production blockchain synchronization service.
Please verify these rate limiting values are appropriate for the expected API usage patterns, particularly for admin sync operations that might require higher throughput.
apps/backend/database/migrations/001_create_sync_tables.sql (1)
81-88: Ensure `auth.role()` is available

The RLS policies in your migrations reference `auth.role()`, but we don't define that function anywhere in our SQL. Supabase provides `auth.role()` out of the box, so please confirm it's present in your target database. If you run these scripts locally or outside of Supabase, you may need to create a stub or include the `auth` schema and `role()` function yourself.

Affected files:
- apps/backend/database/setup.sql
- apps/backend/database/migrations/00002_storage_and_rls.sql
- apps/backend/database/migrations/00004_create_wallet_auth_tables.sql
- apps/backend/database/migrations/00005_create_profile_table.sql
- apps/backend/database/migrations/001_create_sync_tables.sql
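If a local stand-in is needed, a minimal stub along these lines (an assumption, not Supabase's exact definition) lets the migrations run outside Supabase:

```sql
-- Local-only stub for auth.role(); Supabase ships its own implementation.
CREATE SCHEMA IF NOT EXISTS auth;

CREATE OR REPLACE FUNCTION auth.role() RETURNS text
LANGUAGE sql STABLE
AS $$
  -- Falls back to 'anon' when no JWT claims are present in the session.
  SELECT COALESCE(
    current_setting('request.jwt.claims', true)::json ->> 'role',
    'anon'
  );
$$;
```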
apps/backend/src/index.ts (1)
13-13: LGTM: Clean integration of sync functionality. The integration of sync routes and service initialization follows established patterns and maintains consistency with the existing codebase structure.
Also applies to: 15-15, 63-63
apps/backend/src/tests/mockTypes.ts (1)
21-25: LGTM: Clean test access interface design. The `SyncServiceTestAccess` interface provides a clean way to test private methods while maintaining type safety. This is a good pattern for comprehensive unit testing.

apps/backend/src/controllers/sync.controller.ts (1)
129-131: Fix incorrect undefined check for processed parameter. The condition `processed !== undefined` will always be true because `req.query.processed` returns a string or undefined. The check should be for the string value.

Apply this diff:

```diff
-if (processed !== undefined) {
+if (processed) {
   query = query.eq('processed', processed === 'true');
 }
```

Likely an incorrect or invalid review comment.
```sql
CREATE TABLE IF NOT EXISTS public.sync_events (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  event_id TEXT NOT NULL UNIQUE,
  event_type TEXT NOT NULL CHECK (event_type IN ('booking_created', 'booking_updated', 'booking_cancelled', 'payment_confirmed')),
```
Remove duplicate constraint definition.
The event_type constraint is defined twice - once inline and once as a separate constraint.
```diff
   event_type TEXT NOT NULL CHECK (event_type IN ('booking_created', 'booking_updated', 'booking_cancelled', 'payment_confirmed')),
   booking_id TEXT,
   property_id TEXT,
   user_id TEXT,
   event_data JSONB NOT NULL,
   processed BOOLEAN NOT NULL DEFAULT FALSE,
   error TEXT,
   created_at TIMESTAMPTZ NOT NULL DEFAULT timezone('utc'::text, now()),
   processed_at TIMESTAMPTZ,
-  CONSTRAINT valid_event_type CHECK (event_type IN ('booking_created', 'booking_updated', 'booking_cancelled', 'payment_confirmed'))
```

Also applies to: 29-29
🤖 Prompt for AI Agents
In apps/backend/database/migrations/001_create_sync_tables.sql at lines 19 and
29, the CHECK constraint for the event_type column is defined twice, once inline
and once as a separate constraint. Remove one of these duplicate constraint
definitions to avoid redundancy and potential conflicts, keeping only a single
CHECK constraint for event_type.
```sql
  COUNT(CASE WHEN se.processed = false AND se.error IS NOT NULL THEN 1 END) as failed_events_count,
  MAX(se.created_at) as last_event_time
FROM public.sync_state s
LEFT JOIN public.sync_events se ON true
```
🛠️ Refactor suggestion
Optimize dashboard view join condition.
The LEFT JOIN with ON true creates a Cartesian product, which could be inefficient for large datasets.
```diff
 FROM public.sync_state s
-LEFT JOIN public.sync_events se ON true
+LEFT JOIN public.sync_events se ON s.id IS NOT NULL
```

Or consider restructuring the view to use subqueries for better performance:
```sql
CREATE OR REPLACE VIEW public.sync_dashboard AS
SELECT
  s.last_processed_block,
  s.total_events_processed,
  s.failed_events,
  s.last_sync_time,
  s.updated_at as last_updated,
  (SELECT COUNT(*) FROM public.sync_events) as total_events,
  (SELECT COUNT(*) FROM public.sync_events WHERE processed = false) as pending_events,
  (SELECT COUNT(*) FROM public.sync_events WHERE processed = false AND error IS NOT NULL) as failed_events_count,
  (SELECT MAX(created_at) FROM public.sync_events) as last_event_time
FROM public.sync_state s;
```

🤖 Prompt for AI Agents
In apps/backend/database/migrations/001_create_sync_tables.sql at line 103, the
LEFT JOIN with ON true causes a Cartesian product, leading to inefficiency.
Replace the join with subqueries in the SELECT clause to aggregate the needed
sync_events data, as shown in the suggested view definition, to optimize
performance and avoid unnecessary row multiplication.
```sql
GRANT SELECT ON public.sync_dashboard TO authenticated;
GRANT ALL ON public.sync_state TO authenticated;
GRANT ALL ON public.sync_events TO authenticated;
GRANT ALL ON public.sync_logs TO authenticated;
```
🛠️ Refactor suggestion
Reconsider broad permissions for authenticated users.
Granting ALL permissions to all authenticated users on sync tables might be too permissive for a production environment.
```diff
 -- Grant permissions
 GRANT SELECT ON public.sync_dashboard TO authenticated;
-GRANT ALL ON public.sync_state TO authenticated;
-GRANT ALL ON public.sync_events TO authenticated;
-GRANT ALL ON public.sync_logs TO authenticated;
+-- Grant specific permissions based on actual service needs
+GRANT SELECT, INSERT, UPDATE ON public.sync_state TO service_role;
+GRANT SELECT, INSERT, UPDATE ON public.sync_events TO service_role;
+GRANT SELECT, INSERT ON public.sync_logs TO service_role;
```

🤖 Prompt for AI Agents
In apps/backend/database/migrations/001_create_sync_tables.sql around lines 107
to 110, the GRANT statements give ALL permissions on sync_state, sync_events,
and sync_logs tables to all authenticated users, which is too broad. Modify
these statements to grant only the necessary minimal permissions (e.g., SELECT,
INSERT, UPDATE) instead of ALL, based on the actual access requirements for
authenticated users in production.
```
SOROBAN_NETWORK_PASSPHRASE=Test SDF Network ; September 2015
STELLAR_SECRET_KEY=your_stellar_secret_key
```
Fix duplicate network passphrase configuration.
The STELLAR_NETWORK_PASSPHRASE appears in both Soroban and legacy configurations with the same value, which could lead to confusion and maintenance issues.
```diff
 SOROBAN_NETWORK_PASSPHRASE=Test SDF Network ; September 2015
 STELLAR_SECRET_KEY=your_stellar_secret_key

 # Legacy Stellar Configuration (for backward compatibility)
 STELLAR_RPC_URL=https://horizon-testnet.stellar.org
 BOOKING_CONTRACT_ADDRESS=your_booking_contract_address
 STELLAR_SOURCE_ACCOUNT=your_stellar_source_account
-STELLAR_NETWORK_PASSPHRASE=Test SDF Network ; September 2015
+# STELLAR_NETWORK_PASSPHRASE uses SOROBAN_NETWORK_PASSPHRASE for consistency
```

Also applies to: 19-19
🤖 Prompt for AI Agents
In apps/backend/env.example around lines 12 to 13 and line 19, there is a
duplicate configuration for the network passphrase under different names
(SOROBAN_NETWORK_PASSPHRASE and STELLAR_NETWORK_PASSPHRASE) with the same value.
Remove one of these duplicate entries to avoid confusion and maintain a single
source of truth for the network passphrase configuration.
```typescript
private async handleBookingCreated(event: Record<string, unknown>): Promise<void> {
  const eventData = event.data as BlockchainEventData;
  const { data: existingBooking } = await supabase
    .from('bookings')
    .select('*')
    .eq('escrow_address', eventData.escrow_id || '')
    .single();

  if (existingBooking) {
    // Update existing booking with blockchain data
    await supabase
      .from('bookings')
      .update({
        status: this.mapBlockchainStatus(eventData.status || ''),
        updated_at: new Date().toISOString(),
      })
      .eq('id', existingBooking.id);
  } else {
    // Create new booking record
    await supabase.from('bookings').insert({
      property_id: eventData.property_id || '',
      user_id: eventData.user_id || '',
      dates: {
        from: new Date((eventData.start_date || 0) * 1000).toISOString(),
        to: new Date((eventData.end_date || 0) * 1000).toISOString(),
      },
      guests: eventData.guests || 1,
      total: eventData.total_price || 0,
      deposit: eventData.deposit || 0,
      escrow_address: eventData.escrow_id || '',
      status: this.mapBlockchainStatus(eventData.status || ''),
    });
  }
}
```
🛠️ Refactor suggestion
Add data validation and improve date handling
The method lacks validation for required fields and the Unix timestamp conversion doesn't specify timezone handling.
Apply this diff to add validation:
```diff
 private async handleBookingCreated(event: Record<string, unknown>): Promise<void> {
   const eventData = event.data as BlockchainEventData;
+
+  // Validate required fields
+  if (!eventData.escrow_id || !eventData.property_id || !eventData.user_id) {
+    throw new Error('Missing required fields in booking created event');
+  }
+
   const { data: existingBooking } = await supabase
     .from('bookings')
     .select('*')
-    .eq('escrow_address', eventData.escrow_id || '')
+    .eq('escrow_address', eventData.escrow_id)
     .single();
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
private async handleBookingCreated(event: Record<string, unknown>): Promise<void> {
  const eventData = event.data as BlockchainEventData;

  // Validate required fields
  if (!eventData.escrow_id || !eventData.property_id || !eventData.user_id) {
    throw new Error('Missing required fields in booking created event');
  }

  const { data: existingBooking } = await supabase
    .from('bookings')
    .select('*')
    .eq('escrow_address', eventData.escrow_id)
    .single();

  if (existingBooking) {
    // Update existing booking with blockchain data
    await supabase
      .from('bookings')
      .update({
        status: this.mapBlockchainStatus(eventData.status || ''),
        updated_at: new Date().toISOString(),
      })
      .eq('id', existingBooking.id);
  } else {
    // Create new booking record
    await supabase.from('bookings').insert({
      property_id: eventData.property_id || '',
      user_id: eventData.user_id || '',
      dates: {
        from: new Date((eventData.start_date || 0) * 1000).toISOString(),
        to: new Date((eventData.end_date || 0) * 1000).toISOString(),
      },
      guests: eventData.guests || 1,
      total: eventData.total_price || 0,
      deposit: eventData.deposit || 0,
      escrow_address: eventData.escrow_id || '',
      status: this.mapBlockchainStatus(eventData.status || ''),
    });
  }
}
```
🤖 Prompt for AI Agents
In apps/backend/src/services/sync.service.ts around lines 286 to 319, the
handleBookingCreated method lacks validation for required fields and does not
explicitly handle timezone when converting Unix timestamps to ISO strings. Add
checks to ensure essential fields like escrow_id, property_id, and user_id are
present before proceeding. For date conversion, explicitly convert Unix
timestamps to UTC ISO strings by using appropriate Date methods or libraries
that handle timezone correctly. This will prevent invalid data insertion and
ensure consistent date handling.
```typescript
describe('Error Handling', () => {
  it('should handle network errors gracefully', async () => {
    // Mock network error
    global.fetch = jest.fn(() =>
      Promise.reject(new Error('Network error'))
    ) as unknown as typeof fetch;

    const mockSupabase = supabase as jest.Mocked<typeof supabase>;
    mockSupabase.from.mockReturnValue({
      select: jest.fn(() => ({
        single: jest.fn(() => Promise.resolve({ data: null, error: null })),
      })),
      upsert: jest.fn(() => Promise.resolve({ data: null, error: null })),
    } as unknown as ReturnType<typeof supabase.from>);

    await syncService.start();

    // Should not crash the service
    expect(syncService.getStatus().isRunning).toBe(true);
  });
```
🛠️ Refactor suggestion
Clean up global fetch mock to prevent test pollution
Mocking global fetch without restoring it can affect other tests. Ensure proper cleanup.
Apply this diff:
```diff
 it('should handle network errors gracefully', async () => {
+  const originalFetch = global.fetch;
   // Mock network error
   global.fetch = jest.fn(() =>
     Promise.reject(new Error('Network error'))
   ) as unknown as typeof fetch;

   const mockSupabase = supabase as jest.Mocked<typeof supabase>;
   // ... rest of the test

   // Should not crash the service
   expect(syncService.getStatus().isRunning).toBe(true);
+
+  // Restore original fetch
+  global.fetch = originalFetch;
 });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
describe('Error Handling', () => {
  it('should handle network errors gracefully', async () => {
    // Save and mock fetch
    const originalFetch = global.fetch;
    global.fetch = jest.fn(() =>
      Promise.reject(new Error('Network error'))
    ) as unknown as typeof fetch;

    const mockSupabase = supabase as jest.Mocked<typeof supabase>;
    mockSupabase.from.mockReturnValue({
      select: jest.fn(() => ({
        single: jest.fn(() => Promise.resolve({ data: null, error: null })),
      })),
      upsert: jest.fn(() => Promise.resolve({ data: null, error: null })),
    } as unknown as ReturnType<typeof supabase.from>);

    await syncService.start();

    // Should not crash the service
    expect(syncService.getStatus().isRunning).toBe(true);

    // Restore original fetch
    global.fetch = originalFetch;
  });
});
```
🤖 Prompt for AI Agents
In apps/backend/src/tests/sync.service.test.ts around lines 290 to 309, the
global fetch is mocked but not restored, which can cause side effects in other
tests. Fix this by saving the original global fetch before mocking it and
restoring it after the test completes, using afterEach or finally block to
ensure cleanup regardless of test outcome.
Hi @MPSxDev, great implementation! Just a few adjustments left: you only need to address the valid suggestions from CodeRabbit, which essentially come down to this: when mocking global.fetch, make sure to restore it at the end of the test to avoid test contamination. All of this aligns well with the current structure of the SyncService, its endpoints, and the tests included in the PR.
Actionable comments posted: 1
♻️ Duplicate comments (2)
apps/backend/src/index.ts (2)
25-31: Don't swallow blockchain sync startup failures — fail fast or rethrow. If the sync service fails to start, the app continues running in a degraded state. This mirrors prior feedback.
Apply this diff to escalate the failure (exit in production, rethrow otherwise):
```diff
 // Initialize blockchain sync service
 try {
   await syncService.start();
   console.log('✅ Blockchain synchronization service started');
 } catch (error) {
   console.error('❌ Failed to start blockchain sync service:', error);
-}
+  if (process.env.NODE_ENV === 'production') {
+    process.exit(1);
+  }
+  throw error;
+}
```
75-76: Await initializeServices; only log success on resolution. Currently the success log prints immediately, even if initialization fails. This was previously flagged; please handle the promise and exit in production on failure.

```diff
-initializeServices();
-console.log('✅ All services initialized successfully');
+initializeServices()
+  .then(() => {
+    console.log('✅ All services initialized successfully');
+  })
+  .catch((error) => {
+    console.error('❌ Failed to initialize services:', error);
+    if (process.env.NODE_ENV === 'production') process.exit(1);
+  });
```
🧹 Nitpick comments (4)
apps/backend/src/index.ts (4)
15-15: Ensure syncService lifecycle is well-managed (idempotent start, graceful shutdown). The singleton import is fine (see services/sync.service.ts export). Please verify:
- syncService.start() is idempotent if called more than once (e.g., hot reloads).
- A corresponding stop()/dispose() is available and called on process termination.
If missing, I can help add a stop method and signal handlers.
20-24: Init sequence is reasonable; consider initializing services before accepting requests. Starting cleanup and other services after the server begins accepting requests can expose endpoints before dependencies are ready. Prefer initializing services first, then calling app.listen, or gate readiness via a health/readiness endpoint until initialization completes.
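One possible shape for that readiness gate, sketched with an assumed `/ready` path and module-level flag (not code from this PR):

```typescript
import express from 'express';

// Assumption: initializeServices() is the async initializer already defined in index.ts.
declare function initializeServices(): Promise<void>;

const app = express();
let servicesReady = false;

// Report 503 until initialization finishes so traffic isn't routed to a half-ready app.
app.get('/ready', (_req, res) => {
  if (!servicesReady) {
    res.status(503).json({ ready: false });
    return;
  }
  res.json({ ready: true });
});

initializeServices()
  .then(() => {
    servicesReady = true;
  })
  .catch((error) => {
    console.error('Initialization failed:', error);
  });
```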
63-63: Secure /api/sync for admin-only access. These endpoints manage synchronization. Ensure the route file enforces appropriate auth/role guards (e.g., admin-only) in addition to the global rate limiter.
73-81: Add graceful shutdown to cleanly stop sync and schedulers. To avoid orphaned jobs or dangling connections on SIGINT/SIGTERM, consider adding:

```typescript
// Place near startup code
function setupGracefulShutdown() {
  const shutdown = async (signal: string) => {
    try {
      console.log(`\nReceived ${signal}, shutting down gracefully...`);
      // If available:
      // await syncService.stop();
      // await stopCleanupScheduler();
    } catch (e) {
      console.error('Error during shutdown:', e);
    } finally {
      process.exit(0);
    }
  };
  ['SIGINT', 'SIGTERM'].forEach(sig => process.on(sig as NodeJS.Signals, () => void shutdown(sig)));
}
setupGracefulShutdown();
```

If stopCleanupScheduler() isn't exported from cleanup-schedular, I can add it.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
- apps/backend/src/index.ts (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
apps/backend/src/index.ts (2)
- apps/backend/src/services/cleanup-schedular.ts (2): runInitialCleanup (29-37), startCleanupScheduler (5-18)
- apps/backend/src/services/sync.service.ts (1): syncService (471-471)
🔇 Additional comments (1)
apps/backend/src/index.ts (1)
13-13: Wire up /api/sync routes — LGTM. The route registration for the sync admin endpoints is correctly namespaced under /api/sync.
Actionable comments posted: 5
♻️ Duplicate comments (11)
apps/backend/src/services/logging.service.ts (1)
245-253: Fix inconsistent logBlockchainError signature. The `logBlockchainError` method signature changed to accept `operation` and `error` parameters directly, but this breaks consistency with `logBlockchainSuccess`, which accepts a `TransactionLog` parameter. This makes the API inconsistent and potentially breaks the intended workflow of updating an existing log entry to failed status.

apps/backend/src/services/sync.service.ts (3)
334-336: Fix incorrect RPC endpoint URL construction. The URL construction for fetching the latest ledger is incorrect. Stellar RPC uses JSON-RPC protocol, not REST endpoints.
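For reference, a hedged sketch of the JSON-RPC call (method name per the public Soroban RPC spec; adjust if the deployment differs):

```typescript
// Sketch: query the latest ledger sequence via JSON-RPC rather than a REST-style URL.
async function getLatestLedgerSequence(rpcUrl: string): Promise<number> {
  const response = await fetch(rpcUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'getLatestLedger' }),
  });
  const { result, error } = await response.json();
  if (error) {
    throw new Error(`getLatestLedger failed: ${JSON.stringify(error)}`);
  }
  return result.sequence as number;
}
```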
416-419: Reconsider error propagation strategy. The method marks the event as failed but then re-throws the error, which could halt the entire sync process for a single failed event.
Apply this diff to allow the sync process to continue with other events:
```diff
 } catch (error) {
   console.error(`Error processing event ${event.id as string}:`, error);
   await this.markEventFailed(event.id as string, error as Error);
-  throw error;
+  await loggingService.logBlockchainError('processEvent', error);
+  // Don't throw - allow sync to continue with other events
 }
```
426-459: Add data validation for required fields. The method should validate that essential fields are present before processing to prevent invalid data insertion.
Apply this diff to add validation:
```diff
 private async handleBookingCreated(event: Record<string, unknown>): Promise<void> {
   const eventData = event.data as BlockchainEventData;
+
+  // Validate required fields
+  if (!eventData.escrow_id || !eventData.property_id || !eventData.user_id) {
+    throw new Error('Missing required fields in booking created event');
+  }
+
   const { data: existingBooking } = await supabase
     .from('bookings')
     .select('*')
-    .eq('escrow_address', eventData.escrow_id || '')
+    .eq('escrow_address', eventData.escrow_id)
     .single();
```

apps/backend/src/tests/sync.service.test.ts (3)
458-467: Test relies on private method access. Accessing private methods through type casting breaks encapsulation and makes tests fragile. The status mapping logic should be tested through the public API.
Consider extracting the status mapping logic to a separate utility function that can be tested independently, or test it indirectly through the event processing flow.
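A sketch of that extraction, using placeholder status names (the real mapping lives in `mapBlockchainStatus` in sync.service.ts and should be copied over, not these hypothetical values):

```typescript
// Hypothetical pure utility; replace the placeholder statuses with the ones
// actually emitted by the contract and expected by the bookings table.
export function mapBlockchainStatus(status: string): string {
  const mapping: Record<string, string> = {
    pending: 'pending',
    confirmed: 'confirmed',
    cancelled: 'cancelled',
    completed: 'completed',
  };
  return mapping[status] ?? 'pending';
}

// The test then exercises the utility directly, with no private-member access.
describe('mapBlockchainStatus', () => {
  it('maps known statuses and falls back for unknown ones', () => {
    expect(mapBlockchainStatus('confirmed')).toBe('confirmed');
    expect(mapBlockchainStatus('unknown')).toBe('pending');
  });
});
```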
234-241: Test the actual implementation instead of mocking the method under test. The test uses type casting to access a private method directly, which breaks encapsulation and makes tests brittle. Private methods should be tested through the public API.
Consider refactoring to test through the public API that internally uses this method. If direct testing is needed, consider making the method protected or extracting the logic to a separate testable unit.
471-489: Clean up global fetch mock after test. The test modifies global.fetch but doesn't restore it, which could affect other tests.
Apply this diff to restore the original fetch:
```diff
 it('should handle network errors gracefully', async () => {
+  const originalFetch = global.fetch;
   // Mock network error
   global.fetch = jest.fn(() =>
     Promise.reject(new Error('Network error'))
   ) as unknown as typeof fetch;

-  const mockSupabase = supabase as jest.Mocked<typeof supabase>;
-  mockSupabase.from.mockReturnValue({
-    select: jest.fn(() => ({
-      single: jest.fn(() => Promise.resolve({ data: null, error: null })),
-    })),
-    upsert: jest.fn(() => Promise.resolve({ data: null, error: null })),
-  } as unknown as ReturnType<typeof supabase.from>);
+  try {
+    const mockSupabase = supabase as jest.Mocked<typeof supabase>;
+    mockSupabase.from.mockReturnValue({
+      select: jest.fn(() => ({
+        single: jest.fn(() => Promise.resolve({ data: null, error: null })),
+      })),
+      upsert: jest.fn(() => Promise.resolve({ data: null, error: null })),
+    } as unknown as ReturnType<typeof supabase.from>);

-  await syncService.start();
+    await syncService.start();

-  // Should not crash the service
-  expect(syncService.getStatus().isRunning).toBe(true);
+    // Should not crash the service
+    expect(syncService.getStatus().isRunning).toBe(true);
+  } finally {
+    // Restore original fetch
+    global.fetch = originalFetch;
+  }
 });
```

apps/backend/src/index.ts (1)
20-32: Add production exit strategy for critical service failures. The error handling logs failures but allows the application to continue running, which could lead to an incomplete system state in production.
Apply this diff to add proper production error handling:
```diff
 async function initializeServices() {
   // Initialize cleanup scheduler
   await runInitialCleanup();
   startCleanupScheduler();

   // Initialize blockchain sync service
   try {
     await syncService.start();
     console.log('✅ Blockchain synchronization service started');
   } catch (error) {
     console.error('❌ Failed to start blockchain sync service:', error);
+    // Exit on critical service failure in production
+    if (process.env.NODE_ENV === 'production') {
+      console.error('💀 Exiting due to critical service failure');
+      process.exit(1);
+    }
   }
 }
```

apps/backend/src/controllers/sync.controller.ts (3)
12-17: Good fix: awaited async logging calls across endpoints. Thanks for addressing the earlier feedback by awaiting logBlockchainSuccess/logBlockchainOperation where appropriate. This prevents lost logs and unhandled rejections.
Also applies to: 38-43, 87-92, 308-313, 425-433
282-306: Batch failed-event retry and trigger a single manual sync. Current loop triggers a manual sync per event, which is inefficient and risks overload. Batch the updates and call syncService.triggerManualSync() once.
Apply this diff:
```diff
-let retriedCount = 0;
-let successCount = 0;
-
-for (const event of failedEvents || []) {
-  try {
-    retriedCount++;
-
-    // Clear error and mark as unprocessed
-    await supabase
-      .from('sync_events')
-      .update({
-        error: null,
-        processed: false,
-        processed_at: null,
-      })
-      .eq('id', event.id);
-
-    // Trigger manual sync to reprocess
-    await syncService.triggerManualSync();
-
-    successCount++;
-  } catch (retryError) {
-    console.error(`Failed to retry event ${event.id}:`, retryError);
-  }
-}
+const ids = (failedEvents ?? []).map((e) => e.id);
+let retriedCount = ids.length;
+let successCount = 0;
+if (ids.length > 0) {
+  const { data: updated, error: updateError } = await supabase
+    .from('sync_events')
+    .update({
+      error: null,
+      processed: false,
+      processed_at: null,
+    })
+    .in('id', ids)
+    .select('id');
+  if (updateError) {
+    throw updateError;
+  }
+  successCount = updated?.length || 0;
+  // Trigger manual sync once after updating all events
+  await syncService.triggerManualSync();
+}
```
342-433: Make clearOldData atomic via a Postgres RPC (transaction); current code is only "best-effort". The comment suggests transactionality, but Supabase JS doesn't support multi-statement transactions; the current two deletes can partially succeed. Move deletion into a SQL function and call it via supabase.rpc to guarantee atomicity.
Apply this diff to the controller:
```diff
-// Use Supabase's transaction support to ensure atomicity
-// First, get the count of records to be deleted for validation
-const { count: eventsCount, error: countError } = await supabase
-  .from('sync_events')
-  .select('*', { count: 'exact', head: true })
-  .lt('created_at', cutoffDate.toISOString());
-
-if (countError) {
-  throw new Error(`Failed to count old events: ${countError.message}`);
-}
-
-const { count: logsCount, error: logsCountError } = await supabase
-  .from('sync_logs')
-  .select('*', { count: 'exact', head: true })
-  .lt('created_at', cutoffDate.toISOString());
-
-if (logsCountError) {
-  throw new Error(`Failed to count old logs: ${logsCountError.message}`);
-}
-
-// Perform both deletions in sequence, but with proper error handling
-// If either fails, we'll handle it gracefully
-let eventsDeleted = 0;
-let logsDeleted = 0;
-let partialFailure = false;
-let failureDetails = '';
-
-try {
-  // Delete old events
-  const { data: eventsResult, error: eventsError } = await supabase
-    .from('sync_events')
-    .delete()
-    .lt('created_at', cutoffDate.toISOString())
-    .select('id');
-
-  if (eventsError) {
-    throw new Error(`Failed to delete old events: ${eventsError.message}`);
-  }
-  eventsDeleted = eventsResult?.length || 0;
-
-  // Delete old logs
-  const { data: logsResult, error: logsError } = await supabase
-    .from('sync_logs')
-    .delete()
-    .lt('created_at', cutoffDate.toISOString())
-    .select('id');
-
-  if (logsError) {
-    throw new Error(`Failed to delete old logs: ${logsError.message}`);
-  }
-  logsDeleted = logsResult?.length || 0;
-} catch (deletionError) {
-  // If deletion fails, we have a partial failure scenario
-  partialFailure = true;
-  failureDetails =
-    deletionError instanceof Error ? deletionError.message : 'Unknown deletion error';
-
-  // Log the partial failure
-  await loggingService.logBlockchainOperation('clearOldData_partial_failure', {
-    days,
-    cutoffDate: cutoffDate.toISOString(),
-    eventsDeleted,
-    logsDeleted,
-    failureDetails,
-    expectedEventsCount: eventsCount || 0,
-    expectedLogsCount: logsCount || 0,
-  });
-
-  // Return partial success response
-  res.json({
-    success: true,
-    message: `Partially cleared sync data older than ${days} days. Events deleted: ${eventsDeleted}, Logs deleted: ${logsDeleted}. Some deletions failed.`,
-    cutoffDate: cutoffDate.toISOString(),
-    eventsDeleted,
-    logsDeleted,
-    partialFailure: true,
-    failureDetails,
-    expectedEventsCount: eventsCount || 0,
-    expectedLogsCount: logsCount || 0,
-  });
-  return;
-}
-
-// Both operations succeeded
-await loggingService.logBlockchainSuccess(logId, {
-  action: 'old_data_cleared',
-  cutoffDate: cutoffDate.toISOString(),
-  eventsDeleted,
-  logsDeleted,
-  expectedEventsCount: eventsCount || 0,
-  expectedLogsCount: logsCount || 0,
-});
-
-res.json({
-  success: true,
-  message: `Cleared sync data older than ${days} days`,
-  cutoffDate: cutoffDate.toISOString(),
-  eventsDeleted,
-  logsDeleted,
-  partialFailure: false,
-  expectedEventsCount: eventsCount || 0,
-  expectedLogsCount: logsCount || 0,
-});
+// Perform both deletions atomically via a Postgres function (transaction)
+const { data: cleared, error: rpcError } = await supabase.rpc('clear_old_sync_data', {
+  p_cutoff: cutoffDate.toISOString(),
+});
+if (rpcError) {
+  throw rpcError;
+}
+const eventsDeleted = cleared?.events_deleted ?? 0;
+const logsDeleted = cleared?.logs_deleted ?? 0;
+
+await loggingService.logBlockchainSuccess(logId, {
+  action: 'old_data_cleared',
+  cutoffDate: cutoffDate.toISOString(),
+  eventsDeleted,
+  logsDeleted,
+});
+
+res.json({
+  success: true,
+  message: `Cleared sync data older than ${days} days`,
+  cutoffDate: cutoffDate.toISOString(),
+  eventsDeleted,
+  logsDeleted,
+  partialFailure: false,
+});
```

Add this SQL function in a migration to ensure transactionality:
```sql
-- apps/backend/database/migrations/00x_clear_old_sync_data_function.sql
create or replace function public.clear_old_sync_data(p_cutoff timestamptz)
returns table (events_deleted bigint, logs_deleted bigint)
language plpgsql
security definer
set search_path = public
as $$
declare
begin
  -- run both deletes in a single transaction
  with deleted_events as (
    delete from public.sync_events where created_at < p_cutoff returning 1
  ),
  deleted_logs as (
    delete from public.sync_logs where created_at < p_cutoff returning 1
  )
  select
    (select count(*) from deleted_events) as events_deleted,
    (select count(*) from deleted_logs) as logs_deleted
  into events_deleted, logs_deleted;
  return next;
end;
$$;
```

Ensure the service role key is used (already configured) and permissions allow executing this function.
🧹 Nitpick comments (4)
apps/backend/src/tests/sync.service.test.ts (3)
308-314: Mock should return more realistic responses. The mock fetch response returns an empty events array, but the comment indicates it should simulate actual events. This inconsistency could hide issues with event processing.
Consider making the mock return actual test events to better validate the sync process:
```diff
 .mockResolvedValueOnce({
-  json: () => Promise.resolve({ events: [] }),
-} as Response); // getContractEvents response (empty for this test)
+  json: () => Promise.resolve({
+    events: [
+      {
+        id: 'test-event-1',
+        type: 'booking_created',
+        // ... other event properties
+      }
+    ]
+  }),
+} as Response); // getContractEvents response with test events
```
508-531: Remove duplicate test case. This test is identical to the test already defined at lines 81-104. Having duplicate tests adds no value and increases maintenance burden.
Apply this diff to remove the duplicate test:
```diff
-it('should use custom polling interval from environment variable', () => {
-  // Save original environment variables
-  const originalPollingInterval = process.env.SYNC_POLL_INTERVAL;
-  const originalRpcUrl = process.env.SOROBAN_RPC_URL;
-  const originalContractId = process.env.SOROBAN_CONTRACT_ID;
-  const originalNetworkPassphrase = process.env.SOROBAN_NETWORK_PASSPHRASE;
-
-  try {
-    // Set custom polling interval
-    process.env.SYNC_POLL_INTERVAL = '10000';
-    process.env.SOROBAN_RPC_URL = 'https://test-rpc.stellar.org';
-    process.env.SOROBAN_CONTRACT_ID = 'test-contract-id';
-    process.env.SOROBAN_NETWORK_PASSPHRASE = 'Test SDF Network ; September 2015';
-
-    const customSyncService = new SyncService();
-    expect(customSyncService.getPollingIntervalMs()).toBe(10000);
-  } finally {
-    // Restore original environment variables
-    process.env.SYNC_POLL_INTERVAL = originalPollingInterval;
-    process.env.SOROBAN_RPC_URL = originalRpcUrl;
-    process.env.SOROBAN_CONTRACT_ID = originalContractId;
-    process.env.SOROBAN_NETWORK_PASSPHRASE = originalNetworkPassphrase;
-  }
-});
```
533-556: Remove second duplicate test case. This test is identical to the test already defined at lines 106-129. Having duplicate tests adds no value.
Apply this diff to remove the duplicate test:
```diff
-it('should fallback to default polling interval for invalid environment value', () => {
-  // Save original environment variables
-  const originalPollingInterval = process.env.SYNC_POLL_INTERVAL;
-  const originalRpcUrl = process.env.SOROBAN_RPC_URL;
-  const originalContractId = process.env.SOROBAN_CONTRACT_ID;
-  const originalNetworkPassphrase = process.env.SOROBAN_NETWORK_PASSPHRASE;
-
-  try {
-    // Set invalid polling interval
-    process.env.SYNC_POLL_INTERVAL = 'invalid';
-    process.env.SOROBAN_RPC_URL = 'https://test-rpc.stellar.org';
-    process.env.SOROBAN_CONTRACT_ID = 'test-contract-id';
-    process.env.SOROBAN_NETWORK_PASSPHRASE = 'Test SDF Network ; September 2015';
-
-    const fallbackSyncService = new SyncService();
-    expect(fallbackSyncService.getPollingIntervalMs()).toBe(5000); // Default fallback
-  } finally {
-    // Restore original environment variables
-    process.env.SYNC_POLL_INTERVAL = originalPollingInterval;
-    process.env.SOROBAN_RPC_URL = originalRpcUrl;
-    process.env.SOROBAN_CONTRACT_ID = originalContractId;
-    process.env.SOROBAN_NETWORK_PASSPHRASE = originalNetworkPassphrase;
-  }
-});
```

apps/backend/src/routes/sync.routes.ts (1)
292-312: Consider using a more robust SQL injection detection approach. While the current SQL injection detection is good, consider using a proper input sanitization library or parameterized queries instead of pattern matching, which can be bypassed.
Consider using a library like `validator` or `express-validator` for more robust input validation:

```diff
+import validator from 'validator';
+
 // Security: Check for potential SQL injection patterns in string parameters
 const stringParams = ['eventType', 'status', 'operation'];
 for (const param of stringParams) {
   if (otherParams[param] && typeof otherParams[param] === 'string') {
     const value = otherParams[param] as string;
-    // Check for common SQL injection patterns
-    if (
-      value.includes(';') ||
-      value.includes('--') ||
-      value.includes('/*') ||
-      value.includes('*/') ||
-      value.includes('union') ||
-      value.includes('select')
-    ) {
+    // Use validator for more robust SQL injection detection
+    if (!validator.isAlphanumeric(value.replace(/_/g, ''))) {
       console.warn(
         `⚠️ Potential SQL injection attempt detected in parameter ${param}: ${value} by user ${req.user?.id}`
       );
       validationErrors.push(`Invalid characters detected in ${param} parameter`);
     }
   }
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (7)
- apps/backend/src/blockchain/eventListener.ts (1 hunks)
- apps/backend/src/controllers/sync.controller.ts (1 hunks)
- apps/backend/src/index.ts (3 hunks)
- apps/backend/src/routes/sync.routes.ts (1 hunks)
- apps/backend/src/services/logging.service.ts (1 hunks)
- apps/backend/src/services/sync.service.ts (1 hunks)
- apps/backend/src/tests/sync.service.test.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/backend/src/blockchain/eventListener.ts
🧰 Additional context used
🧬 Code Graph Analysis (6)
apps/backend/src/routes/sync.routes.ts (3)
- apps/backend/src/types/auth.types.ts (1): AuthRequest (5-7)
- apps/backend/src/config/supabase.ts (1): supabase (205-217)
- apps/backend/src/controllers/sync.controller.ts (1): syncController (456-456)

apps/backend/src/services/logging.service.ts (2)
- apps/backend/src/types/common.types.ts (1): TransactionLog (31-37)
- apps/backend/src/config/supabase.ts (1): supabase (205-217)

apps/backend/src/tests/sync.service.test.ts (2)
- apps/backend/src/services/sync.service.ts (3): syncService (611-611), SyncService (59-608), mapBlockchainStatus (506-514)
- apps/backend/src/config/supabase.ts (1): supabase (205-217)

apps/backend/src/controllers/sync.controller.ts (3)
- apps/backend/src/services/logging.service.ts (1): loggingService (256-256)
- apps/backend/src/services/sync.service.ts (1): syncService (611-611)
- apps/backend/src/config/supabase.ts (1): supabase (205-217)

apps/backend/src/index.ts (2)
- apps/backend/src/services/cleanup-schedular.ts (2): runInitialCleanup (29-37), startCleanupScheduler (5-18)
- apps/backend/src/services/sync.service.ts (1): syncService (611-611)

apps/backend/src/services/sync.service.ts (2)
- apps/backend/src/config/supabase.ts (1): supabase (205-217)
- apps/backend/src/services/logging.service.ts (1): loggingService (256-256)
🔇 Additional comments (4)
apps/backend/src/index.ts (1)
74-84: LGTM! Service initialization properly handled. The async initialization is now properly awaited with comprehensive error handling that logs failures while allowing the server to continue running with degraded functionality.
apps/backend/src/routes/sync.routes.ts (3)
38-96: LGTM! Comprehensive admin verification implementation. The admin check implementation is well-structured with multiple authorization methods, clear logging, and proper environment-based configuration. The TODO comment for production user_roles table implementation is appropriately placed.
98-184: LGTM! Robust admin authentication middleware. The middleware implementation includes proper JWT validation via Supabase, comprehensive error handling, security headers, and detailed logging. The integration with the checkUserAdminStatus function provides flexible admin verification.
186-366: LGTM! Excellent pagination validation with security considerations. The pagination validation middleware is exceptionally well-implemented with:
- Comprehensive parameter validation
- SQL injection pattern detection
- Abuse pattern monitoring
- Detailed error responses
- Security logging
```typescript
async startSync(req: Request, res: Response): Promise<void> {
  try {
    const logId = await loggingService.logBlockchainOperation('startSync', {});

    await syncService.start();

    await loggingService.logBlockchainSuccess(logId, { action: 'sync_started' });

    res.json({
      success: true,
      message: 'Synchronization service started successfully',
      status: syncService.getStatus(),
    });
```
💡 Verification agent
🧩 Analysis chain
Verify idempotency and request coalescing for start/stop/manual sync
To avoid races and duplicate work under concurrent admin requests:
- Ensure start/stop are idempotent and protected by a lock/mutex.
- Coalesce manual sync triggers so only one runs at a time.
If not already enforced in SyncService, consider guarding with an in-memory mutex or queue.
Run this script to check for lock/guard patterns in SyncService:
Also applies to: 36-48, 85-97, 267-279, 334-341
🏁 Script executed:
```bash
#!/bin/bash
# Look for state flags, locks, and idempotent guards in SyncService
rg -n -C3 -g 'apps/backend/src/**/*.ts' -P '(?s)class\s+SyncService\b.*?\{.*?\}' \
  | sed -n '1,200p'

echo -e "\n=== Look for potential guard variables or mutex usage ==="
rg -n -C2 -g 'apps/backend/src/**/*.ts' -P '\b(isRunning|isSyncing|inProgress|locked|mutex|semaphore)\b'

echo -e "\n=== start/stop/manual trigger implementations ==="
rg -n -C3 -g 'apps/backend/src/**/*.ts' -P '\b(start|stop|triggerManualSync)\s*\('
```

Length of output: 25150
Fix required: add a concurrency guard for polling/manual sync (start/stop are already idempotent)
Brief: The repo already uses an isRunning flag to make start()/stop() idempotent, but there is no in-flight guard for pollForEvents — manual triggers call pollForEvents() directly and can overlap with the scheduled polls.
Files/locations to update:
- apps/backend/src/services/sync.service.ts
- private isRunning = false (declaration around line ~64)
- async start() — idempotent check present (lines ~196–210)
- async stop() — idempotent check present (lines ~226–236)
- private async pollForEvents() — currently only checks this.isRunning; no "in-progress" guard (lines ~296–300)
- async triggerManualSync() — simply calls pollForEvents(), allowing concurrent runs (lines ~580–583)
- apps/backend/src/controllers/sync.controller.ts — controllers call start/stop/triggerManualSync (lines ~10–22, ~82–92); safe to rely on service-level fixes.
Actionable recommendation (minimal):
- Add an in-flight guard or async mutex around pollForEvents to prevent reentrancy/overlap. Example pattern:
- private pollingInProgress = false;
- at start of pollForEvents: if (!this.isRunning || this.pollingInProgress) return;
- set pollingInProgress = true before work and clear it in finally.
- Alternatively, serialize manual triggers with a queue or use an async-mutex library if you need queued behavior instead of simple skip.
- Add unit tests that assert:
- calling triggerManualSync() while a poll is running does not start a concurrent poll,
- start()/stop() remain idempotent.
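A compact sketch of that in-flight guard, using the field and method names referenced above (the real class has more state; this is illustrative, not the PR's code):

```typescript
// Sketch of the reentrancy guard around pollForEvents.
class SyncServiceSketch {
  private isRunning = false;
  private pollingInProgress = false;

  private async pollForEvents(): Promise<void> {
    // Skip when the service is stopped or a poll is already in flight.
    if (!this.isRunning || this.pollingInProgress) return;
    this.pollingInProgress = true;
    try {
      // ... fetch and process events ...
    } finally {
      // Always release the guard, even if processing throws.
      this.pollingInProgress = false;
    }
  }

  async triggerManualSync(): Promise<void> {
    // Manual triggers share the guarded path, so they cannot overlap a scheduled poll.
    await this.pollForEvents();
  }
}
```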
🤖 Prompt for AI Agents
In apps/backend/src/services/sync.service.ts (declare near line ~64) add a
private pollingInProgress = false flag and update private async pollForEvents()
(around lines ~296–300) to return immediately if !this.isRunning ||
this.pollingInProgress, set pollingInProgress = true before doing work and clear
it in a finally block to guarantee reset on error; ensure triggerManualSync()
(around lines ~580–583) awaits pollForEvents() (or can remain unchanged if it
already awaits) so manual triggers respect the in-flight guard; controllers in
apps/backend/src/controllers/sync.controller.ts (lines ~10–22 and ~82–92) need
no change; add unit tests asserting that concurrent triggerManualSync() calls do
not run overlapping polls and start()/stop() remain idempotent.
```typescript
const { data: recentEvents } = await supabase
  .from('sync_events')
  .select('*')
  .order('created_at', { ascending: false })
  .limit(10);

// Get recent logs
const { data: recentLogs } = await supabase
  .from('sync_logs')
  .select('*')
  .order('created_at', { ascending: false })
  .limit(10);
```
🛠️ Refactor suggestion
Harden error handling in getDashboard for recent lists and failed count
Errors from the recent events/logs queries and the failed-events count are ignored, which can mask failures and return partial/misleading data.
Apply this diff:
```diff
-const { data: recentEvents } = await supabase
+const { data: recentEvents, error: recentEventsError } = await supabase
   .from('sync_events')
   .select('*')
   .order('created_at', { ascending: false })
   .limit(10);
+if (recentEventsError) {
+  throw recentEventsError;
+}

-const { data: recentLogs } = await supabase
+const { data: recentLogs, error: recentLogsError } = await supabase
   .from('sync_logs')
   .select('*')
   .order('created_at', { ascending: false })
   .limit(10);
+if (recentLogsError) {
+  throw recentLogsError;
+}

-const { count: failedEventsCount } = await supabase
+const { count: failedEventsCount, error: failedEventsError } = await supabase
   .from('sync_events')
   .select('*', { count: 'exact', head: true })
   .eq('processed', false)
   .not('error', 'is', null);
+if (failedEventsError) {
+  throw failedEventsError;
+}
```

Also applies to: 240-245
🤖 Prompt for AI Agents
In apps/backend/src/controllers/sync.controller.ts around lines 226-238 (and
also where similar logic exists at 240-245), the Supabase queries for
recentEvents, recentLogs and the failed-events count ignore the returned error,
which can hide failures and return misleading data; update the code to check
each query result for an error (or wrap the calls in try/catch), call
processLogger.error with the error details, and return/throw an appropriate
500-level response (or set safe default values) instead of proceeding with
undefined/partial data so callers receive a clear failure signal.
Pull Request | StellarRent
📝 Summary
This PR implements a comprehensive real-time blockchain synchronization system between Stellar blockchain and Supabase database, ensuring data consistency across the StellarRent platform.
🔗 Related Issues
Closes #102.
🔄 Changes Made
Implemented a complete blockchain synchronization architecture including:
Core Services:
- SyncService: Main orchestration service for blockchain event polling and processing
- BlockchainEventListener: Real-time event monitoring for Stellar contract events
- SyncController: Admin API endpoints for sync management and monitoring

Database Schema:
- sync_state: Tracks synchronization progress and state
- sync_events: Stores all blockchain events for tracking and debugging
- sync_logs: Detailed logging for all sync operations

Event Processing:

Admin Tools:

Testing & Type Safety:
- No `any` types

Current Output
No visual changes - backend service implementation with comprehensive API endpoints for sync management.
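For a quick manual check of those sync management endpoints, something like the following works, assuming a locally running server and an admin JWT; the paths come from the sequence diagram above, while the host, port, and env var name are assumptions:

```typescript
// Hypothetical smoke test of the admin sync endpoints.
const BASE = 'http://localhost:3000/api/sync'; // assumed host/port
const headers = { Authorization: `Bearer ${process.env.ADMIN_JWT ?? ''}` };

async function smokeTestSync(): Promise<void> {
  await fetch(`${BASE}/start`, { method: 'POST', headers });
  const status = await fetch(`${BASE}/status`, { headers }).then((r) => r.json());
  console.log('sync status:', status);
  await fetch(`${BASE}/trigger`, { method: 'POST', headers });
  await fetch(`${BASE}/stop`, { method: 'POST', headers });
}

smokeTestSync().catch(console.error);
```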
🧪 Testing
Comprehensive testing implemented including:
✅ Testing Checklist
🚀 Next Steps & Improvements
This change lays a solid foundation for further optimizations. Some areas that could benefit from future improvements include:
💬 Comments
This implementation provides a robust, type-safe foundation for blockchain synchronization that can scale with the platform's growth. The comprehensive testing ensures reliability, while the admin tools provide full visibility and control over the sync process.
Summary by CodeRabbit
New Features
API
Database
Configuration
Logging
Tests