feat: prevent multiple watchers on the same dir #83

Open

joeldierkes wants to merge 1 commit into main from feat/prevent-multiple-watches-on-same-dir

Conversation

@joeldierkes (Contributor) commented Dec 6, 2025

Note

Introduces a PID-based lock to ensure only one watcher performs initial sync per directory, with proper lock cleanup on exit.

  • Watch process (src/commands/watch.ts):
    • Use isLocked/acquireLock to detect and enforce a single active sync per directory; skip the initial sync if the directory is already locked.
    • Run the initial sync only when the lock is acquired; register cleanup on exit/SIGINT/SIGTERM to releaseLock.
    • Ensure cleanup on errors; keep JWT refresh and file-watching behavior unchanged.
  • Locking utility (src/lib/lock.ts):
    • New PID-based file lock in /tmp with a base64-encoded path key: acquireLock, releaseLock, isLocked.
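The locking utility described above can be sketched roughly as follows. This is a minimal sketch, not the PR's actual code: the `mgrep-` file-name prefix, the `base64url` encoding variant, and the exact function signatures are assumptions based only on the summary.

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Lock-file path: base64-encoded key of the watched directory under the
// temp dir. The "mgrep-" prefix and base64url variant are assumptions.
function lockFileFor(dir: string): string {
  const key = Buffer.from(path.resolve(dir)).toString("base64url");
  return path.join(os.tmpdir(), `mgrep-${key}.lock`);
}

// True if a lock file exists and the PID it names is still alive.
function isLocked(dir: string): boolean {
  let pid: number;
  try {
    pid = Number.parseInt(fs.readFileSync(lockFileFor(dir), "utf8"), 10);
  } catch {
    return false; // no lock file at all
  }
  if (Number.isNaN(pid)) return false; // corrupt content: treat as unlocked
  try {
    process.kill(pid, 0); // signal 0 only checks process existence
    return true;
  } catch {
    return false; // owning process is gone: stale lock
  }
}

// Try to create the lock file containing our PID; the "wx" flag makes
// the write fail with EEXIST if the file already exists.
function acquireLock(dir: string): boolean {
  try {
    fs.writeFileSync(lockFileFor(dir), String(process.pid), { flag: "wx" });
    return true;
  } catch {
    return false;
  }
}

function releaseLock(dir: string): void {
  try {
    fs.unlinkSync(lockFileFor(dir));
  } catch {
    // ignore: lock already removed
  }
}
```

In `watch.ts`, `releaseLock(dir)` would then be registered in the exit/SIGINT/SIGTERM handlers so the lock does not outlive the watcher.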

Written by Cursor Bugbot for commit 4ac0c96.

```ts
    process.kill(pid, 0);
    return false;
  } catch {
    fs.unlinkSync(lockFile);
```


Bug: Stale lock cleanup can throw unhandled ENOENT error

When multiple processes detect a stale lock file simultaneously, they may both attempt to call fs.unlinkSync(lockFile). The first process succeeds, but the second throws an ENOENT error because the file no longer exists. This error is not caught by the inner try-catch (which only handles process.kill errors) and propagates to the outer catch which only handles EEXIST. The unhandled ENOENT gets re-thrown, causing the process to crash instead of gracefully failing to acquire the lock.
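One way to address the race described above is to tolerate ENOENT when deleting the stale lock, since losing that race simply means another process cleaned up first. A hedged sketch of such a fix (`removeStaleLock` is an illustrative name, not the PR's API):

```typescript
import * as fs from "node:fs";

// Delete a stale lock file, ignoring ENOENT: if another process detected
// the same stale lock and deleted the file first, that is not an error.
function removeStaleLock(lockFile: string): void {
  try {
    fs.unlinkSync(lockFile);
  } catch (err) {
    if ((err as { code?: string }).code !== "ENOENT") throw err;
    // ENOENT: another process already removed the stale lock
  }
}
```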


```ts
  } catch {
    fs.unlinkSync(lockFile);
  }
}
```


Bug: Corrupt lock file permanently blocks lock acquisition

When a lock file exists but contains invalid content (e.g., empty or non-numeric), Number.parseInt returns NaN, causing the !Number.isNaN(pid) condition to be false. The code skips the cleanup block entirely and attempts to create the lock file with the wx flag, which fails with EEXIST. This results in acquireLock returning false even though no valid process holds the lock, permanently blocking lock acquisition until the corrupt file is manually removed.
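A possible remedy for this second bug is to treat a non-numeric PID the same as a stale lock and delete the file rather than skipping cleanup. The sketch below assumes a helper (`readLockPid`, an illustrative name) that the corrupt-file branch could be folded into:

```typescript
import * as fs from "node:fs";

// Read the PID from a lock file. A missing file yields null; a corrupt
// (non-numeric) file is deleted and also yields null, so it cannot
// permanently block lock acquisition.
function readLockPid(lockFile: string): number | null {
  let raw: string;
  try {
    raw = fs.readFileSync(lockFile, "utf8");
  } catch {
    return null; // no lock file
  }
  const pid = Number.parseInt(raw, 10);
  if (Number.isNaN(pid)) {
    // Corrupt content: remove it instead of leaving a permanent block.
    try { fs.unlinkSync(lockFile); } catch { /* ignore delete races */ }
    return null;
  }
  return pid;
}
```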


@theglove44

Real-world issue confirmation: Multiple agents spawning independent mgrep watch processes

I've confirmed that this PR addresses a critical blocker for using mgrep with multiple coding agents simultaneously.

Current Situation

I installed mgrep on multiple agents (Codex, Claude Code, and OpenCode) and immediately ran into the exact problem this PR solves. Without the PID-based lock mechanism, each agent spawns its own independent mgrep watch process.

System Impact (M1 MacBook Air)

  • Process count: 4 simultaneous node processes
  • Memory usage:
    • PID 64197: 2.23 GB
    • PID 63965: 2.12 GB
    • PID 64381: 2.01 GB
    • PID 63783: 1.92 GB
    • Total: ~8.3 GB RAM consumed

All processes are consuming significant resources for identical work: syncing the same files to the same Mixedbread store.

Why This is Critical

  1. Multi-agent support is broken without this fix: users can't safely install mgrep on multiple agents without running into catastrophic resource usage.
  2. Silent resource drain: there's no user-facing warning that this is happening; it just silently spawns multiple processes.
  3. Workflow blocker: this prevents the intended use case of having multiple AI agents access the same indexed codebase.

Next Steps

This PR's locking mechanism will allow:

  • ✅ Multiple agents to be installed on the same project safely
  • ✅ Only one mgrep watch process to perform the sync
  • ✅ All other instances to gracefully skip redundant indexing
  • ✅ Users to benefit from mgrep's memory reduction without resource overhead

Marking this as high priority for adoption of mgrep in multi-agent workflows.
