Better usage example #7

Open
pbedat opened this issue Feb 15, 2025 · 1 comment

pbedat (Contributor) commented Feb 15, 2025

Continuing our discussion from ThreeDotsLabs/watermill-sql#31

Aren't you tired of being forced to write different code for sqlite3 because you can't use it with multiple goroutines?

Can you provide an example, where concurrent (multiple goroutines) sqlite3 code is truly problematic?

I modified your example to use only the stdlib and go-sqlite3, but I cannot spot any concurrency problems:

package main

import (
	"context"
	"database/sql"
	"errors"
	"fmt"
	"math/rand/v2"
	"sync"
	"time"

	_ "github.com/mattn/go-sqlite3"
)

func main() {

	//db, err := sql.Open("sqlite3", "file:test.db?_journal=wal&_sync=normal")
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		panic(err)
	}
	// a single connection serializes all access through database/sql's pool
	db.SetMaxOpenConns(1)

	if _, err := db.Exec("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"); err != nil {
		panic(err)
	}

	var wg sync.WaitGroup

	wg.Add(10000)
	go func() {
		for range 10000 {
			_, err := db.Exec("INSERT INTO users (name) VALUES (?)", fmt.Sprintf("user%d", 1))
			if err != nil {
				panic(err)
			}
			// simulate a variable insert rate with a random sleep
			time.Sleep(time.Duration(rand.IntN(5)) * time.Millisecond)
			// call Done per iteration; a defer here would only run
			// once the whole goroutine exits
			wg.Done()
		}
	}()

	// sample the total row count once per second; the samples are averaged at the end
	metrics := []int{}
	ticker := time.NewTicker(1 * time.Second)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	go func() {
		wg.Wait()

		ticker.Stop()
		cancel()
	}()

	for {
		select {
		case <-ctx.Done():
			goto RESULT
		case <-ticker.C:

			row := db.QueryRowContext(ctx, "SELECT COUNT(*) FROM users;")
			if errors.Is(row.Err(), context.DeadlineExceeded) || errors.Is(row.Err(), context.Canceled) {
				goto RESULT
			}
			if row.Err() != nil {
				panic(row.Err())
			}

			var result int
			err = row.Scan(&result)
			if err != nil {
				panic(err)
			}

			metrics = append(metrics, result)
		}
	}
RESULT:
	total := 0
	for _, value := range metrics {
		total += value
	}
	average := float64(total) / float64(len(metrics))
	fmt.Printf("Average: %.2f\n", average)
}

When I modify your example to run the inserts truly concurrently, the throughput increases by a factor of 4:

	wg.Add(10000)
	for range 10000 {
		go func() {
			defer wg.Done()
			_, err := db.Exec("INSERT INTO users (name) VALUES (?)", fmt.Sprintf("user%d", 1))
			if err != nil {
				panic(err)
			}
		}()
	}
davidroman0O (Owner) commented

Hey, thanks for the feedback.

I know your example shows that if you configure go-sqlite3 correctly (with things like SetMaxOpenConns(1) and WAL mode), you can squeeze out better throughput in controlled scenarios. In my day-to-day, though, I’ve often run into intermittent “database is locked” errors when juggling multiple goroutines in a more complex setup—especially in the pub/sub workflows I use with Watermill.
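
For reference, this is roughly the configuration I mean (a minimal sketch: the file name and the 5000 ms busy timeout are placeholder values, and _busy_timeout is, as far as I remember, the mattn/go-sqlite3 DSN parameter for it):

package main

import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3"
)

func main() {
	// WAL journal, relaxed sync, and a busy timeout so the driver waits for
	// the lock instead of failing immediately with "database is locked".
	db, err := sql.Open("sqlite3", "file:app.db?_journal=wal&_sync=normal&_busy_timeout=5000")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// funnel everything through a single connection; database/sql then
	// serializes concurrent callers for us
	db.SetMaxOpenConns(1)
}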

The goal of comfylite3 isn’t to magically give SQLite full multi-writer capabilities. Rather, it’s a convenience wrapper that automatically handles those occasional lock errors by queuing and retrying operations. This means you don’t have to sprinkle your code with manual retry logic every time you hit a lock, which can be a real nuisance in side projects and tests.
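
To make that concrete, here is roughly the shape of the idea (a minimal sketch of a retry helper, not comfylite3's actual API; the function name, the backoff values, and the string match on the error text are all placeholders):

package main

import (
	"context"
	"database/sql"
	"fmt"
	"strings"
	"time"

	_ "github.com/mattn/go-sqlite3"
)

// execWithRetry is a hypothetical helper: when SQLite reports a lock, it
// backs off and retries instead of surfacing the error to the caller.
func execWithRetry(ctx context.Context, db *sql.DB, query string, args ...any) (sql.Result, error) {
	backoff := 10 * time.Millisecond
	for {
		res, err := db.ExecContext(ctx, query, args...)
		// crude check by error text; real code would match the driver's
		// error code for SQLITE_BUSY / SQLITE_LOCKED instead
		if err == nil || !strings.Contains(err.Error(), "database is locked") {
			return res, err
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(backoff):
			backoff *= 2 // simple exponential backoff
		}
	}
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	if _, err := db.Exec("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"); err != nil {
		panic(err)
	}
	if _, err := execWithRetry(context.Background(), db, "INSERT INTO users (name) VALUES (?)", "user1"); err != nil {
		panic(err)
	}
	fmt.Println("inserted")
}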

I totally agree that for a production system with heavy concurrent writes, relying solely on SQLite (or abstracting its limitations) isn’t ideal. In those cases, using patterns like the outbox or even switching to PostgreSQL is the way to go. My PR is aimed at scenarios where SQLite’s simplicity is a plus and where you’d prefer not to write extra boilerplate to handle its quirks.

I’ll definitely beef up the docs to clearly state these trade-offs. Let me know if you have any more thoughts or questions!
