From 18835fa022a4f91f6220accfdf999fc22f7c70e7 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 29 Oct 2025 02:42:35 +0000 Subject: [PATCH 1/3] Initial plan From 574a9e00d9d7f29ba61861af2475ef6b11f09429 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 29 Oct 2025 02:49:20 +0000 Subject: [PATCH 2/3] Enhance AGENTS.md with comprehensive Copilot coding agent instructions Co-authored-by: ianlintner <500914+ianlintner@users.noreply.github.com> --- AGENTS.md | 444 ++++++++++++++++++++++-- flask_app/app.py | 24 +- flask_app/docs_server.py | 3 + flask_app/visualizations/array_viz.py | 16 +- flask_app/visualizations/graph_viz.py | 12 +- flask_app/visualizations/mst_viz.py | 8 +- flask_app/visualizations/nn_viz.py | 10 +- flask_app/visualizations/path_viz.py | 36 +- flask_app/visualizations/sorting_viz.py | 8 +- flask_app/visualizations/topo_viz.py | 3 +- 10 files changed, 455 insertions(+), 109 deletions(-) diff --git a/AGENTS.md b/AGENTS.md index 8ccb8fd..7afd89b 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -1,21 +1,236 @@ # AGENTS Guide -This document captures **non-obvious, project-specific rules** and conventions for working with this codebase. It is distinct from general Python guidelines or standard tooling docs. +## Purpose + +This document provides custom instructions and guidance for AI coding agents (including GitHub Copilot) working with this repository. It captures **non-obvious, project-specific rules** and conventions for building, testing, and validating changes in this codebase. 
+ +**Repository Overview:** +- **Name:** Python Interview Algorithms Workbook +- **Description:** Clean, idiomatic Python implementations for senior/staff-level interview prep with complexity notes, pitfalls, demos, and tests +- **Technologies:** Python 3.9+, Flask 3.0+, pytest 8.0+, Ruff (linter/formatter) +- **Purpose:** Educational resource for algorithm implementations and interview preparation --- -## Testing -- All tests use **pytest**. -- Activate shell python with source /Users/ianlintner/python_dsa/.venv/bin/activate -- Test discovery depends on `tests/conftest.py` ensuring the project root is on `sys.path`. -- Run a single test with: +## Project Structure & Organization - ```bash - pytest tests/test_demos.py::test_run_all_demos_headless -v - ``` +The repository follows a `src/` layout: + +``` +python_dsa/ +├── src/ +│ ├── interview_workbook/ # Main package +│ │ ├── algorithms/ # Sorting, searching algorithms +│ │ ├── data_structures/ # DSU, Trie, LRU/LFU, Fenwick, Segment trees +│ │ ├── graphs/ # BFS, DFS, Dijkstra, MST, SCC +│ │ ├── dp/ # Dynamic programming solutions +│ │ ├── strings/ # String algorithms (KMP, Z-algo, etc.) 
+│ │ ├── math_utils/ # Number theory utilities +│ │ ├── patterns/ # Common coding patterns +│ │ ├── systems/ # Systems design concepts +│ │ ├── concurrency/ # Concurrency patterns +│ │ └── nlp/ # NLP utilities +│ └── main.py # CLI demo launcher +├── flask_app/ # Web UI for demos +├── tests/ # Pytest test suite +├── docs/ # Documentation (MkDocs) +└── scripts/ # Utility scripts +``` + +**Key Principles:** +- All imports use the package name: `from interview_workbook.algorithms.sorting.merge_sort import merge_sort` +- Tests import directly from `src/` via `tests/conftest.py` which adds src to `sys.path` +- Each algorithm module includes implementation, complexity analysis, pitfalls, and a `demo()` function + +--- + +## Build, Test, and Development Commands + +### Initial Setup +```bash +# Install package in editable mode with dev dependencies +python -m pip install -U pip +python -m pip install -e ".[dev]" + +# Set up pre-commit hooks (optional but recommended) +pre-commit install +``` + +### Running Tests +```bash +# Run all tests with coverage +pytest -q + +# Run specific test file +pytest tests/test_sorting.py -v + +# Run single test +pytest tests/test_demos.py::test_run_all_demos_headless -v + +# Run with verbose output +pytest -v +``` + +### Linting and Formatting +```bash +# Format code (Ruff formatter) +ruff format . + +# Lint code (Ruff rules E,F,I,B,UP etc.) +ruff check . 
+ +# Auto-fix simple issues +ruff check --fix + +# Run all pre-commit hooks +pre-commit run --all-files --show-diff-on-failure --color=always +``` + +### Running Demos +```bash +# List all available demos +python src/main.py --list + +# Run specific demo +python src/main.py --demo sorting.merge_sort +python src/main.py --demo searching.binary_search +python src/main.py --demo dp.lcs +python src/main.py --demo graphs.scc +``` + +### Documentation +```bash +# Build documentation (requires Docker) +make docs + +# Serve documentation locally +make serve-docs + +# Clean built documentation +make clean-docs +``` + +--- + +## Code Style and Conventions + +### General Guidelines +- **Python Version:** Target Python 3.9+ (as specified in pyproject.toml) +- **Line Length:** Maximum 100 characters +- **Formatting:** Use Ruff formatter (double quotes, 4-space indentation) +- **Import Order:** Ruff handles import sorting automatically +- **Type Hints:** Encouraged but not required for this educational repo +- **Docstrings:** Required for all public functions and classes + +### Naming Conventions +- Functions: `snake_case` +- Classes: `PascalCase` +- Constants: `UPPER_SNAKE_CASE` +- Private members: prefix with `_` + +### Algorithm Implementation Standards +Every algorithm module should include: +1. **Implementation function(s)** with clear parameter names +2. **Docstring** explaining: + - What the algorithm does + - Time and space complexity + - Parameters and return values + - Example usage +3. **Common pitfalls** and edge cases as comments +4. **A `demo()` function** that demonstrates the algorithm with sample inputs + +### Example Structure +```python +def algorithm_name(input_data, param=default): + """ + Brief description of what the algorithm does. 
+ + Time complexity: O(n log n) + Space complexity: O(n) + + Args: + input_data: Description + param: Description (default: value) + + Returns: + Description of return value + + Common pitfalls: + - Edge case 1 + - Edge case 2 + """ + # Implementation here + pass + +def demo(): + """Demonstrate the algorithm with examples.""" + # Demo code here + pass +``` + +--- + +## Architecture and Design Patterns + +### Educational Focus +This repository is designed for **learning and interview preparation**, not production use. Code prioritizes: +- **Clarity** over performance optimizations +- **Correctness** with comprehensive edge case handling +- **Educational value** with detailed comments on complexity and pitfalls + +### Demo System Architecture +- **Discovery:** `flask_app.app.discover_demos()` dynamically finds all demo functions +- **Execution:** `run_demo(module_id)` imports and executes the `demo()` function +- **Testing:** All demos are tested to ensure they run without exceptions +- **Headless Operation:** Demos must not require GUI or interactive prompts + +### Pattern Categories +The codebase is organized by common interview patterns: +- **Two Pointers:** Problems solvable with two-pointer technique +- **Sliding Window:** Fixed or variable window problems +- **Binary Search on Answer:** When answer space is searchable +- **Backtracking:** Problems requiring exhaustive search with pruning +- **Meet in the Middle:** Problems with exponential search space reducible by splitting + +--- + +## Testing Guidelines + +### Test Framework and Configuration +- **Framework:** All tests use **pytest** (version 8.0+) +- **Configuration:** Test settings in `pyproject.toml` and `pytest.ini` +- **Coverage:** Tests include coverage reporting via pytest-cov +- **Test Discovery:** Tests in `tests/` directory, files matching `test_*.py` pattern +- **Module Path:** `tests/conftest.py` ensures the project root is on `sys.path` + +### Running Tests +```bash +# Run all tests +pytest -q 
-- The primary test suite drives `discover_demos()` and `run_demo()` from `flask_app.app`. -- All discovered demos must implement a `demo()` function. Tests assert its existence and require it to run without exceptions. +# Run specific test file +pytest tests/test_sorting.py -v + +# Run single test +pytest tests/test_demos.py::test_run_all_demos_headless -v + +# Run with coverage report +pytest --cov=src --cov-report=term-missing +``` + +### Test Requirements +- The primary test suite drives `discover_demos()` and `run_demo()` from `flask_app.app` +- All discovered demos **must implement a `demo()` function** +- Tests assert the existence of `demo()` and require it to run without exceptions +- All tests must be **deterministic** and reproducible + +### Writing Tests +When adding new algorithms or features: +1. Add tests to the appropriate `tests/test_*.py` file +2. Test edge cases: empty input, single element, sorted/reverse sorted, duplicates +3. Test with random data when applicable +4. Verify time/space complexity claims with larger inputs +5. 
Ensure tests run quickly (< 1 second each when possible)

---

@@ -33,24 +248,207 @@ This document captures **non-obvious, project-specific rules** and conventions f

---

-## Randomness Seeding
-- Test reliability depends on deterministic seeding:
-  - `random.seed(0)`
-  - `numpy.random.seed(0)` if NumPy is installed
+## Randomness and Determinism
+
+### Seeding Requirements
+Test reliability depends on deterministic seeding:
+- Use `random.seed(0)` for Python's random module
+- Use `numpy.random.seed(0)` if NumPy is installed
+- Always seed before generating random test data
+
+### In Algorithm Implementations
+- Ensure demos respect these randomness seeds when introducing stochastic behaviors
+- For randomized algorithms (e.g., quicksort with a random pivot), document the randomness
+- Provide deterministic variants when possible for testing
+
+### Test Reproducibility
+- All test results must be reproducible under fixed seeds
+- Results must be identical across multiple test runs
+- Document any inherent non-determinism (e.g., set iteration order, which varies with string hash randomization)
+
+---
+
+## Code Quality and Conventions
+
+### Core Principles
+- **Determinism:** Results must be reproducible under fixed seeds
+- **No Exceptions:** Demo output can be empty but must not raise exceptions
+- **Headless Operation:** All demos must run **headless**—no GUI or interactive prompts
+- **Clean Code:** Maintain readability and follow idiomatic Python patterns
+
+### File Organization
+- All core algorithms live under `src/interview_workbook/`
+- Follow the existing categorization: `two_pointers/`, `sorting/`, `graphs/`, etc.
+- Keep related functionality together (e.g., all binary search variants in one module) + +### Special Scripts +- **Fix-up utilities:** `fix_leetcode_syntax_corruption.py`, `fix_comprehensive_leetcode_corruption.py` +- These scripts enforce **consistency of the LeetCode-style notebooks** +- **Do not modify them casually**—they maintain notebook formatting standards +- Run these scripts after making changes to notebook files + +--- + +## Security Considerations + +### General Security Guidelines +- **No Hardcoded Secrets:** Never commit API keys, passwords, or tokens +- **Input Validation:** All public functions should validate inputs appropriately +- **Safe Imports:** Only import from trusted sources +- **Dependency Management:** Keep dependencies up to date via `pyproject.toml` + +### Security Best Practices for This Repository +- This is an **educational repository** without external network access in core algorithms +- Flask app is for **local development only**—not intended for production deployment +- No user authentication or sensitive data handling +- All demos run in a sandboxed environment + +### When Adding Dependencies +- Check dependency security advisories before adding new packages +- Prefer well-maintained packages with active communities +- Document why new dependencies are needed +- Keep the dependency list minimal + +--- + +## Dependencies and Tools + +### Core Dependencies +- **Python:** 3.9 or higher +- **Flask:** 3.0.0+ (for web UI) +- **pytest:** 8.0.0+ (testing framework) +- **pytest-cov:** 4.1.0+ (coverage reporting) +- **Ruff:** 0.5.6+ (linter and formatter) +- **pre-commit:** 3.6.0+ (git hooks) + +### Optional Dependencies +- **NumPy:** For numerical algorithms (not a hard requirement) +- **MkDocs:** For documentation generation (Docker-based) -Ensure demos respect these randomness seeds when introducing stochastic behaviors. +### Development Workflow +1. Install with: `python -m pip install -e ".[dev]"` +2. 
Set up pre-commit hooks: `pre-commit install` +3. Before committing: + - Run `ruff format .` to format code + - Run `ruff check --fix` to fix linting issues + - Run `pytest -q` to ensure tests pass +4. Pre-commit hooks will run automatically on `git commit` + +### CI/CD Pipeline +GitHub Actions CI workflow: +1. Installs dependencies with `pip install -e ".[dev]"` +2. Runs pre-commit hooks (format + lint + misc checks) +3. Runs tests via `pytest -q` +4. Reports coverage --- -## Code Conventions -- Maintain determinism: results must be reproducible under fixed seeds. -- Demo output can be empty but must not raise. -- Any new demos should be designed to run **headless**—no GUI or interactive prompts. +## Working with Demos + +### Demo Function Requirements +Each demo module must: +- Be importable by its module ID (e.g., `sorting.merge_sort`) +- Expose a callable `demo()` function with no parameters +- Return a string (empty string is valid) +- Run without raising exceptions +- Complete execution in reasonable time (< 5 seconds) + +### Demo Discovery Process +- `discover_demos()` returns categories of demos as a nested dictionary +- Tests flatten categories into a list of metadata dicts: `{"id": module_path, "name": ..., ...}` +- `run_demo(module_id)` dynamically imports and executes the demo function + +### Adding New Demos +1. Create algorithm implementation with `demo()` function +2. Place in appropriate category under `src/interview_workbook/` +3. Ensure `demo()` function exists and runs without errors +4. Test with: `python src/main.py --demo category.module_name` +5. Verify it appears in: `python src/main.py --list` --- -## Hidden Rules -- Fix-up utilities (`fix_leetcode_syntax_corruption.py`, `fix_comprehensive_leetcode_corruption.py`) are not just scripts, they enforce **consistency of the LeetCode-style notebooks**. Do not modify them casually. -- All core algorithms live under `src/interview_workbook/`. 
Follow the existing categorization (`two_pointers/`, etc.). +## PR and Change Guidelines + +### Before Submitting Changes +1. **Run tests:** `pytest -q` must pass +2. **Format code:** `ruff format .` +3. **Fix linting:** `ruff check --fix` +4. **Check coverage:** Maintain or improve test coverage +5. **Update docs:** If adding new features, update relevant documentation + +### PR Acceptance Criteria +- All tests pass +- Code is properly formatted (Ruff) +- No new linting errors +- Test coverage maintained or improved +- Documentation updated if needed +- Demo functions work correctly +- Changes are minimal and focused + +### Review Checklist +- [ ] Implementation is correct and handles edge cases +- [ ] Time/space complexity is documented +- [ ] Tests cover new functionality +- [ ] Code follows existing style conventions +- [ ] Demo function exists and works +- [ ] No breaking changes to existing APIs --- + +## Common Pitfalls and Tips + +### For AI Coding Agents +1. **Always run tests** after making changes: `pytest -q` +2. **Format before committing:** `ruff format .` and `ruff check --fix` +3. **Check demo functions:** Ensure they run with `python src/main.py --demo module.name` +4. **Maintain determinism:** Use proper seeding for random operations +5. **Don't break existing code:** This is an educational repo—preserve working implementations +6. **Keep it simple:** Prioritize clarity over clever optimizations + +### Common Issues +- **Import errors:** Ensure you're running from the repo root +- **Test failures:** Check that random seeds are set correctly +- **Demo not found:** Verify the module path matches the file structure +- **Linting errors:** Run `ruff check --fix` to auto-fix most issues + +### Quick Reference Commands +```bash +# Full development cycle +python -m pip install -e ".[dev]" # Install with dev deps +pytest -q # Run tests +ruff format . 
# Format code +ruff check --fix # Fix linting +python src/main.py --demo module.name # Test demo + +# Quick fixes +ruff check --fix # Auto-fix linting issues +pytest tests/test_file.py -v # Debug specific test +python src/main.py --list # List all demos +``` + +--- + +## Summary for AI Agents + +**What to do when making changes:** +1. Install dependencies: `python -m pip install -e ".[dev]"` +2. Make minimal, focused changes +3. Run tests: `pytest -q` +4. Format: `ruff format .` +5. Lint: `ruff check --fix` +6. Test demos: `python src/main.py --demo module.name` +7. Verify everything works before submitting + +**Key principles:** +- Maintain determinism (seed random operations) +- Keep demos headless (no GUI) +- Follow existing code structure and patterns +- Document time/space complexity +- Test edge cases thoroughly +- Don't break existing functionality + +**When in doubt:** +- Check existing implementations for examples +- Run the full test suite +- Verify demos still work +- Keep changes minimal and surgical diff --git a/flask_app/app.py b/flask_app/app.py index 182938b..3c3b545 100644 --- a/flask_app/app.py +++ b/flask_app/app.py @@ -344,7 +344,9 @@ def run_demo(module_name: str) -> str: try: mod = importlib.import_module(module_name) except ModuleNotFoundError as e: - raise ModuleNotFoundError(f"Could not import module {module_name!r}. Ensure it is a valid demo id.") from e + raise ModuleNotFoundError( + f"Could not import module {module_name!r}. Ensure it is a valid demo id." 
+ ) from e demo_fn = getattr(mod, "demo", None) if not callable(demo_fn): @@ -474,9 +476,7 @@ def viz_sorting(): try: from flask_app.visualizations import sorting_viz as s_viz # type: ignore - algorithms = [ - {"key": k, "name": v["name"]} for k, v in s_viz.ALGORITHMS.items() - ] + algorithms = [{"key": k, "name": v["name"]} for k, v in s_viz.ALGORITHMS.items()] except Exception: algorithms = [ {"key": "quick", "name": "Quick Sort"}, @@ -573,9 +573,7 @@ def viz_path(): try: from flask_app.visualizations import path_viz as p_viz # type: ignore - algorithms = [ - {"key": k, "name": v["name"]} for k, v in p_viz.ALGORITHMS.items() - ] + algorithms = [{"key": k, "name": v["name"]} for k, v in p_viz.ALGORITHMS.items()] except Exception: algorithms = [ {"key": "astar", "name": "A* (Manhattan)"}, @@ -629,9 +627,7 @@ def viz_arrays(): try: from flask_app.visualizations import array_viz as a_viz # type: ignore - algorithms = [ - {"key": k, "name": v["name"]} for k, v in a_viz.ALGORITHMS.items() - ] + algorithms = [{"key": k, "name": v["name"]} for k, v in a_viz.ALGORITHMS.items()] except Exception: algorithms = [ {"key": "binary_search", "name": "Binary Search"}, @@ -690,9 +686,7 @@ def viz_mst(): try: from flask_app.visualizations import mst_viz as m_viz # type: ignore - algorithms = [ - {"key": k, "name": v["name"]} for k, v in m_viz.ALGORITHMS.items() - ] + algorithms = [{"key": k, "name": v["name"]} for k, v in m_viz.ALGORITHMS.items()] except Exception: algorithms = [ {"key": "kruskal", "name": "Minimum Spanning Tree (Kruskal)"}, @@ -738,9 +732,7 @@ def viz_topo(): try: from flask_app.visualizations import topo_viz as t_viz # type: ignore - algorithms = [ - {"key": k, "name": v["name"]} for k, v in t_viz.ALGORITHMS.items() - ] + algorithms = [{"key": k, "name": v["name"]} for k, v in t_viz.ALGORITHMS.items()] except Exception: algorithms = [ {"key": "kahn", "name": "Topological Sort (Kahn's Algorithm)"}, diff --git a/flask_app/docs_server.py b/flask_app/docs_server.py 
index d99a041..2f464e3 100644
--- a/flask_app/docs_server.py
+++ b/flask_app/docs_server.py
@@ -1,4 +1,5 @@
 import os
+
 from flask import Flask, send_from_directory
 
 app = Flask(__name__)
@@ -6,6 +7,7 @@
 # Path to built MkDocs site
 DOCS_BUILD_DIR = os.path.join(os.path.dirname(__file__), "..", "site")
 
+
 @app.route("/docs/")
 @app.route("/docs/<path:filename>")
 def serve_docs(filename="index.html"):
@@ -14,5 +16,6 @@ def serve_docs(filename="index.html"):
     """
     return send_from_directory(DOCS_BUILD_DIR, filename)
 
+
 if __name__ == "__main__":
     app.run(host="127.0.0.1", port=5003, debug=True)
diff --git a/flask_app/visualizations/array_viz.py b/flask_app/visualizations/array_viz.py
index e76ee8d..cf13bd7 100644
--- a/flask_app/visualizations/array_viz.py
+++ b/flask_app/visualizations/array_viz.py
@@ -34,9 +34,7 @@ def binary_search_frames(
     arr: list[int], target: int, max_steps: int = 20000
 ) -> list[dict[str, Any]]:
     a = arr[:]
-    frames: list[dict[str, Any]] = [
-        _snap(a, "init", lo=0, hi=len(a) - 1, mid=None, found=False)
-    ]
+    frames: list[dict[str, Any]] = [_snap(a, "init", lo=0, hi=len(a) - 1, mid=None, found=False)]
     lo, hi = 0, len(a) - 1
     steps = 0
     while lo <= hi and steps < max_steps:
@@ -71,14 +69,10 @@ def two_pointers_sum_frames(
             return frames
         if s < target:
             left += 1
-            frames.append(
-                _snap(a, "move-left", l=left, r=right, sum=None, target=target)
-            )
+            frames.append(_snap(a, "move-left", l=left, r=right, sum=None, target=target))
         else:
             right -= 1
-            frames.append(
-                _snap(a, "move-right", l=left, r=right, sum=None, target=target)
-            )
+            frames.append(_snap(a, "move-right", l=left, r=right, sum=None, target=target))
         steps += 1
     frames.append(_snap(a, "not-found", l=left, r=right, sum=None, target=target))
     return frames
@@ -93,9 +87,7 @@ def sliding_window_min_len_geq_frames(
     """
     a = arr[:]
     frames: list[dict[str, Any]] = [
-        _snap(
-            a, "init", win_l=0, win_r=-1, best_l=None, best_r=None, s=0, target=target
-        )
+        _snap(a, "init", win_l=0, win_r=-1, best_l=None, best_r=None, s=0, 
target=target) ] n = len(a) s = 0 diff --git a/flask_app/visualizations/graph_viz.py b/flask_app/visualizations/graph_viz.py index 0f77120..390e904 100644 --- a/flask_app/visualizations/graph_viz.py +++ b/flask_app/visualizations/graph_viz.py @@ -53,9 +53,7 @@ def union(a: int, b: int) -> None: edges.add(e) -def generate_graph( - n: int = 12, p: float = 0.25, seed: int | None = None -) -> dict[str, Any]: +def generate_graph(n: int = 12, p: float = 0.25, seed: int | None = None) -> dict[str, Any]: """ Generate an undirected simple graph with n nodes. - Start with no edges, add each possible edge with probability p @@ -85,9 +83,7 @@ def _frame(state: dict[str, Any]) -> dict[str, Any]: } -def bfs_frames( - g: dict[str, Any], start: int = 0, max_steps: int = 20000 -) -> list[dict[str, Any]]: +def bfs_frames(g: dict[str, Any], start: int = 0, max_steps: int = 20000) -> list[dict[str, Any]]: n = g["n"] adj: list[list[int]] = [[] for _ in range(n)] for u, v in g["edges"]: @@ -152,9 +148,7 @@ def bfs_frames( return frames -def dfs_frames( - g: dict[str, Any], start: int = 0, max_steps: int = 20000 -) -> list[dict[str, Any]]: +def dfs_frames(g: dict[str, Any], start: int = 0, max_steps: int = 20000) -> list[dict[str, Any]]: n = g["n"] adj: list[list[int]] = [[] for _ in range(n)] for u, v in g["edges"]: diff --git a/flask_app/visualizations/mst_viz.py b/flask_app/visualizations/mst_viz.py index eab901b..36269e3 100644 --- a/flask_app/visualizations/mst_viz.py +++ b/flask_app/visualizations/mst_viz.py @@ -9,9 +9,7 @@ Edge = tuple[int, int, float] -def _circle_layout( - n: int, jitter: float = 0.0, rng: random.Random | None = None -) -> list[Coord]: +def _circle_layout(n: int, jitter: float = 0.0, rng: random.Random | None = None) -> list[Coord]: pts: list[Coord] = [] rng = rng or random.Random() for i in range(n): @@ -131,9 +129,7 @@ def union(a: int, b: int) -> bool: return frames -def prim_frames( - g: dict[str, Any], start: int = 0, max_steps: int = 50000 -) -> 
list[dict[str, Any]]: +def prim_frames(g: dict[str, Any], start: int = 0, max_steps: int = 50000) -> list[dict[str, Any]]: n: int = g["n"] edges: list[Edge] = g["edges"] # Build adjacency diff --git a/flask_app/visualizations/nn_viz.py b/flask_app/visualizations/nn_viz.py index e2e0d58..b3f128d 100644 --- a/flask_app/visualizations/nn_viz.py +++ b/flask_app/visualizations/nn_viz.py @@ -101,10 +101,7 @@ def __init__(self, hidden: int, lr: float, seed: int | None = None) -> None: def forward(self, x: Point) -> tuple[list[float], float]: # x: (2,) # z1 = W1 x + b1 - z1 = [ - self.W1[i][0] * x[0] + self.W1[i][1] * x[1] + self.b1[i] - for i in range(self.h) - ] + z1 = [self.W1[i][0] * x[0] + self.W1[i][1] * x[1] + self.b1[i] for i in range(self.h)] a1 = [_tanh(z) for z in z1] # z2 = W2 a1 + b2 z2 = sum(self.W2[i] * a1[i] for i in range(self.h)) + self.b2 @@ -137,10 +134,7 @@ def backward_update(self, x: Point, y: int) -> float: # loss for monitoring (BCE) eps = 1e-9 - loss = -( - float(y) * math.log(yhat + eps) - + (1.0 - float(y)) * math.log(1.0 - yhat + eps) - ) + loss = -(float(y) * math.log(yhat + eps) + (1.0 - float(y)) * math.log(1.0 - yhat + eps)) return loss def predict_proba(self, x: Point) -> float: diff --git a/flask_app/visualizations/path_viz.py b/flask_app/visualizations/path_viz.py index 94855a2..23fe466 100644 --- a/flask_app/visualizations/path_viz.py +++ b/flask_app/visualizations/path_viz.py @@ -92,9 +92,7 @@ def a_star_frames(grid: dict[str, Any], max_steps: int = 50000) -> list[dict[str came_from: dict[Coord, Coord] = {} g_score: dict[Coord, int] = {start: 0} - frames: list[dict[str, Any]] = [ - _frame(None, list(open_set), list(closed_set), [], "init") - ] + frames: list[dict[str, Any]] = [_frame(None, list(open_set), list(closed_set), [], "init")] while open_heap and len(frames) < max_steps: _, _, current = heapq.heappop(open_heap) @@ -105,9 +103,7 @@ def a_star_frames(grid: dict[str, Any], max_steps: int = 50000) -> list[dict[str if current == 
goal: path = _reconstruct_path(came_from, current) - frames.append( - _frame(current, list(open_set), list(closed_set), path, "done") - ) + frames.append(_frame(current, list(open_set), list(closed_set), path, "done")) return frames closed_set.add(current) @@ -125,18 +121,14 @@ def a_star_frames(grid: dict[str, Any], max_steps: int = 50000) -> list[dict[str heapq.heappush(open_heap, (f, tie, nbr)) open_set.add(nbr) p = _reconstruct_path(came_from, current) - frames.append( - _frame(nbr, list(open_set), list(closed_set), p, "push/update") - ) + frames.append(_frame(nbr, list(open_set), list(closed_set), p, "push/update")) # No path found frames.append(_frame(None, list(open_set), list(closed_set), [], "no-path")) return frames -def dijkstra_frames( - grid: dict[str, Any], max_steps: int = 50000 -) -> list[dict[str, Any]]: +def dijkstra_frames(grid: dict[str, Any], max_steps: int = 50000) -> list[dict[str, Any]]: rows, cols = grid["rows"], grid["cols"] walls = set(map(tuple, grid["walls"])) start: Coord = tuple(grid["start"]) # type: ignore @@ -151,9 +143,7 @@ def dijkstra_frames( came_from: dict[Coord, Coord] = {} dist: dict[Coord, int] = {start: 0} - frames: list[dict[str, Any]] = [ - _frame(None, list(open_set), list(closed_set), [], "init") - ] + frames: list[dict[str, Any]] = [_frame(None, list(open_set), list(closed_set), [], "init")] while open_heap and len(frames) < max_steps: _, _, current = heapq.heappop(open_heap) @@ -164,9 +154,7 @@ def dijkstra_frames( if current == goal: path = _reconstruct_path(came_from, current) - frames.append( - _frame(current, list(open_set), list(closed_set), path, "done") - ) + frames.append(_frame(current, list(open_set), list(closed_set), path, "done")) return frames closed_set.add(current) @@ -183,9 +171,7 @@ def dijkstra_frames( heapq.heappush(open_heap, (nd, tie, nbr)) open_set.add(nbr) p = _reconstruct_path(came_from, current) - frames.append( - _frame(nbr, list(open_set), list(closed_set), p, "push/update") - ) + 
frames.append(_frame(nbr, list(open_set), list(closed_set), p, "push/update")) frames.append(_frame(None, list(open_set), list(closed_set), [], "no-path")) return frames @@ -246,9 +232,7 @@ def gbfs_frames(grid: dict[str, Any], max_steps: int = 50000) -> list[dict[str, closed_set: set[Coord] = set() came_from: dict[Coord, Coord] = {} - frames: list[dict[str, Any]] = [ - _frame(None, list(open_set), list(closed_set), [], "init") - ] + frames: list[dict[str, Any]] = [_frame(None, list(open_set), list(closed_set), [], "init")] while open_heap and len(frames) < max_steps: _, _, current = heapq.heappop(open_heap) @@ -259,9 +243,7 @@ def gbfs_frames(grid: dict[str, Any], max_steps: int = 50000) -> list[dict[str, if current == goal: path = _reconstruct_path(came_from, current) - frames.append( - _frame(current, list(open_set), list(closed_set), path, "done") - ) + frames.append(_frame(current, list(open_set), list(closed_set), path, "done")) return frames closed_set.add(current) diff --git a/flask_app/visualizations/sorting_viz.py b/flask_app/visualizations/sorting_viz.py index 56c37e3..5b6833f 100644 --- a/flask_app/visualizations/sorting_viz.py +++ b/flask_app/visualizations/sorting_viz.py @@ -11,9 +11,7 @@ def _snap( return {"arr": arr[:], "a": a, "b": b, "op": op} -def generate_array( - n: int = 30, seed: int | None = None, unique: bool = True -) -> list[int]: +def generate_array(n: int = 30, seed: int | None = None, unique: bool = True) -> list[int]: """ Generate a random array for visualization. 
- If unique: values are 1..n shuffled @@ -47,9 +45,7 @@ def bubble_sort_frames(arr: list[int], max_steps: int = 20000) -> list[dict[str, return frames -def insertion_sort_frames( - arr: list[int], max_steps: int = 20000 -) -> list[dict[str, Any]]: +def insertion_sort_frames(arr: list[int], max_steps: int = 20000) -> list[dict[str, Any]]: a = arr[:] frames: list[dict[str, Any]] = [_snap(a)] for i in range(1, len(a)): diff --git a/flask_app/visualizations/topo_viz.py b/flask_app/visualizations/topo_viz.py index c2f9b2b..21bff12 100644 --- a/flask_app/visualizations/topo_viz.py +++ b/flask_app/visualizations/topo_viz.py @@ -51,8 +51,7 @@ def generate_dag( rng = random.Random(seed) coords = _layer_layout(n, layers) nodes = [ - {"id": i, "x": coords[i][0], "y": coords[i][1], "layer": coords[i][2]} - for i in range(n) + {"id": i, "x": coords[i][0], "y": coords[i][1], "layer": coords[i][2]} for i in range(n) ] # group node ids by layer From 8781a8b7e00b45faa4311cf8f4fae48d30434344 Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Wed, 29 Oct 2025 02:51:25 +0000 Subject: [PATCH 3/3] Address code review feedback - clarify test configuration Co-authored-by: ianlintner <500914+ianlintner@users.noreply.github.com> --- AGENTS.md | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-) diff --git a/AGENTS.md b/AGENTS.md index 7afd89b..40676bb 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -198,14 +198,14 @@ The codebase is organized by common interview patterns: ### Test Framework and Configuration - **Framework:** All tests use **pytest** (version 8.0+) -- **Configuration:** Test settings in `pyproject.toml` and `pytest.ini` -- **Coverage:** Tests include coverage reporting via pytest-cov +- **Configuration:** Test settings in `pyproject.toml` under `[tool.pytest.ini_options]` +- **Coverage:** Tests include coverage reporting via pytest-cov (enabled by default) - **Test Discovery:** Tests in `tests/` directory, files 
matching `test_*.py` pattern - **Module Path:** `tests/conftest.py` ensures the project root is on `sys.path` ### Running Tests ```bash -# Run all tests +# Run all tests (includes coverage by default per pyproject.toml) pytest -q # Run specific test file @@ -214,8 +214,11 @@ pytest tests/test_sorting.py -v # Run single test pytest tests/test_demos.py::test_run_all_demos_headless -v -# Run with coverage report +# Run with detailed coverage report pytest --cov=src --cov-report=term-missing + +# Run without coverage (faster for quick checks) +pytest -q --no-cov ``` ### Test Requirements