Important
Format all Rust code using nightly rustfmt.
More information
```shell
cargo +nightly fmt
```

Why is this important?
Consistent formatting enhances readability, reduces merge conflicts, and makes code reviews smoother. It ensures that every team member’s code adheres to a unified standard.
Examples & Further Explanation
For instance, a well-formatted codebase allows new team members to quickly understand the project structure and logic. Automated formatting saves time and minimizes stylistic debates during code reviews.
Tip
Use the following .rustfmt.toml configuration to ensure consistent formatting across the project.
Configuration
```toml
# Do not add trailing commas if there is only one element
trailing_comma = "Never"
# Keep braces on the same line where possible
brace_style = "SameLineWhere"
# Align struct fields if their length is below the threshold
struct_field_align_threshold = 20
# Format comments inside documentation
wrap_comments = true
format_code_in_doc_comments = true
# Do not collapse struct literals into a single line
struct_lit_single_line = false
# Maximum line width
max_width = 99
# Grouping imports
imports_granularity = "Crate"       # Group imports by crate
group_imports = "StdExternalCrate"  # Separate groups: std, external crates, local
reorder_imports = true              # Sort imports within groups
# Enable unstable features (nightly only)
unstable_features = true
```

Why is this important?
This configuration enforces clarity and consistency. It reduces unnecessary diffs in pull requests, makes code reviews easier, and ensures that both style and readability remain predictable across the team.
Examples & Further Explanation
Before formatting:

```rust
use std::fmt; use std::io; use serde::Serialize;
struct Person {name:String,age:u32}
impl Person{
pub fn new(name:String,age:u32)->Self{
Self{name,age}
}
}
```

After formatting:

```rust
use std::{fmt, io};

use serde::Serialize;

struct Person {
    name: String,
    age: u32
}

impl Person {
    pub fn new(name: String, age: u32) -> Self {
        Self {
            name,
            age
        }
    }
}
```

Notice how the imports are grouped and sorted, struct fields are aligned for readability, and braces are consistently placed on the same line. This reduces noise in diffs and makes the codebase approachable for both newcomers and experienced contributors.
Important
Use clear, descriptive names that reflect purpose. Follow snake_case for variables/functions, PascalCase for types, and SCREAMING_SNAKE_CASE for constants.
More information
Descriptive Names:

- `create_user_handler` – OK
- `create_user_service` – OK
- `create` – NO
- `create_user` – NO

Follow Rust's snake_case for variables and functions. Use PascalCase for structs and enums (e.g., `TransactionStatus`). Constants should be in SCREAMING_SNAKE_CASE.
Why Descriptive Naming?
Descriptive names reduce ambiguity, facilitate easier onboarding, and improve maintainability. Clear names make it evident what a function or variable does, avoiding misunderstandings and conflicts.
Examples & Further Explanation
For example, `create_user_handler` indicates that the function is responsible for handling user creation in a web context, whereas a generic name like `create` gives no context.
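The casing conventions above can be seen together in one small sketch; all names here are hypothetical, chosen only to illustrate the rules:

```rust
// All names are hypothetical, chosen to illustrate the casing conventions.
const MAX_LOGIN_ATTEMPTS: u32 = 5; // SCREAMING_SNAKE_CASE for constants

// PascalCase for structs and enums
#[derive(Debug, PartialEq)]
enum TransactionStatus {
    Pending,
    Confirmed,
}

// snake_case for functions; the name states the full purpose
fn create_user_handler(user_name: &str) -> String {
    format!("created handler for {user_name}")
}

fn main() {
    assert_eq!(MAX_LOGIN_ATTEMPTS, 5);
    assert_eq!(create_user_handler("alice"), "created handler for alice");
    let status = TransactionStatus::Pending;
    assert_ne!(status, TransactionStatus::Confirmed);
    println!("{status:?}");
}
```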
Important
Write clean, maintainable code. Avoid unnecessary complexity, panics, and cloning. Minimize global state and restrict use of :: to import statements. Do not use mod.rs files.
More information
- Write clean and maintainable code.
- Avoid unnecessary complexity.
- Avoid unnecessary `unwrap()` and `clone()`.
- Minimize global state and side effects.
- Use `::` only in import statements.
- Do not use `mod.rs` files.

Examples & Further Explanation
Instead of writing `some_option.unwrap()`, prefer:

```rust
let value = some_option.ok_or("Expected a value, but found None")?;
```

This propagates errors properly and avoids crashing the application. Similarly, favor organizing modules in separate `module_name.rs` files rather than using legacy `mod.rs` files, which simplifies project structure and improves module discoverability.
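As an illustration, a module layout without `mod.rs` might look like this (hypothetical crate structure):

```
src/
├── lib.rs          # declares: mod network;
├── network.rs      # module root, instead of network/mod.rs
└── network/
    ├── client.rs   # declared in network.rs as: mod client;
    └── server.rs   # declared in network.rs as: mod server;
```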
Note
Each branch, commit, and PR must correspond directly to a GitHub Issue number. This ensures automatic linking, clean history, and full traceability.
More information
Create a Branch Named Only by Issue Number:
Each branch name must be exactly the Issue number.
Example: `git checkout -b 123`

Use Auto-Linking in Commits:
To make GitHub automatically link commits to Issues, always start the commit message with `#` followed by the Issue number and a space.
Example:

```
#123 implement login session restore
#123 fix null pointer in user handler
```

Pull Request Title = Branch Name:
The PR title must be the same as the branch name (just the Issue number).
Example: `123`

Add an Auto-Close Reference:
In the PR description, always include `Closes #123`. This automatically closes the Issue when the PR is merged.
Clean Up After Merge:
Enable “Delete branch on merge” in repository settings, so merged branches are automatically removed.
The chain Issue → Branch → Commits → PR → Merge remains fully linked.

Keep the Repository Clean:
Every branch must correspond to an active Issue.
No orphaned or experimental branches should remain after merge.

Real-World Example & Explanation
Suppose you are assigned Issue `#123` to fix a login session bug. You create a branch named `123` and start committing with messages like:

```
#123 implement login session restore
#123 add retry logic for session token refresh
```

Then you open a PR titled `123` with the description `Closes #123`. When the PR is merged, GitHub automatically closes Issue #123, deletes the branch, and shows all related commits in the Issue timeline. This creates a perfectly traceable and automated workflow with minimal manual steps.
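The commit-message convention can also be checked mechanically, for example in a commit hook. A minimal sketch of such a validator (a hypothetical helper, not an existing tool):

```rust
/// Returns true when a commit message follows the "#<issue-number> <summary>"
/// convention used for GitHub auto-linking.
fn follows_issue_convention(message: &str) -> bool {
    let Some(rest) = message.strip_prefix('#') else {
        return false;
    };
    let Some((number, summary)) = rest.split_once(' ') else {
        return false;
    };
    !number.is_empty()
        && number.chars().all(|c| c.is_ascii_digit())
        && !summary.trim().is_empty()
}

fn main() {
    assert!(follows_issue_convention("#123 implement login session restore"));
    assert!(!follows_issue_convention("implement login session restore"));
    assert!(!follows_issue_convention("#abc fix things"));
    println!("all commit messages validated");
}
```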
Tip
Follow best practices to maintain high code quality.
More information
- Use `cargo clippy` for linting.
- Handle errors gracefully using `Result` and `Option`.
- Avoid unnecessary panics.
Examples & Further Explanation
Instead of writing:

```rust
let value = some_option.unwrap();
```

use:

```rust
let value = some_option.ok_or("Expected a value, but found None")?;
```

This pattern ensures errors are propagated and handled appropriately, increasing the robustness of your application.
Important
Avoid panics in production; use proper error handling with Result and the ? operator.
More information
- Avoid panics in production code.
- Discouraged: Avoid using `unwrap()` and `expect()` unless absolutely certain that an error cannot occur.
- Preferred: Use proper error handling with `Result` and the `?` operator.

Examples & Further Explanation
For example, instead of:

```rust
let config = Config::from_file("config.toml").unwrap();
```

use:

```rust
let config = Config::from_file("config.toml")
    .map_err(|e| format!("Failed to load config: {}", e))?;
```

This approach logs detailed error messages and gracefully propagates errors up the call stack, leading to a more robust and maintainable system.
Real-World Incident: Cloudflare Outage (November 2025)
On November 18, 2025, a single `.unwrap()` call in Rust code caused a massive outage across Cloudflare's 330+ datacenters. Services like ChatGPT, X, Canva, and many others went offline for approximately 3 hours.

The root cause: a configuration change caused a features file to contain more entries than expected. The Rust code checked a limit but used `unwrap()` on an error path instead of handling it gracefully. When the limit was exceeded, the code panicked with:

```
thread fl2_worker_thread panicked: called Result::unwrap() on an Err value
```

Lesson: The `.unwrap()` had been in the codebase for a long time but was never triggered until unexpected input reached that code path. This is why production code must handle all error cases explicitly.
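The failure pattern can be illustrated with a hedged sketch; the types, names, and numbers below are invented for the example and are not Cloudflare's actual code. The limit check returns an error the caller can act on (for example, keep serving with the last known-good configuration) instead of unwrapping it:

```rust
#[derive(Debug)]
enum ConfigError {
    // Hypothetical error variant for an oversized features file.
    TooManyFeatures { got: usize, limit: usize },
}

/// Rejects an oversized feature list instead of panicking on it.
fn load_features(entries: Vec<String>, limit: usize) -> Result<Vec<String>, ConfigError> {
    if entries.len() > limit {
        // Graceful path: report the violation so the caller can decide,
        // rather than calling unwrap() and taking the whole worker down.
        return Err(ConfigError::TooManyFeatures { got: entries.len(), limit });
    }
    Ok(entries)
}

fn main() {
    let oversized: Vec<String> = (0..300).map(|i| format!("feature_{i}")).collect();
    match load_features(oversized, 200) {
        Ok(_) => unreachable!("oversized config must be rejected"),
        Err(ConfigError::TooManyFeatures { got, limit }) => {
            assert_eq!(got, 300);
            assert_eq!(limit, 200);
            println!("rejected oversized config: {got} > {limit}");
        }
    }
}
```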
Important
All commits must pass pre-commit checks. Tests, formatting, linting, and security scans are enforced both locally (via pre-commit hooks) and remotely (via CI).
More information
Pre-commit Hooks
- Installed via: `cargo make install-hooks`
- Automatically run before each commit:

```shell
cargo +nightly fmt
cargo clippy -- -D warnings
cargo test --all
```

- Prevent committing unformatted code, warnings, or failing tests.
Unit Tests
- Cover public functions and error cases.
- Tests must not rely on `unwrap()` or `expect()`.

Integration Tests
- Cover public API, placed in the `tests/` directory.

Doctests
- All `///` examples must compile and pass with `cargo test --doc`.

Coverage (cargo-llvm-cov + Codecov)
- Install: `cargo install cargo-llvm-cov`
- Run locally: `cargo llvm-cov --all-features --workspace --html`
- CI configuration:

```yaml
- name: Install cargo-llvm-cov
  uses: taiki-e/install-action@cargo-llvm-cov
- name: Generate code coverage
  run: cargo llvm-cov --all-features --workspace --codecov --output-path codecov.json
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v5
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
    files: codecov.json
    fail_ci_if_error: true
```

Why cargo-llvm-cov + Codecov?
- Precision: LLVM-based instrumentation provides accurate line and branch coverage, more reliable than source-based tools
- Speed: Significantly faster than tarpaulin, especially on large codebases with many dependencies
- Native format: Direct codecov.json output without intermediate conversion steps
- Visualization: Codecov dashboard shows coverage trends over time, PR coverage diffs, and interactive sunburst charts
- PR integration: Automatic coverage reports as PR comments, showing exactly which lines are covered/uncovered
- Branch protection: Configure minimum coverage thresholds to fail CI when coverage drops
- Rust toolchain: Uses rustc's built-in instrumentation, ensuring compatibility with all Rust features
Examples & Further Explanation
```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_basic_math() {
        assert_eq!(2 + 2, 4);
    }
}
```

```rust
// tests/config_tests.rs
use my_crate::load_config;

#[test]
fn load_valid_config() {
    let result = load_config("tests/data/valid.toml");
    assert!(result.is_ok());
}
```

This workflow enforces correctness at every step: developers cannot commit broken code, and CI ensures nothing slips through at merge time.
Note
No inline code comments. All explanations live in docblocks attached to modules, structs, enums, traits, functions, and methods.
More information
No line comments in code:
Avoid `// ...` and `/* ... */` for explanations of behavior, intent, or invariants. Keep code clean and self-explanatory.

Use Rust doc comments consistently:
- Crate/module: use `//!` at the top of `lib.rs` or module files for module-level docs.
- Items (structs, enums, traits, fns, methods): use `///` on the item.

Structure docblocks for IDEs and LSPs:
Use headings Rustdoc understands so hovers and Treesitter outlines are stable and rich:
- `# Overview` – short purpose
- `# Examples` – minimal, compilable samples
- `# Errors` – precise failure modes for `Result`
- `# Panics` – only if unavoidable (should be rare)
- `# Safety` – if `unsafe` is used (shouldn't be)
- `# Performance` – if complexity or allocations matter

Write for other engineers:
Be explicit about contracts, inputs, outputs, invariants, and edge cases. Keep examples runnable. Prefer clarity over cleverness.

Keep docs close to code:
Update docblocks with code changes in the same PR. Out-of-date docs are worse than none.

Correct vs Incorrect (Rust)
Incorrect (inline comments that won’t surface in hovers):
```rust
// Calculates checksum and validates header
// Returns Err if invalid
pub fn verify(pkt: &Packet) -> Result<(), VerifyError> {
    // fast path
    if pkt.header.len() < MIN {
        return Err(VerifyError::TooShort);
    }
    // slow path...
    Ok(())
}
```

Correct (docblocks; IDE hover shows the contract):
````rust
/// # Overview
/// Verifies packet header and payload consistency.
///
/// # Examples
/// ```
/// # use mynet::{Packet, verify};
/// # fn demo(mut p: Packet) {
/// #     // prepare p...
/// #     let _ = verify(&p).unwrap();
/// # }
/// ```
///
/// # Errors
/// - `VerifyError::TooShort` when header is smaller than the required minimum.
/// - `VerifyError::ChecksumMismatch` when computed checksum differs.
pub fn verify(pkt: &Packet) -> Result<(), VerifyError> {
    if pkt.header.len() < MIN {
        return Err(VerifyError::TooShort);
    }
    // internal micro-notes for maintainers are allowed if they aid refactoring
    // (but not to explain business logic). Keep them brief.
    Ok(())
}
````

Module-level docs instead of a comment banner:
````rust
//! Cryptographic key management and signing primitives.
//!
//! Provides deterministic ECDSA with explicit domain separation.
//!
//! # Examples
//! ```
//! # use keys::{Keypair, Signer};
//! # fn demo() {
//! #     let kp = Keypair::generate();
//! #     let sig = kp.sign(b"payload");
//! #     assert!(kp.verify(b"payload", &sig).is_ok());
//! # }
//! ```
pub mod crypto { /* ... */ }
````

Real-World Rationale
This policy ensures stable IDE/LSP hovers, better Treesitter outlines, and reliable navigation. Engineers see contracts immediately, CI can lint docs, and examples stay compilable. Code remains clean while documentation remains discoverable and accurate.
Tip
Use the comprehensive code review methodology to find vulnerabilities, performance issues, and quality problems systematically.
More information
Available in two languages: English and Russian.

Quick Links:

| Topic | EN | RU |
|---|---|---|
| Quick Reference | Cheat Sheet | Шпаргалка |
| Security | Vulnerabilities | Уязвимости |
| Performance | Issues | Проблемы |
| Code Quality | Quality | Качество |
| Rust Patterns | Specifics | Специфика |
| Examples | Real Cases | Примеры |

What's Covered
Security Vulnerabilities:
- Replay attacks and authentication bypasses
- SQL/Command injections
- Secret leaks and cryptography issues
- Input validation problems
Performance Issues:
- Inefficient allocations and unnecessary cloning
- O(n^2) algorithms where O(n) is possible
- Duplicate operations and double parsing
- Blocking operations in async code
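The O(n^2)-versus-O(n) point can be sketched with two hypothetical duplicate checks over the same data; the `HashSet` version does a single pass instead of comparing every pair:

```rust
use std::collections::HashSet;

// O(n^2): compares every pair of elements.
fn has_duplicates_quadratic(items: &[u32]) -> bool {
    for (i, a) in items.iter().enumerate() {
        for b in &items[i + 1..] {
            if a == b {
                return true;
            }
        }
    }
    false
}

// O(n): a HashSet remembers what was already seen; insert() returns
// false on the first repeated value.
fn has_duplicates_linear(items: &[u32]) -> bool {
    let mut seen = HashSet::new();
    !items.iter().all(|x| seen.insert(x))
}

fn main() {
    let data = [3, 1, 4, 1, 5];
    assert!(has_duplicates_quadratic(&data));
    assert!(has_duplicates_linear(&data));
    assert!(!has_duplicates_linear(&[1, 2, 3]));
    println!("both checks agree");
}
```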
Code Quality:
- DRY violations and code duplication
- Naming and readability
- Documentation standards
- Testing coverage
Rust-Specific:
- Ownership and borrowing patterns
- Panic vs Result handling
- Unsafe code review
- Trait bounds and generics
Quick 5-Minute Checklist
Security (2 min):
- No secrets in code
- No `unwrap()`/`expect()` in production
- Input data validated
- No SQL/Command injections
Performance (1 min):
- No obvious O(n^2)
- No duplicate operations
- `Vec::with_capacity()` where needed

Quality (2 min):
- No code duplication (> 3 times)
- Functions < 50 lines
- Tests for new logic
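For the `Vec::with_capacity()` checklist item, a minimal sketch (hypothetical helper) of preallocating when the final length is known up front:

```rust
/// Builds the first n squares with a single allocation instead of
/// letting the Vec grow and reallocate repeatedly.
fn squares(n: usize) -> Vec<u64> {
    let mut out = Vec::with_capacity(n);
    for i in 0..n as u64 {
        out.push(i * i);
    }
    out
}

fn main() {
    let v = squares(5);
    assert_eq!(v, vec![0, 1, 4, 9, 16]);
    // The capacity was reserved up front, so no reallocation happened.
    assert!(v.capacity() >= 5);
    println!("{v:?}");
}
```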
Important
Use cargo-chef for Docker layer caching and registry cache for CI. This dramatically reduces build times for unchanged dependencies.
More information
The Problem:
- Rust compilation is slow, especially for large dependency trees
- Docker rebuilds everything when any file changes
- `--mount=type=cache` doesn't persist between CI runners
- Each CI run starts from scratch without proper caching
The Solution: cargo-chef + Registry Cache
- cargo-chef separates dependency compilation from source compilation
- Registry cache persists Docker layers between CI runs
- Dependencies are cached as a separate layer that only rebuilds when Cargo.toml/Cargo.lock change
Dockerfile Pattern
```dockerfile
# syntax=docker/dockerfile:1
ARG RUST_VERSION=1.83.0

# Chef stage - install cargo-chef
FROM rust:${RUST_VERSION} AS chef
RUN cargo install cargo-chef --locked
WORKDIR /app

# Planner - create recipe from dependencies only
FROM chef AS planner
COPY Cargo.toml Cargo.lock ./
COPY my-crate/Cargo.toml my-crate/
COPY crates crates/
RUN cargo chef prepare --recipe-path recipe.json

# Builder - build dependencies, then source
FROM chef AS builder
# Build dependencies (cached if recipe.json unchanged)
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json
# Build application (only this layer rebuilds on code changes)
COPY . .
RUN cargo build --release && strip target/release/my-binary

# Runtime - minimal image
FROM debian:bookworm-slim
COPY --from=builder /app/target/release/my-binary /usr/local/bin/
CMD ["my-binary"]
```

Key Points:
- Planner stage copies only Cargo.toml files (not source code)
- `cargo chef prepare` creates recipe.json from dependencies
- `cargo chef cook` compiles dependencies; this layer is cached
- Source code is copied after dependencies are built
- Only the final `cargo build` recompiles when code changes

GitHub Actions CI Pattern
```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_TOKEN }}
      - name: Build image
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ./Dockerfile
          push: false
          load: true
          tags: ${{ env.REGISTRY }}/my-image:${{ env.TAG }}
          cache-from: |
            type=registry,ref=${{ env.REGISTRY }}/my-image:cache
          cache-to: |
            type=registry,ref=${{ env.REGISTRY }}/my-image:cache,mode=max
```

Key Points:
- `cache-from` pulls cached layers from registry before build
- `cache-to` pushes new cache layers after build
- `mode=max` caches all intermediate layers (not just final)
- Cache tag is separate from image tags (e.g., `:cache`)
- Works across different CI runners and branches
What NOT to Do
Don't use `--mount=type=cache` for CI:

```dockerfile
# BAD - cache doesn't persist between CI runners
RUN --mount=type=cache,target=/usr/local/cargo/registry \
    cargo build --release
```

Don't copy all source files before dependencies:
```dockerfile
# BAD - any file change invalidates dependency cache
COPY . .
RUN cargo build --release
```

Don't use GHA cache for large Rust builds:
```yaml
# BAD - GHA cache has 10GB limit, Rust target/ easily exceeds it
- uses: actions/cache@v4
  with:
    path: target/
    key: rust-${{ hashFiles('Cargo.lock') }}
```

Performance Impact
| Scenario | Without Caching | With cargo-chef + Registry Cache |
|---|---|---|
| First build | 15-30 min | 15-30 min |
| Code change only | 15-30 min | 2-5 min |
| Dependency change | 15-30 min | 15-30 min |
| No changes | 15-30 min | 30 sec - 1 min |

The key insight: most CI runs only change application code, not dependencies. With proper caching, these builds skip 90%+ of compilation time.
Tip
Beyond basic tests and lints, professional Rust projects should include license compliance, API stability, MSRV verification, and dependency auditing in CI.
More information
| Tool | Purpose | When to Use |
|---|---|---|
| `cargo-deny` | License compliance, duplicate deps, security advisories | Any project with dependencies |
| `cargo-semver-checks` | Detect breaking API changes | Libraries published to crates.io |
| MSRV check | Verify minimum supported Rust version | Projects with `rust-version` in Cargo.toml |
| `cargo-machete` | Find unused dependencies | Reduce bloat, faster builds |
| Doctests | Verify documentation examples compile | Projects with `///` doc comments |
| `cargo-quality` | Code quality with hardcoded standards | Zero-config quality enforcement |
| `rust-diff-analyzer` | Semantic PR size analysis | Enforce reviewable PR sizes |
| `sql-query-analyzer` | SQL static analysis + LLM optimization | Projects with SQL queries |

cargo-deny: License & Security
Installation:
`cargo install cargo-deny`

Configuration (`deny.toml`):

```toml
[advisories]
db-path = "~/.cargo/advisory-db"
vulnerability = "deny"
unmaintained = "warn"
yanked = "deny"

[licenses]
allow = ["MIT", "Apache-2.0", "BSD-3-Clause", "ISC", "Zlib"]
copyleft = "deny"
unlicensed = "deny"

[bans]
multiple-versions = "warn"
wildcards = "deny"

[sources]
unknown-registry = "deny"
unknown-git = "deny"
```

CI Integration:

```yaml
- name: Check licenses and advisories
  run: cargo deny check
```

Why it matters:
- Prevents accidental GPL/AGPL dependencies in MIT projects
- Catches known security vulnerabilities (RustSec)
- Warns about duplicate dependency versions (bloat)
cargo-semver-checks: API Stability
Installation:
`cargo install cargo-semver-checks`

Usage:

```shell
# Compare against last published version
cargo semver-checks check-release

# Compare against specific version
cargo semver-checks check-release --baseline-version 1.2.0
```

CI Integration:

```yaml
- name: Check semver compliance
  if: github.event_name == 'pull_request'
  run: |
    cargo install cargo-semver-checks
    cargo semver-checks check-release
```

What it catches:
- Removing public functions/types (breaking)
- Changing function signatures (breaking)
- Adding required fields to structs (breaking)
- Changing enum variants (breaking)
When to use: Any library published to crates.io where users depend on your API.
MSRV Check: Minimum Supported Rust Version
In Cargo.toml:
```toml
[package]
rust-version = "1.83"  # MSRV
edition = "2024"
```

CI Integration:
```yaml
jobs:
  msrv:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - name: Extract MSRV
        id: msrv
        run: |
          MSRV=$(grep '^rust-version' Cargo.toml | sed 's/.*"\(.*\)"/\1/')
          echo "version=$MSRV" >> $GITHUB_OUTPUT
      - uses: dtolnay/rust-toolchain@master
        with:
          toolchain: ${{ steps.msrv.outputs.version }}
      - run: cargo check --all-features
```

Why it matters:
- Edition 2024 requires Rust 1.85+
- Users on older Rust versions get clear errors
- Prevents accidental use of newer features
cargo-machete: Unused Dependencies
Installation:
`cargo install cargo-machete`

Usage:
`cargo machete`

CI Integration:
```yaml
- name: Check for unused dependencies
  run: |
    cargo install cargo-machete
    cargo machete
```

Benefits:
- Faster compile times
- Smaller binary size
- Reduced attack surface
- Cleaner dependency tree
Doctests: Documentation Examples
Run doctests explicitly:
`cargo test --doc`

CI Integration:
```yaml
- name: Run doctests
  run: cargo test --doc --all-features
```

Example doctest:
````rust
/// Calculates the sum of two numbers.
///
/// # Examples
///
/// ```
/// use mylib::add;
/// assert_eq!(add(2, 3), 5);
/// ```
pub fn add(a: i32, b: i32) -> i32 {
    a + b
}
````

Why separate doctests:
- `cargo test` runs unit + integration + doc tests together
- Doctests often need different feature flags
- Faster feedback when docs change but code doesn't
cargo-quality: Zero-Config Quality Enforcement
The Problem:
- Teams scatter `.rustfmt.toml` and `.clippy.toml` across every repository
- Different projects have different standards
- New developers don't know which rules apply
The Solution: All standards hardcoded into a single binary. Install once, use everywhere.
Installation:
`cargo install cargo-quality`

Commands:
```shell
cargo qual check src/      # Analyze without changes
cargo qual fix --dry-run   # Preview fixes
cargo qual fix             # Apply fixes
cargo qual fmt             # Format (max_width: 99)
```

Four Analyzers:
| Analyzer | Detects | Auto-fix |
|---|---|---|
| `path_import` | Direct module paths that should be imports | Yes |
| `format_args` | Positional args in format macros | Yes |
| `empty_lines` | Empty lines in functions (complexity smell) | Yes |
| `inline_comments` | Comments that should be doc blocks | No |

CI Integration:
```yaml
- uses: RAprogramm/cargo-quality@v0
  with:
    path: 'src/'
    fail_on_issues: 'true'
    post_comment: 'true'
```

Why cargo-quality:
- Single source of truth for all repositories
- Catches patterns rustfmt/clippy miss (architectural issues)
- 86% test coverage, benchmarked performance
rust-diff-analyzer: Semantic PR Analysis
The Problem:
- Line count limits are meaningless (500 lines of tests ≠ 500 lines of prod)
- Large PRs hide bugs and slow reviews
- Test code shouldn't count toward PR size
The Solution: AST-based analysis that understands Rust code semantics.
Installation:
`cargo install rust-diff-analyzer`

Usage:
```shell
git diff main | rust-diff-analyzer
rust-diff-analyzer --diff-file changes.diff --max-units 50
```

Weighted Scoring:
| Unit Type | Public | Private |
|---|---|---|
| Function | 3 | 1 |
| Struct | 3 | 1 |
| Trait | 4 | 4 |
| Impl Block | 2 | 2 |

Smart Classification:
- `tests/`, `benches/`, `examples/` → test code (excluded)
- `#[test]`, `#[cfg(test)]` → test code (excluded)
- Everything else → production code (counts toward limits)
CI Integration:
```yaml
- uses: RAprogramm/rust-prod-diff-checker@v1
  with:
    max_prod_units: 30
    max_weighted_score: 100
    fail_on_exceed: 'true'
    post_comment: 'true'
```

Why semantic analysis:
- 100 lines of tests ≠ 100 lines of business logic
- Public API changes need more review than private helpers
- Data-driven PR size governance
sql-query-analyzer: SQL Static Analysis
The Problem:
- SQL bugs discovered in production (missing indexes, N+1 queries)
- Security issues (UPDATE without WHERE) slip through review
- No schema-aware analysis in existing tools
The Solution: 18 deterministic rules + optional LLM-powered optimization.
Installation:
`cargo install sql-query-analyzer`

Usage:
```shell
# Static analysis (instant, no API key)
sql-query-analyzer analyze -s schema.sql -q queries.sql

# SARIF for GitHub Code Scanning
sql-query-analyzer analyze -s schema.sql -q queries.sql -f sarif > results.sarif
```

18 Built-in Rules:
| Category | Rules | Examples |
|---|---|---|
| Performance (11) | PERF001-011 | Unbounded SELECT, leading wildcards, N+1 |
| Security (2) | SEC001-002 | UPDATE/DELETE without WHERE |
| Style (2) | STYLE001-002 | SELECT *, missing table aliases |
| Schema (3) | SCHEMA001-003 | Missing indexes, invalid columns |

CI Integration:
```yaml
- uses: RAprogramm/sql-query-analyzer@v1
  with:
    schema: db/schema.sql
    queries: db/queries.sql
    upload-sarif: 'true'
    post-comment: 'true'
```

Why sql-query-analyzer:
- Schema-aware (knows your indexes and columns)
- Catches N+1 patterns before production
- ~1000 queries in <100ms (rayon parallelism)
Links: GitHub
Complete CI Quality Gate
```yaml
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: dtolnay/rust-toolchain@stable
        with:
          components: clippy, rustfmt
      - name: Format check
        run: cargo +nightly fmt -- --check
      - name: Clippy
        run: cargo clippy --all-targets -- -D warnings
      - name: Tests
        run: cargo test --all-features
      - name: Doctests
        run: cargo test --doc --all-features
      - name: Unused dependencies
        run: |
          cargo install cargo-machete
          cargo machete
      - name: License & security
        run: |
          cargo install cargo-deny
          cargo deny check
      # Code quality (architectural patterns)
      - uses: RAprogramm/cargo-quality@v0
        with:
          fail_on_issues: 'true'
          post_comment: 'true'

  msrv:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: dtolnay/rust-toolchain@1.83.0
      - run: cargo check --all-features

  semver:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: dtolnay/rust-toolchain@stable
      - run: |
          cargo install cargo-semver-checks
          cargo semver-checks check-release

  pr-size:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
        with:
          fetch-depth: 0
      - uses: RAprogramm/rust-prod-diff-checker@v1
        with:
          max_prod_units: 30
          max_weighted_score: 100
          fail_on_exceed: 'true'
          post_comment: 'true'

  sql-analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - uses: RAprogramm/sql-query-analyzer@v1
        with:
          schema: db/schema.sql
          queries: db/queries/
          upload-sarif: 'true'
```
Important
Use a single CI workflow file with multiple jobs instead of multiple separate workflow files. This provides better control, visibility, and resource management.
More information
The Problem with Multiple Workflows:
- No way to synchronize jobs between different workflows
- Cannot define dependencies (Job C runs after Job A and B)
- Harder to manage concurrency and cancellation
- Duplicated trigger configuration across files
- Scattered CI logic makes debugging difficult
- Multiple workflow runs for same commit consume more resources
The Solution: Single Workflow with Multiple Jobs
- One workflow file contains all CI/CD logic
- Jobs handle different tasks (test, build, deploy)
- `needs` keyword defines job dependencies
- Reusable workflows (`_*.yml`) extract common patterns
- Concurrency groups prevent duplicate runs
Architecture Pattern
```
.github/workflows/
├── ci.yml               # Main CI workflow (triggers on push/PR)
├── _build-service.yml   # Reusable: build Docker image
├── _deploy-service.yml  # Reusable: deploy to k8s
└── _quality-check.yml   # Reusable: run tests/lints
```

Key principle: Files starting with `_` are reusable workflows called via `uses:`. Only `ci.yml` defines triggers.

Job Dependencies with `needs`
```yaml
jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      api: ${{ steps.filter.outputs.api }}
      client: ${{ steps.filter.outputs.client }}

  quality-check:
    needs: [detect-changes]
    if: needs.detect-changes.outputs.api == 'true'

  build-api:
    needs: [detect-changes, quality-check]
    if: |
      always() &&
      needs.detect-changes.outputs.api == 'true' &&
      needs.quality-check.result == 'success'

  deploy-api:
    needs: [build-api]
    if: needs.build-api.result == 'success'
```

Key Points:
- `needs` creates a dependency chain
- Jobs run in parallel unless `needs` enforces order
- Use `if: always()` to run even if dependencies were skipped
- Check `needs.<job>.result` for conditional execution

Concurrency Control
```yaml
name: CI/CD Pipeline

on:
  push:
    branches: [main]
  pull_request:

concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true
```

What this does:
- Groups runs by branch/PR (`github.ref`)
- New push cancels previous running workflow
- Prevents wasted resources on outdated commits
- Only one active run per branch at a time
Reusable Workflows
Main workflow calls reusable:
```yaml
# ci.yml
jobs:
  build-api:
    uses: ./.github/workflows/_build-service.yml
    with:
      service_name: api-server
      dockerfile: ./api-server/Dockerfile
    secrets:
      registry_token: ${{ secrets.REGISTRY_TOKEN }}
```

Reusable workflow definition:
```yaml
# _build-service.yml
name: Build Service

on:
  workflow_call:
    inputs:
      service_name:
        required: true
        type: string
      dockerfile:
        required: true
        type: string
    secrets:
      registry_token:
        required: true
    outputs:
      image_tag:
        value: ${{ jobs.build.outputs.tag }}

jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      tag: ${{ steps.meta.outputs.tag }}
    steps:
      # ... build logic
```

Benefits:
- DRY: same build logic for all services
- Inputs/outputs for configuration
- Secrets passed explicitly (security)
- Easy to update in one place
Independent Service Builds
```yaml
jobs:
  build-api:
    needs: [detect-changes, quality-api]
    if: needs.detect-changes.outputs.api == 'true'
    uses: ./.github/workflows/_build-service.yml

  build-client:
    needs: [detect-changes, quality-client]
    if: needs.detect-changes.outputs.client == 'true'
    uses: ./.github/workflows/_build-service.yml

  deploy-api:
    needs: [build-api]  # Only depends on its own build
    if: needs.build-api.result == 'success'

  deploy-client:
    needs: [build-client]  # Independent from api
    if: needs.build-client.result == 'success'
```

Key principle: Each service's deploy depends only on its own build, not on other services. If the api-server build fails, the client can still deploy.
What NOT to Do
Don't create separate workflow files for each task:
```
# BAD - no synchronization possible
.github/workflows/
├── test.yml
├── build-api.yml
├── build-client.yml
├── deploy-api.yml
├── deploy-client.yml
└── cleanup.yml
```

Don't make all deploys depend on all builds:
```yaml
# BAD - client waits for api even if unrelated
deploy-client:
  needs: [build-api, build-client, build-worker]
```

Don't skip concurrency control:
```yaml
# BAD - multiple runs waste resources
on:
  push:
    branches: [main]
# Missing: concurrency group
```

Complete Example Structure
```yaml
name: CI/CD Pipeline

on:
  push:
    branches: [main]
  workflow_dispatch:
    inputs:
      deploy_all:
        type: boolean
        default: false

concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true

jobs:
  # 1. Detect what changed
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      api: ${{ steps.filter.outputs.api }}
      client: ${{ steps.filter.outputs.client }}
    steps:
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            api:
              - 'api-server/**'
            client:
              - 'client/**'

  # 2. Quality gates (parallel)
  quality-api:
    needs: [detect-changes]
    if: needs.detect-changes.outputs.api == 'true'
    uses: ./.github/workflows/_quality-check.yml

  quality-client:
    needs: [detect-changes]
    if: needs.detect-changes.outputs.client == 'true'
    uses: ./.github/workflows/_quality-check.yml

  # 3. Build (after quality)
  build-api:
    needs: [detect-changes, quality-api]
    if: needs.quality-api.result == 'success'
    uses: ./.github/workflows/_build-service.yml

  build-client:
    needs: [detect-changes, quality-client]
    if: needs.quality-client.result == 'success'
    uses: ./.github/workflows/_build-service.yml

  # 4. Deploy (independent per service)
  deploy-api:
    needs: [build-api]
    if: needs.build-api.result == 'success'
    uses: ./.github/workflows/_deploy-service.yml

  deploy-client:
    needs: [build-client]
    if: needs.build-client.result == 'success'
    uses: ./.github/workflows/_deploy-service.yml
```
Following these guidelines ensures that our Rust code is high-quality, maintainable, and scalable.