
feat: added initial live DAST server implementation #5772

Open
wants to merge 29 commits into base: dev

Conversation

@Ice3man543 (Member) commented Oct 26, 2024

Proposed changes

Added initial DAST live server implementation (experimental)

Checklist

  • Pull request is created against the dev branch
  • All checks passed (lint, unit/integration/regression tests etc.) with my changes
  • I have added tests that prove my fix is effective or that my feature works
  • I have added necessary documentation (if appropriate)

Summary by CodeRabbit

  • New Features

    • Introduced Dynamic Application Security Testing (DAST) capabilities, including server mode and reporting options.
    • Added deduplication functionality for HTTP requests.
    • Enhanced fuzzing options with in-scope and out-of-scope URL regex patterns.
    • Added a new HTML template for displaying DAST scan reports.
    • Implemented a statistics tracking module for fuzzing events.
  • Bug Fixes

    • Improved error handling and validation for various configuration options.
  • Documentation

    • Updated documentation to reflect new features and enhancements.
  • Tests

    • Expanded test coverage for timing dependency detection and other features.
  • Chores

    • Updated dependencies and cleaned up unused code.

@Ice3man543 Ice3man543 marked this pull request as ready for review November 13, 2024 18:03
@auto-assign auto-assign bot requested a review from dwisiswant0 November 13, 2024 18:03
@ehsandeep ehsandeep marked this pull request as draft November 19, 2024 16:36
@Ice3man543 Ice3man543 marked this pull request as ready for review November 26, 2024 11:09

coderabbitai bot commented Nov 26, 2024

Walkthrough

This pull request introduces significant enhancements to the Nuclei framework, focusing on Dynamic Application Security Testing (DAST) capabilities. The changes span multiple packages and introduce new features for fuzzing, server-side operations, and statistics tracking. Key additions include a DAST server mode, improved fuzzing scope management, enhanced statistics tracking, and more flexible input handling across various formats. The modifications aim to provide more granular control over security testing processes and improve the overall extensibility of the Nuclei tool.

Changes

File/Group Change Summary
cmd/nuclei/main.go Added DAST server and fuzzing-related flags, integrated name generator
go.mod Updated dependencies, added new packages for DAST and name generation
internal/runner/* Enhanced configuration handling, added DAST option validation
internal/server/* Introduced DAST server components, request handling, and scope management
pkg/fuzz/* Improved fuzzing statistics tracking, analyzers, and component handling
pkg/input/formats/* Modified input parsing to support io.Reader for more flexible input
pkg/protocols/* Added fuzzing statistics tracking, updated HTTP client configurations
pkg/types/types.go Added new options for DAST server and fuzzing scope

Sequence Diagram

sequenceDiagram
    participant User
    participant NucleiCLI
    participant DASTServer
    participant ScopeManager
    participant FuzzStatistics
    participant HTTPExecutor

    User->>NucleiCLI: Start with DAST server options
    NucleiCLI->>DASTServer: Initialize server
    DASTServer->>ScopeManager: Create scope rules
    User->>DASTServer: Submit HTTP requests
    DASTServer->>ScopeManager: Validate request scope
    alt Request in scope
        DASTServer->>HTTPExecutor: Execute request
        HTTPExecutor->>FuzzStatistics: Record component events
        HTTPExecutor->>FuzzStatistics: Record result events
    else Request out of scope
        DASTServer->>User: Skip request
    end
    DASTServer->>User: Provide scan results

Poem

🐰 Fuzzy Rabbit's DAST Delight

In Nuclei's realm, a new light shines bright,
DAST server dancing with scopes so tight,
Payloads hopping, statistics take flight,
Testing web paths with algorithmic might,
Security leaping to a whole new height! 🕵️‍♀️🔍


🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


@coderabbitai (bot) left a comment


Actionable comments posted: 32

🧹 Outside diff range and nitpick comments (55)
pkg/input/formats/yaml/multidoc_test.go (1)

21-24: LGTM with a minor suggestion for error message.

The file handling implementation follows Go best practices with proper error checking and resource cleanup.

Consider simplifying the error message format:

-require.Nilf(t, err, "error opening proxify input file: %v", err)
+require.Nilf(t, err, "failed to open proxify input file: %v", err)
pkg/input/formats/burp/burp_test.go (1)

22-25: Consider enhancing error handling after Parse.

While the changes look good, consider using require.NoError instead of the current if-fatal pattern for consistency with the error handling style used above.

-	err = format.Parse(file, func(request *types.RequestResponse) bool {
-		gotMethodsToURLs = append(gotMethodsToURLs, request.URL.String())
-		return false
-	}, proxifyInputFile)
-	if err != nil {
-		t.Fatal(err)
-	}
+	err = format.Parse(file, func(request *types.RequestResponse) bool {
+		gotMethodsToURLs = append(gotMethodsToURLs, request.URL.String())
+		return false
+	}, proxifyInputFile)
+	require.NoError(t, err, "failed to parse burp file")
internal/server/scope/scope_test.go (1)

10-11: Consider enhancing test structure and naming.

While the test follows Go testing best practices, consider making the test cases more explicit:

  1. Rename the subtest from "url" to something more specific like "should_allow_matching_paths_and_reject_excluded_patterns"
  2. Consider adding more test cases for edge cases
 func TestManagerValidate(t *testing.T) {
-	t.Run("url", func(t *testing.T) {
+	t.Run("should_allow_matching_paths_and_reject_excluded_patterns", func(t *testing.T) {
pkg/input/formats/swagger/swagger_test.go (1)

18-20: LGTM! Consider simplifying the error message.

The file handling implementation follows Go best practices with proper resource management and error handling.

Consider simplifying the error message as the error is already included:

-require.Nilf(t, err, "error opening proxify input file: %v", err)
+require.Nilf(t, err, "failed to open proxify input file")
pkg/authprovider/authx/strategy.go (1)

35-37: Ensure consistent behavior with Apply method

The ApplyOnRR implementation mirrors Apply but has the same potential issues. Additionally, since these methods share similar logic, consider extracting the common pattern.

Consider creating a helper method to reduce code duplication:

+func (d *DynamicAuthStrategy) applyStrategies(apply func(AuthStrategy)) {
+	strategies := d.Dynamic.GetStrategies()
+	if strategies == nil {
+		return
+	}
+	for _, s := range strategies {
+		if s == nil {
+			continue
+		}
+		apply(s)
+	}
+}

 func (d *DynamicAuthStrategy) Apply(req *http.Request) {
-	strategy := d.Dynamic.GetStrategies()
-	for _, s := range strategy {
-		s.Apply(req)
-	}
+	d.applyStrategies(func(s AuthStrategy) {
+		s.Apply(req)
+	})
 }

 func (d *DynamicAuthStrategy) ApplyOnRR(req *retryablehttp.Request) {
-	strategy := d.Dynamic.GetStrategies()
-	for _, s := range strategy {
-		s.ApplyOnRR(req)
-	}
+	d.applyStrategies(func(s AuthStrategy) {
+		s.ApplyOnRR(req)
+	})
 }
pkg/input/formats/openapi/openapi.go (1)

Line range hint 36-40: Consider enhancing error handling for io.Reader specific failures

While the implementation correctly uses LoadFromIoReader, consider adding specific error handling for io.Reader failures (e.g., network timeouts, connection issues) to provide better error context in a live DAST environment.

 func (j *OpenAPIFormat) Parse(input io.Reader, resultsCb formats.ParseReqRespCallback, filePath string) error {
 	loader := openapi3.NewLoader()
-	schema, err := loader.LoadFromIoReader(input)
+	data, err := io.ReadAll(input)
+	if err != nil {
+		return errors.Wrap(err, "could not read openapi schema")
+	}
+	schema, err := loader.LoadFromData(data)
 	if err != nil {
 		return errors.Wrap(err, "could not decode openapi 3.0 schema")
 	}
pkg/fuzz/stats/db_test.go (2)

13-22: Enhance test coverage for inserted fields

While the test verifies the insertion succeeds, it doesn't validate that all fields were correctly stored in the database.

Consider adding verification for all fields:

 require.NoError(t, err)
+
+ // Verify all fields were stored correctly
+ var event FuzzingEvent
+ err = db.db.QueryRow(`
+     SELECT url, site_name, template_id, component_type, 
+            component_name, payload_sent, status_code 
+     FROM components WHERE id = 1
+ `).Scan(&event.URL, &event.SiteName, &event.TemplateID,
+         &event.ComponentType, &event.ComponentName,
+         &event.PayloadSent, &event.StatusCode)
+ require.NoError(t, err)
+ require.Equal(t, "http://localhost:8080/login", event.URL)
+ require.Equal(t, "localhost:8080", event.SiteName)
+ require.Equal(t, "apache-struts2-001", event.TemplateID)
+ require.Equal(t, "path", event.ComponentType)
+ require.Equal(t, "/login", event.ComponentName)
+ require.Equal(t, "/login'\"><", event.PayloadSent)
+ require.Equal(t, 401, event.StatusCode)

29-39: Refactor duplicate test data and verify matched status

The test has two issues:

  1. Test data is duplicated from the first insertion
  2. The matched status isn't verified

Consider these improvements:

+    // Define test data once
+    testEvent := FuzzingEvent{
+        URL:           "http://localhost:8080/login",
+        SiteName:      "localhost:8080",
+        TemplateID:    "apache-struts2-001",
+        ComponentType: "path",
+        ComponentName: "/login",
+        PayloadSent:   "/login'\"><",
+        StatusCode:    401,
+    }
+
+    // Test unmatched insertion
+    err = db.InsertComponent(testEvent)
+    require.NoError(t, err)
+
+    // Test matched insertion
+    testEvent.Matched = true
+    err = db.InsertMatchedRecord(testEvent)
+    require.NoError(t, err)
+
+    // Verify matched status
+    var matched bool
+    err = db.db.QueryRow("SELECT matched FROM matched_records WHERE id = 1").Scan(&matched)
+    require.NoError(t, err)
+    require.True(t, matched)
internal/server/scope/extensions.go (1)

13-33: Improve maintainability with structured extension groups.

The extension map would be more maintainable if organized using constants or variables for each category.

Consider restructuring like this:

+// Common file extensions that are typically not interesting for security scanning
+var (
+	imageExtensions = map[string]struct{}{
+		".jpg": {}, ".jpeg": {}, ".png": {}, ".gif": {}, ".bmp": {}, ".tiff": {}, ".ico": {},
+	}
+	mediaExtensions = map[string]struct{}{
+		".mp4": {}, ".avi": {}, ".mov": {}, ".wmv": {}, ".flv": {}, ".mkv": {}, ".webm": {},
+	}
+	// ... other categories
+)
+
+// excludedExtensions combines all extension categories
+var excludedExtensions = make(map[string]struct{})
+
+func init() {
+	// Populate excludedExtensions from categories
+	for ext := range imageExtensions {
+		excludedExtensions[ext] = struct{}{}
+	}
+	// ... add other categories
+}
pkg/input/formats/burp/burp.go (3)

39-42: Enhance error context with file information

When wrapping the error, consider including the format name to provide better context for debugging.

 items, err := burpxml.Parse(input, true)
 if err != nil {
-    return errors.Wrap(err, "could not decode burp xml schema")
+    return errors.Wrap(err, "burp: could not decode xml schema")
 }

Line range hint 56-56: Address the TODO comment regarding callback handling

The comment indicates that the callback's return value should be handled, but it's currently being ignored. This could lead to issues if the callback is designed to signal processing errors or early termination.

Here's a suggested implementation:

-        resultsCb(rawRequest) // TODO: Handle false and true from callback
+        if !resultsCb(rawRequest) {
+            // Stop processing if callback returns false
+            return nil
+        }

Would you like me to help implement proper callback handling or create an issue to track this?


Line range hint 44-56: Consider adding error context for base64 decoding failures

When base64 decoding fails, it would be helpful to know which item in the burp file caused the failure.

 binx, err := base64.StdEncoding.DecodeString(item.Request.Raw)
 if err != nil {
-    return errors.Wrap(err, "could not decode base64")
+    return errors.Wrapf(err, "could not decode base64 for request at URL: %s", item.Url)
 }
pkg/input/formats/openapi/openapi_test.go (1)

45-48: Simplify error message in require.Nilf

The error message includes the error value twice: once in the format string and once as an argument. Consider simplifying it.

-require.Nilf(t, err, "error opening proxify input file: %v", err)
+require.Nilf(t, err, "error opening proxify input file")
pkg/input/formats/json/json.go (1)

48-48: Consider documenting or removing the unused filePath parameter.

The filePath parameter is currently unused in the implementation. Either:

  1. Document its intended future use with a TODO comment, or
  2. Remove it if it's not needed
-func (j *JSONFormat) Parse(input io.Reader, resultsCb formats.ParseReqRespCallback, filePath string) error {
+func (j *JSONFormat) Parse(input io.Reader, resultsCb formats.ParseReqRespCallback) error {
pkg/input/formats/swagger/swagger.go (1)

43-44: Add validation for empty filePath parameter

The extension checking logic assumes filePath will always have an extension. Consider adding validation to handle cases where filePath might be empty or without an extension.

+if filePath == "" {
+    return errors.New("filePath parameter is required")
+}
 ext := path.Ext(filePath)
+if ext == "" {
+    return errors.New("could not determine file format: missing file extension")
+}

Also applies to: 46-47

pkg/input/formats/yaml/multidoc.go (2)

48-48: Remove unused parameter

The filePath parameter is not used in the implementation. Consider either:

  1. Removing it if not needed
  2. Adding a comment explaining why it's required for interface compatibility

Line range hint 49-61: Fix incorrect error message in YAML parser

The error message mentions "json file" but this is a YAML parser.

Apply this fix:

- return errors.Wrap(err, "could not decode json file")
+ return errors.Wrap(err, "could not decode yaml file")
pkg/input/formats/json/json_test.go (1)

50-53: Consider standardizing error handling approach

For consistency with the error handling style used above, consider replacing the manual error check with require.NoError.

-	if err != nil {
-		t.Fatal(err)
-	}
+	require.NoError(t, err)

Also applies to: 55-57

internal/server/scope/scope.go (4)

1-8: Improve attribution comment

The comment "From Katana" should be more specific, including the source repository URL and license information if applicable.

-// From Katana
+// Adapted from Katana (https://github.com/projectdiscovery/katana)
+// Licensed under MIT License

10-15: Enhance type documentation

While the struct is well-designed, the documentation could be more detailed to explain the purpose of each field.

-// Manager manages scope for crawling process
+// Manager manages URL scope validation for DAST crawling process.
+// It maintains lists of regular expressions for in-scope and out-of-scope URLs,
+// with a flag to indicate if scope checking is disabled.
 type Manager struct {
+	// inScope contains compiled regex patterns for URLs that should be included
 	inScope    []*regexp.Regexp
+	// outOfScope contains compiled regex patterns for URLs that should be excluded
 	outOfScope []*regexp.Regexp
+	// noScope indicates whether scope checking is disabled (true when no patterns are defined)
 	noScope    bool
 }

17-39: Refactor duplicate regex compilation logic

The regex compilation logic is duplicated for inScope and outOfScope. Consider extracting this into a helper function.

+func compilePatterns(patterns []string) ([]*regexp.Regexp, error) {
+	var compiled []*regexp.Regexp
+	for _, regex := range patterns {
+		pattern, err := regexp.Compile(regex)
+		if err != nil {
+			return nil, fmt.Errorf("could not compile regex %s: %s", regex, err)
+		}
+		compiled = append(compiled, pattern)
+	}
+	return compiled, nil
+}

 func NewManager(inScope, outOfScope []string) (*Manager, error) {
 	manager := &Manager{}
 
-	for _, regex := range inScope {
-		if compiled, err := regexp.Compile(regex); err != nil {
-			return nil, fmt.Errorf("could not compile regex %s: %s", regex, err)
-		} else {
-			manager.inScope = append(manager.inScope, compiled)
-		}
+	var err error
+	manager.inScope, err = compilePatterns(inScope)
+	if err != nil {
+		return nil, err
 	}
-	for _, regex := range outOfScope {
-		if compiled, err := regexp.Compile(regex); err != nil {
-			return nil, fmt.Errorf("could not compile regex %s: %s", regex, err)
-		} else {
-			manager.outOfScope = append(manager.outOfScope, compiled)
-		}
+	
+	manager.outOfScope, err = compilePatterns(outOfScope)
+	if err != nil {
+		return nil, err
 	}

59-77: Optimize validation logic with early returns

The validation logic can be made more efficient with early returns and avoiding unnecessary variable declaration.

 func (m *Manager) validateURL(URL string) (bool, error) {
 	for _, item := range m.outOfScope {
 		if item.MatchString(URL) {
 			return false, nil
 		}
 	}
 	if len(m.inScope) == 0 {
 		return true, nil
 	}
 
-	var inScopeMatched bool
 	for _, item := range m.inScope {
 		if item.MatchString(URL) {
-			inScopeMatched = true
-			break
+			return true, nil
 		}
 	}
-	return inScopeMatched, nil
+	return false, nil
 }
pkg/fuzz/stats/stats.go (3)

41-53: Add field documentation to FuzzingEvent struct

Consider adding documentation for each field to improve code maintainability.

 type FuzzingEvent struct {
+    // URL is the target URL being fuzzed
     URL           string
+    // ComponentType represents the type of component being tested
     ComponentType string
+    // ComponentName is the identifier of the component
     ComponentName string
     // ... (continue for other fields)
 }

65-81: Enhance URL parsing robustness

The current implementation has a few areas for improvement:

  1. Silent error handling might hide issues
  2. No input validation for empty/malformed URLs

Consider this enhanced implementation:

-func getCorrectSiteName(originalURL string) string {
+func getCorrectSiteName(originalURL string) (string, error) {
+    if originalURL == "" {
+        return "", fmt.Errorf("empty URL provided")
+    }
+
     parsed, err := url.Parse(originalURL)
     if err != nil {
-        return ""
+        return "", fmt.Errorf("failed to parse URL: %w", err)
     }
 
     // Site is the host:port combo
     siteName := parsed.Host
+    if siteName == "" {
+        return "", fmt.Errorf("no host found in URL")
+    }
+
     if parsed.Port() == "" {
         if parsed.Scheme == "https" {
             siteName = fmt.Sprintf("%s:443", siteName)
         } else if parsed.Scheme == "http" {
             siteName = fmt.Sprintf("%s:80", siteName)
         }
     }
-    return siteName
+    return siteName, nil
 }

1-81: Consider adding metrics and monitoring capabilities

While the current implementation provides basic statistics tracking, consider enhancing it with:

  1. Metrics collection for monitoring fuzzing performance
  2. Rate limiting or throttling mechanisms
  3. Periodic statistics aggregation for long-running fuzzing sessions

These additions would make the system more observable and maintainable in production environments.
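
A minimal, hedged sketch of items 1 and 3, assuming simple in-memory counters alongside the SQLite tracker (the eventCounter type and its methods are illustrative, not part of this PR):

package stats

import (
	"sync/atomic"
	"time"

	"github.com/projectdiscovery/gologger"
)

// eventCounter holds cheap in-memory counters that could complement the
// SQLite-backed tracker for runtime observability.
type eventCounter struct {
	components int64
	matches    int64
}

func (c *eventCounter) recordComponent() { atomic.AddInt64(&c.components, 1) }
func (c *eventCounter) recordMatch()     { atomic.AddInt64(&c.matches, 1) }

// logPeriodically emits an aggregate line at the given interval until stop
// is closed, keeping long-running fuzzing sessions observable.
func (c *eventCounter) logPeriodically(interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			gologger.Info().Msgf("fuzzing stats: components=%d matches=%d",
				atomic.LoadInt64(&c.components), atomic.LoadInt64(&c.matches))
		case <-stop:
			return
		}
	}
}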

🧰 Tools
🪛 GitHub Check: Lint

[failure] 57-57:
Error return value of t.database.InsertMatchedRecord is not checked (errcheck)


[failure] 62-62:
Error return value of t.database.InsertComponent is not checked (errcheck)

pkg/fuzz/component/cookie.go (1)

Line range hint 86-95: Potential improvement: Consider cookie attributes

The cookie rebuilding logic only preserves the Name and Value attributes. Consider preserving other security-relevant attributes like Secure, HttpOnly, SameSite, etc.

 cookie := &http.Cookie{
   Name:  key,
   Value: fmt.Sprint(value),
+  Secure: true,  // If the original cookie was secure
+  HttpOnly: true,  // If the original cookie was HttpOnly
+  SameSite: http.SameSiteStrictMode,  // Based on original cookie
 }
pkg/fuzz/stats/schema.sql (5)

1-5: Consider adding audit fields to the sites table

For better data management and auditing capabilities, consider adding the following fields:

  • created_at DATETIME DEFAULT CURRENT_TIMESTAMP
  • updated_at DATETIME
  • is_deleted BOOLEAN DEFAULT FALSE
CREATE TABLE IF NOT EXISTS sites (
    site_id INTEGER PRIMARY KEY AUTOINCREMENT,
    site_name TEXT UNIQUE NOT NULL,
+   created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
+   updated_at DATETIME,
+   is_deleted BOOLEAN DEFAULT FALSE
);

21-25: Add template metadata fields

Consider adding fields to track template metadata and lifecycle:

  • Audit timestamps
  • Template version/hash for tracking changes
  • Template status (active/deprecated)
CREATE TABLE IF NOT EXISTS templates (
    template_id INTEGER PRIMARY KEY AUTOINCREMENT,
-   template_name TEXT UNIQUE NOT NULL
+   template_name TEXT UNIQUE NOT NULL,
+   template_version TEXT,
+   template_hash TEXT,
+   is_active BOOLEAN DEFAULT TRUE,
+   created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
+   updated_at DATETIME
);

51-64: Add error handling to the trigger

Consider adding checks to handle edge cases:

  1. Validate that the component exists before updating
  2. Add error handling for the case where template_id doesn't exist
CREATE TRIGGER IF NOT EXISTS update_component_stats
AFTER INSERT ON fuzzing_results
BEGIN
+   SELECT RAISE(ROLLBACK, 'Component not found')
+   WHERE NOT EXISTS (
+       SELECT 1 FROM components WHERE component_id = NEW.component_id
+   );
+   
+   SELECT RAISE(ROLLBACK, 'Template not found')
+   WHERE NOT EXISTS (
+       SELECT 1 FROM templates WHERE template_id = NEW.template_id
+   );

    UPDATE components
    SET last_fuzzed = NEW.timestamp,
        total_fuzz_count = total_fuzz_count + 1
    WHERE component_id = NEW.component_id;

    INSERT INTO component_templates (component_id, template_id, times_applied)
    VALUES (NEW.component_id, NEW.template_id, 1)
    ON CONFLICT(component_id, template_id) DO UPDATE SET
        times_applied = times_applied + 1;
END;

67-71: Implement data management strategies

Consider the following improvements for the fuzzing_request_response table:

  1. Add size limits or use BLOB with compression for raw_request and raw_response
  2. Implement a retention policy for old data
  3. Add timestamp for data lifecycle management
CREATE TABLE IF NOT EXISTS fuzzing_request_response (
    request_id INTEGER PRIMARY KEY AUTOINCREMENT,
-   raw_request TEXT NOT NULL,
-   raw_response TEXT NOT NULL
+   raw_request BLOB NOT NULL,
+   raw_response BLOB NOT NULL,
+   created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);

+-- Consider adding a periodic cleanup job or implementing partitioning
+-- CREATE INDEX idx_fuzzing_request_response_created_at ON fuzzing_request_response(created_at);

1-71: Add database configuration and documentation

Consider adding:

  1. Database configuration pragmas for SQLite optimization
  2. Schema version tracking
  3. Documentation for each table's purpose

Add at the beginning of the file:

-- Schema version: 1.0.0
PRAGMA foreign_keys = ON;
PRAGMA journal_mode = WAL;
PRAGMA synchronous = NORMAL;

-- Table documentation
-- sites: Stores target sites for fuzzing
-- components: Stores components (paths, headers, etc.) identified for fuzzing
-- templates: Stores fuzzing templates
-- component_templates: Tracks template execution against components
-- fuzzing_results: Stores results of fuzzing operations
-- fuzzing_request_response: Stores raw HTTP request/response data
pkg/input/formats/formats.go (1)

39-39: Consider enhancing error context for DAST server use case

Since this is part of a DAST server implementation, consider enriching error returns with additional context (like file position for parsing errors) to help with debugging and error reporting in the server environment.

Example enhancement:

// Consider defining a richer error type
type ParseError struct {
    FilePath string
    Position int64
    Err      error
}

func (e *ParseError) Error() string {
    return fmt.Sprintf("parse error in %s at position %d: %v", e.FilePath, e.Position, e.Err)
}
internal/runner/lazy.go (1)

Line range hint 138-146: Enhance error handling specificity

The current error handling uses generic errors with string messages. Consider creating specific error types for different scenarios (no match, no values, extraction failure) to enable better error handling by callers.

Consider creating specific error types:

+type ExtractionError struct {
+    Template string
+    Reason   string
+}
+
+func (e *ExtractionError) Error() string {
+    return fmt.Sprintf("%s: %s", e.Template, e.Reason)
+}

 if len(data) == 0 {
     if e.OperatorsResult.Matched {
-        finalErr = fmt.Errorf("match found but no (dynamic/extracted) values found for template: %s", d.TemplatePath)
+        finalErr = &ExtractionError{
+            Template: d.TemplatePath,
+            Reason:   "match found but no dynamic or extracted values",
+        }
     } else {
-        finalErr = fmt.Errorf("no match or (dynamic/extracted) values found for template: %s", d.TemplatePath)
+        finalErr = &ExtractionError{
+            Template: d.TemplatePath,
+            Reason:   "no match or values found",
+        }
     }
 }
pkg/authprovider/file.go (2)

Line range hint 90-105: Consider memory management for compiled regex patterns

The compiled regex patterns are stored in the f.compiled map without any cleanup mechanism. For long-running DAST servers, this could lead to memory growth if many dynamic secrets are processed over time.

Consider implementing one of these solutions:

  1. Add a cleanup mechanism for unused patterns
  2. Implement a cache with expiration for compiled patterns
  3. Move regex compilation to a separate goroutine with proper lifecycle management
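
As a hedged sketch of option 2, a small expiring cache for compiled patterns could look like the following (the regexCache type is hypothetical and not part of this PR):

package authprovider

import (
	"regexp"
	"sync"
	"time"
)

// regexCache is a time-bounded cache for compiled patterns; entries unused
// for longer than ttl are evicted on the next lookup.
type regexCache struct {
	mu      sync.Mutex
	entries map[string]cacheEntry
	ttl     time.Duration
}

type cacheEntry struct {
	re       *regexp.Regexp
	lastUsed time.Time
}

func newRegexCache(ttl time.Duration) *regexCache {
	return &regexCache{entries: make(map[string]cacheEntry), ttl: ttl}
}

// get returns a cached compiled pattern, compiling and caching it on a miss.
func (c *regexCache) get(pattern string) (*regexp.Regexp, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	now := time.Now()
	for k, e := range c.entries {
		if now.Sub(e.lastUsed) > c.ttl {
			delete(c.entries, k)
		}
	}
	if e, ok := c.entries[pattern]; ok {
		e.lastUsed = now
		c.entries[pattern] = e
		return e.re, nil
	}
	re, err := regexp.Compile(pattern)
	if err != nil {
		return nil, err
	}
	c.entries[pattern] = cacheEntry{re: re, lastUsed: now}
	return re, nil
}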

Line range hint 106-120: Enhance domain normalization for comprehensive coverage

The current domain normalization is limited and might miss edge cases. This could lead to authentication bypass in certain scenarios.

Consider these improvements:

 domain = strings.TrimSpace(domain)
+domain = strings.ToLower(domain) // Normalize case
+if strings.HasPrefix(domain, "[") && strings.HasSuffix(domain, "]") {
+    // Handle IPv6 addresses
+    domain = strings.TrimPrefix(domain, "[")
+    domain = strings.TrimSuffix(domain, "]")
+}
 domain = strings.TrimSuffix(domain, ":80")
 domain = strings.TrimSuffix(domain, ":443")
+// Handle IDN domains
+if punycodeHost, err := idna.ToASCII(domain); err == nil {
+    domain = punycodeHost
+}

Additionally, consider using a proper URL parsing library to handle all edge cases consistently.

pkg/protocols/protocols.go (1)

103-104: Consider adding documentation for the new field.

The field addition is well-placed near other fuzzing-related fields. Consider adding a more detailed comment to document:

  • The purpose of this tracker
  • Whether it's optional or required for DAST operations
  • Any initialization requirements

Example documentation:

+	// FuzzStatsDB tracks fuzzing statistics in a SQLite database for DAST operations.
+	// It records component events and results during fuzzing operations.
 	FuzzStatsDB *stats.Tracker
pkg/catalog/loader/loader.go (1)

512-513: Consider extracting the complex condition into a method

The condition combines multiple checks which could be made more readable by extracting it into a dedicated method.

Consider refactoring to:

+func (t *Template) HasGlobalMatchers() bool {
+    return t.Options != nil && t.Options.GlobalMatchers != nil && t.Options.GlobalMatchers.HasMatchers()
+}

-if parsed.IsFuzzing() || parsed.Options.GlobalMatchers != nil && parsed.Options.GlobalMatchers.HasMatchers() {
+if parsed.IsFuzzing() || parsed.HasGlobalMatchers() {
pkg/types/types.go (1)

416-423: Consider adding validation and defaults for DAST server configuration.

While the DAST server fields are well-documented, consider the following improvements:

  1. Add validation for DASTServerAddress to ensure it's a valid URL/address format
  2. Implement a default value mechanism for DASTScanName to auto-generate unique scan names
 // DASTServerAddress is the address for the dast server
 DASTServerAddress string
+// DefaultDASTServerAddress is the default address for the dast server
+DefaultDASTServerAddress string = "localhost:8080"
pkg/protocols/http/request.go (2)

933-943: LGTM! Consider adding error handling for database operations.

The implementation for recording fuzzing component events is well-structured and captures all relevant metadata. However, since this involves database operations, it would be good to handle potential errors.

Consider adding error handling:

 if request.options.FuzzStatsDB != nil && generatedRequest.fuzzGeneratedRequest.Request != nil {
-    request.options.FuzzStatsDB.RecordComponentEvent(fuzzStats.FuzzingEvent{
+    if err := request.options.FuzzStatsDB.RecordComponentEvent(fuzzStats.FuzzingEvent{
         URL:           input.MetaInput.Target(),
         SiteName:      hostname,
         TemplateID:    request.options.TemplateID,
         ComponentType: generatedRequest.fuzzGeneratedRequest.Component.Name(),
         ComponentName: generatedRequest.fuzzGeneratedRequest.Parameter,
         PayloadSent:   generatedRequest.fuzzGeneratedRequest.Value,
         StatusCode:    respChain.Response().StatusCode,
-    })
+    }); err != nil {
+        gologger.Warning().Msgf("Could not record fuzzing component event: %v", err)
+    }
 }

1038-1051: LGTM! Be mindful of memory usage with raw request/response storage.

The implementation for recording fuzzing results is correct and comprehensive. However, storing raw request and response data could consume significant memory/storage space over time.

Consider implementing one or more of these optimizations:

  1. Add configurable size limits for raw data storage
  2. Implement data rotation/cleanup policies
  3. Use compression for raw data storage
  4. Consider storing large raw data in a separate table/collection with TTL
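
As a hedged sketch of option 1, raw payloads could be capped before being handed to the stats tracker; the truncateRaw helper and maxStoredRawSize constant below are illustrative assumptions, not part of this PR:

package http

// maxStoredRawSize is an assumed upper bound for raw request/response data
// persisted to the fuzzing stats database.
const maxStoredRawSize = 1 << 20 // 1 MiB

// truncateRaw caps raw data before it is recorded so the stats database
// does not grow unbounded on large responses.
func truncateRaw(raw string) string {
	if len(raw) <= maxStoredRawSize {
		return raw
	}
	return raw[:maxStoredRawSize] + "\n... [truncated]"
}

The result-recording call could then pass truncateRaw(...) for the raw request and response fields.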
internal/server/requests_worker.go (1)

14-14: Remove redundant newline characters from log messages

The gologger methods automatically append newlines to log messages. Including \n in the messages can lead to extra blank lines in the output.

Remove the \n from Msgf log messages:

-gologger.Warning().Msgf("Could not parse raw request: %s\n", err)
+gologger.Warning().Msgf("Could not parse raw request: %s", err)

...

-gologger.Warning().Msgf("Uninteresting path: %s\n", parsedReq.URL.Path)
+gologger.Warning().Msgf("Uninteresting path: %s", parsedReq.URL.Path)

...

-gologger.Warning().Msgf("Could not validate scope: %s\n", err)
+gologger.Warning().Msgf("Could not validate scope: %s", err)

...

-gologger.Warning().Msgf("Request is out of scope: %s %s\n", parsedReq.Request.Method, parsedReq.URL.String())
+gologger.Warning().Msgf("Request is out of scope: %s %s", parsedReq.Request.Method, parsedReq.URL.String())

...

-gologger.Warning().Msgf("Duplicate request detected: %s %s\n", parsedReq.Request.Method, parsedReq.URL.String())
+gologger.Warning().Msgf("Duplicate request detected: %s %s", parsedReq.Request.Method, parsedReq.URL.String())

...

-gologger.Warning().Msgf("Could not run nuclei: %s\n", err)
+gologger.Warning().Msgf("Could not run nuclei: %s", err)

Also applies to: 25-25, 31-31, 35-35, 40-40, 53-53

internal/server/dedupe.go (2)

15-32: Review the selection of dynamic headers for completeness.

The dynamicHeaders map includes headers that are typically dynamic and should be excluded from deduplication. However, consider whether headers like Accept, Accept-Encoding, and Content-Type should also be included in this list, as they can affect the response and may lead to different outcomes even if the URL and other parameters are the same.
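
If those headers should be excluded as well, a hedged sketch of the addition (mirroring the map[string]bool usage shown below) could be:

// Hypothetical content-negotiation headers to also ignore during deduplication.
var contentNegotiationHeaders = map[string]bool{
	"accept":          true,
	"accept-encoding": true,
	"content-type":    true,
}

Whether to merge these into dynamicHeaders depends on how much per-content-type response variation matters for deduplication.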


110-122: Optimize header processing to improve performance.

In sortedNonDynamicHeaders, you're appending headers to a slice and then sorting them. For better performance, especially with a large number of headers, consider preallocating the slice with the known capacity.

Here's how you can preallocate the slice:

 func sortedNonDynamicHeaders(headers mapsutil.OrderedMap[string, string]) []header {
-    var result []header
+    result := make([]header, 0, headers.Len())
     headers.Iterate(func(k, v string) bool {
         if !dynamicHeaders[strings.ToLower(k)] {
             result = append(result, header{Key: k, Value: v})
         }
         return true
     })
     sort.Slice(result, func(i, j int) bool {
         return result[i].Key < result[j].Key
     })
     return result
 }
pkg/input/provider/http/multiformat.go (1)

30-31: Update comment to accurately describe 'InputContents'

The comment currently states // optional input reader, but the field added is InputContents string, which represents the input content as a string, not a reader. Consider updating the comment to // optional input contents to better reflect its purpose.

internal/server/server.go (5)

58-62: Avoid hardcoding default values for executor options

The condition checks for specific default values (25) to adjust BulkSize and TemplateThreads. Hardcoding these values can lead to maintenance issues if defaults change.

Consider referencing the default values from the nuclei configuration or defining constants to represent them:

-// Disable bulk mode and single threaded execution
// by auto adjusting in case of default values
-if options.NucleiExecutorOptions.Options.BulkSize == 25 && options.NucleiExecutorOptions.Options.TemplateThreads == 25 {
+defaultBulkSize := nuclei.DefaultBulkSize
+defaultTemplateThreads := nuclei.DefaultTemplateThreads
+if options.NucleiExecutorOptions.Options.BulkSize == defaultBulkSize && options.NucleiExecutorOptions.Options.TemplateThreads == defaultTemplateThreads {
	options.NucleiExecutorOptions.Options.BulkSize = 1
	options.NucleiExecutorOptions.Options.TemplateThreads = 1
}

Replace nuclei.DefaultBulkSize and nuclei.DefaultTemplateThreads with the actual constants from the nuclei package.


170-173: Handle task submission errors in handleRequest

When submitting tasks to the tasksPool, errors may occur if the pool is stopped or overloaded. Currently, errors are not being handled, which could lead to unacknowledged failures.

Modify the code to handle potential errors:

-	s.tasksPool.Submit(func() {
+	if err := s.tasksPool.Submit(func() {
		s.consumeTaskRequest(req)
-	})
+	}); err != nil {
+		return c.JSON(500, map[string]string{"error": "failed to submit task"})
+	}
	return c.NoContent(200)
}

This ensures that the client is informed if the task could not be submitted.


166-168: Enhance input validation for handleRequest

Currently, the validation only checks for empty fields. Additional validation can prevent processing invalid data.

Consider validating that req.URL is a well-formed URL and that req.RawHTTP contains valid HTTP request data. For example:

 import "net/url"
 
 // Validate the request
 if req.RawHTTP == "" || req.URL == "" {
 	return c.JSON(400, map[string]string{"error": "missing required fields"})
 }
+if _, err := url.ParseRequestURI(req.URL); err != nil {
+	return c.JSON(400, map[string]string{"error": "invalid URL format"})
+}

This will help prevent errors during task processing due to invalid inputs.


127-129: Reconsider skipping authentication for /stats endpoint

The Skipper function excludes the /stats endpoint from authentication. If the statistics contain sensitive information, it may be prudent to require authentication.

Evaluate whether the data returned by /stats should be publicly accessible. If not, modify the Skipper function:

 Skipper: func(c echo.Context) bool {
-	return c.Path() == "/stats"
+	return false
},

This change will enforce authentication on all endpoints.


139-141: Serve UI files securely

The /ui endpoint serves a static HTML file from internal/server/ui/index.html. Ensure that the file path is secure to prevent directory traversal attacks.

[security_issue]

Use echo.Static or ensure the file path is a constant:

 e.GET("/ui", func(c echo.Context) error {
-	return c.File("internal/server/ui/index.html")
+	return c.File(filepath.Join("internal", "server", "ui", "index.html"))
 })

Also, consider using Embed to embed static files within the binary for added security.
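
A hedged sketch of the embed approach, assuming the file sits at ui/index.html relative to the server package (the registerUI helper name is illustrative):

package server

import (
	_ "embed"

	"github.com/labstack/echo/v4"
)

//go:embed ui/index.html
var indexHTML []byte

// registerUI serves the embedded report UI so no filesystem path is resolved
// at runtime, removing the traversal concern entirely.
func registerUI(e *echo.Echo) {
	e.GET("/ui", func(c echo.Context) error {
		return c.HTMLBlob(200, indexHTML)
	})
}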

pkg/authprovider/authx/dynamic.go (1)

Line range hint 53-62: Fix infinite recursion in UnmarshalJSON method

The UnmarshalJSON method calls json.Unmarshal(data, &d), which causes infinite recursion because it triggers UnmarshalJSON again. This will result in a stack overflow.

Apply the following fix using an alias type to avoid recursion:

 func (d *Dynamic) UnmarshalJSON(data []byte) error {
-	if err := json.Unmarshal(data, &d); err != nil {
+	type Alias Dynamic
+	var alias Alias
+	if err := json.Unmarshal(data, &alias); err != nil {
 		return err
 	}
 	var s Secret
 	if err := json.Unmarshal(data, &s); err != nil {
 		return err
 	}
+	*d = Dynamic(alias)
 	d.Secret = &s
 	return nil
 }
internal/server/nuclei_sdk.go (3)

173-173: Standardize error wrapping using errors.Wrap

For consistency, consider using errors.Wrap(err, ...) instead of fmt.Errorf when wrapping errors, as you've done elsewhere in the code.

Apply this diff to standardize error handling:

-		return fmt.Errorf("error marshalling yaml: %s", err)
+		return errors.Wrap(err, "error marshalling YAML")

160-169: Unused struct fields in proxifyRequest.Request

In the proxifyRequest struct, the fields Header and Body are defined but not populated when creating the payload. Consider removing these fields if they're unnecessary, or ensure they're set appropriately.

Apply this diff to remove the unused fields:

	type proxifyRequest struct {
		URL     string `json:"url"`
		Request struct {
-			Header map[string]string `json:"header"`
-			Body   string            `json:"body"`
			Raw    string            `json:"raw"`
		} `json:"request"`
	}

Alternatively, if these fields are needed, populate them when creating the payload:

	payload := proxifyRequest{
		URL: target.URL,
		Request: struct {
			Header map[string]string `json:"header"`
			Body   string            `json:"body"`
			Raw    string            `json:"raw"`
		}{
+			Header: target.Headers,
+			Body:   target.Body,
			Raw:    target.RawHTTP,
		},
	}

191-199: Handle errors returned by Close() methods

The errors returned by Close() methods are currently ignored. Handling these errors can help catch issues during resource cleanup.

Apply this diff to handle the errors:

	if n.executorOpts.FuzzStatsDB != nil {
-		n.executorOpts.FuzzStatsDB.Close()
+		if err := n.executorOpts.FuzzStatsDB.Close(); err != nil {
+			// Handle or log the error as appropriate
+		}
	}
	if n.options.Interactsh != nil {
-		_ = n.options.Interactsh.Close()
+		if err := n.options.Interactsh.Close(); err != nil {
+			// Handle or log the error as appropriate
+		}
	}
	if n.executorOpts.InputHelper != nil {
-		_ = n.executorOpts.InputHelper.Close()
+		if err := n.executorOpts.InputHelper.Close(); err != nil {
+		// Handle or log the error as appropriate
+	}
	}
internal/runner/runner.go (2)

451-473: Rename variable execurOpts to executorOpts for consistency

The variable execurOpts appears to be a typo. Renaming it to executorOpts improves readability and adheres to standard naming conventions.

Apply this diff to correct the variable name:

-        execurOpts := &server.NucleiExecutorOptions{
+        executorOpts := &server.NucleiExecutorOptions{
             Options:            r.options,
             Output:             r.output,
             Progress:           r.progress,
             Catalog:            r.catalog,
             IssuesClient:       r.issuesClient,
             RateLimiter:        r.rateLimiter,
             Interactsh:         r.interactsh,
             ProjectFile:        r.projectFile,
             Browser:            r.browser,
             Colorizer:          r.colorizer,
             Parser:             r.parser,
             TemporaryDirectory: r.tmpDir,
         }
-        dastServer, err := server.New(&server.Options{
             ...
-            NucleiExecutorOptions: execurOpts,
+            NucleiExecutorOptions: executorOpts,
         })

525-529: Avoid redeclaring the err variable to prevent shadowing

The variable err is redeclared within the function scope, which could lead to variable shadowing and potential bugs. Removing the redundant declaration ensures clarity.

Apply this diff to eliminate the unnecessary declaration:

-        var err error
         executorOpts.FuzzStatsDB, err = fuzzStats.NewTracker(r.options.DASTScanName)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 28abf56 and 090cadb.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (37)
  • cmd/nuclei/main.go (4 hunks)
  • go.mod (5 hunks)
  • internal/runner/lazy.go (1 hunks)
  • internal/runner/runner.go (7 hunks)
  • internal/server/dedupe.go (1 hunks)
  • internal/server/nuclei_sdk.go (1 hunks)
  • internal/server/requests_worker.go (1 hunks)
  • internal/server/scope/extensions.go (1 hunks)
  • internal/server/scope/scope.go (1 hunks)
  • internal/server/scope/scope_test.go (1 hunks)
  • internal/server/server.go (1 hunks)
  • pkg/authprovider/authx/cookies_auth.go (1 hunks)
  • pkg/authprovider/authx/dynamic.go (6 hunks)
  • pkg/authprovider/authx/strategy.go (1 hunks)
  • pkg/authprovider/file.go (2 hunks)
  • pkg/catalog/loader/loader.go (1 hunks)
  • pkg/fuzz/component/component.go (1 hunks)
  • pkg/fuzz/component/cookie.go (1 hunks)
  • pkg/fuzz/stats/db.go (1 hunks)
  • pkg/fuzz/stats/db_test.go (1 hunks)
  • pkg/fuzz/stats/schema.sql (1 hunks)
  • pkg/fuzz/stats/stats.go (1 hunks)
  • pkg/input/formats/burp/burp.go (2 hunks)
  • pkg/input/formats/burp/burp_test.go (2 hunks)
  • pkg/input/formats/formats.go (2 hunks)
  • pkg/input/formats/json/json.go (1 hunks)
  • pkg/input/formats/json/json_test.go (2 hunks)
  • pkg/input/formats/openapi/openapi.go (2 hunks)
  • pkg/input/formats/openapi/openapi_test.go (2 hunks)
  • pkg/input/formats/swagger/swagger.go (2 hunks)
  • pkg/input/formats/swagger/swagger_test.go (2 hunks)
  • pkg/input/formats/yaml/multidoc.go (1 hunks)
  • pkg/input/formats/yaml/multidoc_test.go (2 hunks)
  • pkg/input/provider/http/multiformat.go (4 hunks)
  • pkg/protocols/http/request.go (3 hunks)
  • pkg/protocols/protocols.go (2 hunks)
  • pkg/types/types.go (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • pkg/fuzz/component/component.go
🧰 Additional context used
🪛 GitHub Check: Lint
cmd/nuclei/main.go

[failure] 225-225:
var nameGenerator is unused (unused)

pkg/fuzz/stats/db.go

[failure] 80-80:
Error return value of tx.Rollback is not checked (errcheck)


[failure] 114-114:
Error return value of tx.Rollback is not checked (errcheck)

pkg/fuzz/stats/stats.go

[failure] 57-57:
Error return value of t.database.InsertMatchedRecord is not checked (errcheck)


[failure] 62-62:
Error return value of t.database.InsertComponent is not checked (errcheck)

pkg/input/formats/swagger/swagger.go

[failure] 50-50:
ineffectual assignment to err (ineffassign)

🪛 golangci-lint (1.62.2)
cmd/nuclei/main.go

225-225: var nameGenerator is unused

(unused)

🔇 Additional comments (59)
pkg/input/formats/yaml/multidoc_test.go (2)

4-4: LGTM!

The addition of the os package import is appropriate for the new file handling implementation.


26-29: Verify consistency with other format implementations.

The update to accept an io.Reader instead of a file path is a good improvement. However, let's verify that this pattern is consistently implemented across other format parsers.

✅ Verification successful

Parse method implementation is consistent across all format implementations

The verification shows that all format implementations (YAML, JSON, Burp, Swagger, OpenAPI) consistently implement the Format interface with the correct Parse method signature:

  • All implementations accept io.Reader as the first parameter
  • Use ParseReqRespCallback for the callback function
  • Include the filePath string parameter
  • Return an error

The change in the YAML implementation aligns perfectly with the interface requirements and other implementations.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify Parse method signature consistency across format implementations

# Check Parse method signatures in all format implementations
ast-grep --pattern 'func $_.Parse($reader io.Reader, $callback func($_ *types.RequestResponse) bool, $_ string)'

# Check test implementations for consistency
rg -l "os.Open.*Parse.*defer.*Close" pkg/input/formats/*/

Length of output: 307


Script:

#!/bin/bash
# Let's try a different approach to find Parse method implementations

# Find all Parse method implementations in format packages
rg -A 3 "^func.*Parse" pkg/input/formats/

# Also check interface definition
rg "type.*Format.*interface" pkg/input/formats/

Length of output: 3569


Script:

#!/bin/bash
# Let's check the Format interface definition to verify the expected signature
rg -A 5 "type Format interface" pkg/input/formats/formats.go

# Also check the ParseReqRespCallback type definition
rg "type ParseReqRespCallback" pkg/input/formats/formats.go

Length of output: 451

pkg/input/formats/burp/burp_test.go (3)

4-4: LGTM!

The addition of the "os" import is necessary for the new file handling implementation.


18-21: Well-implemented file handling!

Good practices observed:

  • Proper error handling with descriptive message
  • Immediate deferred close after successful open
  • Clear variable naming

18-25: Verify consistent implementation across other format parsers.

Let's ensure this file handling pattern is consistently implemented across other format parsers.

✅ Verification successful

Implementation is consistent across format parsers

The verification shows consistent implementation across all format parsers:

  • All Parse functions use io.Reader interface in their signatures
  • All test files follow the same pattern of using os.Open() with proper defer file.Close()
  • The callback function signature ParseReqRespCallback is consistently used across all implementations
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if other format parsers follow the same pattern
# Expected: Similar file handling pattern in other format test files

# Search for test files in the formats directory
fd -e go -e test.go . pkg/input/formats/ --exec rg -l 'func Test.*Parse'

# Check the implementation pattern in these files
rg -A 5 'os\.Open.*Parse' pkg/input/formats/

Length of output: 317


Script:

#!/bin/bash
# Let's check the implementation pattern in test files more thoroughly
# First, let's look at the burp test implementation
echo "=== Burp Test Implementation ==="
rg -B 2 -A 5 'Parse.*func.*RequestResponse.*bool' pkg/input/formats/burp/burp_test.go

# Now check other test implementations
echo -e "\n=== Other Test Implementations ==="
rg -B 2 -A 5 'Parse.*func.*RequestResponse.*bool' pkg/input/formats/

# Also check the actual Parse function signatures in the implementation files
echo -e "\n=== Parse Function Signatures ==="
ast-grep --pattern 'func $_ Parse($$$) $$$'

Length of output: 4041


Script:

#!/bin/bash
# Let's check the actual Parse function implementations
echo "=== Parse Function Implementations ==="
# First check burp implementation
echo "Burp Parse Implementation:"
rg -B 2 -A 5 '^func.*Parse\(' pkg/input/formats/burp/burp.go

# Then check other implementations
echo -e "\nOther Parse Implementations:"
rg -B 2 -A 5 '^func.*Parse\(' pkg/input/formats/*/[^_]*.go

# Also check if there are any other file opening patterns
echo -e "\n=== File Opening Patterns ==="
rg 'os\.Open.*' pkg/input/formats/

Length of output: 6223

internal/server/scope/scope_test.go (2)

1-9: LGTM! Package and imports are well-structured.

The imports are appropriate for the test functionality, using standard testing package and required external dependencies.


20-24: Enhance test coverage and verify domain handling.

The test demonstrates cross-domain validation but could be improved:

  1. Add test cases for edge cases (empty paths, special characters)
  2. Verify that domain handling is working as intended
  3. Document why different domains are used in the validation
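
A hedged sketch of such table-driven cases, assuming the test lives in package scope and can reach NewManager and the unexported validateURL whose signatures appear later in this review:

package scope

import (
	"testing"

	"github.com/stretchr/testify/require"
)

func TestManagerValidateEdgeCases(t *testing.T) {
	manager, err := NewManager([]string{`example\.com`}, []string{`logout`})
	require.NoError(t, err)

	cases := []struct {
		name string
		url  string
		want bool
	}{
		{"in-scope path", "https://example.com/app/profile", true},
		{"out-of-scope keyword", "https://example.com/logout", false},
		{"different domain", "https://other.com/app", false},
		{"empty path", "https://example.com", true},
		{"special characters", "https://example.com/search?q=%27%22%3E%3C", true},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			got, err := manager.validateURL(tc.url)
			require.NoError(t, err)
			require.Equal(t, tc.want, got)
		})
	}
}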
pkg/input/formats/swagger/swagger_test.go (1)

22-25: Consider enhancing error handling and clarifying parameter usage.

The switch to io.Reader improves the design by standardizing input handling. However, there are two potential improvements:

  1. Consider using require.NoError for consistent error handling:
-if err != nil {
-    t.Fatal(err)
-}
+require.NoError(t, err, "failed to parse swagger file")
  2. The file path parameter might need clarification:
✅ Verification successful

Consider using require.NoError for consistent error handling

The switch to io.Reader improves the design. The filePath parameter is actually used meaningfully within the Parse implementation to determine the file extension and choose the appropriate parsing strategy (YAML vs JSON), so it's not redundant. However, you can improve the test's error handling:

-if err != nil {
-    t.Fatal(err)
-}
+require.NoError(t, err, "failed to parse swagger file")
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if the filePath parameter is used for anything other than error reporting
ast-grep --pattern 'func Parse($_: io.Reader, $_: func(*types.RequestResponse) bool, filePath string) error'

Length of output: 110


Script:

#!/bin/bash
# Let's first find the swagger format parser implementation
fd "swagger.*\.go" --type f --exec cat {}

Length of output: 2777

pkg/authprovider/authx/strategy.go (1)

27-37: Verify concurrent access safety

Since this code handles authentication strategies that could be accessed concurrently, we should verify thread safety.

Consider these architectural recommendations:

  1. Document thread safety requirements in the interface comments
  2. Consider using atomic operations or mutex if concurrent access is possible
  3. Add unit tests specifically for concurrent strategy application
✅ Verification successful

Let me gather more information about the concurrent usage patterns and strategy implementation.


Let me check one more aspect to understand how the strategies are initialized and managed.


Thread safety is properly implemented

The code is already thread-safe with appropriate synchronization:

  • The Dynamic struct has a sync.Mutex field for synchronizing concurrent access
  • The Fetch method uses proper lock/unlock mechanisms
  • Strategy retrieval through GetStrategies() is protected by the fetching mechanism
  • Strategies are immutable after initialization
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for concurrent access patterns to Dynamic and its strategies
ast-grep --pattern 'go func() { $$$ d.Dynamic $$$ }' 

# Look for any mutex usage in Dynamic type
rg -A 5 'type Dynamic struct' 

Length of output: 748


Script:

#!/bin/bash
# Check GetStrategies implementation and usage
ast-grep --pattern 'func ($x *Dynamic) GetStrategies() $ret { $$$ }'

# Look for goroutine usage in files containing Dynamic or Strategy
rg -l "Dynamic|Strategy" | xargs rg "go \w+|go func"

# Check if strategies are modified after initialization
ast-grep --pattern '$x.SetStrategies($y)'

# Look for any concurrent operations in test files
fd _test.go --exec rg -A 3 "go func|parallel|sync\."

Length of output: 1717


Script:

#!/bin/bash
# Check Dynamic struct methods and initialization
rg -A 10 "func.*Dynamic" pkg/authprovider/authx/

# Look for strategy initialization and modification patterns
ast-grep --pattern 'func ($x *Dynamic) $method($params) { $$$ }'

# Check if there are any locks or synchronization mechanisms
rg -w "sync\.|Mutex|RWMutex|Lock|Unlock" pkg/authprovider/authx/

Length of output: 7094

pkg/authprovider/authx/cookies_auth.go (1)

Line range hint 23-46: Inconsistency between Apply and ApplyOnRR methods

The Apply and ApplyOnRR methods now handle cookies differently. Apply appends cookies while ApplyOnRR clears them first. This inconsistency could lead to bugs depending on which method is called.

Either:

  1. Update both methods to have consistent behavior
  2. Document why they need to behave differently

Let's verify the usage of these methods:

✅ Verification successful

Let me gather more context about these usages by checking the surrounding code.


From the context gathered, I can now generate the final response as the implementation details reveal the architectural intent.

The different cookie handling behavior between Apply and ApplyOnRR is correct by design

The implementation shows that:

  1. Apply and ApplyOnRR are part of the AuthStrategy interface, used in different contexts
  2. ApplyOnRR is specifically used in build_request.go where multiple auth strategies are applied sequentially, making cookie cleanup necessary to prevent strategy conflicts
  3. Apply is used in a simpler context through the DynamicAuthStrategy where cookies can be accumulated

The current implementation is correct as:

  • Clearing cookies in ApplyOnRR ensures clean state when building requests with multiple auth strategies
  • Appending cookies in Apply allows for simpler, direct HTTP request modification
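As a sketch of the split described above (the exact method signatures are assumptions based on this thread, with ApplyOnRR taken from the cookies strategy shown further down):

package authx

import (
	"net/http"

	"github.com/projectdiscovery/retryablehttp-go"
)

// AuthStrategy captures the two application paths discussed here: Apply works
// on a plain request and may accumulate cookies, while ApplyOnRR runs during
// request building where several strategies are applied in sequence and
// therefore clears cookie state before adding its own.
type AuthStrategy interface {
	Apply(*http.Request)
	ApplyOnRR(*retryablehttp.Request)
}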
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Find all usages of Apply and ApplyOnRR to understand the impact
echo "Searching for Apply usage:"
rg -A 2 "\.Apply\([^R]" 

echo "Searching for ApplyOnRR usage:"
rg -A 2 "\.ApplyOnRR\("

Length of output: 590


Script:

#!/bin/bash
# Check the implementation of both methods in strategy.go
echo "Checking strategy.go implementation:"
rg -B 5 -A 5 "Apply.*\(" pkg/authprovider/authx/strategy.go

# Check the build_request.go implementation
echo "Checking build_request.go implementation:"
rg -B 5 -A 5 "ApplyOnRR" pkg/protocols/http/build_request.go

# Check if there are any other cookie-related operations in build_request.go
echo "Checking cookie operations in build_request.go:"
rg -B 2 -A 2 "Cookie" pkg/protocols/http/build_request.go

Length of output: 1668

pkg/input/formats/openapi/openapi.go (2)

4-5: LGTM: Import changes are appropriate

The addition of the io package import is necessary for the new io.Reader parameter and follows Go's import grouping conventions.


34-34: LGTM: Improved input handling flexibility

The signature change from string to io.Reader enhances the parser's flexibility, making it suitable for the live DAST server implementation by supporting streaming and in-memory inputs.

Let's verify that all format implementations consistently implement the new interface:

✅ Verification successful

LGTM: Interface implementation consistency verified

All format implementations (OpenAPI, JSON, Swagger, YAML MultiDoc, and Burp) correctly implement the Format interface with the updated Parse method signature using io.Reader and filePath parameters.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify consistent Parse method signatures across format implementations
# Expected: All format implementations should use io.Reader and include filePath parameter

ast-grep --pattern 'type Format interface {
  $$$
  Parse(input io.Reader, $_, filePath string) error
  $$$
}'

# Verify all implementations
ast-grep --pattern 'func ($_) Parse(input io.Reader, $_, filePath string) error'

Length of output: 195


Script:

#!/bin/bash
# Let's try a different approach to find the Format interface and its implementations

# Find the Format interface definition
echo "=== Format Interface Definition ==="
rg -A 5 "type Format interface" --type go

# Find all Parse method implementations
echo -e "\n=== Parse Method Implementations ==="
rg "^func.*Parse.*io\.Reader.*error" --type go

Length of output: 1585

pkg/fuzz/stats/db_test.go (1)

1-7: LGTM!

The package name and imports are appropriate for a test file.

internal/server/scope/extensions.go (2)

1-4: LGTM: Package structure and imports are appropriate.

The package name aligns with the directory structure, and the path import is suitable for the file extension handling functionality.


1-33: Verify alignment with DAST best practices.

Let's verify if this implementation aligns with other DAST tools and security testing best practices.

✅ Verification successful

Extension filtering aligns with security scanning best practices

The implementation is correctly used as a security measure in the request processing pipeline. From the context gathered:

  1. It's used in requests_worker.go to filter out uninteresting paths before scope validation, preventing unnecessary processing of static/binary files
  2. The filtering happens early in the request processing chain, which is an efficient security practice
  3. The extensive list of excluded extensions covers common static, binary, and media files that typically don't need security scanning
  4. Similar extension filtering patterns are also implemented in the file protocol (pkg/protocols/file/file.go), showing consistency in the approach

The implementation aligns with DAST best practices by focusing scanning efforts on potentially vulnerable endpoints while excluding static content that typically doesn't pose security risks.
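A minimal sketch of this kind of extension filter; the function name IsUninterestingPath comes from the review above, while the extension list here is a small illustrative sample rather than the project's full list:

package scope

import (
	"path"
	"strings"
)

// uninterestingExtensions lists static and media extensions that are skipped
// before scope validation (sample set only).
var uninterestingExtensions = map[string]struct{}{
	".png": {}, ".jpg": {}, ".gif": {}, ".css": {}, ".woff": {}, ".ico": {},
}

// IsUninterestingPath reports whether a request path points at static content
// that does not need to be fuzzed.
func IsUninterestingPath(reqPath string) bool {
	ext := strings.ToLower(path.Ext(reqPath))
	_, ok := uninterestingExtensions[ext]
	return ok
}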

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for similar extension filtering in other files
echo "Searching for similar extension filtering patterns..."
rg -g '!{vendor,node_modules}/*' -i 'extensi.*map.*struct'

# Check if there are any existing DAST-related comments or documentation
echo "Checking for DAST-related documentation..."
fd -g '*.md' -x grep -l -i 'dast|dynamic.*security.*test|extension.*filter'

Length of output: 575


Script:

#!/bin/bash
# Let's check the context of file.go's extension usage
echo "Checking file.go extension handling..."
rg -A 10 -B 10 'extensions.*map\[string\]struct{}' pkg/protocols/file/file.go

# Let's also check for any documentation about file filtering or scanning
echo "Checking for file filtering documentation..."
rg -i "file.*filter|scan.*file|skip.*file" -g "*.md" -g "*.go"

# Let's see the broader context of how these extensions are used
echo "Checking usage of IsUninterestingPath..."
rg "IsUninterestingPath"

Length of output: 4223


Script:

#!/bin/bash
# Let's check how the IsUninterestingPath is used in the request worker
echo "Checking requests_worker.go context..."
rg -A 10 -B 10 "IsUninterestingPath" internal/server/requests_worker.go

# Let's check if there are any configuration files or documentation about scanning strategy
echo "Checking for scanning strategy documentation..."
fd -g "*.go" -x grep -l -i "scan.*strateg|filter.*path|exclude.*path"

# Let's see the broader context of file extensions in protocols
echo "Checking file protocol implementation..."
rg -A 5 -B 5 "extensions\s*=" pkg/protocols/file/file.go

Length of output: 1347

pkg/input/formats/burp/burp.go (1)

5-5: LGTM: Import addition is appropriate

The addition of the io.Reader import aligns with the method signature change and follows Go import conventions.

pkg/input/formats/openapi/openapi_test.go (2)

4-4: LGTM: Import addition is appropriate

The addition of the "os" import is necessary for the new file handling implementation.


49-53: Verify consistent implementation across other format parsers

The change to accept an io.Reader in the Parse method is a good improvement. Let's verify this pattern is consistently implemented across other format parsers.

✅ Verification successful

Pattern is consistently implemented across all format parsers

The verification shows that all format parsers consistently implement the same Parse method signature with io.Reader. This is evident in:

  • The interface definition in formats.go: Parse(input io.Reader, resultsCb ParseReqRespCallback, filePath string) error
  • Implementations in all format parsers:
    • JSONFormat.Parse
    • YamlMultiDocFormat.Parse
    • BurpFormat.Parse
    • SwaggerFormat.Parse
    • OpenAPIFormat.Parse
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if other format parsers follow the same pattern
# Expected: All Parse methods should accept io.Reader

# Search for Parse method declarations in format packages
ast-grep --pattern 'type $_ interface {
  $$$
  Parse($reader, $_, $_) $_
  $$$
}'

# Search for Parse implementations
ast-grep --pattern 'func ($_) Parse($reader, $_, $_) $_ {
  $$$
}'

Length of output: 153


Script:

#!/bin/bash
# Let's try a different approach to find Parse implementations and interfaces

# Find all files that might contain format parsers
fd "format|parser" "pkg/input"

# Search for Parse method signatures
rg "Parse\(" "pkg/input" -A 2

# Search specifically in the formats directory
rg "Parse\(" "pkg/input/formats" -A 2

Length of output: 9016

pkg/input/formats/json/json.go (2)

48-49: LGTM! Good improvement in the interface design.

The change from string to io.Reader is a solid improvement that:

  • Provides more flexibility in input sources
  • Follows Go's io interfaces pattern
  • Enables better testing and memory efficiency

48-48: Verify interface consistency across format parsers.

Let's ensure this interface change is consistently implemented across all format parsers mentioned in the summary.

✅ Verification successful

Interface implementation is consistent across all format parsers

Based on the verification results, all format parsers (Burp, JSON, OpenAPI, Swagger, and YAML) consistently implement the Parse method with the exact same signature as defined in the Format interface:

Parse(input io.Reader, resultsCb formats.ParseReqRespCallback, filePath string) error
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for consistency in Parse method signatures across format parsers

# Search for Parse method signatures in format parsers
ast-grep --pattern 'func $_($_ *$_) Parse(input io.Reader, $$$) error' pkg/input/formats/

Length of output: 91


Script:

#!/bin/bash
# Let's try a different approach to find all Parse method implementations
# First, let's find all format parser files
fd . pkg/input/formats/ -t f -e go | while read -r file; do
    echo "=== $file ==="
    # Show the Parse method implementation with context
    rg "func.*Parse.*io\.Reader" -A 2 "$file"
done

Length of output: 3297


Script:

#!/bin/bash
# Let's check the interface definition to ensure we're matching the correct signature
rg "type.*interface" -A 5 pkg/input/formats/formats.go

Length of output: 320

pkg/input/formats/swagger/swagger.go (2)

10-10: LGTM!

The yaml package import is appropriate for the YAML parsing functionality added to the code.


41-41: Verify all callers of the Parse method have been updated

The signature change from Parse(input string, ...) to Parse(input io.Reader, ..., filePath string) is a breaking change.

Let's verify all callers have been updated:

pkg/input/formats/yaml/multidoc.go (1)

48-48: LGTM: Interface change improves flexibility

The change from string to io.Reader is a good improvement as it:

  • Provides more flexibility in input sources
  • Allows for streaming large files
  • Follows Go's standard io interfaces pattern
pkg/input/formats/json/json_test.go (3)

4-4: LGTM: Import addition is appropriate

The addition of the "os" import is necessary for the new file handling implementation.


45-48: LGTM: Robust file handling implementation

The new file handling implementation is well-structured with:

  • Proper error handling using require.Nilf
  • Resource cleanup with defer
  • Descriptive error message

50-53: Verify consistent updates across test files

Since the Parse method signature has been changed to accept io.Reader across different format packages, we should verify that all related test files have been updated consistently.

✅ Verification successful

All format implementations consistently use io.Reader

Based on the verification results, all format implementations (burp.go, json.go, openapi.go, swagger.go, yaml/multidoc.go) consistently use the same Parse method signature with io.Reader. The test files across different formats also show consistent usage patterns. No inconsistencies were found.

  • All implementations follow: Parse(input io.Reader, resultsCb formats.ParseReqRespCallback, filePath string) error
  • All test files use the same pattern for calling format.Parse(file, callback, filepath)
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for similar test files that might need the same update
fd -e go -t f "_test" "pkg/input/formats" --exec grep -l "format.Parse"

# Look for potentially inconsistent Parse usage
rg "format\.Parse\(.*string.*\)" "pkg/input/formats"

Length of output: 316


Script:

#!/bin/bash
# Let's examine the Parse method usage in all test files to verify consistency
for file in pkg/input/formats/*/[^.]*.go; do
    echo "=== $file ==="
    rg "Parse\(" -A 2 "$file"
done

# Also check the Parse method signatures in the implementation files
for file in pkg/input/formats/*/[^.]*_test.go; do
    echo "=== $file ==="
    ast-grep --pattern 'format.Parse($$$)'
done

Length of output: 5319

internal/server/scope/scope.go (1)

42-57: Consider URL normalization for security

To prevent bypass attempts, consider normalizing URLs before validation. This helps handle various URL encoding schemes and prevents scope bypass through URL manipulation.

Let's check if URL normalization is handled elsewhere in the codebase:

pkg/fuzz/stats/stats.go (2)

1-11: LGTM! Clean package structure with clear documentation.


13-29: Verify StatsDatabase interface implementation

The code references a StatsDatabase interface that's not defined in this file. We should verify that all required interface methods are properly implemented by sqliteStatsDatabase.

✅ Verification successful

Let me verify if the sqliteStatsDatabase implements all the required methods from the StatsDatabase interface.


Interface implementation is complete and correct

The sqliteStatsDatabase struct correctly implements all methods required by the StatsDatabase interface:

  • Close()
  • InsertComponent(event FuzzingEvent) error
  • InsertMatchedRecord(event FuzzingEvent) error

All method signatures match the interface requirements and contain proper implementations with error handling and transaction management.
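For reference, the interface shape implied by the verification above; the FuzzingEvent fields shown are an abbreviated placeholder, not the full struct:

package stats

// FuzzingEvent is the record handed to the database (fields abbreviated here).
type FuzzingEvent struct {
	URL        string
	TemplateID string
	// further fields such as component name/type, status code and severity
}

// StatsDatabase is the storage abstraction that sqliteStatsDatabase implements.
type StatsDatabase interface {
	Close()
	InsertComponent(event FuzzingEvent) error
	InsertMatchedRecord(event FuzzingEvent) error
}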

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for StatsDatabase interface definition and implementation
ast-grep --pattern 'type StatsDatabase interface {
  $$$
}'

# Search for sqliteStatsDatabase struct and its methods
ast-grep --pattern 'type sqliteStatsDatabase struct {
  $$$
}'

Length of output: 820


Script:

#!/bin/bash
# Search for sqliteStatsDatabase method implementations
ast-grep --pattern 'func (s *sqliteStatsDatabase) Close() {
  $$$
}'

ast-grep --pattern 'func (s *sqliteStatsDatabase) InsertComponent(event FuzzingEvent) error {
  $$$
}'

ast-grep --pattern 'func (s *sqliteStatsDatabase) InsertMatchedRecord(event FuzzingEvent) error {
  $$$
}'

Length of output: 4555

pkg/fuzz/component/cookie.go (2)

Line range hint 84-95: Critical: Address cookie duplication with auth-file

The TODO comment indicates a known issue with cookie duplication when using auth-file. This could lead to:

  1. Incorrect authentication states
  2. Increased request sizes
  3. Potential security implications if conflicting cookies are sent

Let's analyze the auth-file handling to understand the duplication:


Line range hint 38-54: Security: Verify impact of processing all cookies

The removal of cookie filtering (defaultIgnoredCookieKeys) means all cookies are now processed during fuzzing. While this enables more comprehensive testing, it could:

  1. Expose sensitive cookies to fuzzing
  2. Impact session management
  3. Affect performance with unnecessary cookie processing

Let's check for sensitive cookie handling elsewhere in the codebase:

✅ Verification successful

Security: Cookie component changes are safe to proceed

The removal of cookie filtering is intentional and does not introduce security concerns because:

  1. The cookie component is part of the fuzzing framework where comprehensive testing is desired
  2. The test file fuzz-cookie-error-sqli.yaml shows this is used for security testing (SQL injection)
  3. Cookie security is handled at the HTTP client level with proper cookie jar management
  4. Sensitive cookie handling is managed by the application using the fuzzer, not the fuzzer itself
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Look for sensitive cookie handling patterns
rg -A 5 "sensitive.*cookie|cookie.*sensitive"

# Check for security-related cookie configurations
ast-grep --pattern 'cookie {
  $$$
  secure
  $$$
}'

Length of output: 102


Script:

#!/bin/bash
# Look for cookie-related security configurations and filtering patterns
rg -A 5 "defaultIgnoredCookieKeys|ignoredCookies|filterCookies|skipCookies"

# Search for cookie-related security implementations
rg -A 5 "cookie.*security|security.*cookie"

# Look for cookie handling in the component
fd "cookie\.go" --exec cat {}

# Check for cookie-related tests that might reveal security considerations
fd "cookie.*test\.go" --exec cat {}

Length of output: 4085


Script:

#!/bin/bash
# Look for previous versions of the cookie component to understand what was removed
git log -p -- pkg/fuzz/component/cookie.go

# Search for any cookie filtering or security measures in the fuzzing package
fd -e go -e yaml -e yml . | xargs rg -l "fuzz.*cookie|cookie.*fuzz"

# Look for any cookie-related configurations in the wider codebase
rg -A 5 "cookie.*config|config.*cookie"

Length of output: 601


Script:

#!/bin/bash
# Check the fuzzing test file content
cat integration_tests/fuzz/fuzz-cookie-error-sqli.yaml

# Look at the fuzzing implementation
cat pkg/fuzz/fuzz.go

# Check HTTP request handling for cookie-related security
cat pkg/protocols/http/request.go

Length of output: 54477

pkg/input/formats/formats.go (2)

5-5: LGTM: Import addition is appropriate

The addition of the "io" import is necessary to support the new io.Reader parameter in the Parse method signature.


39-39: Verify all Format interface implementations are updated

The change from string to io.Reader is a good improvement that allows for more efficient streaming of inputs and better memory management. However, this is a breaking change to the Format interface.

Let's verify all implementations have been updated:

✅ Verification successful

All Format interface implementations are correctly updated

All implementations of the Format interface have been updated to use the new Parse method signature with io.Reader and filePath string parameters:

  • YamlMultiDocFormat in pkg/input/formats/yaml/multidoc.go
  • OpenAPIFormat in pkg/input/formats/openapi/openapi.go
  • SwaggerFormat in pkg/input/formats/swagger/swagger.go
  • BurpFormat in pkg/input/formats/burp/burp.go
  • JSONFormat in pkg/input/formats/json/json.go
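To make the contract concrete, here is a sketch of the interface plus a toy implementation; RequestResponse stands in for nuclei's pkg/input/types.RequestResponse, the real Format interface has additional methods such as SetOptions, and the JSON handling below is illustrative only:

package formats

import (
	"encoding/json"
	"io"
)

// RequestResponse is a stand-in for the real request/response type.
type RequestResponse struct {
	URL string `json:"url"`
}

// ParseReqRespCallback is invoked per parsed entry; returning false stops parsing.
type ParseReqRespCallback func(rr *RequestResponse) bool

// Format is the interface each input format (JSON, Burp, OpenAPI, ...) implements.
type Format interface {
	Name() string
	Parse(input io.Reader, resultsCb ParseReqRespCallback, filePath string) error
}

// streamedJSON is a toy format that decodes a stream of JSON objects.
type streamedJSON struct{}

func (f *streamedJSON) Name() string { return "json-sketch" }

func (f *streamedJSON) Parse(input io.Reader, resultsCb ParseReqRespCallback, filePath string) error {
	dec := json.NewDecoder(input)
	for dec.More() {
		var rr RequestResponse
		if err := dec.Decode(&rr); err != nil {
			return err
		}
		if !resultsCb(&rr) {
			return nil // callback asked to stop early
		}
	}
	return nil
}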
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Find all implementations of the Format interface and verify they've been updated
# to use the new Parse method signature

# Search for all types that implement the Format interface
ast-grep --pattern 'type $name struct {
  $$$
}

func ($receiver $name) Name() string {
  $$$
}

func ($receiver $name) Parse($$$) error {
  $$$
}

func ($receiver $name) SetOptions($$$) {
  $$$
}'

# Additionally search for any old Parse method signatures that might need updating
rg "Parse\(input string, resultsCb ParseReqRespCallback\) error"

Length of output: 266


Script:

#!/bin/bash
# Let's try a different approach to find Format interface implementations

# First, find the Format interface definition file
fd "formats.go"

# Then find all struct types that might implement Format
rg "type \w+ struct" -A 3 pkg/input/formats/

# Search for Parse method implementations with context
rg "func .+ Parse\(" -A 3 pkg/input/formats/

Length of output: 4443

internal/runner/lazy.go (2)

116-124: Consider DAST security implications

As this code is part of a DAST server implementation handling authentication and value extraction, consider the following security aspects:

  1. Value sanitization before storage
  2. Protection against injection attacks in extracted values
  3. Secure handling of sensitive authentication data
  4. Rate limiting for template execution
  5. Audit logging of value changes

Consider implementing:

  • Value sanitization layer before storage
  • Rate limiting mechanism for template execution
  • Audit logging for security-sensitive operations
  • Secure storage for sensitive extracted values

Let's check for existing security measures:

✅ Verification successful

Based on the shell script results, I can now provide a final response:

Security measures are already in place

The codebase already implements several key security controls:

  1. Input validation and sanitization:

    • Extensive validation framework using validator.New()
    • Path and URL sanitization functions
    • Template validation checks
  2. Rate limiting:

    • Global rate limiting implementation via ratelimit.New()
    • Configurable rate limits with duration control
    • Per-protocol rate limiting enforcement
  3. Logging:

    • Structured logging with slog
    • Error logging and validation warnings
    • Audit trail for template validation

The value extraction code is protected by these existing security measures, making the original review comment's concerns already addressed by the codebase's security architecture.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Look for sanitization functions
rg -g '!{vendor,test}/**' 'sanitize|validate|escape'

# Check for rate limiting implementations
rg -g '!{vendor,test}/**' 'RateLimit|rate.Limit'

# Look for audit logging
rg -g '!{vendor,test}/**' 'audit|log\.'

Length of output: 33899


117-123: 🛠️ Refactor suggestion

Review the longest-value selection strategy

The current implementation assumes that longer values are more significant, which may not always be the case. This could potentially:

  1. Miss important shorter values that might be more relevant
  2. Not fully address the cookie duplication issue mentioned in the PR description
  3. Lead to unexpected behavior if malformed or malicious values are present

Let's verify the impact of this change on cookie handling:

Consider enhancing the value selection logic:

 for _, value := range v {
     oldVal, ok := data[k]
-    if !ok || len(value) > len(oldVal.(string)) {
+    if !ok {
         data[k] = value
+    } else {
+        // Add validation for the value
+        if !isValidValue(value) {
+            continue
+        }
+        // Consider additional criteria beyond just length
+        if shouldReplaceValue(oldVal.(string), value) {
+            data[k] = value
+        }
     }
 }
pkg/authprovider/file.go (2)

Line range hint 88-120: Ensure comprehensive security testing coverage

Given that this code is part of a DAST server implementation and handles authentication, it's crucial to have thorough security testing.

Please ensure test coverage includes:

  1. Race condition testing for concurrent access
  2. Edge cases in domain formats
  3. Authentication bypass attempts
  4. Memory leak scenarios with large numbers of dynamic secrets

88-89: Verify the implementation of GetDomainAndDomainRegex

The consolidation of domain retrieval logic into a single method is good for maintainability. However, we need to ensure proper validation is maintained.

✅ Verification successful

GetDomainAndDomainRegex implementation is correct and maintains proper validation

The implementation of GetDomainAndDomainRegex() in pkg/authprovider/authx/dynamic.go is a straightforward aggregation method that:

  1. Correctly collects domains from all secrets in the Dynamic struct
  2. Preserves both plain domains and regex domains separately
  3. Maintains the original validation structure as domains and domain regexes are properly defined in the Secret struct

The consolidation of domain retrieval into this method improves code organization without compromising security or validation.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for the implementation of GetDomainAndDomainRegex
ast-grep --pattern 'func ($x *DynamicSecret) GetDomainAndDomainRegex() ($_, $_) {
  $$$
}'

Length of output: 92


Script:

#!/bin/bash
# Let's try a broader search for GetDomainAndDomainRegex implementation
rg -A 5 "GetDomainAndDomainRegex"

# Also search for the struct definition to understand the correct type
rg "type.*Dynamic" -A 2

Length of output: 1556


Script:

#!/bin/bash
# Let's examine the Secret struct to understand the domain fields
rg "type Secret struct" -A 10

# Also check for any validation of domains in the Dynamic struct
rg "func.*Dynamic.*validate" -A 10

Length of output: 1133

pkg/protocols/protocols.go (1)

17-17: LGTM!

The import statement is properly placed with other internal imports and follows the package organization pattern.

go.mod (5)

54-54: LGTM: New dependencies align with DAST server implementation.

The addition of these dependencies is appropriate for the live DAST server implementation:

  • pond: Provides thread pool management for concurrent operations
  • namegenerator: Handles dynamic name generation
  • mux: Robust HTTP router for the server endpoints

Also applies to: 65-65, 75-75


82-82: LGTM: Dependency updates look good.

The updates to go-sqlite3 and gommon follow semantic versioning and are recent stable versions.

Also applies to: 354-354


Line range hint 6-6: Review and update the fixed version of requests.

The requests library version 2.26.0 is outdated and may contain security vulnerabilities.

Consider updating to the latest stable version after reviewing the changelog for breaking changes.


Line range hint 391-391: Document the impact of v3.2.0 retraction.

The retraction of v3.2.0 due to JavaScript protocol issues needs clear documentation.

Consider:

  1. Adding a detailed explanation in the changelog
  2. Providing migration guidelines for users on v3.2.0
  3. Documenting the JavaScript protocol limitations
✅ Verification successful

The retraction of v3.2.0 is already properly documented in go.mod

The go.mod file already contains a clear retraction notice for v3.2.0 with the reason "retract due to broken js protocol issue" using the standard Go module retraction syntax. This is the recommended way to document module retractions in Go, and it will be automatically respected by the Go toolchain.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for JavaScript protocol related code
rg -l "javascript:|js:"

Length of output: 543


Script:

#!/bin/bash
# Check git history for retraction related commits and tags
git log --oneline | grep -i "retract\|v3.2.0"

# Check changelog for any mentions of v3.2.0
fd CHANGELOG -i --exec cat {}

# Check go.mod for retraction details
cat go.mod

Length of output: 17892


141-141: Verify the necessity of randdetect package.

The randdetect package is marked as indirect but might be crucial for DAST functionality.

Consider documenting the purpose of this dependency if it's essential for DAST operations.

pkg/catalog/loader/loader.go (1)

512-513: Verify the security implications of allowing global matcher templates

The change allows templates with global matchers to be loaded alongside fuzzing templates in DAST mode. This could potentially affect the security scanning behavior.

Let's analyze the usage of global matchers in the codebase:

✅ Verification successful

Based on the analysis of the code, I can now provide a final response about the security implications of global matchers:

Global matchers are a safe and intentional feature for passive template matching

Global matchers are designed to apply matchers statically across all result events from other templates. The implementation shows:

  • Global matchers are explicitly handled in the template parsing and execution flow
  • They are only loaded alongside fuzzing templates in DAST mode by design
  • They don't execute additional requests but only match against existing results
  • The feature has proper validation and security checks in place:
    • Templates with global matchers are validated during loading
    • Global matcher results are properly isolated and marked in the output
    • The implementation prevents duplicate events when global matchers are used
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for potential security implications of global matchers in templates

# Find templates using global matchers
echo "Templates using global matchers:"
ast-grep --pattern 'globalMatchers: $_'

# Check for any security-related comments or documentation about global matchers
echo -e "\nSecurity documentation about global matchers:"
rg -i "global.*matcher.*security" -g "*.md"

Length of output: 269


Script:

#!/bin/bash
# Let's analyze the code more thoroughly with different approaches

# Check for global matchers usage in Go files
echo "Global matchers usage in Go files:"
rg -A 5 -B 5 "GlobalMatchers"

# Check for matcher-related structs and interfaces
echo -e "\nMatcher-related type definitions:"
ast-grep --pattern 'type $_ struct {
  $$$
  GlobalMatchers $$$
  $$$
}'

# Check for security-related functions or methods involving matchers
echo -e "\nSecurity-related matcher functions:"
rg -A 3 "func.*Match.*" | grep -i "secur"

# Look for template validation or security checks
echo -e "\nTemplate validation and security checks:"
rg -A 5 "ValidateTemplate|SecurityCheck|Validate\(" 

Length of output: 64435

pkg/types/types.go (1)

416-427: Verify cookie handling in DAST server implementation.

The PR description mentions potential cookie duplication issues. While the configuration options look good, we should verify the cookie handling implementation in the DAST server code.

✅ Verification successful

The previous search revealed cookie handling in various parts of the codebase but not specifically in the DAST server implementation. Let's search more specifically for DAST server code and its cookie handling.


The previous search revealed the existence of a deduplicator in the DAST server. Let's examine its implementation to understand how it handles cookies and potential duplication issues.


Cookie duplication is properly handled by the DAST server implementation

The DAST server already implements proper cookie deduplication through its request deduplicator:

  1. The cookie header is explicitly listed in dynamicHeaders map and is excluded from request hash calculation
  2. The sortedNonDynamicHeaders function filters out cookie headers when generating request fingerprints
  3. This ensures that requests with different cookies but otherwise identical parameters are treated as the same request, preventing duplicate processing

The review comment's concern about cookie duplication has been addressed by the existing implementation in internal/server/dedupe.go. The deduplication logic specifically accounts for cookies by treating them as dynamic headers that should not affect request uniqueness determination.
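A minimal sketch of this fingerprinting approach; the cookie entry in dynamicHeaders is confirmed by the review, while the other entries and the exact hashing details here are illustrative:

package server

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
	"strings"
)

// dynamicHeaders are ignored when computing a request fingerprint, so requests
// that differ only in these headers deduplicate to the same entry.
var dynamicHeaders = map[string]struct{}{
	"cookie":        {},
	"authorization": {},
	"date":          {},
}

// hashRequest fingerprints a request by method, normalized URL and its sorted
// non-dynamic headers.
func hashRequest(method, normalizedURL string, headers map[string]string) string {
	keys := make([]string, 0, len(headers))
	for k := range headers {
		if _, skip := dynamicHeaders[strings.ToLower(k)]; skip {
			continue
		}
		keys = append(keys, k)
	}
	sort.Strings(keys)

	var sb strings.Builder
	fmt.Fprintf(&sb, "%s %s\n", method, normalizedURL)
	for _, k := range keys {
		fmt.Fprintf(&sb, "%s: %s\n", k, headers[k])
	}
	sum := sha256.Sum256([]byte(sb.String()))
	return hex.EncodeToString(sum[:])
}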

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for cookie-related code in the DAST server implementation
echo "Searching for cookie handling code..."
rg -A 5 "cookie|Cookie" --type go

# Search for potential cookie duplication patterns
echo "Checking for potential cookie duplication patterns..."
ast-grep --pattern 'append($SLICE, $COOKIE)' --lang go

Length of output: 100309


Script:

#!/bin/bash
# Search for cookie duplication handling in the DAST server implementation
echo "Searching for cookie duplication handling in DAST server..."
rg -A 5 "DAST.*Server" --type go

# Search for cookie handling in server implementation
echo "Searching for cookie handling in server implementation..."
rg -A 5 "func.*Server.*cookie|func.*cookie.*Server" --type go

# Search for cookie-related code in server package
echo "Searching for cookie-related code in server package..."
fd . -t f -e go -E "*_test.go" -x rg -l "cookie|Cookie" {} \; | grep "server"

Length of output: 6886


Script:

#!/bin/bash
# Check the deduplication logic in the server implementation
echo "Checking deduplication logic in server..."
rg -A 10 "deduplicator" internal/server/dedupe.go

# Check request handling and cookie processing
echo "Checking request handling and cookie processing..."
rg -A 10 "handleRequest|consumeTaskRequest" internal/server/requests_worker.go internal/server/server.go

# Check if there's any cookie-specific deduplication
echo "Checking cookie-specific deduplication..."
cat internal/server/dedupe.go

Length of output: 5160

internal/server/requests_worker.go (1)

47-51: Ensure correct request is passed to fuzzRequest

In consumeTaskRequest, parsedReq is the processed request object, while req is the original input. When calling s.fuzzRequest(req), you pass the original req. Confirm that fuzzRequest and ExecuteScan require the original request, not the parsed one.

Please verify that s.nucleiExecutor.ExecuteScan(req) works correctly with req of type PostRequestsHandlerRequest, and that it does not require parsedReq.

internal/server/dedupe.go (1)

65-88: Ensure URL normalization does not alter request semantics.

The normalizeURL function sorts query parameters and re-encodes the URL. This might change the semantics of the request if the order of parameters is significant or if certain characters are double-encoded.

Please verify that normalizing URLs in this manner does not affect the requests' intended behavior. You might want to test with URLs that have complex query parameters or rely on parameter ordering.

Run the following script to compare original and normalized URLs:

✅ Verification successful

Let me try a different approach to examine the URL normalization implementation and its usage:


Based on the implementation found, I can now provide a final response:

URL normalization implementation is safe and preserves request semantics

The normalizeURL function performs two safe normalizations:

  1. Sorts query parameters that share the same key while preserving all values
  2. Ensures the path has a trailing slash when it is empty

These changes maintain request semantics as they:

  • Don't modify parameter values or encoding
  • Don't change parameter order for different keys
  • Only sort multiple values for the same key, which is standard practice
  • Add trailing slash only when path is empty, following HTTP standards
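A sketch of the behavior summarized in these bullets; it is an illustration of the described normalization, not the code in internal/server/dedupe.go, and it re-encodes parameters with url.QueryEscape (percent-encoded keys are not handled here):

package server

import (
	"net/url"
	"sort"
	"strings"
)

// normalizeURL sorts repeated values of the same query key, keeps the original
// key order, and turns an empty path into "/". The passed URL is modified.
func normalizeURL(u *url.URL) string {
	if u.Path == "" {
		u.Path = "/"
	}
	if u.RawQuery != "" {
		values := u.Query() // key -> []string
		// preserve the order in which keys first appear in the raw query
		var keyOrder []string
		seen := map[string]bool{}
		for _, pair := range strings.Split(u.RawQuery, "&") {
			key := strings.SplitN(pair, "=", 2)[0]
			if !seen[key] {
				seen[key] = true
				keyOrder = append(keyOrder, key)
			}
		}
		var parts []string
		for _, key := range keyOrder {
			vals := values[key]
			sort.Strings(vals) // only multiple values of the same key are sorted
			for _, v := range vals {
				parts = append(parts, url.QueryEscape(key)+"="+url.QueryEscape(v))
			}
		}
		u.RawQuery = strings.Join(parts, "&")
	}
	return u.String()
}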
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Compare original URLs with their normalized versions to ensure correctness.

# Test: Find all unique URLs and display their original and normalized forms.
fd -e go -x awk '/req\.URL\.URL/ {print $0}' {} + | while read -r line; do
    original_url=$(echo "$line" | grep -oP '(?<=req\.URL\.URL, ).*(?=\))')
    normalized_url=$(go run -e "package main; import (\"fmt\"; \"net/url\"; \"os\"); func main() { u, _ := url.Parse(os.Args[1]); fmt.Println(normalizeURL(u)) }" "$original_url")
    echo "Original: $original_url"
    echo "Normalized: $normalized_url"
    echo "-----"
done

Length of output: 44822


Script:

#!/bin/bash
# Find the normalizeURL function implementation
ast-grep --pattern 'func normalizeURL($$$) {
  $$$
}'

# Find all files that use normalizeURL
rg "normalizeURL" -l

# Look for test cases that might show URL normalization behavior
rg -A 5 -B 5 "TestNormalizeURL|TestHashRequest" 

Length of output: 152


Script:

#!/bin/bash
# Let's examine the normalizeURL implementation directly
rg -A 15 "func normalizeURL" internal/server/dedupe.go

# Also check if there are any comments or documentation about URL normalization
rg -B 3 "normalizeURL" internal/server/dedupe.go

# Look for any URL-related test cases or examples
rg -A 10 "url.*:=.*\".*\"" internal/server/dedupe.go

Length of output: 659

pkg/authprovider/authx/dynamic.go (6)

26-27: Ensure proper nil handling for Secret and Secrets fields

Changing Secret to a pointer and adding the Secrets slice introduces the possibility of nil references. Ensure that all accesses to Secret and elements of Secrets include appropriate nil checks to prevent potential runtime panics.


38-52: GetDomainAndDomainRegex method correctly aggregates domains

The new GetDomainAndDomainRegex method efficiently combines domains and domain regex patterns from both Secret and Secrets, using sliceutil.Dedupe to remove duplicates. This enhances the management of multiple secrets.
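A sketch of that aggregation; the field names Domains and DomainsRegex and the local dedupe helper are stand-ins for the real struct fields and sliceutil.Dedupe:

package authx

type Secret struct {
	Domains      []string
	DomainsRegex []string
}

type Dynamic struct {
	Secret  *Secret   // single-secret form (now a pointer)
	Secrets []*Secret // multi-secret form
}

// GetDomainAndDomainRegex collects plain domains and regex domains from both
// the single Secret and the Secrets slice, removing duplicates.
func (d *Dynamic) GetDomainAndDomainRegex() ([]string, []string) {
	var domains, regexes []string
	collect := func(s *Secret) {
		if s == nil {
			return
		}
		domains = append(domains, s.Domains...)
		regexes = append(regexes, s.DomainsRegex...)
	}
	collect(d.Secret)
	for _, s := range d.Secrets {
		collect(s)
	}
	return dedupe(domains), dedupe(regexes)
}

func dedupe(in []string) []string {
	seen := make(map[string]struct{}, len(in))
	out := make([]string, 0, len(in))
	for _, v := range in {
		if _, ok := seen[v]; ok {
			continue
		}
		seen[v] = struct{}{}
		out = append(out, v)
	}
	return out
}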


75-86: Validate method now handles multiple secrets appropriately

The Validate method correctly iterates over Secrets to validate each one, in addition to validating Secret if it's not nil. This ensures that all secrets are properly validated.


104-117: SetLazyFetchCallback method enhances secret processing

The SetLazyFetchCallback method now applies extracted values to both Secret and each element in Secrets, ensuring that all secrets have the necessary values after fetching.


119-177: applyValuesToSecret method effectively replaces placeholders

The applyValuesToSecret method systematically replaces placeholders in headers, cookies, query parameters, and credentials with extracted values. This centralizes the logic and improves maintainability.


180-195: GetStrategies method correctly aggregates authentication strategies

The GetStrategies method now returns a slice of AuthStrategy by combining strategies from both Secret and Secrets. This change supports multiple authentication strategies as intended.

internal/server/nuclei_sdk.go (1)

128-130: Correct the typo in the error message

The same typo appears here in the error message "Could not create loadeopts." It should be "Could not create loader options." Please correct it for consistency.

Apply this diff to fix the typo:

-		return nil, errors.Wrap(err, "Could not create loadeopts.")
+		return nil, errors.Wrap(err, "Could not create loader options.")
internal/runner/runner.go (2)

448-481: Verify that early return in DAST server mode does not skip necessary initializations

Returning early from RunEnumeration() when r.options.DASTServer is true may bypass subsequent initialization steps that could be essential for the DAST server operation. Please verify that all required initializations are performed before the early return.


470-470: Ensure secure handling of the DAST server token

The DAST server uses r.options.DASTServerToken for authentication. Verify that the token is securely validated within the server to prevent unauthorized access and potential security vulnerabilities.

cmd/nuclei/main.go (3)

189-192: Verify immediate exit in DAST server mode

Upon receiving an interrupt signal, the application exits immediately in DAST server mode, potentially bypassing standard graceful shutdown procedures. Verify that this immediate exit is intended and that no essential cleanup operations are being skipped.


369-372: New DAST server flags are correctly implemented

The new command-line flags for DAST server functionality are properly added and configured as expected.


376-377: New fuzzing scope flags are correctly implemented

The flags --fuzz-scope and --fuzz-out-scope are appropriately added to define in-scope and out-of-scope URL regexes for the fuzzer.

Comment on lines 36 to 38
// Before adding new cookies, remove existing cookies
req.Header.Del("Cookie")


@coderabbitai coderabbitai bot Nov 26, 2024


⚠️ Potential issue

Consider the implications of clearing all cookies

While removing existing cookies prevents duplication, this approach might be too aggressive as it removes ALL cookies, including those that might be essential for maintaining other aspects of the session state.

Consider these alternatives:

  1. Only remove cookies that will be replaced
  2. Merge new cookies with existing ones, updating values for matching names

Example implementation for selective cookie removal:

- req.Header.Del("Cookie")
+ // Only remove cookies that we're about to set
+ existingCookies := req.Cookies()
+ for _, newCookie := range s.Data.Cookies {
+     for i, existing := range existingCookies {
+         if existing.Name == newCookie.Key {
+             existingCookies = append(existingCookies[:i], existingCookies[i+1:]...)
+             break
+         }
+     }
+ }
+ // Clear and reset remaining cookies
+ req.Header.Del("Cookie")
+ for _, cookie := range existingCookies {
+     req.AddCookie(cookie)
+ }

Committable suggestion skipped: line range outside the PR's diff.

Member

^ That makes more sense than removing it entirely since we have the option to overwrite - ref: #5462


@dwisiswant0, glad to hear that this makes more sense. Implementing selective cookie removal will prevent unintended side effects by only overwriting the necessary cookies without affecting others that may be essential for the session state.

Here's an updated implementation:

// ApplyOnRR applies the cookies auth strategy to the retryable request
func (s *CookiesAuthStrategy) ApplyOnRR(req *retryablehttp.Request) {
	// Only remove cookies that we're about to set
	existingCookies := req.Cookies()
	for _, newCookie := range s.Data.Cookies {
		for i, existing := range existingCookies {
			if existing.Name == newCookie.Key {
				existingCookies = append(existingCookies[:i], existingCookies[i+1:]...)
				break
			}
		}
	}
	// Clear and reset remaining cookies
	req.Header.Del("Cookie")
	for _, cookie := range existingCookies {
		req.AddCookie(cookie)
	}
	// Add new cookies
	for _, cookie := range s.Data.Cookies {
		req.AddCookie(&http.Cookie{
			Name:  cookie.Key,
			Value: cookie.Value,
		})
	}
}

✏️ Learnings added
Learnt from: dwisiswant0
PR: projectdiscovery/nuclei#5772
File: pkg/authprovider/authx/cookies_auth.go:36-38
Timestamp: 2024-12-13T08:42:57.379Z
Learning: In the `ApplyOnRR` method of `CookiesAuthStrategy` in `pkg/authprovider/authx/cookies_auth.go`, when setting cookies, only remove the cookies that are about to be set to avoid removing essential session cookies.

Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (1)
internal/runner/runner.go (1)

360-362: Add error handling for DAST server cleanup

The DAST server cleanup should handle and log potential errors during shutdown.

Consider adding error handling:

 if r.dastServer != nil {
-    r.dastServer.Close()
+    if err := r.dastServer.Close(); err != nil {
+        gologger.Warning().Msgf("Error closing DAST server: %v", err)
+    }
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 090cadb and 727ff90.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (2)
  • go.mod (7 hunks)
  • internal/runner/runner.go (10 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • go.mod
🔇 Additional comments (4)
internal/runner/runner.go (4)

100-100: LGTM: DAST server field addition

The new field is properly typed and follows Go conventions for struct field declarations.


516-522: LGTM: Fuzzing statistics initialization

The fuzzing statistics DB initialization is well-implemented with proper error handling and conditional creation based on DASTScanName.


675-677: LGTM: Fuzzing statistics cleanup

The cleanup of the fuzzing statistics DB is properly implemented in the appropriate location.


462-462: Verify secure token handling in production

The DAST server token is handled as a plain string. For security-critical applications, consider:

  1. Using secure token storage
  2. Implementing token rotation
  3. Adding rate limiting for authentication attempts
✅ Verification successful

Based on the search results, I can see that the DAST server token is properly handled:

  1. It's used as an optional authentication token via middleware (internal/server/server.go)
  2. The token is validated using secure middleware configuration
  3. Token is transmitted securely via query parameter with proper validation
  4. No logging of token values in error messages or debug output

Token handling implementation is secure

  • Token validation is implemented using Echo framework's KeyAuth middleware with proper configuration
  • Token is only transmitted over HTTPS (when configured) and validated server-side
  • No token persistence or storage in logs/files
  • Rate limiting is already implemented at the server level
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for potential token leaks in logs or error messages
rg -g '!.git' -i 'token|secret|key|password' -A 2 -B 2

Length of output: 117926


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

🧹 Outside diff range and nitpick comments (1)
pkg/fuzz/stats/simple.go (1)

29-44: Simplify the NewSimpleStats function signature

The NewSimpleStats function always returns a nil error, as there are no error conditions within the function. Consider simplifying the function signature to return only *simpleStats, which reduces unnecessary error handling in the calling code.

Apply this diff to simplify the function:

-func NewSimpleStats() (*simpleStats, error) {
+func NewSimpleStats() *simpleStats {
 	return &simpleStats{
 		totalComponentsTested: atomic.Int64{},
 		totalEndpointsTested:  atomic.Int64{},
 		totalMatchedResults:   atomic.Int64{},
 		totalFuzzedRequests:   atomic.Int64{},
 		totalTemplatesTested:  atomic.Int64{},
 		totalErroredRequests:  atomic.Int64{},
 		statusCodes:           sync.Map{},
 		severityCounts:        sync.Map{},
 		componentsUniqueMap:   sync.Map{},
 		endpointsUniqueMap:    sync.Map{},
 		templatesUniqueMap:    sync.Map{},
 		errorGroupedStats:     sync.Map{},
-	}, nil
+	}
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 727ff90 and a105306.

📒 Files selected for processing (14)
  • cmd/nuclei/main.go (4 hunks)
  • internal/runner/runner.go (11 hunks)
  • internal/server/nuclei_sdk.go (1 hunks)
  • internal/server/requests_worker.go (1 hunks)
  • internal/server/server.go (1 hunks)
  • internal/server/templates/index.html (1 hunks)
  • pkg/fuzz/execute.go (2 hunks)
  • pkg/fuzz/stats/db.go (1 hunks)
  • pkg/fuzz/stats/db_test.go (1 hunks)
  • pkg/fuzz/stats/simple.go (1 hunks)
  • pkg/fuzz/stats/stats.go (1 hunks)
  • pkg/output/output.go (3 hunks)
  • pkg/protocols/http/request.go (2 hunks)
  • pkg/types/types.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • pkg/fuzz/stats/db_test.go
  • pkg/fuzz/stats/db.go
  • internal/server/requests_worker.go
🧰 Additional context used
🪛 golangci-lint (1.62.2)
pkg/fuzz/execute.go

128-128: Error return value of component.Iterate is not checked

(errcheck)

internal/runner/runner.go

691-691: Error return value of r.dastServer.Start is not checked

(errcheck)

cmd/nuclei/main.go

225-225: var nameGenerator is unused

(unused)

🔇 Additional comments (16)
pkg/fuzz/stats/simple.go (1)

60-64: ⚠️ Potential issue

Initialize event.siteName before use in InsertComponent

In the InsertComponent method, event.siteName is used to create the endpointsKey, but it may not be initialized within this method. If event.siteName is empty, this could lead to incorrect key generation and inaccurate statistics.

Apply this diff to ensure event.siteName is initialized:

 func (s *simpleStats) InsertComponent(event ComponentEvent) error {
+	if event.siteName == "" {
+		event.siteName = getCorrectSiteName(event.URL)
+	}
 	componentKey := fmt.Sprintf("%s_%s", event.ComponentName, event.ComponentType)
 	if _, ok := s.componentsUniqueMap.Load(componentKey); !ok {
 		s.componentsUniqueMap.Store(componentKey, true)
 		s.totalComponentsTested.Add(1)
 	}

Likely invalid or redundant comment.

pkg/fuzz/stats/stats.go (2)

55-58: ⚠️ Potential issue

Handle errors returned by InsertMatchedRecord

The RecordResultEvent method is not handling the error returned by t.database.InsertMatchedRecord(event). Ignoring errors can lead to unnoticed failures during database operations.

Apply this diff to handle the error:

-func (t *Tracker) RecordResultEvent(event FuzzingEvent) {
+func (t *Tracker) RecordResultEvent(event FuzzingEvent) error {
 	event.siteName = getCorrectSiteName(event.URL)
-	t.database.InsertMatchedRecord(event)
+	if err := t.database.InsertMatchedRecord(event); err != nil {
+		return err
+	}
+	return nil
 }

Ensure that callers of RecordResultEvent handle the returned error.


68-71: ⚠️ Potential issue

Handle errors returned by InsertComponent

The RecordComponentEvent method is not handling the error returned by t.database.InsertComponent(event). To ensure robustness, capture and handle the error appropriately.

Apply this diff:

-func (t *Tracker) RecordComponentEvent(event ComponentEvent) {
+func (t *Tracker) RecordComponentEvent(event ComponentEvent) error {
 	event.siteName = getCorrectSiteName(event.URL)
-	t.database.InsertComponent(event)
+	if err := t.database.InsertComponent(event); err != nil {
+		return err
+	}
+	return nil
 }

Ensure that callers of RecordComponentEvent handle the returned error.

internal/server/nuclei_sdk.go (3)

112-113: ⚠️ Potential issue

Fix typos in error messages

The error messages "Could not create loadeopts." contain typos. They should be "Could not create loader options."

Apply this diff to fix the typos:

-return nil, errors.Wrap(err, "Could not create loadeopts.")
+return nil, errors.Wrap(err, "Could not create loader options.")

Make sure to update all occurrences of this typo.

Also applies to: 124-125


146-146: ⚠️ Potential issue

Correct typo in function parameter type

The parameter type PostReuestsHandlerRequest in the ExecuteScan method has a typo. It should be PostRequestsHandlerRequest.

Apply this diff:

-func (n *nucleiExecutor) ExecuteScan(target PostReuestsHandlerRequest) error {
+func (n *nucleiExecutor) ExecuteScan(target PostRequestsHandlerRequest) error {

Ensure that the struct PostRequestsHandlerRequest is correctly defined and update all references accordingly.


181-181: ⚠️ Potential issue

Handle the error returned by ExecuteScanWithOpts

The error returned by n.engine.ExecuteScanWithOpts is currently ignored. To ensure that any issues during execution are detected, capture and handle the error appropriately.

Apply this diff to handle the error:

-	_ = n.engine.ExecuteScanWithOpts(context.Background(), finalTemplates, inputProvider, true)
+	if err := n.engine.ExecuteScanWithOpts(context.Background(), finalTemplates, inputProvider, true); err != nil {
+		return errors.Wrap(err, "could not execute scan with options")
+	}

This ensures that any execution errors are propagated back to the caller.

internal/server/server.go (3)

188-193: ⚠️ Potential issue

Correct typo in struct name PostReuestsHandlerRequest

The struct name PostReuestsHandlerRequest has a typo. It should be PostRequestsHandlerRequest to correctly spell "Requests".

Apply this diff to correct the struct name:

-// PostReuestsHandlerRequest is the request body for the /requests POST handler.
-type PostReuestsHandlerRequest struct {
+// PostRequestsHandlerRequest is the request body for the /requests POST handler.
+type PostRequestsHandlerRequest struct {
 	RawHTTP string `json:"raw_http"`
 	URL     string `json:"url"`
 }

Update all references to this struct accordingly.


195-195: ⚠️ Potential issue

Update variable type in handleRequest function

After correcting the struct name, ensure that the req variable in handleRequest uses the updated struct PostRequestsHandlerRequest.

Apply this diff:

 func (s *DASTServer) handleRequest(c echo.Context) error {
-	var req PostReuestsHandlerRequest
+	var req PostRequestsHandlerRequest
 	if err := c.Bind(&req); err != nil {
 		fmt.Printf("Error binding request: %s\n", err)
 		return err

161-167: ⚠️ Potential issue

Security issue: Avoid passing tokens in query parameters

Passing authentication tokens via query parameters can expose them in server logs and browser history, leading to potential security risks. It's more secure to pass tokens in the Authorization header.

Modify the KeyAuthWithConfig to use headers:

 e.Use(middleware.KeyAuthWithConfig(middleware.KeyAuthConfig{
-	KeyLookup: "query:token",
+	KeyLookup: "header:Authorization",
 	Validator: func(key string, c echo.Context) (bool, error) {
 		return key == s.options.Token, nil
 	},

Also, update the buildConnectionURL function to remove the token from the URL:

 func (s *DASTServer) buildConnectionURL() string {
-	url := fmt.Sprintf("http://%s/requests", s.options.Address)
-	if s.options.Token != "" {
-		url += "?token=" + s.options.Token
-	}
+	url := fmt.Sprintf("http://%s/requests", s.options.Address)
 	return url
 }

Inform clients to include the token in the Authorization header when making requests.
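For example, a client could submit a request to the /requests endpoint like this; the address and token are placeholders, the JSON field comes from PostRequestsHandlerRequest above, and the Bearer scheme is Echo's KeyAuth default when KeyLookup is header:Authorization:

package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Placeholder values: substitute the real DAST server address and token.
	serverAddress := "127.0.0.1:9000"
	token := "<dast-server-token>"

	body := strings.NewReader(`{"url": "https://testsite.local/search?q=test"}`)
	req, err := http.NewRequest(http.MethodPost, "http://"+serverAddress+"/requests", body)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}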

pkg/types/types.go (1)

424-427: Enhance URL scope management fields.

The scope management fields could benefit from the improvements suggested in the previous review.

internal/runner/runner.go (3)

100-101: LGTM: New fields added to Runner struct

The new fields fuzzStats and dastServer are properly typed and follow the existing patterns.


340-353: LGTM: DAST server initialization

The initialization logic properly handles:

  1. Conditional creation based on options
  2. Error handling and propagation
  3. Resource cleanup in case of errors

389-391: LGTM: DAST server cleanup

The cleanup logic is properly implemented in the Close() method.

cmd/nuclei/main.go (2)

369-377: LGTM: New DAST server configuration flags

The new flags are well-defined with:

  1. Proper defaults and descriptions
  2. Consistent naming conventions
  3. Appropriate flag types for each configuration option

224-226: ⚠️ Potential issue

Remove unused variable

The variable nameGenerator is declared but never used in the code.

Apply this diff to remove the unused variable:

-var (
-    nameGenerator = namegenerator.NewNameGenerator(time.Now().UnixNano())
-)

Likely invalid or redundant comment.

🧰 Tools
🪛 golangci-lint (1.62.2)

225-225: var nameGenerator is unused

(unused)

pkg/protocols/http/request.go (1)

1026-1039: LGTM: Fuzzing statistics recording

The implementation properly records:

  1. Request details (URL, template ID, component info)
  2. Response information (status code, matches)
  3. Raw request/response data
  4. Severity information

pkg/fuzz/stats/stats.go Outdated Show resolved Hide resolved
pkg/fuzz/execute.go Outdated Show resolved Hide resolved
pkg/types/types.go Show resolved Hide resolved
pkg/output/output.go Outdated Show resolved Hide resolved
pkg/output/output.go Outdated Show resolved Hide resolved
internal/server/templates/index.html Show resolved Hide resolved
internal/server/templates/index.html Outdated Show resolved Hide resolved
internal/server/templates/index.html Outdated Show resolved Hide resolved
internal/server/templates/index.html Outdated Show resolved Hide resolved
internal/runner/runner.go Outdated Show resolved Hide resolved
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between a105306 and 2fc1b3a.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (6)
  • cmd/nuclei/main.go (4 hunks)
  • go.mod (3 hunks)
  • pkg/catalog/loader/loader.go (1 hunks)
  • pkg/fuzz/analyzers/time/time_delay.go (1 hunks)
  • pkg/protocols/http/httpclientpool/clientpool.go (1 hunks)
  • pkg/types/types.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • pkg/catalog/loader/loader.go
  • pkg/types/types.go
  • go.mod
🧰 Additional context used
🪛 GitHub Check: Lint
cmd/nuclei/main.go

[failure] 225-225:
var nameGenerator is unused (unused)

🪛 golangci-lint (1.62.2)
cmd/nuclei/main.go

225-225: var nameGenerator is unused

(unused)

🔇 Additional comments (4)
pkg/protocols/http/httpclientpool/clientpool.go (1)

114-114: LGTM: Proper handling of CustomMaxTimeout

The addition of CustomMaxTimeout to the cloned configuration ensures that custom timeout settings are preserved when configurations are cloned.

cmd/nuclei/main.go (3)

370-373: LGTM: Well-structured DAST server configuration flags

The new DAST server flags provide comprehensive configuration options for the live fuzzing functionality.


377-378: LGTM: Proper fuzzing scope control

The addition of scope control flags (--fuzz-scope and --fuzz-out-scope) allows precise control over the fuzzing boundaries using regex patterns.


225-225: Remove unused variable nameGenerator

The variable nameGenerator is declared but never used in the code.

Apply this diff to remove the unused variable:

-var (
-	nameGenerator = namegenerator.NewNameGenerator(time.Now().UnixNano())
-)
🧰 Tools
🪛 GitHub Check: Lint

[failure] 225-225:
var nameGenerator is unused (unused)

🪛 golangci-lint (1.62.2)

225-225: var nameGenerator is unused

(unused)

pkg/fuzz/analyzers/time/time_delay.go Outdated Show resolved Hide resolved
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (3)
internal/runner/runner.go (3)

287-301: Enhance error messages for better debugging

While the error handling is correct, the error messages could be more descriptive to aid in debugging.

Consider enhancing the error messages:

 if options.DASTReport || options.DASTServer {
     var err error
     runner.fuzzStats, err = fuzzStats.NewTracker()
     if err != nil {
-        return nil, errors.Wrap(err, "could not create fuzz stats db")
+        return nil, errors.Wrap(err, "failed to initialize fuzz stats tracker: possible permission or disk space issues")
     }
     if !options.DASTServer {
         dastServer, err := server.NewStatsServer(runner.fuzzStats)
         if err != nil {
-            return nil, errors.Wrap(err, "could not create dast server")
+            return nil, errors.Wrap(err, "failed to initialize DAST stats server: check configuration and permissions")
         }
         runner.dastServer = dastServer
     }
 }

307-318: Extract error constants for consistency

The error checking logic uses magic strings which could lead to inconsistencies.

Consider extracting the error constants:

+const (
+    noErrorValue = "none"
+    emptyError   = ""
+)

 if runner.fuzzStats != nil {
     outputWriter.RequestHook = func(request *output.JSONLogRequest) {
-        if request.Error == "none" || request.Error == "" {
+        if request.Error == noErrorValue || request.Error == emptyError {
             return
         }
         runner.fuzzStats.RecordErrorEvent(fuzzStats.ErrorEvent{
             TemplateID: request.Template,
             URL:        request.Input,
             Error:      request.Error,
         })
     }
 }

389-391: Add error logging during cleanup

While the cleanup implementation is correct, it would be beneficial to log any errors that occur during cleanup.

Consider adding error logging:

 if r.dastServer != nil {
-    r.dastServer.Close()
+    if err := r.dastServer.Close(); err != nil {
+        gologger.Warning().Msgf("Error closing DAST server: %v", err)
+    }
 }

 if executorOpts.FuzzStatsDB != nil {
-    executorOpts.FuzzStatsDB.Close()
+    if err := executorOpts.FuzzStatsDB.Close(); err != nil {
+        gologger.Warning().Msgf("Error closing fuzz stats DB: %v", err)
+    }
 }

Also applies to: 704-706

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 2fc1b3a and fda6165.

📒 Files selected for processing (1)
  • internal/runner/runner.go (11 hunks)
🧰 Additional context used
🪛 GitHub Check: Lint
internal/runner/runner.go

[failure] 691-691:
Error return value of r.dastServer.Start is not checked (errcheck)

🪛 golangci-lint (1.62.2)
internal/runner/runner.go

691-691: Error return value of r.dastServer.Start is not checked

(errcheck)

🔇 Additional comments (3)
internal/runner/runner.go (3)

15-15: LGTM: Clean struct field additions

The new imports and struct fields for DAST server and fuzz stats tracking are well-organized and follow Go conventions.

Also applies to: 43-43, 100-101


469-502: Add input validation for DAST server configuration

The server initialization needs validation for critical configuration parameters.


689-693: Critical: Fix error handling in DAST server start

The current implementation has incorrect error handling that could lead to infinite recursion.

🧰 Tools
🪛 GitHub Check: Lint

[failure] 691-691:
Error return value of r.dastServer.Start is not checked (errcheck)

🪛 golangci-lint (1.62.2)

691-691: Error return value of r.dastServer.Start is not checked

(errcheck)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (6)
pkg/protocols/http/http.go (2)

323-324: Consider making the timeout multiplier configurable.

The timeout multiplier is currently hardcoded. Consider making it configurable through the analyzer parameters to allow users to adjust based on their needs and network conditions.

Example implementation:

 if request.Analyzer != nil && request.Analyzer.Name == "time_delay" {
     var timeoutVal int
+    var multiplier float64 = 5 // default multiplier
     if timeout, ok := request.Analyzer.Parameters["sleep_duration"]; ok {
         timeoutVal, _ = timeout.(int)
     } else {
         timeoutVal = 5
     }
+    if mult, ok := request.Analyzer.Parameters["timeout_multiplier"]; ok {
+        if m, ok := mult.(float64); ok {
+            multiplier = m
+        }
+    }
-    customTimeout = int(math.Ceil(float64(timeoutVal) * 5))
+    customTimeout = int(math.Ceil(float64(timeoutVal) * multiplier))
 }

Cookie duplication issue confirmed in DAST implementation

The verification reveals a known issue with cookie duplication in the DAST server implementation, specifically when using auth-file. This is documented by a TODO comment in pkg/fuzz/component/cookie.go:

// TODO: Fix cookie duplication with auth-file

The current cookie handling implementation in the HTTP protocol:

  1. Has a DisableCookie flag that properly integrates with the HTTP client pool
  2. Correctly implements cookie jar management in httpclientpool
  3. However, the DAST fuzzing component has a separate cookie handling mechanism that can lead to duplications

This confirms the concern raised in the review comment about potential cookie duplications.
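As an illustration of one way to avoid the duplication (not the PR's actual fix), cookies could be deduplicated by name before being re-applied, with later values winning; the package name and helper below are hypothetical:

package component

import "net/http"

// dedupeCookies merges auth-file cookies over the cookies already present on
// a request, keeping at most one cookie per name so the same session cookie
// is never sent twice.
func dedupeCookies(existing, incoming []*http.Cookie) []*http.Cookie {
	byName := make(map[string]*http.Cookie, len(existing)+len(incoming))
	order := make([]string, 0, len(existing)+len(incoming))

	add := func(c *http.Cookie) {
		if _, seen := byName[c.Name]; !seen {
			order = append(order, c.Name)
		}
		byName[c.Name] = c // later occurrence overwrites the earlier one
	}
	for _, c := range existing {
		add(c)
	}
	for _, c := range incoming {
		add(c)
	}

	out := make([]*http.Cookie, 0, len(order))
	for _, name := range order {
		out = append(out, byName[name])
	}
	return out
}

The same idea applies whether the cookies come from an auth file or a previously recorded session: the last writer for a given cookie name wins, so the request never carries two cookies with the same name.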

🔗 Analysis chain

Line range hint 89-91: Review cookie handling implementation for potential duplications.

The PR objectives mention potential cookie duplication issues. While this code segment shows cookie management configuration, verify if the DisableCookie flag properly prevents cookie reuse when needed, especially in the context of DAST server implementation.

Let's analyze the cookie handling implementation:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for cookie handling logic
ast-grep --pattern 'DisableCookie = $cookie'

# Search for cookie-related issues or TODOs
rg -i "cookie.*duplication|duplicate.*cookie"

Length of output: 171


Script:

#!/bin/bash
# Let's analyze the cookie handling implementation more thoroughly

# Check cookie-related code in the http protocol
rg -A 5 -B 5 "DisableCookie" pkg/protocols/http/

# Check the cookie component implementation
cat pkg/fuzz/component/cookie.go

# Look for cookie-related configurations
rg -A 3 "cookie" pkg/protocols/http/http.go

Length of output: 10321

pkg/fuzz/analyzers/time/time_delay_test.go (4)

72-78: Use consistent random number sources for noise generation

In the noCorrelationSender and linearSender functions, random noise is added using the global rand package without seeding, which can lead to non-deterministic test results. For consistent and reproducible tests, consider using a local rand.Rand instance with a fixed seed.

Apply this modification to use a locally seeded random source:

func noCorrelationSender(baseline, noiseAmplitude float64) func(int) (float64, error) {
+	rng := rand.New(rand.NewSource(42))
	return func(delay int) (float64, error) {
		time.Sleep(10 * time.Millisecond)
		noise := 0.0
		if noiseAmplitude > 0 {
-			noise = (rand.Float64()*2 - 1) * noiseAmplitude
+			noise = (rng.Float64()*2 - 1) * noiseAmplitude
		}
		return baseline + noise, nil
	}
}

func linearSender(baseline, slope, noiseAmplitude float64) func(int) (float64, error) {
+	rng := rand.New(rand.NewSource(42))
	return func(delay int) (float64, error) {
		time.Sleep(10 * time.Millisecond)
		noise := 0.0
		if noiseAmplitude > 0 {
-			noise = (rand.Float64()*2 - 1) * noiseAmplitude
+			noise = (rng.Float64()*2 - 1) * noiseAmplitude
		}
		return baseline + slope*float64(delay) + noise, nil
	}
}

Also applies to: 194-201


492-495: Avoid re-seeding the random number generator during tests

In TestSmallErrorDependence, a new random source is seeded with the current time for each test run. This can lead to flaky tests. For consistent test results, consider using a fixed seed or the global random source if true randomness is not required.

Modify the code to use a fixed seed:

func TestSmallErrorDependence(t *testing.T) {
-	rng := rand.New(rand.NewSource(time.Now().UnixNano()))
+	rng := rand.New(rand.NewSource(42))
	reqSender := func(delay int) (float64, error) {
		return float64(delay) + rng.Float64()*0.5, nil
	}
	// Test continues...

442-446: Consider using the global random source for simplicity

In TestSlowNonInjectableCase, a new random source is created. Since the randomness here does not require a fixed seed, and the test does not depend on specific random values, using the global rand package can simplify the code.

Simplify the code by using the global random source:

func TestSlowNonInjectableCase(t *testing.T) {
-	rng := rand.New(rand.NewSource(time.Now().UnixNano()))
	var timesCalled int
	reqSender := func(delay int) (float64, error) {
		timesCalled++
-		return 10 + rng.Float64()*0.5, nil
+		return 10 + rand.Float64()*0.5, nil
	}
	// Test continues...

229-231: Ensure checkTimingDependency function is properly imported

The comment indicates that checkTimingDependency is assumed to be imported. To avoid confusion and ensure the test code is complete, make sure that the necessary imports are included or provide a mock implementation if needed.

Add the appropriate import statement or include a mock function for checkTimingDependency.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between fda6165 and 7f55f01.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (6)
  • go.mod (4 hunks)
  • pkg/fuzz/analyzers/analyzers.go (1 hunks)
  • pkg/fuzz/analyzers/time/analyzer.go (4 hunks)
  • pkg/fuzz/analyzers/time/time_delay.go (4 hunks)
  • pkg/fuzz/analyzers/time/time_delay_test.go (1 hunks)
  • pkg/protocols/http/http.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • pkg/fuzz/analyzers/time/time_delay.go
  • go.mod
🔇 Additional comments (5)
pkg/protocols/http/http.go (1)

323-324: LGTM! Consider monitoring the impact of increased timeout multiplier.

The increase from 3x to 5x buffer for time delay analysis provides better reliability for detecting time-based vulnerabilities, especially in DAST scenarios. However, this could impact scan duration when multiple time-delay tests are involved.

Let's analyze the usage of time delay analyzers to understand the impact:

✅ Verification successful

LGTM! The increased timeout multiplier is justified for time-based analysis.

The codebase analysis reveals that:

  • Time delay analysis is a core feature used for detecting timing-based vulnerabilities
  • The analyzer is well-documented and has a sophisticated implementation in pkg/fuzz/analyzers/time/
  • It uses linear regression heuristics and requires multiple requests for accurate detection
  • The increased timeout provides better reliability for the regression analysis without compromising security

The 5x multiplier is reasonable given:

  • The analyzer needs to make multiple requests with varying delays
  • Linear regression calculations require sufficient samples
  • The timeout is only increased for time_delay analyzer type, not affecting other operations
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for time delay analyzer usage in templates
rg -l "analyzer:\s*\n\s*name:\s*time_delay" 

Length of output: 237


Script:

#!/bin/bash
# Search for time delay analyzer usage in templates with multiline support
rg -U "analyzer:\s*\n\s*name:\s*time_delay"

# Also search for any references to time delay analysis in the codebase
rg -i "time.?delay" -A 2 -B 2

# Look for other timeout multiplier references
rg "timeout.*multipl|multipl.*timeout" -A 2 -B 2

Length of output: 7515

pkg/fuzz/analyzers/analyzers.go (2)

Line range hint 1-104: Consider request deduplication and concurrency handling

Given the PR objectives mention cookie duplication issues, and considering this is a DAST implementation:

  1. The shared state in the random number generator could contribute to request duplication
  2. Consider implementing request deduplication at a higher level
  3. The analyzer interface might benefit from lifecycle hooks for proper resource management

Suggestions:

  1. Add a request deduplication layer
  2. Consider adding Initialize() and Cleanup() methods to the Analyzer interface
  3. Document thread safety requirements for analyzers

Let's check for existing deduplication mechanisms:
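Separately from that check, here is a rough sketch of what lifecycle hooks on analyzers (suggestion 2 above) could look like; the interface name and wrapper are hypothetical and not part of the existing analyzers package API:

package analyzers

// AnalyzerLifecycle is a hypothetical extension of the analyzer contract that
// gives each analyzer explicit setup and teardown points, so shared state such
// as random sources or caches can be managed per run.
type AnalyzerLifecycle interface {
	// Initialize prepares any per-run resources before fuzzing starts.
	Initialize() error
	// Cleanup releases resources once the run has finished.
	Cleanup() error
}

// runWithLifecycle wraps an analyzer run with the optional hooks when the
// analyzer implements them.
func runWithLifecycle(a interface{}, run func() error) error {
	if l, ok := a.(AnalyzerLifecycle); ok {
		if err := l.Initialize(); err != nil {
			return err
		}
		defer func() { _ = l.Cleanup() }()
	}
	return run()
}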


87-88: ⚠️ Potential issue

Critical: Use cryptographically secure random number generation

The current implementation has several security concerns that could impact the effectiveness of security testing:

  1. Using math/rand instead of crypto/rand makes the random string generation predictable
  2. The shared random source could cause thread safety issues in concurrent fuzzing operations
  3. Time-based seed initialization is predictable

Consider applying this security-focused refactor:

-var (
-    random = rand.New(rand.NewSource(time.Now().UnixNano()))
-)

+import (
+    "crypto/rand"
+    "sync"
+)
+
+var randomMu sync.Mutex

 func randStringBytesMask(n int) string {
     b := make([]byte, n)
-    for i := range b {
-        b[i] = letterBytes[random.Intn(len(letterBytes))]
+    randomMu.Lock()
+    defer randomMu.Unlock()
+    
+    if _, err := rand.Read(b); err != nil {
+        panic(err) // handle error appropriately in production
     }
+    
+    // Map bytes to letterBytes charset
+    for i := range b {
+        b[i] = letterBytes[int(b[i])%len(letterBytes)]
+    }
     return string(b)
 }

This change:

  1. Uses cryptographically secure random numbers
  2. Ensures thread safety
  3. Eliminates predictability in the random string generation

Let's verify the usage of this function in security-critical contexts:

pkg/fuzz/analyzers/time/analyzer.go (2)

21-22: Review the impact of changing default analyzer parameters

The DefaultSleepDuration has been increased from 5 to 7, and DefaultRequestsLimit has been decreased from 4 to 3. Additionally, defaultSleepTimeDuration is updated accordingly. These changes may affect the analyzer's performance and detection accuracy. Ensure that these new defaults have been thoroughly tested and validated.

Could you provide benchmarks or test results that demonstrate the improvements with these new default values?

Also applies to: 26-26


151-160: Improved server processing time measurement

The updated timing logic in doHTTPRequestWithTimeTracing now measures serverTime from WroteHeaders to GotFirstResponseByte, which provides a more accurate representation of the server's processing time by excluding network latency.
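For reference, the measurement described above can be reproduced with the standard net/http/httptrace hooks; the sketch below shows the general shape and is not the analyzer's actual code:

package main

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
	"time"
)

// serverProcessingTime measures the interval between the request headers being
// written and the first response byte arriving, which approximates server-side
// processing time while excluding connection setup.
func serverProcessingTime(req *http.Request) (time.Duration, error) {
	var wroteHeaders, firstByte time.Time
	trace := &httptrace.ClientTrace{
		WroteHeaders:         func() { wroteHeaders = time.Now() },
		GotFirstResponseByte: func() { firstByte = time.Now() },
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	resp, err := http.DefaultTransport.RoundTrip(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	return firstByte.Sub(wroteHeaders), nil
}

func main() {
	req, _ := http.NewRequest(http.MethodGet, "https://example.com", nil)
	d, err := serverProcessingTime(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	fmt.Printf("server time: %s\n", d)
}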

pkg/fuzz/analyzers/time/time_delay_test.go Show resolved Hide resolved
pkg/fuzz/analyzers/time/analyzer.go Show resolved Hide resolved
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (5)
pkg/fuzz/analyzers/time/time_delay.go (3)

36-39: Fix typo in struct name

The struct name contains a spelling error: "requstsSentMetadata" should be "requestsSentMetadata"

-type requstsSentMetadata struct {
+type requestsSentMetadata struct {
	delay         int
	delayReceived float64
}

168-193: Add validation for numerical stability

While the statistical calculations are improved, consider adding checks for numerical stability:

  1. Check for potential division by zero when varX or varY are very close to zero
  2. Handle potential overflow in multiplication operations
 	// Compute sample variances and covariance
+	const epsilon = 1e-10
 	varX := (o.sumXX - n*meanX*meanX) / (n - 1)
 	varY := (o.sumYY - n*meanY*meanY) / (n - 1)
 	covXY := (o.sumXY - n*meanX*meanY) / (n - 1)
 
-	// If varX is zero, slope cannot be computed meaningfully.
+	// If varX is effectively zero, slope cannot be computed meaningfully.
-	if varX == 0 {
+	if math.Abs(varX) < epsilon {
 		o.slope = 0.0
 		o.intercept = meanY
 		o.correlation = 0.0
 		return
 	}

201-213: Define constants for magic numbers

Consider extracting the multiplier (2x) into a named constant for better maintainability and documentation.

+const (
+    // Multiplier for handling cases where the slope is approximately double
+    // the expected slope, which can happen with time-based responses
+    slopeDoubleMultiplier = 2.0
+)

 func (o *simpleLinearRegression) IsWithinConfidence(correlationErrorRange float64, expectedSlope float64, slopeErrorRange float64) bool {
 	if o.count < 2 {
 		return true
 	}
 	slopeDiff := math.Abs(expectedSlope - o.slope)
-	slope2xDiff := math.Abs(expectedSlope*2 - o.slope)
+	slope2xDiff := math.Abs(expectedSlope*slopeDoubleMultiplier - o.slope)
 	if slopeDiff > slopeErrorRange && slope2xDiff > slopeErrorRange {
 		return false
 	}
 	return o.correlation > 1.0-correlationErrorRange
}
pkg/fuzz/analyzers/time/time_delay_test.go (2)

12-57: Add package-level documentation

The test documentation is excellent. Consider moving it to a package-level doc.go file for better visibility and reusability.

Create a new file doc.go:

/*
Package time implements timing dependency detection for DAST.

Test Categories:
1. Perfect Linear Cases
   - TestPerfectLinear: Basic case with slope=1, no noise
   ...
*/
package time

212-465: Extract common test setup into helper functions

Consider creating helper functions to reduce duplication in test setup and assertions. This would make the tests more maintainable and easier to read.

func setupTimingTest(t *testing.T, opts struct {
    requestsLimit int
    highSleepTimeSeconds int
    correlationErrorRange float64
    slopeErrorRange float64
    sender timeDelayRequestSender
}) (bool, string) {
    match, reason, err := checkTimingDependency(
        opts.requestsLimit,
        opts.highSleepTimeSeconds,
        opts.correlationErrorRange,
        opts.slopeErrorRange,
        opts.sender,
    )
    if err != nil {
        t.Fatalf("Unexpected error: %v", err)
    }
    return match, reason
}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1fa4540 and ad5ec96.

📒 Files selected for processing (2)
  • pkg/fuzz/analyzers/time/time_delay.go (5 hunks)
  • pkg/fuzz/analyzers/time/time_delay_test.go (1 hunks)
🔇 Additional comments (2)
pkg/fuzz/analyzers/time/time_delay.go (1)

59-104: LGTM! Enhanced observability with request metadata tracking

The changes improve debugging and analysis capabilities by tracking detailed request metadata and providing comprehensive reporting.

pkg/fuzz/analyzers/time/time_delay_test.go (1)

83-88: Fix noisyLinearSender to add random noise

The function adds constant noise instead of random noise as suggested by the comment.

Ice3man543 and others added 2 commits December 13, 2024 13:19
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (3)
pkg/fuzz/analyzers/time/analyzer.go (1)

158-175: Enhance baseline delay measurement reliability

The baseline delay implementation could be improved:

  1. Consider using median instead of mean to handle outliers
  2. Add jitter detection by checking standard deviation
  3. Make the number of baseline requests configurable

Consider this enhanced implementation:

 func getBaselineDelay(reqSender timeDelayRequestSender) (float64, error) {
-	var delays []float64
+	const (
+		defaultBaselineRequests = 3
+		maxJitterPercent = 15
+	)
+	numRequests := defaultBaselineRequests
+	if v := os.Getenv("NUCLEI_BASELINE_REQUESTS"); v != "" {
+		if n, err := strconv.Atoi(v); err == nil && n > 0 {
+			numRequests = n
+		}
+	}
+	delays := make([]float64, 0, numRequests)
 	// Use zero or a very small delay to measure baseline
-	for i := 0; i < 3; i++ {
+	for i := 0; i < numRequests; i++ {
 		delay, err := reqSender(0)
 		if err != nil {
 			return 0, errors.Wrap(err, "could not get baseline delay")
 		}
 		delays = append(delays, delay)
 	}
-
-	var total float64
-	for _, d := range delays {
-		total += d
+	
+	// Check for high jitter
+	if stdDev := calculateStdDev(delays); stdDev > (mean(delays) * maxJitterPercent / 100) {
+		gologger.Warning().Msgf("High jitter detected in baseline measurements: %.2f seconds", stdDev)
 	}
-	avg := total / float64(len(delays))
-	return avg, nil
+	
+	// Use median instead of mean to handle outliers
+	return median(delays), nil
 }
+
+func median(values []float64) float64 {
+	sorted := make([]float64, len(values))
+	copy(sorted, values)
+	sort.Float64s(sorted)
+	if len(sorted)%2 == 0 {
+		return (sorted[len(sorted)/2-1] + sorted[len(sorted)/2]) / 2
+	}
+	return sorted[len(sorted)/2]
+	return sorted[len(sorted)/2]
+}
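The suggestion above calls mean and calculateStdDev, which are not defined in the diff; a minimal version of both helpers, matching the assumed names and placed alongside median, could look like this:

package time

import "math"

// mean returns the arithmetic mean of the measured baseline delays.
func mean(values []float64) float64 {
	if len(values) == 0 {
		return 0
	}
	var total float64
	for _, v := range values {
		total += v
	}
	return total / float64(len(values))
}

// calculateStdDev returns the sample standard deviation of the delays,
// used above to flag noisy (high-jitter) baseline measurements.
func calculateStdDev(values []float64) float64 {
	if len(values) < 2 {
		return 0
	}
	m := mean(values)
	var sumSq float64
	for _, v := range values {
		d := v - m
		sumSq += d * d
	}
	return math.Sqrt(sumSq / float64(len(values)-1))
}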
pkg/fuzz/analyzers/time/time_delay.go (2)

9-11: Fix typos in comments for clarity

There are typographical errors in the comments that should be corrected to improve readability.

Apply this diff to fix the typos:

 // NOTE: This algorithm has been heavily modified after being introduced
-// in nuclei. Now the logic has sever bug fixes and improvements and
+// in nuclei. Now the logic has several bug fixes and improvements and
 // has been evolving to be more stable.

66-97: Fix variable name 'delayRecieved' to 'delayReceived'

The variable delayRecieved is misspelled. Correcting it to delayReceived improves code readability and prevents potential errors.

Apply this diff to fix the variable name:

-isCorrelationPossible, delayRecieved, err := sendRequestAndTestConfidence(...)
+isCorrelationPossible, delayReceived, err := sendRequestAndTestConfidence(...)

...

-if delayRecieved < baselineDelay+float64(highSleepTimeSeconds)*0.8 {
+if delayReceived < baselineDelay+float64(highSleepTimeSeconds)*0.8 {

...

-isCorrelationPossibleSecond, delayRecievedSecond, err := sendRequestAndTestConfidence(...)
+isCorrelationPossibleSecond, delayReceivedSecond, err := sendRequestAndTestConfidence(...)

...

-if delayRecievedSecond < baselineDelay+float64(DefaultLowSleepTimeSeconds)*0.8 {
+if delayReceivedSecond < baselineDelay+float64(DefaultLowSleepTimeSeconds)*0.8 {

...

-delayRecieved: delayRecievedSecond,
+delayReceived: delayReceivedSecond,

Ensure all instances of delayRecieved are corrected throughout the code.
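As a concrete reading of the 0.8 factor in those checks: with a baseline of roughly 1.5s and a high sleep time of 7s, a response must take at least 1.5 + 7*0.8 = 7.1s before the first correlation step continues. A tiny standalone sketch of that guard, using hypothetical values:

package main

import "fmt"

// passesDelayGuard mirrors the threshold shown in the diff above: the observed
// delay must exceed the baseline plus 80% of the injected sleep duration.
func passesDelayGuard(delayReceived, baselineDelay float64, sleepSeconds int) bool {
	return delayReceived >= baselineDelay+float64(sleepSeconds)*0.8
}

func main() {
	baseline := 1.5 // assumed baseline round-trip in seconds
	fmt.Println(passesDelayGuard(8.6, baseline, 7)) // true: 8.6 >= 1.5 + 5.6
	fmt.Println(passesDelayGuard(4.0, baseline, 7)) // false: injection likely had no effect
}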

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ad5ec96 and 70bd93a.

📒 Files selected for processing (3)
  • pkg/fuzz/analyzers/time/analyzer.go (4 hunks)
  • pkg/fuzz/analyzers/time/time_delay.go (6 hunks)
  • pkg/fuzz/analyzers/time/time_delay_test.go (1 hunks)
🔇 Additional comments (4)
pkg/fuzz/analyzers/time/analyzer.go (3)

140-143: Potential issue with increased defaultSleepTimeDuration filtering responses

The condition options.ResponseTimeDelay < defaultSleepTimeDuration now uses 7 seconds due to the updated defaultSleepTimeDuration. This means responses with delays less than 7 seconds will not be analyzed further. This could lead to missing vulnerabilities that exhibit delays between 5 and 7 seconds.


22-28: Verify impact of increased sleep duration on vulnerability detection

The increase in DefaultSleepDuration and defaultSleepTimeDuration from 5 to 7 seconds could:

  1. Increase overall test execution time
  2. Miss vulnerabilities that manifest with shorter delays (between 5-7 seconds)
  3. Create inconsistency with the documented behavior in ApplyInitialTransformation

Consider:

  • Adding documentation explaining why 7 seconds was chosen
  • Making these thresholds configurable via environment variables
✅ Verification successful

Based on the search results, I can now make a definitive assessment about the increased sleep duration:

Sleep duration increase is intentional and consistent with the codebase

The increase in sleep duration from 5 to 7 seconds appears to be an intentional change that aligns with the time-based vulnerability detection logic:

  1. The default sleep duration is used consistently across the analyzer:

    • DefaultSleepDuration = int(7)
    • defaultSleepTimeDuration = 7 * time.Second
  2. The analyzer implements a sophisticated time-delay detection algorithm that:

    • Uses alternating high and low sleep times for faster comparison
    • Has built-in baseline delay detection
    • Uses statistical modeling with correlation and slope analysis
    • Has configurable parameters including sleep_duration
  3. The 7-second default provides:

    • Better separation from the baseline noise (typically 0.5-2s)
    • More reliable detection with the default low sleep time of 3 seconds
    • Sufficient margin for the 0.8 multiplier used in delay validation

The documented behavior in ApplyInitialTransformation remains correct as the sleep duration is configurable via parameters.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for other time-based detection logic that might need adjustment
rg -A 3 "time.Second|sleep|delay" --type go

Length of output: 66871


Line range hint 179-200: Consider connection establishment time in measurements

The change to measure from WroteHeaders instead of request start provides more accurate server-side timing but could miss important connection establishment delays. This might affect detection of time-based vulnerabilities that rely on connection-level delays.

pkg/fuzz/analyzers/time/time_delay_test.go (1)

83-90: ⚠️ Potential issue

Fix noisyLinearSender to add random noise

The noisyLinearSender function currently adds a constant noise of +0.2s, whereas the comment suggests it should add random noise in the range of ±0.2s. To accurately simulate real-world noisy conditions, the noise should be randomly generated within the specified range.

Apply this diff to correct the function:

func noisyLinearSender(baseline float64) func(delay int) (float64, error) {
    return func(delay int) (float64, error) {
        time.Sleep(10 * time.Millisecond)
        // Add random noise (±0.2s) to a linear relationship
-       noise := 0.2
+       noiseAmplitude := 0.2
+       noise := (rand.Float64()*2 - 1) * noiseAmplitude
        return baseline + float64(delay) + noise, nil
    }
}

pkg/fuzz/analyzers/time/time_delay.go Outdated Show resolved Hide resolved
pkg/fuzz/analyzers/time/time_delay.go Outdated Show resolved Hide resolved
dwisiswant0 and others added 2 commits December 13, 2024 15:21
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (3)
internal/server/nuclei_sdk.go (3)

6-6: Remove unused pprof import

The blank import _ "net/http/pprof" appears to be unnecessary as profiling functionality isn't being used in this file.

-	_ "net/http/pprof"

92-94: Extract magic number into a named constant

The value 100 is used as a magic number for auto-adjusting maxHostError. Consider extracting this into a named constant for better maintainability.

+const defaultFuzzingHostError = 100
+
 		if maxHostError == 30 {
-			maxHostError = 100 // auto adjust for fuzzings
+			maxHostError = defaultFuzzingHostError // auto adjust for fuzzings
 		}

185-196: Log errors during cleanup

Errors during cleanup are silently ignored. Consider logging these errors to help with debugging potential resource leaks.

 func (n *nucleiExecutor) Close() {
 	if n.executorOpts.FuzzStatsDB != nil {
 		n.executorOpts.FuzzStatsDB.Close()
 	}
 	if n.options.Interactsh != nil {
-		_ = n.options.Interactsh.Close()
+		if err := n.options.Interactsh.Close(); err != nil {
+			gologger.Warning().Msgf("Error closing Interactsh client: %v", err)
+		}
 	}
 	if n.executorOpts.InputHelper != nil {
-		_ = n.executorOpts.InputHelper.Close()
+		if err := n.executorOpts.InputHelper.Close(); err != nil {
+			gologger.Warning().Msgf("Error closing input helper: %v", err)
+		}
 	}
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 70bd93a and f31e963.

📒 Files selected for processing (2)
  • internal/server/nuclei_sdk.go (1 hunks)
  • internal/server/templates/index.html (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • internal/server/templates/index.html
🔇 Additional comments (4)
internal/server/nuclei_sdk.go (4)

40-61: LGTM! Well-structured core types

The nucleiExecutor and NucleiExecutorOptions structs are well-designed with clear separation of concerns and comprehensive configuration options.


124-124: Correct the typo in the error message

The error message "Could not create loadeopts." contains a typo.


146-146: Fix the typo in the function parameter type

The type PostReuestsHandlerRequest in the function signature has a typo.


181-181: Handle the error returned by ExecuteScanWithOpts

The error returned by n.engine.ExecuteScanWithOpts is currently ignored.

internal/server/nuclei_sdk.go Show resolved Hide resolved
internal/server/nuclei_sdk.go Show resolved Hide resolved
cmd/nuclei/main.go Outdated Show resolved Hide resolved
internal/runner/runner.go Outdated Show resolved Hide resolved
internal/server/dedupe.go Show resolved Hide resolved
internal/server/nuclei_sdk.go Outdated Show resolved Hide resolved
internal/server/nuclei_sdk.go Outdated Show resolved Hide resolved
pkg/fuzz/analyzers/time/time_delay.go Outdated Show resolved Hide resolved
pkg/input/provider/http/multiformat.go Show resolved Hide resolved
pkg/output/output.go Outdated Show resolved Hide resolved
pkg/types/types.go Show resolved Hide resolved
internal/server/server.go Outdated Show resolved Hide resolved
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 7

🔭 Outside diff range comments (1)
pkg/output/output.go (1)

Line range hint 355-407: Review hook execution order.

The hook is executed before error handling and file writing. Consider:

  1. Moving the hook execution after error handling to ensure consistency
  2. Adding error handling for hook execution

Apply this diff to improve the execution order:

-if w.JSONLogRequestHook != nil {
-  w.JSONLogRequestHook(request)
-}
 if w.traceFile != nil {
   _, _ = w.traceFile.Write(data)
 }
 if requestErr != nil && w.errorFile != nil {
   _, _ = w.errorFile.Write(data)
 }
+if w.JSONLogRequestHook != nil {
+  w.JSONLogRequestHook(request)
+}
♻️ Duplicate comments (4)
internal/server/nuclei_sdk.go (2)

184-184: ⚠️ Potential issue

Handle the error returned by ExecuteScanWithOpts

The error from ExecuteScanWithOpts is being ignored, which could mask important execution failures.


155-164: ⚠️ Potential issue

Add validation for request data

The request data is used directly without validation. Consider adding checks for:

  • Valid URL format
  • Maximum request size
  • Malformed headers
    This is especially important for a DAST server that processes external input.
internal/server/server.go (2)

162-169: ⚠️ Potential issue

Security issue: Avoid passing tokens in query parameters

Passing authentication tokens via query parameters can expose them in server logs and browser history, leading to potential security risks. It's more secure to pass tokens in the Authorization header.


148-149: 🛠️ Refactor suggestion

Consider template deduplication

Templates and workflows are appended without checking for duplicates. This could lead to redundant scanning and potentially contribute to the cookie duplication issue mentioned in the PR description.

🧹 Nitpick comments (5)
pkg/authprovider/authx/strategy.go (1)

41-41: Maintain consistent variable naming for clarity

In the Apply method, the variable strategies is used, whereas in ApplyOnRR, the variable strategy is used. For consistency and better readability, consider renaming strategy to strategies in ApplyOnRR.

Apply this diff to update the variable name:

 func (d *DynamicAuthStrategy) ApplyOnRR(req *retryablehttp.Request) {
-	strategy := d.Dynamic.GetStrategies()
+	strategies := d.Dynamic.GetStrategies()
 	for _, s := range strategies {
 		s.ApplyOnRR(req)
 	}
 }
internal/server/scope/scope.go (2)

22-26: Simplify the code by removing unnecessary 'else' blocks

In the NewManager function, the if/else blocks can be flattened by assigning compiled and err before the error check. This keeps compiled in scope for the append while reducing nesting and improving readability.

Apply this diff to make the changes:

 for _, regex := range inScope {
-    if compiled, err := regexp.Compile(regex); err != nil {
+    compiled, err := regexp.Compile(regex)
+    if err != nil {
         return nil, fmt.Errorf("could not compile regex %s: %s", regex, err)
-    } else {
-        manager.inScope = append(manager.inScope, compiled)
     }
+    manager.inScope = append(manager.inScope, compiled)
 }

 for _, regex := range outOfScope {
-    if compiled, err := regexp.Compile(regex); err != nil {
+    compiled, err := regexp.Compile(regex)
+    if err != nil {
         return nil, fmt.Errorf("could not compile regex %s: %s", regex, err)
-    } else {
-        manager.outOfScope = append(manager.outOfScope, compiled)
     }
+    manager.outOfScope = append(manager.outOfScope, compiled)
 }

Also applies to: 29-33


53-56: Simplify return logic in Validate method

The return statements in the Validate method can be simplified by returning urlValidated, nil directly, enhancing code clarity.

Apply this diff to simplify the code:

-if urlValidated {
-    return true, nil
-}
-return false, nil
+return urlValidated, nil
pkg/fuzz/analyzers/time/time_delay.go (1)

76-78: Correct the typo in variable name delayRecieved

The variable delayRecieved is misspelled. It should be delayReceived for consistency and clarity.

Apply this diff to fix the typo:

-    isCorrelationPossible, delayRecieved, err := sendRequestAndTestConfidence(regression, highSleepTimeSeconds, requestSender, baselineDelay)
+    isCorrelationPossible, delayReceived, err := sendRequestAndTestConfidence(regression, highSleepTimeSeconds, requestSender, baselineDelay)
     if err != nil {
         return false, "", err
     }
-    if delayRecieved < baselineDelay+float64(highSleepTimeSeconds)*0.8 {
+    if delayReceived < baselineDelay+float64(highSleepTimeSeconds)*0.8 {
         return false, "", nil
     }

Repeat the renaming for all occurrences of delayRecieved in the code.

Also applies to: 84-84, 88-89, 133-133, 215-215

internal/server/nuclei_sdk.go (1)

92-94: Extract magic number to a named constant

The value 100 should be extracted to a named constant with a descriptive name to improve maintainability.

+const defaultFuzzingMaxHostError = 100
+
 		if maxHostError == 30 {
-			maxHostError = 100 // auto adjust for fuzzings
+			maxHostError = defaultFuzzingMaxHostError // auto adjust for fuzzings
 		}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f31e963 and 9f4b89a.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (14)
  • cmd/nuclei/main.go (4 hunks)
  • internal/runner/options.go (2 hunks)
  • internal/runner/runner.go (11 hunks)
  • internal/server/nuclei_sdk.go (1 hunks)
  • internal/server/requests_worker.go (1 hunks)
  • internal/server/scope/scope.go (1 hunks)
  • internal/server/scope/scope_test.go (1 hunks)
  • internal/server/server.go (1 hunks)
  • internal/server/templates/index.html (1 hunks)
  • pkg/authprovider/authx/strategy.go (1 hunks)
  • pkg/core/workpool.go (1 hunks)
  • pkg/fuzz/analyzers/time/time_delay.go (6 hunks)
  • pkg/input/provider/http/multiformat.go (4 hunks)
  • pkg/output/output.go (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • internal/server/scope/scope_test.go
  • internal/server/templates/index.html
  • internal/server/requests_worker.go
🧰 Additional context used
🪛 golangci-lint (1.62.2)
cmd/nuclei/main.go

225-225: var nameGenerator is unused

(unused)

🔇 Additional comments (7)
pkg/authprovider/authx/strategy.go (1)

Line range hint 41-49: Add nil checks in ApplyOnRR method to prevent nil pointer dereferences

The ApplyOnRR method does not handle cases where d.Dynamic.GetStrategies() returns nil or when a strategy within the slice is nil. This can lead to nil pointer dereference errors. Please add nil checks similar to those in the Apply method.

Apply this diff to fix the issue:

 func (d *DynamicAuthStrategy) ApplyOnRR(req *retryablehttp.Request) {
-	strategy := d.Dynamic.GetStrategies()
-	for _, s := range strategy {
+	strategies := d.Dynamic.GetStrategies()
+	if strategies == nil {
+		return
+	}
+	for _, s := range strategies {
+		if s == nil {
+			continue
+		}
 		s.ApplyOnRR(req)
 	}
 }
pkg/input/provider/http/multiformat.go (1)

62-63: Add validation when both InputFile and InputContents are empty

If both InputFile and InputContents are empty, the inputReader will be initialized with an empty string reader, potentially leading to unexpected behavior. Ensure that at least one of these fields is provided.

Apply this diff to add validation:

 var inputFile *os.File
 var inputReader io.Reader
+if opts.InputFile == "" && opts.InputContents == "" {
+    return nil, errors.New("either InputFile or InputContents must be provided")
+}
 if opts.InputFile != "" {
     file, err := os.Open(opts.InputFile)
     if err != nil {
         return nil, errors.Wrap(err, "could not open input file")
     }
     inputFile = file
     inputReader = file
 } else {
     inputReader = strings.NewReader(opts.InputContents)
 }
pkg/fuzz/analyzers/time/time_delay.go (1)

106-108: Report the actual number of requests made

In the result message, requestsLimit represents the initial limit, not the actual number of requests sent. Use len(requestsSent) to accurately report the number of requests made.

Apply this diff to correct the message:

 resultReason.WriteString(fmt.Sprintf(
     "[time_delay] made %d requests (baseline: %.2fs) successfully, with a regression slope of %.2f and correlation %.2f",
-    requestsLimit,
+    len(requestsSent),
     baselineDelay,
     regression.slope,
     regression.correlation,
 ))
internal/server/nuclei_sdk.go (1)

40-45: LGTM: Well-structured executor definition

The struct has a clean design with clear separation between execution engine, template store, and configuration options.

internal/runner/runner.go (2)

389-391: LGTM: Proper cleanup in Close method

The Close method properly cleans up the DAST server resources.


689-695: ⚠️ Potential issue

Critical: Fix error handling in DAST server start

The current implementation ignores the error from dastServer.Start() and recursively calls it again, which could lead to:

  1. Infinite recursion if the server consistently fails to start
  2. Masking of important errors that should be handled

Apply this diff to fix the error handling:

if r.dastServer != nil {
-    if err := r.dastServer.Start(); err != nil {
-        r.dastServer.Start()
-    }
+    if err := r.dastServer.Start(); err != nil {
+        return errors.Wrap(err, "could not start DAST server")
+    }
}

Likely invalid or redundant comment.

cmd/nuclei/main.go (1)

224-226: ⚠️ Potential issue

Remove unused variable nameGenerator

The variable nameGenerator is declared but not used, causing a linting error. Please remove it to clean up the code and resolve the lint warning.

Apply this diff to remove the unused variable:

-var (
-	nameGenerator = namegenerator.NewNameGenerator(time.Now().UnixNano())
-)

Likely invalid or redundant comment.

🧰 Tools
🪛 golangci-lint (1.62.2)

225-225: var nameGenerator is unused

(unused)

pkg/core/workpool.go Outdated Show resolved Hide resolved
internal/server/nuclei_sdk.go Show resolved Hide resolved
internal/server/nuclei_sdk.go Show resolved Hide resolved
internal/server/server.go Show resolved Hide resolved
internal/runner/options.go Show resolved Hide resolved
pkg/output/output.go Show resolved Hide resolved
cmd/nuclei/main.go Show resolved Hide resolved
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
internal/runner/runner.go (2)

469-502: ⚠️ Potential issue

Critical: Add validation for DAST server configuration

The DAST server initialization lacks validation for critical configuration parameters.


689-695: ⚠️ Potential issue

Critical: Fix error handling in DAST server start

The current implementation starts the DAST server in a goroutine but only logs errors without proper handling.

🧹 Nitpick comments (1)
internal/runner/runner.go (1)

307-318: Consider adding rate limiting for error event recording

The fuzzing stats hook implementation looks good, but consider adding rate limiting for error event recording to prevent overwhelming the stats database during high-frequency error scenarios.

 outputWriter.JSONLogRequestHook = func(request *output.JSONLogRequest) {
   if request.Error == "none" || request.Error == "" {
     return
   }
+  // Rate limit error events to prevent overwhelming the stats database
+  if r.rateLimiter != nil {
+    r.rateLimiter.Take()
+  }
   runner.fuzzStats.RecordErrorEvent(fuzzStats.ErrorEvent{
     TemplateID: request.Template,
     URL:        request.Input,
     Error:      request.Error,
   })
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9f4b89a and 0c4645b.

📒 Files selected for processing (3)
  • internal/runner/runner.go (11 hunks)
  • pkg/protocols/http/httpclientpool/clientpool.go (1 hunks)
  • pkg/types/types.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • pkg/protocols/http/httpclientpool/clientpool.go
  • pkg/types/types.go
🔇 Additional comments (3)
internal/runner/runner.go (3)

100-101: LGTM: New fields for DAST functionality

The new fields fuzzStats and dastServer are appropriately added to the Runner struct.


389-391: LGTM: Proper cleanup handling

The cleanup logic for both the DAST server and fuzzing stats is properly implemented in the Close method.

Also applies to: 706-708


287-300: Verify error handling in DAST initialization

The initialization logic looks good, but there's a potential issue with error handling in the conditional block.

Consider adding error wrapping for better context:

 if err != nil {
-  return nil, errors.Wrap(err, "could not create fuzz stats db")
+  return nil, errors.Wrap(err, "failed to initialize fuzz stats tracker for DAST")
 }

@dwisiswant0
Member

Lint failed.

@ehsandeep
Member

@coderabbitai resolve


coderabbitai bot commented Dec 19, 2024

✅ Actions performed

Comments resolved.

@ehsandeep ehsandeep removed the request for review from tarunKoyalwar December 19, 2024 11:59
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (2)
internal/runner/runner.go (1)

285-298: ⚠️ Potential issue

Add input validation for DAST server configuration.

The DAST server initialization lacks validation for critical configuration parameters.

Apply this diff to add validation:

 if options.DASTReport || options.DASTServer {
+    // Validate server address
+    if options.DASTServer && options.DASTServerAddress == "" {
+        return nil, errors.New("DAST server address cannot be empty")
+    }
+    // Validate token if authentication is required
+    if options.DASTServer && options.DASTServerToken != "" && len(options.DASTServerToken) < 32 {
+        return nil, errors.New("DAST server token must be at least 32 characters")
+    }
+    // Validate scope patterns
+    for _, pattern := range options.Scope {
+        if _, err := regexp.Compile(pattern); err != nil {
+            return nil, errors.Wrap(err, "invalid scope pattern")
+        }
+    }
     var err error
     runner.fuzzStats, err = fuzzStats.NewTracker()
pkg/input/formats/swagger/swagger.go (1)

46-51: 🛠️ Refactor suggestion

Improve error handling and consider memory usage

  1. Error messages could be more specific about YAML parsing failures
  2. Consider streaming for large files instead of reading entire file into memory
 var data []byte
 data, err = io.ReadAll(input)
 if err != nil {
-    return errors.Wrap(err, "could not read data file")
+    return errors.Wrap(err, "could not read YAML file")
 }
 err = yaml.Unmarshal(data, schemav2)
+if err != nil {
+    return errors.Wrap(err, "could not parse YAML data as Swagger 2.0 schema")
+}
🧹 Nitpick comments (2)
pkg/authprovider/authx/cookies_auth.go (1)

37-58: LGTM! Implementation follows best practices for cookie management.

The selective cookie removal implementation correctly preserves essential session cookies by only removing cookies that are about to be replaced. The code is safe and handles edge cases properly.

Consider this minor optimization to reduce memory allocations:

-	existingCookies := req.Cookies()
+	// Pre-allocate slice with capacity of existing + new cookies
+	existingCookies := make([]*http.Cookie, 0, len(req.Cookies())+len(s.Data.Cookies))
+	existingCookies = append(existingCookies, req.Cookies()...)
pkg/input/formats/swagger/swagger.go (1)

43-44: Consider scoped error declaration

The error variable could be declared closer to its first usage to prevent potential shadowing issues.

 ext := path.Ext(filePath)
-var err error
 if ext == ".yaml" || ext == ".yml" {
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0c4645b and 952a887.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (7)
  • cmd/nuclei/main.go (2 hunks)
  • go.mod (4 hunks)
  • internal/runner/runner.go (11 hunks)
  • pkg/authprovider/authx/cookies_auth.go (2 hunks)
  • pkg/fuzz/execute.go (2 hunks)
  • pkg/fuzz/stats/stats.go (1 hunks)
  • pkg/input/formats/swagger/swagger.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • pkg/fuzz/execute.go
  • pkg/fuzz/stats/stats.go
  • go.mod
🧰 Additional context used
📓 Learnings (1)
pkg/authprovider/authx/cookies_auth.go (1)
Learnt from: dwisiswant0
PR: projectdiscovery/nuclei#5772
File: pkg/authprovider/authx/cookies_auth.go:36-38
Timestamp: 2024-12-13T08:42:57.506Z
Learning: In the `ApplyOnRR` method of `CookiesAuthStrategy` in `pkg/authprovider/authx/cookies_auth.go`, when setting cookies, only remove the cookies that are about to be set to avoid removing essential session cookies.
🔇 Additional comments (8)
internal/runner/runner.go (3)

100-101: LGTM! Well-structured field additions.

The new fields for fuzz stats and DAST server are appropriately placed in the Runner struct.


387-389: LGTM! Proper cleanup handling.

The cleanup implementation correctly handles the DAST server and fuzz stats resources.

Also applies to: 704-706


687-693: ⚠️ Potential issue

Fix error handling in DAST server start.

The current implementation only logs the error from dastServer.Start() which could mask critical failures.

Apply this diff to properly handle errors:

 if r.dastServer != nil {
-    go func() {
-        if err := r.dastServer.Start(); err != nil {
-            gologger.Error().Msgf("could not start dast server: %v", err)
-        }
-    }()
+    errChan := make(chan error, 1)
+    go func() {
+        errChan <- r.dastServer.Start()
+    }()
+    
+    // Allow a short grace period for server startup
+    select {
+    case err := <-errChan:
+        if err != nil {
+            return errors.Wrap(err, "could not start dast server")
+        }
+    case <-time.After(5 * time.Second):
+        // Server started successfully
+    }
 }

Likely invalid or redundant comment.

cmd/nuclei/main.go (2)

365-368: Use consistent flag naming convention.

For consistency with other flags, consider using the suggested naming pattern.

Apply this diff to maintain consistency:

-		flagSet.BoolVarP(&options.DASTServer, "dast-server", "dts", false, "enable dast server mode (live fuzzing)"),
-		flagSet.BoolVarP(&options.DASTReport, "dast-report", "dtr", false, "write dast scan report to file"),
+		flagSet.BoolVarP(&options.DASTServer, "dast-server", "dts", false, "enable dast server mode (live fuzzing)"),
+		flagSet.BoolVarP(&options.DASTReport, "dast-report", "dtr", false, "write dast scan report to file"),

372-373: LGTM! Well-structured scope flags.

The fuzzing scope flags are well-defined with appropriate descriptions and types.

pkg/input/formats/swagger/swagger.go (3)

41-41: LGTM! Improved method signature

The updated signature with io.Reader and filePath parameters improves flexibility and maintains consistency with other format implementations.


53-53: 🛠️ Refactor suggestion

Improve JSON error handling

The JSON error handling could be more specific and use a scoped error variable.

-err = json.NewDecoder(input).Decode(schemav2)
+if err := json.NewDecoder(input).Decode(schemav2); err != nil {
+    return errors.Wrap(err, "could not parse JSON data as Swagger 2.0 schema")
+}

Likely invalid or redundant comment.


10-10: Verify the choice of YAML package

The code uses github.com/invopop/yaml instead of the more common gopkg.in/yaml.v3. While both are valid choices, let's ensure this aligns with project standards.

✅ Verification successful

Let me gather more information about YAML usage in the codebase to make a more informed decision.


Let me search specifically for YAML unmarshaling operations to understand the YAML package usage better.


Let me check the swagger_test.go file to understand the test cases and verify the YAML package usage.


Let me check the entire swagger.go file to understand the complete context of the YAML package usage.


Based on the gathered information, I can now provide a final response about the YAML package choice.

The invopop/yaml package usage is appropriate

The package is used correctly and exclusively in the swagger parser implementation for handling YAML format OpenAPI/Swagger specifications. The choice is justified because:

  1. It's only used in one specific location (pkg/input/formats/swagger/swagger.go)
  2. The usage is limited to basic YAML unmarshaling operations
  3. No other files in the codebase use this package, showing proper isolation of the dependency
  4. The package is maintained and compatible with OpenAPI/Swagger schema parsing needs
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check YAML package usage across the project
rg --type go "import.*yaml" -A 1

Length of output: 252


Script:

#!/bin/bash
# Check for actual YAML package imports and their usage
rg --type go "^import \(" -A 10

# Also check for direct yaml imports
rg --type go '^import ".*yaml.*"'

# Look for yaml unmarshaling or other yaml operations
rg --type go "yaml\." -B 2 -A 2

Length of output: 68060


Script:

#!/bin/bash
# Search for yaml unmarshaling operations in swagger.go and related files
rg -B 2 -A 2 "yaml\.(Unmarshal|Marshal|NewDecoder|NewEncoder)" pkg/input/formats/swagger/

# Also check for other files in the same directory
ls -la pkg/input/formats/swagger/

Length of output: 704


Script:

#!/bin/bash
# Check the content of swagger test file
cat pkg/input/formats/swagger/swagger_test.go

# Also check for any other YAML package imports in the test file
rg --type go "github.com/invopop/yaml" pkg/input/formats/swagger/swagger_test.go

Length of output: 1050


Script:

#!/bin/bash
# Check the complete swagger.go file
cat pkg/input/formats/swagger/swagger.go

# Also check if there are any other files in the codebase using invopop/yaml
rg --type go "github.com/invopop/yaml" -l

Length of output: 1948
