
Conversation

@xusd320 (Contributor) commented on Jan 29, 2026

Implemented standard Node.js error translation and reorganized polyfills into a dedicated fs directory.
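
For reviewers who want to see what the error-translation layer looks like in practice, here is a minimal sketch. The name translateError comes from the review summary below; the WasmFsError shape, the errno mapping table, and the wrapper pattern are illustrative assumptions, not code taken from this diff.

```typescript
// Sketch only: maps internal wasm-fs failures onto Node.js-style errors.
// The WasmFsError shape and the errno table below are assumptions for illustration.
interface WasmFsError {
  kind: string;    // e.g. "NotFound", "PermissionDenied"
  message: string;
}

const NODE_ERROR_CODES: Record<string, { code: string; errno: number }> = {
  NotFound: { code: 'ENOENT', errno: -2 },
  PermissionDenied: { code: 'EACCES', errno: -13 },
  AlreadyExists: { code: 'EEXIST', errno: -17 },
  NotADirectory: { code: 'ENOTDIR', errno: -20 },
  DirectoryNotEmpty: { code: 'ENOTEMPTY', errno: -39 },
};

function translateError(err: WasmFsError, syscall: string, path?: string): NodeJS.ErrnoException {
  const mapped = NODE_ERROR_CODES[err.kind] ?? { code: 'EIO', errno: -5 };
  const error = new Error(
    `${mapped.code}: ${err.message}${path ? `, ${syscall} '${path}'` : ''}`
  ) as NodeJS.ErrnoException;
  error.code = mapped.code;
  error.errno = mapped.errno;
  error.syscall = syscall;
  if (path !== undefined) error.path = path;
  return error;
}
```

A polyfill such as readFileSync would then catch the raw wasm error and rethrow translateError(err, 'open', path), so callers can branch on error.code === 'ENOENT' exactly as they would against Node's built-in fs module.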

@gemini-code-assist (Contributor)

Summary of Changes

Hello @xusd320, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the WebAssembly file system polyfills to enhance error handling and code organization. It introduces a standardized error translation mechanism to align with Node.js error conventions, making the file system interactions more predictable. Concurrently, the file system polyfills have been moved into a dedicated fs directory, improving the overall structure and maintainability of the codebase.

Highlights

  • Standardized Error Translation: Implemented a translateError utility to convert internal WebAssembly file system errors into standard Node.js error objects (e.g., ENOENT), improving consistency and debuggability for file operations.
  • Polyfill Reorganization: Consolidated all file system (fs) related polyfills into a new, dedicated fs subdirectory (packages/utoo-web/src/webpackLoaders/polyfills/fs/) for better modularity and maintainability.
  • WASM Binding Updates: The index.d.ts file was updated to reflect changes in the WebAssembly module's exposed functions, including reordering some InitOutput properties and updating wasm_bindgen closure definitions.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any comments that are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.

@gemini-code-assist (bot) left a comment


Code Review

This pull request effectively refactors the wasm-fs polyfills by organizing them into a dedicated fs directory and implementing standard Node.js error translation. These changes improve the code's modularity and make error handling more robust and predictable. My review includes a few points to fix a bug in rmSync and to improve maintainability by reducing code duplication.

@xusd320 force-pushed the fix/wasm-fs-error-translation branch 2 times, most recently from f1f755c to cb18882, on January 29, 2026 at 14:53
@xusd320 force-pushed the fix/wasm-fs-error-translation branch from cb18882 to f90f071 on January 29, 2026 at 14:53
@xusd320 merged commit fcfe42c into next on Jan 29, 2026
23 checks passed
@xusd320 deleted the fix/wasm-fs-error-translation branch on January 29, 2026 at 14:54
@github-actions (bot)

📊 Performance Benchmark Report (with-antd)

🚀 Utoopack Performance Report: Async Task Scheduling Overhead Analysis

Report ID: utoopack_performance_report_20260129_151126
Generated: 2026-01-29 15:11:26
Trace File: trace_antd.json (1.5GB, 8.00M events)
Test Project: Unknown Project


📊 Executive Summary

This report analyzes the performance of Utoopack/Turbopack, covering the full spectrum of the Performance Analysis Protocol (P0-P4).

Key Findings

| Metric | Value | Assessment |
| --- | --- | --- |
| Total Wall Time | 10,018.0 ms | Baseline |
| Total Thread Work | 87,819.5 ms | ~8.8x parallelism |
| Thread Utilization | 67.4% | 🆗 Average |
| turbo_tasks::function Invocations | 3,881,736 | Total count |
| Meaningful Tasks (≥ 10µs) | 1,515,986 | 39.1% of total |
| Tracing Noise (< 10µs) | 2,365,750 | 60.9% of total |

Workload Distribution by Tier

| Category | Tasks | Total Time (ms) | % of Work |
| --- | --- | --- | --- |
| P0: Runtime/Resolution | 1,038,443 | 52,971.7 | 60.3% |
| P1: I/O & Heavy Tasks | 36,579 | 3,464.6 | 3.9% |
| P3: Asset Pipeline | 27,829 | 4,211.6 | 4.8% |
| P4: Bridge/Interop | 0 | 0.0 | 0.0% |
| Other | 413,135 | 19,759.8 | 22.5% |

⚡ Parallelization Analysis (P0-P2)

Thread Utilization

| Metric | Value |
| --- | --- |
| Number of Threads | 13 |
| Total Thread Work | 87,819.5 ms |
| Avg Work per Thread | 6,755.3 ms |
| Theoretical Parallelism | 8.77x |
| Thread Utilization | 67.4% |

Assessment: With 13 threads available, achieving 8.8x parallelism indicates a significant loss of potential parallelism.
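
The derived figures in the table above, and in the Key Findings section, follow directly from the measured totals; the snippet below simply reproduces that arithmetic (the formulas are inferred from the report's numbers, not taken from the analysis tooling itself).

```typescript
// Reproducing the report's derived metrics from its measured totals (inferred arithmetic).
const totalWallTimeMs = 10_018.0;    // Total Wall Time
const totalThreadWorkMs = 87_819.5;  // Total Thread Work summed across all threads
const threadCount = 13;

const avgWorkPerThreadMs = totalThreadWorkMs / threadCount;          // ≈ 6,755.3 ms
const theoreticalParallelism = totalThreadWorkMs / totalWallTimeMs;  // ≈ 8.77x
const threadUtilization = theoreticalParallelism / threadCount;      // ≈ 0.674 → 67.4%

console.log(
  avgWorkPerThreadMs.toFixed(1),
  theoreticalParallelism.toFixed(2),
  `${(threadUtilization * 100).toFixed(1)}%`
);
```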


📈 Top 20 Tasks (Global)

These are the most significant tasks by total duration:

| Total (ms) | Count | Avg (µs) | % Work | Task Name |
| --- | --- | --- | --- | --- |
| 43,884.0 | 865,102 | 50.7 | 50.0% | turbo_tasks::function |
| 8,246.9 | 124,288 | 66.4 | 9.4% | task execution completed |
| 6,416.2 | 80,745 | 79.5 | 7.3% | turbo_tasks::resolve_call |
| 2,942.1 | 31,690 | 92.8 | 3.4% | analyze ecmascript module |
| 2,259.0 | 67,198 | 33.6 | 2.6% | precompute code generation |
| 2,140.1 | 67,692 | 31.6 | 2.4% | resolving |
| 1,753.0 | 20,102 | 87.2 | 2.0% | effects processing |
| 1,717.8 | 35,312 | 48.6 | 2.0% | module |
| 1,559.8 | 11,734 | 132.9 | 1.8% | process parse result |
| 1,104.1 | 31,224 | 35.4 | 1.3% | process module |
| 1,068.3 | 6,340 | 168.5 | 1.2% | parse ecmascript |
| 1,031.0 | 35,586 | 29.0 | 1.2% | internal resolving |
| 844.2 | 28,304 | 29.8 | 1.0% | resolve_relative_request |
| 570.0 | 1,910 | 298.5 | 0.6% | analyze variable values |
| 525.4 | 19,179 | 27.4 | 0.6% | handle_after_resolve_plugins |
| 476.8 | 15,465 | 30.8 | 0.5% | resolve_module_request |
| 469.6 | 10,911 | 43.0 | 0.5% | code generation |
| 413.0 | 17,429 | 23.7 | 0.5% | resolved |
| 408.2 | 1,933 | 211.2 | 0.5% | swc_parse |
| 352.5 | 4,210 | 83.7 | 0.4% | read file |

🔍 Deep Dive by Tier

🔴 Tier 1: Runtime & Resolution (P0)

Focus: Task scheduling and dependency resolution.

| Metric | Value | Status |
| --- | --- | --- |
| Total Scheduling Time | 52,971.7 ms | ⚠️ High |
| Resolution Hotspots | 9 tasks | 🔍 Check Top Tasks |

Potential P0 Issues:

  • Low thread utilization (67.4%) suggests critical path serialization or lock contention.
  • 2,365,750 tasks < 10µs (60.9%) contribute to scheduler pressure.

🟠 Tier 2: Physical & Resource Barriers (P1)

Focus: Hardware utilization, I/O, and heavy monoliths.

| Metric | Value | Status |
| --- | --- | --- |
| I/O Work (Estimated) | 3,464.6 ms | ✅ Healthy |
| Large Tasks (> 100ms) | 16 | 🚨 Critical |

Potential P1 Issues:

  • 16 tasks exceed 100ms. These "Heavy Monoliths" are prime candidates for splitting.

🟡 Tier 3: Architecture & Asset Pipeline (P2-P3)

Focus: Global state and transformation pipeline.

| Metric | Value | Status |
| --- | --- | --- |
| Asset Processing (P3) | 4,211.6 ms | 4.8% of work |
| Bridge Overhead (P4) | 0.0 ms | ✅ Low |

💡 Recommendations (Prioritized P0-P2)

🚨 Critical: (P0) Improvement

Problem: 67.4% thread utilization.
Action:

  1. Profile lock contention if utilization < 60%.
  2. Convert sequential await chains to try_join.

⚠️ High Priority: (P1) Optimization

Problem: 16 heavy tasks detected.
Action:

  1. Identify module-level bottlenecks (e.g., barrel files).
  2. Optimize I/O batching for metadata.

⚠️ Medium Priority: (P3) Pipeline Efficiency

Action:

  1. Review transformation logic for frequently changed assets.
  2. Minimize cross-language serialization (P4) if overhead exceeds 10%.

📐 Diagnostic Signal Summary

| Signal | Status | Finding |
| --- | --- | --- |
| Tracing Noise (P0) | ⚠️ Significant | 60.9% of tasks < 10µs |
| Thread Utilization (P0) | ✅ Good | 67.4% utilization |
| Heavy Monoliths (P1) | ⚠️ Detected | 16 tasks > 100ms |
| Asset Pipeline (P3) | 🔍 Review | 4,211.6 ms total |
| Bridge/Interop (P4) | ✅ Low | 0.0 ms total |

🎯 Action Items (Comprehensive P0-P4)

  1. [P0] Profile lock contention to address the ~32% of lost parallelism
  2. [P1] Break down heavy monolith tasks (>100ms) to improve granularity
  3. [P1] Review I/O patterns for potential batching opportunities
  4. [P3] Optimize asset transformation pipeline hot-spots
  5. [P4] Reduce "chatty" bridge operations if interop overhead is significant

Report generated by Utoopack Performance Analysis Agent on 2026-01-29
Following: Utoopack Performance Analysis Agent Protocol
