
Conversation

@RMLYC (Collaborator) commented Nov 21, 2025

  1. Support real input in test_mha.py

@gemini-code-assist (Contributor)

Summary of Changes

Hello @RMLYC, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces the capability to use real input data for testing and benchmarking Multi-Head Attention (MHA) operations. Previously, tests relied solely on randomly generated inputs. By adding a utility function to load tensors from files and integrating this into the MHA benchmark and test scripts, users can now specify actual data paths, enabling more realistic and specific performance evaluations.

Highlights

  • New Input Loading Utility: Introduced a new utility function, load_input_from_path, in top/utils/utils.py to load PyTorch tensors from specified file paths. This function includes checks for file existence, expected tensor shape, and ensures the tensor is on the correct device and dtype.
  • Flexible Input Generation for Benchmarks: The mha_fwd_benchmark class in benchmarks/flash_attn/mha.py has been enhanced. Its gen_inputs method now accepts an optional input_path argument, allowing it to either generate random inputs or load real Q, K, V tensors from files using the new utility.
  • Command-Line Support for Real Inputs in Tests: The test_mha.py script now includes a new command-line argument, --input_path. This enables users to provide file paths for real input data (Q, K, V) when running both forward and backward Multi-Head Attention tests, offering greater flexibility and realism in testing scenarios.
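Based on the description in the highlights above, the new utility might look roughly like the following. This is a minimal sketch, assuming the checks the summary lists (file existence, expected shape, device/dtype conversion); the actual implementation in top/utils/utils.py may differ:

```python
import os

import torch


def load_input_from_path(path, expected_shape, dtype, device="cuda"):
    """Load a tensor from a file, validating existence and shape.

    Sketch based on the PR description; signature and error handling
    in the real top/utils/utils.py may differ.
    """
    if not os.path.exists(path):
        raise FileNotFoundError(f"Input file not found: {path}")
    tensor = torch.load(path)
    if tuple(tensor.shape) != tuple(expected_shape):
        raise ValueError(
            f"Expected shape {tuple(expected_shape)}, "
            f"got {tuple(tensor.shape)}")
    # Move/cast in one step so callers always get the requested layout
    return tensor.to(device=device, dtype=dtype)
```

A caller would pass the (B, S, H, D) shape it expects for Q, K, or V and receive the tensor already on the target device.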
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature               Command              Description
Code Review           /gemini review       Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary      Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist  Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                  /gemini help         Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist (bot) left a comment

Code Review

This pull request adds support for loading real input data for MHA benchmarks, which is a valuable enhancement. However, there are a few issues that need to be addressed. Most critically, the changes will cause the backward pass test to fail with a TypeError because the corresponding benchmark function was not updated. There is also a significant security concern in the new data loading utility due to the use of torch.load. Additionally, there are minor improvements that can be made regarding code consistency and logging practices.

benchmark = mha_bwd_benchmark(B, H, S, D, causal, dtype)

-inputs = benchmark.gen_inputs()
+inputs = benchmark.gen_inputs(input_path)

critical

The gen_inputs method of mha_bwd_benchmark (defined in benchmarks/flash_attn/mha.py) does not accept an input_path argument. This call will raise a TypeError. You need to update mha_bwd_benchmark.gen_inputs to accept input_path and load data from it, similar to mha_fwd_benchmark.gen_inputs. Also, remember that for the backward pass, the loaded tensors for Q, K, and V will need to have requires_grad set to True.
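A fix along the lines this comment suggests might look like the sketch below. The function name, parameter list, and the idea of passing a loader callable are assumptions for illustration; the real mha_bwd_benchmark.gen_inputs is a method and its file-naming scheme is not shown in this review:

```python
import torch


def gen_bwd_inputs(batch, heads, seq_len, dim, dtype=torch.float16,
                   device="cuda", input_paths=None, loader=None):
    """Hypothetical backward-pass input generator.

    Mirrors mha_fwd_benchmark.gen_inputs but ensures requires_grad=True
    on Q, K, V, which the backward pass needs.
    """
    shape = (batch, seq_len, heads, dim)
    if input_paths is None:
        tensors = [torch.randn(*shape, device=device, dtype=dtype)
                   for _ in range(3)]
    else:
        # `loader` stands in for a load_input_from_path-style utility;
        # one file per tensor is an assumption, not the PR's actual scheme.
        tensors = [loader(p, shape, dtype, device) for p in input_paths]
    for t in tensors:
        # Loaded tensors arrive without grad tracking; enable it in place
        # so they remain leaf tensors for .grad accumulation.
        t.requires_grad_(True)
    return tensors  # Q, K, V
```

The key detail the review flags is the final loop: tensors loaded from disk do not carry requires_grad, so it must be set explicitly before running the backward benchmark.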

if not os.path.exists(path):
    raise FileNotFoundError(f"Input file not found: {path}")

tensor = torch.load(path)

high

Using torch.load is a security risk because it uses pickle internally, which can lead to arbitrary code execution if a malicious file is loaded. Since the file path is provided via a command-line argument, it's possible for a user to provide a malicious file. Consider using a safer format for saving and loading tensors, such as safetensors, especially if the input files could come from an untrusted source.
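Short of switching formats entirely, PyTorch itself offers a mitigation: the weights_only=True flag of torch.load (available since PyTorch 1.13, and the default from 2.6) restricts unpickling to tensor data and a small allowlist of types. A minimal sketch; safetensors remains the stronger, fully pickle-free option for untrusted files:

```python
import torch


def safe_load_tensor(path):
    """Load a tensor while refusing arbitrary pickled objects.

    weights_only=True blocks the pickle-based arbitrary-code-execution
    vector the review describes; it raises instead of executing if the
    file contains disallowed object types.
    """
    return torch.load(path, weights_only=True)
```

For fully untrusted inputs, saving with safetensors' save_file and reading with load_file avoids pickle altogether.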

Comment on lines +32 to +43
if input_path is None:
    # gen random inputs
    print("Gen random inputs!")
    Q = torch.randn(
        self.batch, self.seq_len, self.heads, self.dim, device='cuda', dtype=self.dtype)
    K = torch.randn(
        self.batch, self.seq_len, self.heads, self.dim, device='cuda', dtype=self.dtype)
    V = torch.randn(
        self.batch, self.seq_len, self.heads, self.dim, device='cuda', dtype=self.dtype)
else:
    # Load input data from file paths
    print("Gen inputs from file!")

medium

For better logging control, consider using the logging module instead of print. This allows for configuring verbosity levels and directing output to different handlers, which is more flexible for a benchmarking suite.
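The suggested replacement could look like this sketch, using only the standard library; the function name is hypothetical:

```python
import logging

# Module-level logger; the application configures verbosity once at
# startup, e.g. logging.basicConfig(level=logging.INFO) in the
# benchmark entry point, rather than each call site printing directly.
logger = logging.getLogger(__name__)


def announce_input_source(input_path=None):
    """Replacement for the print() calls in gen_inputs (sketch)."""
    if input_path is None:
        logger.info("Generating random inputs")
    else:
        # Lazy %-formatting: the string is only built if INFO is enabled.
        logger.info("Loading inputs from %s", input_path)
```

With this in place, a benchmarking run can silence the messages (level=WARNING), route them to a file handler, or keep them on stdout without touching the benchmark code.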

Comment on lines +69 to +80
"""
从文件路径加载输入数据的公共函数
Args:
path: 文件路径
expected_shape: 期望的张量形状
dtype: 数据类型
device: 设备类型
Returns:
加载的张量
"""

medium

The docstring is written in Chinese, which is inconsistent with the rest of the codebase being in English. To maintain consistency and improve readability for all contributors, please translate it to English.

Suggested change

"""
从文件路径加载输入数据的公共函数
Args:
    path: 文件路径
    expected_shape: 期望的张量形状
    dtype: 数据类型
    device: 设备类型
Returns:
    加载的张量
"""

"""
Loads input data from a file path.
Args:
    path: The file path.
    expected_shape: The expected shape of the tensor.
    dtype: The data type.
    device: The device.
Returns:
    The loaded tensor.
"""
