9 changes: 9 additions & 0 deletions test/.gitignore
@@ -0,0 +1,9 @@
reports/
dataset/
logs/
$null
*__pycache__/
.*
*.log
start.bat
!.gitignore
219 changes: 219 additions & 0 deletions test/README.md
@@ -0,0 +1,219 @@
# UCM Pytest Testing Framework

A unified cache management (UCM) testing framework built on pytest, supporting multi-level testing, flexible test marking, performance data collection, and Allure report generation.

## Framework Features

- [x] 🏗️ **Multi-level Testing**: UnitTest(0) → Smoke(1) → Feature(2) → E2E(3)
- [x] 🏷️ **Flexible Marking**: Support for feature tags, platform tags, and reliability tags
- [x] 📊 **Data Collection**: Integrated InfluxDB performance data pushing
- [x] 📋 **Beautiful Reports**: Allure test report integration, supporting both static HTML and dynamic server modes
- [x] 🔧 **Configuration Management**: Flexible YAML-based configuration system
- [x] 🚀 **Automation**: Support for parallel test execution and automatic cleanup

## Test Level Definitions

| Level | Name | Description | When to Run |
|-----|------|------|----------|
| 0 | UnitTest | Unit Tests | Every code commit |
| 1 | Smoke | Smoke Tests | Build verification |
| 2 | Feature | Feature Tests | When features are completed |
| 3 | E2E | End-to-End Tests | Before version release |

## Directory Structure

```
test/
├── config.yaml                # Test framework configuration file
├── conftest.py                # pytest configuration and fixtures, main program entry
├── pytest.ini                 # pytest markers and basic configuration
├── requirements.txt           # Dependency package list
├── common/                    # Common utility library
│   ├── __init__.py
│   ├── config_utils.py        # Configuration file reading tools
│   ├── influxdb_utils.py      # InfluxDB writing tools
│   └── allure_utils.py        # Allure reporting tools
├── suites/                    # Test case directory
│   ├── UnitTest/              # Unit tests (stage 0)
│   ├── Smoke/                 # Smoke tests (stage 1)
│   ├── Feature/               # Feature tests (stage 2)
│   ├── E2E/                   # End-to-end tests (stage 3)
│   └── test_demo_function.py  # Example test cases
├── reports/                   # Test report directory
└── logs/                      # Test log directory
```

## Quick Start

### 1. Environment Setup
```bash
# Install dependencies
pip install -r requirements.txt

# Ensure Allure CLI is installed (for report generation)
# Download from: https://github.com/allure-framework/allure2/releases
```
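
To confirm the toolchain is available before running anything:
```bash
# Both commands should print a version number
pytest --version
allure --version
```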

### 2. Configuration File
The main configuration file is `config.yaml`, containing the following configuration items:
- **reports**: Report generation configuration (HTML/Allure)
- **log**: Logging configuration
- **influxdb**: Performance data push configuration
- **llm_connection**: LLM connection configuration
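
A minimal sketch of what `config.yaml` might look like. Only the `reports.allure` keys are taken from the example later in this README; the keys under `log`, `influxdb`, and `llm_connection` are illustrative assumptions:
```yaml
reports:
  allure:
    enabled: true
    html_enable: true
    serve_mode: false
    serve_host: "localhost"
    serve_port: 8081
    directory: "allure-results"

log:                            # illustrative keys
  level: "INFO"
  directory: "logs"

influxdb:                       # illustrative keys
  url: "http://localhost:8086"
  org: "my-org"
  bucket: "ucm-tests"
  token: "<influxdb-token>"

llm_connection:                 # illustrative keys
  base_url: "http://localhost:8000/v1"
  model: "<model-name>"
```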

### 3. Running Tests
```bash
# Run all tests
pytest

# Run specific level tests
pytest --stage=1 # Run smoke tests
pytest --stage=2+ # Run feature and end-to-end tests

# Run specific tag tests
pytest --feature=performance # Run performance-related tests
pytest --platform=gpu # Run GPU platform tests
pytest --reliability=high # Run high reliability tests

# Combined filtering
pytest --stage=1 --feature=performance,accuracy # Performance and accuracy tests in smoke tests
```

## Test Case Standards

### Basic Structure
```python
import pytest
import allure
from common.config_utils import config_utils as config_instance
from common.influxdb_utils import push_to_influx


class TestExample:
    """Test example class"""

    @pytest.mark.stage(2)
    @pytest.mark.feature("performance")
    @pytest.mark.platform("gpu")
    def test_gpu_performance(self):
        """Test GPU performance"""
        # Arrange
        test_data = config_instance.get_config("test_data")

        # Act & Assert
        with allure.step("Execute GPU computation"):
            # perform_gpu_calculation is a placeholder for the system under test
            result = perform_gpu_calculation(test_data)
            assert result.is_valid

        # Collect performance data
        push_to_influx("gpu_compute_time", result.duration, {
            "test_name": "test_gpu_performance",
            "platform": "gpu"
        })
```

### Marking Usage Guidelines

#### 1. Level Markers (Required)
```python
@pytest.mark.stage(0) # Unit tests
@pytest.mark.stage(1) # Smoke tests
@pytest.mark.stage(2) # Feature tests
@pytest.mark.stage(3) # End-to-end tests
```

#### 2. Feature Markers (Recommended)
```python
@pytest.mark.feature("performance") # Performance tests
@pytest.mark.feature("accuracy") # Accuracy tests
@pytest.mark.feature("memory") # Memory tests
```

#### 3. Platform Markers (Optional)
```python
@pytest.mark.platform("gpu") # GPU platform tests
@pytest.mark.platform("npu") # NPU platform tests
@pytest.mark.platform("cpu") # CPU platform tests
```

#### 4. Reliability Markers (Optional)
```python
@pytest.mark.reliability("high") # High reliability tests
@pytest.mark.reliability("medium") # Medium reliability tests
@pytest.mark.reliability("low") # Low reliability tests
```
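
All four marker types can be stacked on a single test; a minimal sketch:
```python
import pytest

@pytest.mark.stage(1)
@pytest.mark.feature("performance")
@pytest.mark.platform("gpu")
@pytest.mark.reliability("high")
def test_combined_markers():
    """Selected by e.g.: pytest --stage=1 --feature=performance --platform=gpu"""
    assert True
```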

## Allure Report Integration

### 1. Basic Usage
```python
import allure

@allure.feature('User Authentication')
@allure.story('Login Function')
def test_user_login():
    """Test user login functionality"""
    # login_page and dashboard_page are placeholder page objects
    with allure.step("Enter username and password"):
        login_page.enter_credentials("user", "pass")

    with allure.step("Click login button"):
        login_page.click_login()

    with allure.step("Verify successful login"):
        assert dashboard_page.is_displayed()

    # Add an attachment; PNG attachments expect raw image bytes
    allure.attach(screenshot_bytes, name="Login Screenshot",
                  attachment_type=allure.attachment_type.PNG)
```

### 2. Report Configuration
Configure Allure reports in `config.yaml`:
```yaml
reports:
  allure:
    enabled: true
    html_enable: true
    serve_mode: true       # Use dynamic server mode
    serve_host: "localhost"
    serve_port: 8081
    directory: "allure-results"
```

### 3. Report Viewing
- **Static HTML Mode**: Automatically generates static HTML reports after test completion
- **Dynamic Server Mode**: Starts Allure server, providing interactive report interface
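
The same results can also be opened manually with the Allure CLI (output path is illustrative):
```bash
# Generate a static HTML report from the collected results
allure generate allure-results -o reports/allure-report --clean

# Or start an interactive report server
allure serve allure-results
```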

## Performance Data Collection

### InfluxDB Integration
```python
import time

from common.influxdb_utils import push_to_influx


# Collect performance data in tests
def test_performance_metrics():
    start_time = time.time()

    # Execute test logic (perform_operation is a placeholder)
    result = perform_operation()

    # Push performance data to InfluxDB
    push_to_influx("operation_duration", time.time() - start_time, {
        "test_name": "test_performance_metrics",
        "operation_type": "calculation",
        "success": str(result.success)
    })
```

## Extensions and Customization

### Adding New Markers
1. Add new marker definitions in the `markers` section of `pytest.ini` (see the sketch after this list)
2. Keep the `markers =` and `# end of markers` lines unchanged
3. Re-run tests to use new markers
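
A sketch of what the `markers` section might look like; the existing marker lines are assumptions reconstructed from this README's marker usage, with a hypothetical `latency` marker added:
```ini
[pytest]
markers =
    stage(level): test level (0=UnitTest, 1=Smoke, 2=Feature, 3=E2E)
    feature(name): feature tag, e.g. performance, accuracy, memory
    platform(name): platform tag, e.g. gpu, npu, cpu
    reliability(level): reliability tag, e.g. high, medium, low
    latency: hypothetical new marker for latency-focused tests
    # end of markers
```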

### Custom Configuration
Customize through `config.yaml`:
- Report format and storage location
- Log level and output format
- InfluxDB connection parameters
- LLM service configuration