Script Validation System¶
The Script Validation System is a comprehensive cross-platform script verification framework designed specifically for the Nexus Embedded System project. It automates the validation of all project scripts across Windows, WSL, and Linux environments.
Overview¶
The system automatically validates the functionality, compatibility, and reliability of all project scripts across multiple platforms.
Key Features¶
Cross-platform validation: supports Windows, WSL, and Linux
Multiple validators: functional, compatibility, performance, and documentation checks
Multiple report formats: HTML, JSON, summary, and JUnit XML
CI/CD integration: supports GitHub Actions, GitLab CI, Jenkins, and Azure DevOps
Flexible configuration: command-line arguments and configuration files
Architecture¶
Directory Structure¶
script_validation/
├── adapters/ # Platform adapters (Windows/WSL/Linux)
├── controllers/ # Validation controllers
├── handlers/ # Error handling and resource management
├── managers/ # Script and platform managers
├── reporters/ # Report generators (HTML/JSON/JUnit/Summary)
├── validators/ # Validators (functional/compatibility/performance/documentation)
├── __init__.py # Module entry point
├── __main__.py # CLI entry point
├── ci_integration.py # CI/CD integration
├── discovery.py # Script discovery
├── integration.py # Component integration
├── interfaces.py # Interface definitions
└── models.py # Data models
Components¶
- Platform Adapters
Handle platform-specific script execution and environment setup
- Validators
Perform the different types of validation:
Functional: verifies script functionality
Compatibility: checks cross-platform compatibility
Performance: measures execution performance
Documentation: verifies documentation completeness
- Reporters
Generate validation reports in various formats
- Controllers
Orchestrate the validation workflow
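The validators all plug into the workflow through a common interface. As a minimal sketch of what such an interface might look like (the names `Validator`, `validate_script`, and `CheckResult` here are illustrative assumptions, not the actual definitions in interfaces.py):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from pathlib import Path

@dataclass
class CheckResult:
    # Illustrative result type, not the project's actual data model
    passed: bool
    message: str

class Validator(ABC):
    """Hypothetical common interface shared by all validators."""
    @abstractmethod
    def validate_script(self, script_path: Path) -> CheckResult: ...

class ExistsValidator(Validator):
    """Trivial example validator: a script passes if the file exists."""
    def validate_script(self, script_path: Path) -> CheckResult:
        ok = script_path.exists()
        return CheckResult(ok, "found" if ok else f"missing: {script_path}")
```

A controller can then iterate over any mix of validators without knowing their concrete types.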
Quick Start¶
Command-Line Usage¶
Full validation:
python -m script_validation --mode full
Quick validation:
python -m script_validation --mode quick
Platform-specific validation:
python -m script_validation --platforms windows wsl
Generate specific format reports:
python -m script_validation --report-format html json
CI mode:
python -m script_validation --ci
Generate JUnit XML report:
python -m script_validation --ci --report-format junit
List discovered scripts:
python -m script_validation --list-scripts
Check platform availability:
python -m script_validation --check-platforms
Programmatic Interface¶
Method 1: Using convenience functions:
from script_validation import run_validation
report = run_validation(mode='full')
Method 2: Using workflow:
from script_validation import create_workflow, Platform
workflow = create_workflow(
    platforms=[Platform.WINDOWS, Platform.WSL],
    validators=['functional', 'compatibility'],
    report_formats=['html', 'json']
)
report = workflow.run()
Method 3: Using builder pattern:
from script_validation import ValidationBuilder, Platform
from pathlib import Path
workflow = (ValidationBuilder()
    .root_path(Path('./'))
    .platforms(Platform.WINDOWS, Platform.LINUX)
    .validators('functional', 'performance')
    .report_formats('html', 'junit')
    .timeout(600)
    .ci_mode(True)
    .build())
report = workflow.run()
Command-Line Arguments¶

| Argument | Short | Description | Default |
|---|---|---|---|
| | | Project root directory path | Current directory |
| --mode | | Validation mode: full/quick/platform-specific | full |
| --platforms | | Target platforms: windows/wsl/linux | All available |
| --report-format | | Report formats: html/json/summary/junit/all | all |
| | | Report output directory | ./validation_reports |
| --validators | | Validators: functional/compatibility/performance/documentation | All |
| --timeout | | Script execution timeout (seconds) | 300 |
| | | Maximum memory limit (MB) | 1024 |
| --ci | | CI mode | false |
| | | Verbose output | false |
| | | Disable parallel execution | false |
Cross-Platform Testing Guide¶
Platform Support¶
The Script Validation System supports three platforms:
Windows: native Windows environment
WSL: Windows Subsystem for Linux
Linux: native Linux environment
Testing on Windows¶
Prerequisites:
Python 3.8+
PowerShell 5.1+
Git for Windows
Running validation:
# Full validation on Windows
python -m script_validation --platforms windows
# Test specific script types
python -m script_validation --platforms windows --validators functional
Common issues:
Path separators: use Path objects for cross-platform compatibility
Line endings: ensure scripts handle both CRLF and LF
Shell differences: test execution under both CMD and PowerShell
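For the path-separator issue, `pathlib`'s pure-path classes let a script reason about Windows-style paths on any platform; a small sketch:

```python
from pathlib import PureWindowsPath

def to_posix(path_str: str) -> str:
    """Normalize a Windows-style path string to forward slashes."""
    return PureWindowsPath(path_str).as_posix()

print(to_posix(r"scripts\building\build.py"))  # scripts/building/build.py
```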
Testing on WSL¶
Prerequisites:
WSL 2 installed
Python 3.8+ inside WSL
Access to the Windows filesystem
Running validation:
# Full validation on WSL
python -m script_validation --platforms wsl
# Test cross-platform compatibility
python -m script_validation --platforms windows wsl --validators compatibility
Common issues:
File permissions: WSL permissions may differ from Windows
Path conversion: Windows paths must be converted for use inside WSL
Performance: File I/O may be slower across filesystem boundaries
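WSL itself ships the `wslpath` utility for path conversion; as a rough illustration of the mapping it performs (drive letter to `/mnt/<letter>`), here is a simplified manual sketch that ignores UNC and relative paths:

```python
from pathlib import PureWindowsPath

def windows_to_wsl(path_str: str) -> str:
    """Map e.g. C:\\Users\\me -> /mnt/c/Users/me (simplified sketch)."""
    p = PureWindowsPath(path_str)
    drive = p.drive.rstrip(":").lower()     # "C:" -> "c"
    return "/mnt/" + "/".join([drive, *p.parts[1:]])

print(windows_to_wsl(r"C:\Users\me\project"))  # /mnt/c/Users/me/project
```

In practice, prefer calling `wslpath` from inside WSL rather than re-implementing the mapping.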
Testing on Linux¶
Prerequisites:
Python 3.8+
Bash shell
Standard Unix utilities
Running validation:
# Full validation on Linux
python -m script_validation --platforms linux
# Performance testing
python -m script_validation --platforms linux --validators performance
Common issues:
Shell differences: Test with both bash and sh
Dependencies: Ensure all required tools are installed
Permissions: Check execute permissions on scripts
Cross-Platform Testing Examples¶
Example 1: Full Cross-Platform Validation
# Test on all available platforms
python -m script_validation --mode full --platforms windows wsl linux
# View HTML report
open validation_reports/report.html
Example 2: Compatibility Testing
# Focus on compatibility issues
python -m script_validation \
--validators compatibility \
--platforms windows linux \
--report-format html
Example 3: Performance Comparison
# Compare performance across platforms
python -m script_validation \
--validators performance \
--platforms windows wsl linux \
--report-format json
# Analyze results
python -c "
import json
with open('validation_reports/report.json') as f:
    data = json.load(f)
for platform, results in data['performance'].items():
    print(f'{platform}: {results[\"avg_time\"]}s')
"
Example 4: CI Pipeline Testing
# Simulate CI environment
python -m script_validation \
--ci \
--platforms linux \
--report-format junit \
--timeout 600
# Check exit code
echo $?
Example 5: Selective Script Testing
from script_validation import ValidationBuilder, Platform
from pathlib import Path
# Test only build scripts
workflow = (ValidationBuilder()
    .root_path(Path('./scripts/building'))
    .platforms(Platform.WINDOWS, Platform.LINUX)
    .validators('functional')
    .build())
report = workflow.run()
if report.success:
    print("Build scripts validated successfully")
else:
    print(f"Validation failed: {report.failures}")
CI/CD Integration¶
The system automatically detects CI environments and adjusts output format:
GitHub Actions: uses ::error:: and ::warning:: annotations
GitLab CI: uses collapsible sections
Azure DevOps: uses ##vso commands
Jenkins: standard output format
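Auto-detection of this kind is typically done from well-known environment variables that each CI system sets; a best-effort sketch (not the system's actual implementation):

```python
import os

def detect_ci(env=None) -> str:
    """Best-effort CI detection from well-known environment variables."""
    env = os.environ if env is None else env
    if env.get("GITHUB_ACTIONS") == "true":
        return "github"
    if env.get("GITLAB_CI") == "true":
        return "gitlab"
    if env.get("TF_BUILD"):            # set by Azure DevOps agents
        return "azure"
    if env.get("JENKINS_URL"):
        return "jenkins"
    return "local"

print(detect_ci({"GITLAB_CI": "true"}))  # gitlab
```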
Exit Codes¶
| Code | Meaning |
|---|---|
| 0 | Validation successful |
| 1 | Validation failed |
| 2 | Execution error |
GitHub Actions Example¶
name: Script Validation
on: [push, pull_request]
jobs:
  validate:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Validate Scripts
        run: python -m script_validation --ci --report-format junit
      - name: Upload Report
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: validation-report-${{ matrix.os }}
          path: validation_reports/
      - name: Publish Test Results
        if: always()
        uses: EnricoMi/publish-unit-test-result-action@v2
        with:
          files: validation_reports/junit.xml
GitLab CI Example¶
script_validation:
  stage: test
  parallel:
    matrix:
      - PLATFORM: [windows, linux]
  script:
    - pip install -r requirements.txt
    - python -m script_validation --ci --platforms $PLATFORM --report-format junit
  artifacts:
    reports:
      junit: validation_reports/junit.xml
    paths:
      - validation_reports/
    when: always
Jenkins Example¶
pipeline {
    agent any
    stages {
        stage('Validate Scripts') {
            parallel {
                stage('Windows') {
                    agent { label 'windows' }
                    steps {
                        bat 'python -m script_validation --ci --platforms windows --report-format junit'
                    }
                }
                stage('Linux') {
                    agent { label 'linux' }
                    steps {
                        sh 'python -m script_validation --ci --platforms linux --report-format junit'
                    }
                }
            }
        }
    }
    post {
        always {
            junit 'validation_reports/junit.xml'
            publishHTML([
                reportDir: 'validation_reports',
                reportFiles: 'report.html',
                reportName: 'Script Validation Report'
            ])
        }
    }
}
Azure DevOps Example¶
trigger:
  - main
pool:
  vmImage: 'ubuntu-latest'
steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.10'
  - script: |
      pip install -r requirements.txt
      python -m script_validation --ci --report-format junit
    displayName: 'Validate Scripts'
  - task: PublishTestResults@2
    condition: always()
    inputs:
      testResultsFormat: 'JUnit'
      testResultsFiles: 'validation_reports/junit.xml'
  - task: PublishBuildArtifacts@1
    condition: always()
    inputs:
      pathToPublish: 'validation_reports'
      artifactName: 'validation-report'
Validators¶
Functional Validator¶
Verifies that scripts execute correctly and produce expected results.
Checks:
Script exits with code 0 on success
Required output files are created
Output matches expected format
Error handling works correctly
Example:
from script_validation.validators import FunctionalValidator
validator = FunctionalValidator()
result = validator.validate_script(script_path, platform)
Compatibility Validator¶
Checks cross-platform compatibility issues.
Checks:
Path separators are handled correctly
Line endings are compatible
Shell commands work on all platforms
Dependencies are available
Example:
from script_validation.validators import CompatibilityValidator
validator = CompatibilityValidator()
result = validator.validate_script(script_path, platforms)
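A compatibility check of this kind can be as simple as pattern-matching the script text for known portability hazards; a hypothetical heuristic sketch (the real validator's rules are not shown here):

```python
import re

# Illustrative hazard patterns; a real validator would check far more
HAZARDS = [
    (r"[A-Za-z]:\\", "hard-coded Windows drive path"),
    (r"\r\n", "CRLF line ending"),
]

def scan_portability(text: str) -> list:
    """Return a message for each hazard pattern found in the script text."""
    return [msg for pat, msg in HAZARDS if re.search(pat, text)]

print(scan_portability("copy C:\\temp\\a.txt b.txt"))
# ['hard-coded Windows drive path']
```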
Performance Validator¶
Measures script execution performance.
Metrics:
Execution time
Memory usage
CPU usage
I/O operations
Example:
from script_validation.validators import PerformanceValidator
validator = PerformanceValidator()
result = validator.validate_script(script_path, platform)
Documentation Validator¶
Validates script documentation completeness.
Checks:
Help text is available
Usage examples are provided
All options are documented
Error messages are clear
Example:
from script_validation.validators import DocumentationValidator
validator = DocumentationValidator()
result = validator.validate_script(script_path)
Report Formats¶
HTML Report¶
Interactive HTML report with:
Summary dashboard
Platform-specific results
Detailed failure information
Performance charts
Filtering and search
Location: validation_reports/report.html
JSON Report¶
Machine-readable JSON format:
{
  "summary": {
    "total_scripts": 42,
    "passed": 40,
    "failed": 2,
    "platforms": ["windows", "linux"]
  },
  "results": [
    {
      "script": "build.py",
      "platform": "windows",
      "status": "passed",
      "duration": 1.23
    }
  ]
}
Location: validation_reports/report.json
JUnit XML Report¶
Standard JUnit XML format for CI integration:
<testsuites>
  <testsuite name="script_validation" tests="42" failures="2">
    <testcase name="build.py[windows]" time="1.23"/>
    <testcase name="test.py[linux]" time="2.45">
      <failure message="Script failed">...</failure>
    </testcase>
  </testsuite>
</testsuites>
Location: validation_reports/junit.xml
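Since the format is plain XML, the report can be post-processed with the standard library; for example, collecting the failed test cases from output shaped like the snippet above:

```python
import xml.etree.ElementTree as ET

JUNIT = """<testsuites>
  <testsuite name="script_validation" tests="2" failures="1">
    <testcase name="build.py[windows]" time="1.23"/>
    <testcase name="test.py[linux]" time="2.45">
      <failure message="Script failed"/>
    </testcase>
  </testsuite>
</testsuites>"""

root = ET.fromstring(JUNIT)
failed = [tc.get("name") for tc in root.iter("testcase")
          if tc.find("failure") is not None]
print(failed)  # ['test.py[linux]']
```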
Summary Report¶
Concise text summary:
Script Validation Summary
=========================
Total Scripts: 42
Passed: 40
Failed: 2
Success Rate: 95.2%
Failures:
- test.py [linux]: Exit code 1
- format.py [windows]: Timeout
Location: validation_reports/summary.txt
Testing¶
Running Tests¶
# Run all tests
python -m pytest tests/script_validation/ -v
# Run property tests
python -m pytest tests/script_validation/ -v --hypothesis-show-statistics
# Generate coverage report
python -m pytest tests/script_validation/ --cov=script_validation --cov-report=html
Test Structure¶
tests/script_validation/
├── test_adapters.py # Platform adapter tests
├── test_validators.py # Validator tests
├── test_reporters.py # Reporter tests
├── test_integration.py # Integration tests
└── test_properties.py # Property-based tests
Troubleshooting¶
Common Issues¶
1. Platform Not Available
Error: Platform 'wsl' not available
Solution:
Check that the platform is installed
Verify Python is accessible on the platform
Use --check-platforms to diagnose
2. Script Timeout
Error: Script timeout after 300 seconds
Solution:
Increase the timeout: --timeout 600
Check for infinite loops
Optimize script performance
3. Permission Denied
Error: Permission denied: script.sh
Solution:
Add execute permission: chmod +x script.sh
Check file ownership
Verify platform permissions
4. Missing Dependencies
Error: Command not found: CMake
Solution:
Install required dependencies
Check PATH environment variable
Use platform-specific package manager
API Reference¶
ValidationBuilder¶
Builder for creating validation workflows:
from script_validation import ValidationBuilder, Platform
workflow = (ValidationBuilder()
    .root_path(Path('./'))
    .platforms(Platform.WINDOWS, Platform.LINUX)
    .validators('functional', 'compatibility')
    .report_formats('html', 'junit')
    .timeout(600)
    .max_memory(2048)
    .ci_mode(True)
    .parallel(True)
    .verbose(True)
    .build())
Platform Enum¶
Available platforms:
from script_validation import Platform
Platform.WINDOWS # Native Windows
Platform.WSL # Windows Subsystem for Linux
Platform.LINUX # Native Linux
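How these three are distinguished at runtime is worth noting: native Windows reports `os.name == "nt"`, while WSL is a Linux kernel whose release string typically mentions Microsoft. A best-effort sketch (an assumption for illustration, not the project's actual adapter logic):

```python
import os
import platform

def current_platform() -> str:
    """Best-effort guess of the platform the process is running on."""
    if os.name == "nt":
        return "windows"
    if "microsoft" in platform.uname().release.lower():
        return "wsl"   # WSL kernels usually embed 'microsoft' in the release
    return "linux"

print(current_platform())
```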
ValidationResult¶
Result of validation:
class ValidationResult:
    success: bool
    total_scripts: int
    passed: int
    failed: int
    platforms: List[Platform]
    failures: List[FailureInfo]
    duration: float
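From these fields the summary statistics follow directly; for example, the success rate shown in the summary report, assuming plain division (which matches the 40-of-42 = 95.2% example above):

```python
def success_rate(passed: int, total: int) -> float:
    """Percentage of scripts that passed, rounded to one decimal place."""
    return round(100.0 * passed / total, 1) if total else 0.0

print(success_rate(40, 42))  # 95.2
```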
See Also¶
Validation Framework - System validation framework
Build Scripts and Tools - Build scripts documentation
Testing - General testing guide
Contributing Guide - Contribution guidelines