Automated Test Integration: Building a Complete Test Pipeline
Overview
Automated testing is a key link in the continuous integration / continuous deployment (CI/CD) process. Automating tests and wiring them into the development workflow lets you find problems quickly, raise code quality, and speed up iteration. This tutorial walks through building a complete automated test pipeline.
What Is Automated Test Integration?
**Automated test integration** means executing the various kinds of tests (unit, integration, functional, and so on) automatically and feeding their results into the CI/CD process.
Core characteristics:
- 🤖 Automatic execution: tests are triggered on every code commit
- 📊 Real-time feedback: results arrive quickly
- 📈 Quality metrics: test reports and coverage data are generated
- 🔄 Continuous improvement: failing tests block merges
- 🎯 Early detection: problems surface during development
Why Do You Need Automated Test Integration?
Problems with traditional manual testing:
- ❌ Slow and error-prone
- ❌ Incomplete test coverage
- ❌ Hard to run frequently
- ❌ Relies on human judgment
- ❌ No guarantee of consistency
- ❌ Results are hard to track
Advantages of automated test integration:
- ✅ Fast feedback: results within minutes
- ✅ High coverage: every test case runs automatically
- ✅ Repeatability: identical results on every run
- ✅ Early detection: problems found before integration
- ✅ Quality assurance: broken code is blocked from merging
- ✅ Documentation: test reports record quality history
Learning Objectives
After completing this tutorial, you will be able to:
- Understand the types and levels of automated testing
- Choose suitable test frameworks and tools
- Write maintainable automated test cases
- Integrate tests into Jenkins and GitLab CI
- Generate and analyze test reports
- Configure test-coverage checks
- Enforce quality gates for tests
- Establish a complete test strategy
Test Pyramid and Test Strategy
1.1 The Test Pyramid Model
The **test pyramid** is a classic test-strategy model describing the proportions of tests at each level.
Characteristics of each level:
| Test type | Scope | Speed | Cost | Maintainability | Share |
|---|---|---|---|---|---|
| Unit tests | Single function/module | Milliseconds | Low | Easy | 70-80% |
| Integration tests | Interaction between modules | Seconds | Medium | Medium | 15-20% |
| System tests | Complete system | Minutes | High | Hard | 5-10% |
| UI tests | User interface | Minutes | Very high | Very hard | 2-5% |
The test pyramid for an embedded system:
1.2 Test Types in Detail
Unit Tests
Definition: test the smallest testable units (functions, classes, modules).
Characteristics:
- Fully isolated; dependencies are replaced with mocks
- Fast to execute (milliseconds)
- Easy to write and maintain
- Provide the fastest feedback
Example:
// Function under test
int calculate_checksum(uint8_t *data, size_t len) {
    int sum = 0;
    for (size_t i = 0; i < len; i++) {
        sum += data[i];
    }
    return sum & 0xFF;
}
// Unit test
void test_calculate_checksum_with_simple_data(void) {
    uint8_t data[] = {1, 2, 3, 4};
    int result = calculate_checksum(data, 4);
    TEST_ASSERT_EQUAL(10, result);
}
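The same isolation idea can be sketched in Python (the language used later for the system tests): the hardware-dependent ADC read is injected as a callable, so the unit test runs on any host with no board attached. All names here (`read_adc`, `read_temperature_celsius`) are illustrative, not part of the project above.

```python
# Stand-in for a real driver call that would require hardware (hypothetical name).
def read_adc():
    raise RuntimeError("requires hardware")

def read_temperature_celsius(adc_read=read_adc):
    """Convert a 12-bit ADC reading to Celsius over a -40..125 range."""
    raw = adc_read()
    return -40.0 + raw * 165.0 / 4095.0

# Unit test: the hardware dependency is replaced by a stub, so the
# test is fast, deterministic, and runs without any target board.
def test_read_temperature_with_stubbed_adc():
    assert abs(read_temperature_celsius(lambda: 0) - (-40.0)) < 0.01
    assert abs(read_temperature_celsius(lambda: 4095) - 125.0) < 0.01

test_read_temperature_with_stubbed_adc()
```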
Integration Tests
Definition: test the interaction between multiple modules.
Characteristics:
- Exercise the interfaces between modules
- May need parts of the real environment
- Slower to execute (seconds)
- Reveal interface problems
Example:
// Integration test: exercise the interaction between the UART driver and the protocol parser
void test_uart_protocol_integration(void) {
    // Initialize the UART and the protocol parser
    uart_init();
    protocol_init();
    // Send data
    uint8_t test_data[] = {0x01, 0x02, 0x03, 0x04};
    uart_send(test_data, 4);
    // Wait for reception and parsing
    delay_ms(100);
    // Verify the parsed result
    protocol_message_t *msg = protocol_get_message();
    TEST_ASSERT_NOT_NULL(msg);
    TEST_ASSERT_EQUAL(0x01, msg->command);
}
System Tests
Definition: test the functionality of the complete system.
Characteristics:
- End-to-end tests
- Run in a real or simulated environment
- Slow to execute (minutes)
- Verify system-level requirements
Example:
# System test: exercise the complete temperature-monitoring system
def test_temperature_monitoring_system():
    # Start the system
    system = TemperatureMonitoringSystem()
    system.start()
    # Simulate temperature-sensor input
    system.inject_temperature(25.5)
    # Give the system time to process
    time.sleep(1)
    # Verify the system's response
    assert system.get_display_value() == "25.5°C"
    assert system.get_alarm_status() == False
    # Simulate a high temperature
    system.inject_temperature(85.0)
    time.sleep(1)
    # Verify the alarm
    assert system.get_alarm_status() == True
1.3 Designing a Test Strategy
A good test strategy should:
- Balance coverage and efficiency
  - Unit tests cover the core logic
  - Integration tests cover the critical paths
  - System tests cover the main scenarios
- Provide fast feedback
  - Unit tests run on every commit
  - Integration tests run before merging
  - System tests run before release
- Be maintainable
  - Test code is clear and easy to understand
  - Test cases are independent
  - Duplicate code is avoided
- Be stable
  - Minimize flaky tests
  - Use deterministic test data
  - Avoid depending on external state
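The stability points above — deterministic data, no dependence on external state — can be made concrete with an injectable clock, so a timeout test never sleeps and always gives the same answer. This is an illustrative Python sketch; `FakeClock` and `Timer` are not part of any framework named in this tutorial.

```python
class FakeClock:
    """Injectable clock: tests control time instead of waiting for it."""
    def __init__(self):
        self.now_ms = 0
    def advance(self, ms):
        self.now_ms += ms

class Timer:
    """Expires once the injected clock passes the deadline."""
    def __init__(self, clock, timeout_ms):
        self.clock = clock
        self.deadline = clock.now_ms + timeout_ms
    def expired(self):
        return self.clock.now_ms >= self.deadline

# The test is instant and fully repeatable: no real delay, no race.
clock = FakeClock()
t = Timer(clock, timeout_ms=100)
assert not t.expired()
clock.advance(100)
assert t.expired()
```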
Test Framework Selection and Configuration
2.1 Embedded Test Framework Comparison
Mainstream test frameworks:
| Framework | Language | Strengths | Typical use | Learning curve |
|---|---|---|---|---|
| Unity | C | Lightweight, pure C | Resource-constrained embedded targets | Easy |
| CppUTest | C/C++ | Mock support, feature-rich | Medium to large projects | Moderate |
| Google Test | C++ | Powerful, strong ecosystem | PC-side or high-end embedded | Steeper |
| Ceedling | C | Unity + CMock + Rake | Quick setup | Easy |
| pytest | Python | Flexible, rich plugin ecosystem | System-level tests | Easy |
2.2 Unity Test Framework Setup
Installing Unity:
# Option 1: as a Git submodule
git submodule add https://github.com/ThrowTheSwitch/Unity.git test/Unity
# Option 2: clone directly
git clone https://github.com/ThrowTheSwitch/Unity.git test/Unity
Project layout:
project/
├── src/                  # Source code
│   ├── sensor.c
│   ├── sensor.h
│   ├── protocol.c
│   └── protocol.h
├── test/                 # Test code
│   ├── Unity/            # Unity framework
│   ├── test_sensor.c     # Sensor tests
│   ├── test_protocol.c   # Protocol tests
│   └── test_runner.c     # Test runner
├── build/                # Build output
└── Makefile              # Build script
A basic Makefile:
# Compiler settings
CC = gcc
CFLAGS = -Wall -Wextra -std=c99 -g
# Directories
SRC_DIR = src
TEST_DIR = test
BUILD_DIR = build
UNITY_DIR = $(TEST_DIR)/Unity/src
# Files
SRC_FILES = $(wildcard $(SRC_DIR)/*.c)
TEST_FILES = $(wildcard $(TEST_DIR)/test_*.c)
UNITY_SRC = $(UNITY_DIR)/unity.c
# Include paths
INCLUDES = -I$(SRC_DIR) -I$(UNITY_DIR)
# Targets
TEST_EXEC = $(BUILD_DIR)/test_runner
all: test
$(BUILD_DIR):
	mkdir -p $(BUILD_DIR)
test: $(BUILD_DIR)
	$(CC) $(CFLAGS) $(INCLUDES) $(SRC_FILES) $(TEST_FILES) $(UNITY_SRC) -o $(TEST_EXEC)
	./$(TEST_EXEC)
clean:
	rm -rf $(BUILD_DIR)
.PHONY: all test clean
2.3 CppUTest Framework Setup
Installing CppUTest:
# Ubuntu/Debian
sudo apt install cpputest
# Or build from source
git clone https://github.com/cpputest/cpputest.git
cd cpputest
mkdir build && cd build
cmake ..
make
sudo make install
CMakeLists.txt configuration:
cmake_minimum_required(VERSION 3.10)
project(EmbeddedTests CXX C)
# Language standards
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_C_STANDARD 99)
# Locate CppUTest
find_package(PkgConfig REQUIRED)
pkg_check_modules(CPPUTEST REQUIRED cpputest)
# Include directories
include_directories(src)
include_directories(${CPPUTEST_INCLUDE_DIRS})
# Source files
file(GLOB SRC_FILES "src/*.c")
file(GLOB TEST_FILES "test/*.cpp")
# Test executable
add_executable(test_runner ${TEST_FILES} ${SRC_FILES})
target_link_libraries(test_runner ${CPPUTEST_LIBRARIES})
# Enable testing
enable_testing()
add_test(NAME AllTests COMMAND test_runner -v)
2.4 pytest Setup (for System Tests)
Installing pytest (the options below also use the pytest-html and pytest-cov plugins):
pip install pytest pytest-html pytest-cov
pytest.ini configuration:
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts =
    -v
    --html=reports/test_report.html
    --self-contained-html
    --cov=src
    --cov-report=html:reports/coverage
    --cov-report=term-missing
Example test file (tests/test_system.py):
import pytest
import serial
import time
class TestTemperatureSystem:
    @pytest.fixture
    def device(self):
        """Set up the device under test"""
        dev = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)
        yield dev
        dev.close()
    def test_read_temperature(self, device):
        """Read the temperature"""
        device.write(b'READ_TEMP\n')
        response = device.readline().decode().strip()
        assert response.startswith('TEMP:')
        temp = float(response.split(':')[1])
        assert -40 <= temp <= 125  # valid temperature range
    def test_set_alarm_threshold(self, device):
        """Set the alarm threshold"""
        device.write(b'SET_ALARM:80\n')
        response = device.readline().decode().strip()
        assert response == 'OK'
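The fixture above needs real hardware on /dev/ttyUSB0. To run the same protocol checks on a developer machine, a fake serial object with the same `write`/`readline`/`close` surface can stand in. This is a sketch; the command/response table is assumed from the examples above, not taken from a real firmware.

```python
class FakeSerial:
    """In-memory stand-in for serial.Serial, for protocol tests without hardware."""
    RESPONSES = {b'READ_TEMP\n': b'TEMP:25.5\n', b'SET_ALARM:80\n': b'OK\n'}

    def __init__(self):
        self._pending = b''
    def write(self, data):
        # Queue the canned response for the next readline() call.
        self._pending = self.RESPONSES.get(data, b'ERR\n')
        return len(data)
    def readline(self):
        line, self._pending = self._pending, b''
        return line
    def close(self):
        pass

# Same assertions as the hardware test, but fully offline and deterministic.
dev = FakeSerial()
dev.write(b'READ_TEMP\n')
assert dev.readline().decode().strip().startswith('TEMP:')
```

In pytest, the `device` fixture could return a `FakeSerial` when no board is attached, leaving the test bodies unchanged.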
Integrating Automated Tests with Jenkins
3.1 Jenkins Pipeline Configuration
Example Jenkinsfile:
pipeline {
agent any
environment {
ARM_TOOLCHAIN = '/usr/local/gcc-arm-none-eabi/bin'
PATH = "${ARM_TOOLCHAIN}:${env.PATH}"
}
stages {
stage('Checkout') {
steps {
checkout scm
sh 'git submodule update --init --recursive'
}
}
stage('Build') {
steps {
sh 'make clean'
sh 'make all'
}
}
stage('Unit Tests') {
steps {
sh 'make test'
}
post {
always {
// Publish the JUnit test report
junit 'build/test-results/*.xml'
}
}
}
stage('Integration Tests') {
steps {
sh 'make integration-test'
}
}
stage('Code Coverage') {
steps {
sh 'make coverage'
}
post {
always {
// Publish the coverage report
publishHTML([
reportDir: 'coverage_html',
reportFiles: 'index.html',
reportName: 'Coverage Report'
])
}
}
}
stage('Static Analysis') {
steps {
sh 'cppcheck --enable=all --xml src/ 2> cppcheck.xml'
}
post {
always {
// Publish the static-analysis report
recordIssues(
tools: [cppCheck(pattern: 'cppcheck.xml')]
)
}
}
}
}
post {
success {
echo '✓ All tests passed!'
emailext(
subject: "✓ Build Success: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
body: "All tests passed successfully.",
to: "${env.CHANGE_AUTHOR_EMAIL}"
)
}
failure {
echo '✗ Tests failed!'
emailext(
subject: "✗ Build Failed: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
body: "Tests failed. Check ${env.BUILD_URL} for details.",
to: "${env.CHANGE_AUTHOR_EMAIL}"
)
}
}
}
3.2 Configuring Test Reports
Generating JUnit-format test reports:
Unity does not emit JUnit XML directly, so its output has to be converted. Create a conversion script, unity_to_junit.py:
#!/usr/bin/env python3
import sys
import re
from xml.etree.ElementTree import Element, SubElement, tostring
from xml.dom import minidom
def parse_unity_output(output):
    """Parse Unity test output"""
    tests = []
    current_test = None
    for line in output.split('\n'):
        # Match the start of a test
        if line.startswith('TEST('):
            match = re.match(r'TEST\((\w+), (\w+)\)', line)
            if match:
                current_test = {
                    'suite': match.group(1),
                    'name': match.group(2),
                    'status': 'passed'
                }
        # Match a test failure
        elif 'FAIL' in line and current_test:
            current_test['status'] = 'failed'
            current_test['message'] = line
        # Match the end of a test
        elif line.startswith('---') and current_test:
            tests.append(current_test)
            current_test = None
    return tests
def generate_junit_xml(tests):
    """Generate JUnit-format XML"""
    testsuites = Element('testsuites')
    # Group tests by suite
    suites = {}
    for test in tests:
        suite_name = test['suite']
        if suite_name not in suites:
            suites[suite_name] = []
        suites[suite_name].append(test)
    # Emit one <testsuite> element per suite
    for suite_name, suite_tests in suites.items():
        testsuite = SubElement(testsuites, 'testsuite')
        testsuite.set('name', suite_name)
        testsuite.set('tests', str(len(suite_tests)))
        failures = sum(1 for t in suite_tests if t['status'] == 'failed')
        testsuite.set('failures', str(failures))
        for test in suite_tests:
            testcase = SubElement(testsuite, 'testcase')
            testcase.set('name', test['name'])
            testcase.set('classname', suite_name)
            if test['status'] == 'failed':
                failure = SubElement(testcase, 'failure')
                failure.set('message', test.get('message', 'Test failed'))
    # Pretty-print the XML
    xml_str = minidom.parseString(tostring(testsuites)).toprettyxml(indent="  ")
    return xml_str
if __name__ == '__main__':
    # Read the Unity output from stdin
    unity_output = sys.stdin.read()
    # Parse the test results
    tests = parse_unity_output(unity_output)
    # Generate the JUnit XML
    junit_xml = generate_junit_xml(tests)
    # Write it to a file
    with open('build/test-results/results.xml', 'w') as f:
        f.write(junit_xml)
Using it from the Makefile (note: piping the runner straight into tee would hide its exit status, so the status is captured explicitly and re-raised after conversion):
test:
	mkdir -p build/test-results
	./build/test_runner > test_output.txt; RESULT=$$?; \
	cat test_output.txt; \
	python3 unity_to_junit.py < test_output.txt; \
	exit $$RESULT
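The parsing logic can be tried on a captured sample before wiring it into the pipeline. The block below re-implements the same regular-expression logic self-containedly; the sample output format (`TEST(Suite, name)` … `---`) is the one the script's expressions assume, so if your Unity build prints a different style the expressions need adjusting.

```python
import re

def parse_unity_output(output):
    # Same state machine as unity_to_junit.py: start marker, FAIL line, end marker.
    tests, current = [], None
    for line in output.split('\n'):
        m = re.match(r'TEST\((\w+), (\w+)\)', line)
        if m:
            current = {'suite': m.group(1), 'name': m.group(2), 'status': 'passed'}
        elif 'FAIL' in line and current:
            current['status'] = 'failed'
        elif line.startswith('---') and current:
            tests.append(current)
            current = None
    return tests

sample = """TEST(Checksum, test_simple_data)
---
TEST(Checksum, test_empty_data)
FAIL: expected 0
---"""
results = parse_unity_output(sample)
assert [t['status'] for t in results] == ['passed', 'failed']
```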
3.3 Configuring Test Coverage
Adding coverage support to the Makefile:
# Coverage flags
COVERAGE_FLAGS = --coverage
LDFLAGS = --coverage
# Compile with coverage instrumentation
$(BUILD_DIR)/%.o: $(SRC_DIR)/%.c
	$(CC) $(CFLAGS) $(COVERAGE_FLAGS) $(INCLUDES) -c $< -o $@
# Generate the coverage report
coverage: test
	@echo "Generating coverage report..."
	lcov --capture --directory $(BUILD_DIR) --output-file coverage.info
	lcov --remove coverage.info '/usr/*' '*/test/*' '*/Unity/*' --output-file coverage_filtered.info
	genhtml coverage_filtered.info --output-directory coverage_html
	@echo "Coverage report: coverage_html/index.html"
# Coverage check (enforce a minimum threshold)
coverage-check: coverage
	@echo "Checking coverage thresholds..."
	@COVERAGE=$$(lcov --summary coverage_filtered.info 2>&1 | grep lines | awk '{print $$2}' | sed 's/%//'); \
	if [ $$(echo "$$COVERAGE < 80" | bc) -eq 1 ]; then \
		echo "✗ Coverage $$COVERAGE% is below 80% threshold"; \
		exit 1; \
	else \
		echo "✓ Coverage $$COVERAGE% meets 80% threshold"; \
	fi
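The shell threshold check above is compact but brittle; the same check in Python is easier to extend with more metrics. A sketch, assuming the `lcov --summary` text format (`lines......: 84.6% (...)`):

```python
import re

def line_coverage_percent(lcov_summary):
    """Extract the 'lines' percentage from `lcov --summary` text output."""
    m = re.search(r'lines\.*:\s*(\d+\.\d+)%', lcov_summary)
    if m is None:
        raise ValueError("no line-coverage figure found in summary")
    return float(m.group(1))

# Example summary text as lcov prints it (figures are illustrative).
summary = "  lines......: 84.6% (22 of 26 lines)\n  functions..: 100.0% (4 of 4 functions)"
cov = line_coverage_percent(summary)
assert cov == 84.6
assert cov >= 80.0, f"coverage {cov}% below the 80% gate"
```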
Adding the coverage check to the Jenkins Pipeline:
stage('Coverage Check') {
steps {
sh 'make coverage-check'
}
post {
always {
publishHTML([
reportDir: 'coverage_html',
reportFiles: 'index.html',
reportName: 'Coverage Report'
])
// Publish via the Cobertura plugin
cobertura(
coberturaReportFile: 'coverage.xml',
failUnhealthy: false,
failUnstable: false,
maxNumberOfBuilds: 10,
onlyStable: false,
sourceEncoding: 'ASCII',
zoomCoverageChart: false
)
}
}
}
Integrating Automated Tests with GitLab CI
4.1 GitLab CI Configuration
The complete .gitlab-ci.yml:
# Pipeline stages
stages:
  - build
  - test
  - analysis
  - report
# Global variables
variables:
  GIT_SUBMODULE_STRATEGY: recursive
  ARM_TOOLCHAIN_PATH: /usr/local/gcc-arm-none-eabi/bin
# Cache configuration
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - build/
    - .cache/
# Build stage
build:
  stage: build
  image: gcc:latest
  before_script:
    - apt-get update && apt-get install -y make lcov
  script:
    - echo "Building project..."
    - make clean
    - make all
  artifacts:
    paths:
      - build/
    expire_in: 1 hour
  tags:
    - docker
# Unit tests
unit-test:
  stage: test
  image: gcc:latest
  dependencies:
    - build
  before_script:
    - apt-get update && apt-get install -y make
  script:
    - echo "Running unit tests..."
    - make test
  artifacts:
    reports:
      junit: build/test-results/*.xml
    paths:
      - build/test-results/
    expire_in: 1 week
  tags:
    - docker
# Integration tests
integration-test:
  stage: test
  image: gcc:latest
  dependencies:
    - build
  script:
    - echo "Running integration tests..."
    - make integration-test
  artifacts:
    reports:
      junit: build/integration-results/*.xml
  tags:
    - docker
# Code coverage
coverage:
  stage: analysis
  image: gcc:latest
  dependencies:
    - build
  before_script:
    - apt-get update && apt-get install -y lcov
  script:
    - echo "Generating coverage report..."
    - make coverage
    - lcov --summary coverage_filtered.info
  coverage: '/lines\.*: (\d+\.\d+)%/'
  artifacts:
    paths:
      - coverage_html/
    expire_in: 1 week
  tags:
    - docker
# Static analysis
static-analysis:
  stage: analysis
  image: cppcheck:latest
  script:
    - cppcheck --enable=all --xml --xml-version=2 src/ 2> cppcheck.xml
  artifacts:
    paths:
      - cppcheck.xml
    expire_in: 1 week
  allow_failure: true
  tags:
    - docker
# Build the test report
test-report:
  stage: report
  image: python:3.9
  dependencies:
    - unit-test
    - integration-test
    - coverage
  before_script:
    - pip install jinja2
  script:
    - python scripts/generate_report.py
  artifacts:
    paths:
      - reports/
    expire_in: 1 month
  tags:
    - docker
# Quality gate (lcov and bc are needed by the script below)
quality-gate:
  stage: report
  image: gcc:latest
  dependencies:
    - coverage
  before_script:
    - apt-get update && apt-get install -y lcov bc
  script:
    - |
      COVERAGE=$(lcov --summary coverage_filtered.info 2>&1 | grep lines | awk '{print $2}' | sed 's/%//')
      echo "Coverage: $COVERAGE%"
      if [ $(echo "$COVERAGE < 80" | bc) -eq 1 ]; then
        echo "✗ Coverage $COVERAGE% is below 80% threshold"
        exit 1
      else
        echo "✓ Coverage $COVERAGE% meets 80% threshold"
      fi
  tags:
    - docker
4.2 Publishing a Test Report Page
GitLab Pages configuration (add to .gitlab-ci.yml):
pages:
  stage: report
  dependencies:
    - coverage
    - test-report
  script:
    - mkdir -p public/coverage public/reports
    - cp -r coverage_html/* public/coverage/
    - cp -r reports/* public/reports/
    - |
      cat > public/index.html << 'EOF'
      <!DOCTYPE html>
      <html>
      <head>
      <title>Test Reports</title>
      <style>
      body { font-family: Arial, sans-serif; margin: 40px; }
      h1 { color: #333; }
      .card { border: 1px solid #ddd; padding: 20px; margin: 20px 0; border-radius: 5px; }
      a { color: #0066cc; text-decoration: none; }
      a:hover { text-decoration: underline; }
      </style>
      </head>
      <body>
      <h1>Embedded Project Test Reports</h1>
      <div class="card">
      <h2>📊 Code Coverage</h2>
      <p><a href="coverage/index.html">View Coverage Report</a></p>
      </div>
      <div class="card">
      <h2>🧪 Test Results</h2>
      <p><a href="reports/test_report.html">View Test Report</a></p>
      </div>
      </body>
      </html>
      EOF
  artifacts:
    paths:
      - public
  only:
    - main
  tags:
    - docker
4.3 Configuring Failure Notifications
GitLab CI notification setup:
Configure Slack or email notifications in the project settings:
- Go to "Settings" → "Integrations"
- Choose "Slack notifications" or "Emails on push"
- Configure the notification rules
Adding a custom notification job to the pipeline:
# Notification job
notify-failure:
  stage: report
  image: alpine:latest
  before_script:
    - apk add --no-cache curl
  script:
    - |
      curl -X POST -H 'Content-type: application/json' \
        --data "{
          \"text\": \"❌ Pipeline Failed\",
          \"attachments\": [{
            \"color\": \"danger\",
            \"fields\": [
              {\"title\": \"Project\", \"value\": \"$CI_PROJECT_NAME\", \"short\": true},
              {\"title\": \"Branch\", \"value\": \"$CI_COMMIT_REF_NAME\", \"short\": true},
              {\"title\": \"Commit\", \"value\": \"$CI_COMMIT_SHORT_SHA\", \"short\": true},
              {\"title\": \"Author\", \"value\": \"$CI_COMMIT_AUTHOR\", \"short\": true}
            ],
            \"actions\": [{
              \"type\": \"button\",
              \"text\": \"View Pipeline\",
              \"url\": \"$CI_PIPELINE_URL\"
            }]
          }]
        }" \
        $SLACK_WEBHOOK_URL
  when: on_failure
  tags:
    - docker
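Hand-escaping JSON inside a double-quoted shell string is fragile; building the payload with `json.dumps` avoids quoting mistakes entirely. A sketch (the field names follow Slack's legacy attachment format used above; the values would come from the CI variables):

```python
import json

def slack_failure_payload(project, branch, commit, pipeline_url):
    """Build the Slack webhook body programmatically instead of escaping it by hand."""
    return json.dumps({
        "text": "❌ Pipeline Failed",
        "attachments": [{
            "color": "danger",
            "fields": [
                {"title": "Project", "value": project, "short": True},
                {"title": "Branch", "value": branch, "short": True},
                {"title": "Commit", "value": commit, "short": True},
            ],
            "actions": [{"type": "button", "text": "View Pipeline", "url": pipeline_url}],
        }],
    })

body = slack_failure_payload("stm32-test-demo", "main", "abc1234",
                             "https://gitlab.example.com/pipelines/1")
# The result is guaranteed-valid JSON, ready for `curl --data` or requests.post().
assert json.loads(body)["attachments"][0]["color"] == "danger"
```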
Test Report Generation and Analysis
5.1 Generating a Combined Test Report
Create the report-generation script (scripts/generate_report.py):
#!/usr/bin/env python3
"""Generate a combined test report"""
import os
import json
from datetime import datetime
from jinja2 import Template
def parse_junit_xml(xml_file):
    """Parse a JUnit XML file"""
    import xml.etree.ElementTree as ET
    tree = ET.parse(xml_file)
    root = tree.getroot()
    results = {
        'total': 0,
        'passed': 0,
        'failed': 0,
        'skipped': 0,
        'tests': []
    }
    for testsuite in root.findall('.//testsuite'):
        for testcase in testsuite.findall('testcase'):
            test = {
                'name': testcase.get('name'),
                'classname': testcase.get('classname'),
                'time': float(testcase.get('time', 0))
            }
            results['total'] += 1
            if testcase.find('failure') is not None:
                test['status'] = 'failed'
                test['message'] = testcase.find('failure').get('message', '')
                results['failed'] += 1
            elif testcase.find('skipped') is not None:
                test['status'] = 'skipped'
                results['skipped'] += 1
            else:
                test['status'] = 'passed'
                results['passed'] += 1
            results['tests'].append(test)
    return results
def parse_coverage_info(info_file):
    """Parse an lcov tracefile (.info): sum the LF/LH (lines) and FNF/FNH (functions) records"""
    import re
    totals = {'LF': 0, 'LH': 0, 'FNF': 0, 'FNH': 0}
    with open(info_file, 'r') as f:
        for line in f:
            m = re.match(r'(LF|LH|FNF|FNH):(\d+)', line.strip())
            if m:
                totals[m.group(1)] += int(m.group(2))
    coverage = {'lines': 0.0, 'functions': 0.0, 'branches': 0.0}
    if totals['LF']:
        coverage['lines'] = 100.0 * totals['LH'] / totals['LF']
    if totals['FNF']:
        coverage['functions'] = 100.0 * totals['FNH'] / totals['FNF']
    return coverage
def generate_html_report(test_results, coverage_data):
    """Render the HTML report with Jinja2"""
    template = Template('''
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Test Report</title>
<style>
body {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
margin: 0;
padding: 20px;
background-color: #f5f5f5;
}
.container {
max-width: 1200px;
margin: 0 auto;
background-color: white;
padding: 30px;
border-radius: 8px;
box-shadow: 0 2px 4px rgba(0,0,0,0.1);
}
h1 {
color: #333;
border-bottom: 3px solid #4CAF50;
padding-bottom: 10px;
}
.summary {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
gap: 20px;
margin: 30px 0;
}
.metric {
padding: 20px;
border-radius: 8px;
text-align: center;
}
.metric.passed { background-color: #e8f5e9; border-left: 4px solid #4CAF50; }
.metric.failed { background-color: #ffebee; border-left: 4px solid #f44336; }
.metric.coverage { background-color: #e3f2fd; border-left: 4px solid #2196F3; }
.metric-value {
font-size: 36px;
font-weight: bold;
margin: 10px 0;
}
.metric-label {
color: #666;
font-size: 14px;
}
.progress-bar {
width: 100%;
height: 30px;
background-color: #e0e0e0;
border-radius: 15px;
overflow: hidden;
margin: 20px 0;
}
.progress-fill {
height: 100%;
background-color: #4CAF50;
transition: width 0.3s ease;
display: flex;
align-items: center;
justify-content: center;
color: white;
font-weight: bold;
}
table {
width: 100%;
border-collapse: collapse;
margin: 20px 0;
}
th, td {
padding: 12px;
text-align: left;
border-bottom: 1px solid #ddd;
}
th {
background-color: #f5f5f5;
font-weight: bold;
}
.status {
padding: 4px 12px;
border-radius: 12px;
font-size: 12px;
font-weight: bold;
}
.status.passed { background-color: #4CAF50; color: white; }
.status.failed { background-color: #f44336; color: white; }
.status.skipped { background-color: #ff9800; color: white; }
.timestamp {
color: #999;
font-size: 14px;
margin-top: 20px;
}
</style>
</head>
<body>
<div class="container">
<h1>🧪 Automated Test Report</h1>
<div class="timestamp">
Generated: {{ timestamp }}
</div>
<div class="summary">
<div class="metric passed">
<div class="metric-label">Tests Passed</div>
<div class="metric-value">{{ test_results.passed }}</div>
</div>
<div class="metric failed">
<div class="metric-label">Tests Failed</div>
<div class="metric-value">{{ test_results.failed }}</div>
</div>
<div class="metric coverage">
<div class="metric-label">Code Coverage</div>
<div class="metric-value">{{ "%.1f"|format(coverage_data.lines) }}%</div>
</div>
</div>
<h2>Test Results</h2>
<div class="progress-bar">
<div class="progress-fill" style="width: {{ (test_results.passed / test_results.total * 100)|int }}%">
{{ test_results.passed }} / {{ test_results.total }}
</div>
</div>
<table>
<thead>
<tr>
<th>Test Name</th>
<th>Class</th>
<th>Status</th>
<th>Time (s)</th>
</tr>
</thead>
<tbody>
{% for test in test_results.tests %}
<tr>
<td>{{ test.name }}</td>
<td>{{ test.classname }}</td>
<td><span class="status {{ test.status }}">{{ test.status.upper() }}</span></td>
<td>{{ "%.3f"|format(test.time) }}</td>
</tr>
{% endfor %}
</tbody>
</table>
<h2>Coverage Details</h2>
<table>
<tr>
<th>Metric</th>
<th>Coverage</th>
</tr>
<tr>
<td>Line Coverage</td>
<td>{{ "%.2f"|format(coverage_data.lines) }}%</td>
</tr>
<tr>
<td>Function Coverage</td>
<td>{{ "%.2f"|format(coverage_data.functions) }}%</td>
</tr>
</table>
</div>
</body>
</html>
''')
    html = template.render(
        test_results=test_results,
        coverage_data=coverage_data,
        timestamp=datetime.now().strftime('%Y-%m-%d %H:%M:%S')
    )
    return html
def main():
    """Entry point"""
    # Parse the test results
    test_results = parse_junit_xml('build/test-results/results.xml')
    # Parse the coverage data
    coverage_data = parse_coverage_info('coverage_filtered.info')
    # Render the HTML report
    html_report = generate_html_report(test_results, coverage_data)
    # Save the report
    os.makedirs('reports', exist_ok=True)
    with open('reports/test_report.html', 'w') as f:
        f.write(html_report)
    print("✓ Test report generated: reports/test_report.html")
    # Also emit a JSON report (for other tools)
    report_data = {
        'timestamp': datetime.now().isoformat(),
        'tests': test_results,
        'coverage': coverage_data
    }
    with open('reports/test_report.json', 'w') as f:
        json.dump(report_data, f, indent=2)
    print("✓ JSON report generated: reports/test_report.json")
if __name__ == '__main__':
    main()
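The counting logic in parse_junit_xml can be sanity-checked against an inline sample document before pointing it at real pipeline output (the sample XML here is made up for illustration):

```python
import xml.etree.ElementTree as ET

sample = """<testsuites>
  <testsuite name="Checksum" tests="2" failures="1">
    <testcase name="test_ok" classname="Checksum" time="0.001"/>
    <testcase name="test_bad" classname="Checksum" time="0.002">
      <failure message="expected 0"/>
    </testcase>
  </testsuite>
</testsuites>"""

# Same classification rule as the script: a <failure> child marks a failed case.
root = ET.fromstring(sample)
passed = failed = 0
for case in root.iter('testcase'):
    if case.find('failure') is not None:
        failed += 1
    else:
        passed += 1
assert (passed, failed) == (1, 1)
```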
5.2 Trend Analysis
Create the trend-analysis script (scripts/analyze_trends.py):
#!/usr/bin/env python3
"""分析测试趋势"""
import json
import os
from datetime import datetime
import matplotlib.pyplot as plt
def load_historical_data(reports_dir='reports/history'):
    """Load historical test data"""
    history = []
    if not os.path.exists(reports_dir):
        return history
    for filename in sorted(os.listdir(reports_dir)):
        if filename.endswith('.json'):
            with open(os.path.join(reports_dir, filename), 'r') as f:
                data = json.load(f)
            history.append(data)
    return history
def plot_coverage_trend(history):
    """Plot the coverage trend"""
    dates = [datetime.fromisoformat(h['timestamp']) for h in history]
    coverage = [h['coverage']['lines'] for h in history]
    plt.figure(figsize=(12, 6))
    plt.plot(dates, coverage, marker='o', linewidth=2, markersize=8)
    plt.axhline(y=80, color='r', linestyle='--', label='Target: 80%')
    plt.xlabel('Date')
    plt.ylabel('Coverage (%)')
    plt.title('Code Coverage Trend')
    plt.legend()
    plt.grid(True, alpha=0.3)
    plt.xticks(rotation=45)
    plt.tight_layout()
    plt.savefig('reports/coverage_trend.png', dpi=150)
    print("✓ Coverage trend chart saved: reports/coverage_trend.png")
def plot_test_results_trend(history):
    """Plot the passed/failed counts over time"""
    dates = [datetime.fromisoformat(h['timestamp']) for h in history]
    passed = [h['tests']['passed'] for h in history]
    failed = [h['tests']['failed'] for h in history]
    plt.figure(figsize=(12, 6))
    plt.plot(dates, passed, marker='o', label='Passed', color='green', linewidth=2)
    plt.plot(dates, failed, marker='x', label='Failed', color='red', linewidth=2)
    plt.xlabel('Date')
    plt.ylabel('Number of Tests')
    plt.title('Test Results Trend')
    plt.legend()
    plt.grid(True, alpha=0.3)
    plt.xticks(rotation=45)
    plt.tight_layout()
    plt.savefig('reports/test_results_trend.png', dpi=150)
    print("✓ Test results trend chart saved: reports/test_results_trend.png")
def main():
    """Entry point"""
    # Load the historical data
    history = load_historical_data()
    if len(history) < 2:
        print("Not enough historical data for trend analysis")
        return
    # Draw the trend charts
    plot_coverage_trend(history)
    plot_test_results_trend(history)
    # Print a short trend summary
    latest = history[-1]
    previous = history[-2]
    coverage_change = latest['coverage']['lines'] - previous['coverage']['lines']
    test_change = latest['tests']['total'] - previous['tests']['total']
    print("\n📊 Trend Analysis:")
    print(f"Coverage change: {coverage_change:+.2f}%")
    print(f"Test count change: {test_change:+d}")
    if coverage_change > 0:
        print("✓ Coverage is improving")
    elif coverage_change < 0:
        print("⚠ Coverage is decreasing")
    else:
        print("→ Coverage is stable")
if __name__ == '__main__':
    main()
Quality Gates and Best Practices
6.1 Configuring Quality Gates
A **quality gate** is a set of quality criteria that must be met; if they are not, the code is blocked from merging or deploying.
Common quality-gate rules:
- Test pass rate: all tests must pass
- Code coverage: coverage must not fall below a threshold (e.g. 80%)
- Static analysis: no critical or high-severity findings
- Code complexity: cyclomatic complexity stays under a limit
- Technical debt: technical debt must not grow
Jenkins quality-gate configuration:
stage('Quality Gate') {
steps {
script {
// Check the test pass rate
def testResults = junit 'build/test-results/*.xml'
if (testResults.failCount > 0) {
error("Tests failed: ${testResults.failCount} failures")
}
// Check coverage
def coverage = sh(
script: "lcov --summary coverage_filtered.info 2>&1 | grep lines | awk '{print \$2}' | sed 's/%//'",
returnStdout: true
).trim().toFloat()
if (coverage < 80.0) {
error("Coverage ${coverage}% is below 80% threshold")
}
// Check the static-analysis findings
def issues = scanForIssues tool: cppCheck(pattern: 'cppcheck.xml')
if (issues.size > 0) {
def highPriority = issues.findAll { it.priority == 'HIGH' }
if (highPriority.size > 0) {
error("Found ${highPriority.size} high priority issues")
}
}
echo "✓ All quality gates passed"
}
}
}
GitLab CI quality-gate configuration:
quality-gate:
  stage: report
  image: gcc:latest
  dependencies:
    - unit-test
    - coverage
    - static-analysis
  before_script:
    - apt-get update && apt-get install -y lcov bc
  script:
    - |
      echo "Checking quality gates..."
      # Check the test results (count <failure> elements, not the failures= attribute)
      if [ -f build/test-results/results.xml ]; then
        FAILURES=$(grep -c '<failure' build/test-results/results.xml || true)
        if [ $FAILURES -gt 0 ]; then
          echo "✗ Quality Gate Failed: $FAILURES test failures"
          exit 1
        fi
      fi
      # Check coverage
      COVERAGE=$(lcov --summary coverage_filtered.info 2>&1 | grep lines | awk '{print $2}' | sed 's/%//')
      if [ $(echo "$COVERAGE < 80" | bc) -eq 1 ]; then
        echo "✗ Quality Gate Failed: Coverage $COVERAGE% < 80%"
        exit 1
      fi
      # Check static analysis
      if [ -f cppcheck.xml ]; then
        ERRORS=$(grep -c 'severity="error"' cppcheck.xml || true)
        if [ $ERRORS -gt 0 ]; then
          echo "✗ Quality Gate Failed: $ERRORS static analysis errors"
          exit 1
        fi
      fi
      echo "✓ All quality gates passed"
  tags:
    - docker
6.2 Testing Best Practices
1. Test naming conventions
Use descriptive test names:
// ❌ Poor names
void test1(void);
void test_func(void);
// ✅ Good names
void test_calculate_checksum_with_valid_data_returns_correct_value(void);
void test_uart_send_with_null_buffer_returns_error(void);
void test_led_toggle_changes_state_from_off_to_on(void);
2. Test independence
Each test should be able to run on its own:
// ❌ Bad: tests depend on each other
static int global_counter = 0;
void test_increment(void) {
    global_counter++;
    TEST_ASSERT_EQUAL(1, global_counter);
}
void test_increment_again(void) {
    global_counter++;
    TEST_ASSERT_EQUAL(2, global_counter); // depends on the previous test
}
// ✅ Good: each test stands alone
void test_increment(void) {
    int counter = 0;
    counter++;
    TEST_ASSERT_EQUAL(1, counter);
}
void test_increment_from_zero(void) {
    int counter = 0;
    counter++;
    TEST_ASSERT_EQUAL(1, counter);
}
3. Use setUp and tearDown
static sensor_t *sensor;
void setUp(void) {
    // Runs before every test
    sensor = sensor_create();
    sensor_init(sensor);
}
void tearDown(void) {
    // Runs after every test
    sensor_destroy(sensor);
    sensor = NULL;
}
void test_sensor_read_temperature(void) {
    float temp = sensor_read_temperature(sensor);
    TEST_ASSERT_FLOAT_WITHIN(0.1, 25.0, temp);
}
4. Test boundary conditions
void test_buffer_write_at_boundaries(void) {
    // First position
    TEST_ASSERT_EQUAL(SUCCESS, buffer_write(0, 'A'));
    // Last position
    TEST_ASSERT_EQUAL(SUCCESS, buffer_write(BUFFER_SIZE - 1, 'Z'));
    // Out of bounds
    TEST_ASSERT_EQUAL(ERROR_OUT_OF_BOUNDS, buffer_write(BUFFER_SIZE, 'X'));
    TEST_ASSERT_EQUAL(ERROR_OUT_OF_BOUNDS, buffer_write(-1, 'Y'));
}
5. Test error handling
void test_error_handling(void) {
    // NULL pointer
    TEST_ASSERT_EQUAL(ERROR_NULL_POINTER, process_data(NULL, 10));
    // Invalid parameters
    TEST_ASSERT_EQUAL(ERROR_INVALID_PARAM, process_data(buffer, 0));
    TEST_ASSERT_EQUAL(ERROR_INVALID_PARAM, process_data(buffer, -1));
    // Resource exhaustion
    mock_malloc_fail_next_call();
    TEST_ASSERT_EQUAL(ERROR_NO_MEMORY, allocate_buffer(1024));
}
6.3 Continuous Improvement
1. Review tests regularly
- Review test coverage monthly
- Identify untested code
- Delete obsolete tests
- Refactor duplicated test code
2. Test metrics
Track the following metrics:
- Trend in the number of tests
- Test pass rate
- Code coverage
- Test execution time
- Defect detection rate
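Pass rate and coverage deltas can be computed directly from the JSON reports the pipeline stores (the structure is the one generate_report.py emits; the numbers here are made up for illustration):

```python
def pass_rate(report):
    """Pass rate in percent, from one generate_report.py JSON report."""
    t = report['tests']
    return 100.0 * t['passed'] / t['total'] if t['total'] else 0.0

# Two illustrative consecutive reports, e.g. loaded from reports/history/*.json.
previous = {'tests': {'total': 45, 'passed': 45, 'failed': 0}, 'coverage': {'lines': 81.0}}
current = {'tests': {'total': 50, 'passed': 48, 'failed': 2}, 'coverage': {'lines': 83.1}}

assert round(pass_rate(current), 1) == 96.0
coverage_delta = current['coverage']['lines'] - previous['coverage']['lines']
assert round(coverage_delta, 1) == 2.1  # coverage trending up despite new failures
```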
3. Keep the test pyramid balanced
Check the test distribution periodically:
#!/bin/bash
# Count the test distribution (-maxdepth 1 keeps subdirectory tests out of the unit count)
UNIT_TESTS=$(find test -maxdepth 1 -name "test_*.c" | wc -l)
INTEGRATION_TESTS=$(find test/integration -name "test_*.c" | wc -l)
SYSTEM_TESTS=$(find test/system -name "test_*.py" | wc -l)
TOTAL=$((UNIT_TESTS + INTEGRATION_TESTS + SYSTEM_TESTS))
echo "Test Distribution:"
echo "Unit Tests: $UNIT_TESTS ($((UNIT_TESTS * 100 / TOTAL))%)"
echo "Integration Tests: $INTEGRATION_TESTS ($((INTEGRATION_TESTS * 100 / TOTAL))%)"
echo "System Tests: $SYSTEM_TESTS ($((SYSTEM_TESTS * 100 / TOTAL))%)"
# Check against the 70-20-10 guideline
if [ $((UNIT_TESTS * 100 / TOTAL)) -lt 60 ]; then
    echo "⚠ Warning: Unit tests below 60%"
fi
4. Review test code
Test code deserves review too:
- Is the test clear and easy to understand?
- Does it truly verify the behavior?
- Are there duplicate tests?
- Are boundary conditions covered?
- Is error handling exercised adequately?
Hands-On Exercises
7.1 Exercise 1: Build a Complete Test Pipeline
Goal: set up a complete automated test pipeline for an STM32 project.
Step 1: prepare the project structure
# Create the project directory
mkdir stm32-test-demo && cd stm32-test-demo
# Create the directory layout
mkdir -p src test/Unity build
# Initialize Git
git init
git submodule add https://github.com/ThrowTheSwitch/Unity.git test/Unity
Step 2: create the sample code
src/temperature.h:
#ifndef TEMPERATURE_H
#define TEMPERATURE_H
#include <stdint.h>
typedef enum {
    TEMP_OK = 0,
    TEMP_ERROR_INVALID_PARAM = -1,
    TEMP_ERROR_OUT_OF_RANGE = -2
} temp_error_t;
// Convert an ADC value to a temperature in degrees Celsius
temp_error_t temperature_convert(uint16_t adc_value, float *temperature);
// Check whether the temperature is within the safe range
int temperature_is_safe(float temperature);
#endif
src/temperature.c:
#include "temperature.h"
#define ADC_MAX 4095
#define TEMP_MIN -40.0f
#define TEMP_MAX 125.0f
#define SAFE_TEMP_MAX 80.0f
temp_error_t temperature_convert(uint16_t adc_value, float *temperature) {
    if (temperature == NULL) {
        return TEMP_ERROR_INVALID_PARAM;
    }
    if (adc_value > ADC_MAX) {
        return TEMP_ERROR_OUT_OF_RANGE;
    }
    // Simple linear conversion: ADC 0-4095 maps to -40°C .. 125°C
    *temperature = TEMP_MIN + (adc_value * (TEMP_MAX - TEMP_MIN) / ADC_MAX);
    return TEMP_OK;
}
int temperature_is_safe(float temperature) {
    return (temperature >= TEMP_MIN && temperature <= SAFE_TEMP_MAX);
}
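A quick Python model of the same linear mapping is handy when choosing expected values for the Unity tests (the constants mirror temperature.c; the range check on negative inputs is added here because Python, unlike uint16_t, allows them):

```python
ADC_MAX, TEMP_MIN, TEMP_MAX = 4095, -40.0, 125.0

def convert(adc_value):
    """Model of temperature_convert(): linear map 0..4095 -> -40..125 °C."""
    if not 0 <= adc_value <= ADC_MAX:
        raise ValueError("ADC value out of range")
    return TEMP_MIN + adc_value * (TEMP_MAX - TEMP_MIN) / ADC_MAX

assert convert(0) == -40.0
assert convert(4095) == 125.0
assert abs(convert(2048) - 42.5) < 0.1  # the mid-scale value the Unity test expects
```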
Step 3: create the test cases
test/test_temperature.c:
#include "unity.h"
#include "temperature.h"
void setUp(void) {
    // Runs before each test
}
void tearDown(void) {
    // Runs after each test
}
// Normal conversion
void test_temperature_convert_with_valid_adc_returns_ok(void) {
    float temp;
    temp_error_t result = temperature_convert(2048, &temp);
    TEST_ASSERT_EQUAL(TEMP_OK, result);
    TEST_ASSERT_FLOAT_WITHIN(1.0, 42.5, temp);
}
// NULL pointer
void test_temperature_convert_with_null_pointer_returns_error(void) {
    temp_error_t result = temperature_convert(2048, NULL);
    TEST_ASSERT_EQUAL(TEMP_ERROR_INVALID_PARAM, result);
}
// ADC value out of range
void test_temperature_convert_with_out_of_range_adc_returns_error(void) {
    float temp;
    temp_error_t result = temperature_convert(5000, &temp);
    TEST_ASSERT_EQUAL(TEMP_ERROR_OUT_OF_RANGE, result);
}
// Boundary values
void test_temperature_convert_with_min_adc_returns_min_temp(void) {
    float temp;
    temperature_convert(0, &temp);
    TEST_ASSERT_FLOAT_WITHIN(0.1, -40.0, temp);
}
void test_temperature_convert_with_max_adc_returns_max_temp(void) {
    float temp;
    temperature_convert(4095, &temp);
    TEST_ASSERT_FLOAT_WITHIN(0.1, 125.0, temp);
}
// Safety check
void test_temperature_is_safe_with_normal_temp_returns_true(void) {
    TEST_ASSERT_TRUE(temperature_is_safe(25.0));
    TEST_ASSERT_TRUE(temperature_is_safe(50.0));
}
void test_temperature_is_safe_with_high_temp_returns_false(void) {
    TEST_ASSERT_FALSE(temperature_is_safe(85.0));
    TEST_ASSERT_FALSE(temperature_is_safe(100.0));
}
void test_temperature_is_safe_with_low_temp_returns_false(void) {
    TEST_ASSERT_FALSE(temperature_is_safe(-50.0));
}
// Entry point
int main(void) {
    UNITY_BEGIN();
    RUN_TEST(test_temperature_convert_with_valid_adc_returns_ok);
    RUN_TEST(test_temperature_convert_with_null_pointer_returns_error);
    RUN_TEST(test_temperature_convert_with_out_of_range_adc_returns_error);
    RUN_TEST(test_temperature_convert_with_min_adc_returns_min_temp);
    RUN_TEST(test_temperature_convert_with_max_adc_returns_max_temp);
    RUN_TEST(test_temperature_is_safe_with_normal_temp_returns_true);
    RUN_TEST(test_temperature_is_safe_with_high_temp_returns_false);
    RUN_TEST(test_temperature_is_safe_with_low_temp_returns_false);
    return UNITY_END();
}
Step 4: create the Makefile
Makefile:
CC = gcc
CFLAGS = -Wall -Wextra -std=c99 -g --coverage
LDFLAGS = --coverage
SRC_DIR = src
TEST_DIR = test
BUILD_DIR = build
UNITY_DIR = $(TEST_DIR)/Unity/src
SRC_FILES = $(wildcard $(SRC_DIR)/*.c)
TEST_FILES = $(wildcard $(TEST_DIR)/test_*.c)
UNITY_SRC = $(UNITY_DIR)/unity.c
INCLUDES = -I$(SRC_DIR) -I$(UNITY_DIR)
TEST_EXEC = $(BUILD_DIR)/test_runner
.PHONY: all test coverage clean
all: test
$(BUILD_DIR):
	mkdir -p $(BUILD_DIR)
	mkdir -p $(BUILD_DIR)/test-results
test: $(BUILD_DIR)
	$(CC) $(CFLAGS) $(INCLUDES) $(SRC_FILES) $(TEST_FILES) $(UNITY_SRC) -o $(TEST_EXEC)
	./$(TEST_EXEC)
coverage: test
	@echo "Generating coverage report..."
	lcov --capture --directory . --output-file coverage.info
	lcov --remove coverage.info '/usr/*' '*/test/*' '*/Unity/*' --output-file coverage_filtered.info
	genhtml coverage_filtered.info --output-directory coverage_html
	@echo "Coverage report: coverage_html/index.html"
clean:
	rm -rf $(BUILD_DIR) *.gcov *.gcda *.gcno coverage.info coverage_filtered.info coverage_html
Step 5: run the tests
make test
make coverage
7.2 Exercise 2: Integrate with GitLab CI
Step 1: create .gitlab-ci.yml
stages:
  - build
  - test
  - report
variables:
  GIT_SUBMODULE_STRATEGY: recursive
build:
  stage: build
  image: gcc:latest
  before_script:
    - apt-get update && apt-get install -y make
  script:
    - make clean
    - make all
  artifacts:
    paths:
      - build/
    expire_in: 1 hour
test:
  stage: test
  image: gcc:latest
  dependencies:
    - build
  before_script:
    - apt-get update && apt-get install -y make lcov
  script:
    - make test
    - make coverage
  coverage: '/lines\.*: (\d+\.\d+)%/'
  artifacts:
    paths:
      - coverage_html/
      - build/test-results/
    expire_in: 1 week
pages:
  stage: report
  dependencies:
    - test
  script:
    - mkdir -p public
    - cp -r coverage_html/* public/
  artifacts:
    paths:
      - public
  only:
    - main
Step 2: commit and push
git add .
git commit -m "Add automated test pipeline"
git push
Step 3: check the results
- Watch the pipeline run in GitLab
- View the test report
- View the coverage report
7.3 Walkthrough 3: Adding a Quality Gate¶
Step 1: Create the quality check script (scripts/quality_check.sh):
#!/bin/bash
set -e
echo "========================================="
echo "Quality Gate Check"
echo "========================================="
# Check test results
echo "Checking test results..."
if [ ! -f build/test-results/results.xml ]; then
echo "✗ Test results not found"
exit 1
fi
# Check code coverage
echo "Checking code coverage..."
COVERAGE=$(lcov --summary coverage_filtered.info 2>&1 | grep lines | awk '{print $2}' | sed 's/%//')
echo "Coverage: $COVERAGE%"
if [ $(echo "$COVERAGE < 80" | bc) -eq 1 ]; then
echo "✗ Coverage $COVERAGE% is below 80% threshold"
exit 1
fi
echo "✓ Coverage $COVERAGE% meets 80% threshold"
# Check static analysis results (if available)
if [ -f cppcheck.xml ]; then
echo "Checking static analysis..."
ERRORS=$(grep -c 'severity="error"' cppcheck.xml || true)
if [ $ERRORS -gt 0 ]; then
echo "✗ Found $ERRORS static analysis errors"
exit 1
fi
echo "✓ No static analysis errors"
fi
echo "========================================="
echo "✓ All quality gates passed"
echo "========================================="
Step 2: Update .gitlab-ci.yml
quality-gate:
stage: report
image: gcc:latest
dependencies:
- test
before_script:
- apt-get update && apt-get install -y lcov bc
- chmod +x scripts/quality_check.sh
script:
- ./scripts/quality_check.sh
Common Problems and Solutions¶
8.1 Test Execution Problems¶
Problem 1: Tests run slowly
Causes:
- Too many test cases
- Tests depend on real hardware
- Tests depend on one another
Solutions:
# 1. Run tests in parallel
make test -j4
# 2. Run only the tests affected by a change
make test-changed
# 3. Replace real hardware with mocks
# Use mocks in tests instead of real GPIO/UART peripherals
Problem 2: Unstable tests (flaky tests)
Causes:
- Dependence on wall-clock time or random numbers
- Dependence on external state
- Race conditions
Solution:
// ❌ Unstable test
void test_timeout(void) {
    start_timer();
    delay_ms(100); // depends on real elapsed time
    TEST_ASSERT_TRUE(is_timeout());
}
// ✅ Stable test
void test_timeout(void) {
    mock_time_set(0);
    start_timer();
    mock_time_set(100); // advance mocked time instead
    TEST_ASSERT_TRUE(is_timeout());
}
Problem 3: Coverage numbers are inaccurate
Causes:
- Compiler optimization distorts coverage
- Inlined functions are not counted
- Macros are not expanded into countable lines
Solution: build the test binary without optimization, e.g. compile with -O0 -g --coverage, and add -fno-inline so inlined functions keep separate coverage records.
8.2 CI Integration Problems¶
Problem 1: The CI environment differs from the local environment
Solution:
# Use Docker to keep environments consistent
test:
  image: gcc:10.3.0 # pin an exact version
  before_script:
    - apt-get update
    - apt-get install -y make=4.3-4.1 # pin the tool version
Problem 2: Build artifacts go missing
Solution:
# Configure artifacts and dependencies correctly
build:
artifacts:
paths:
- build/
expire_in: 1 hour
test:
  dependencies:
    - build # explicitly declare the dependency
Problem 3: Test reports are not viewable
Solution:
# Make sure the report formats are correct
test:
  artifacts:
    reports:
      junit: build/test-results/*.xml # JUnit format
      cobertura: coverage.xml # Cobertura format
8.3 Test Maintenance Problems¶
Problem 1: Duplicated test code
Solution:
// Extract a shared test helper
static void assert_temperature_in_range(float temp, float expected, float tolerance) {
    TEST_ASSERT_FLOAT_WITHIN(tolerance, expected, temp);
}
// Use parameterized tests (if the framework supports them)
void test_temperature_conversion(void) {
    struct {
        uint16_t adc;
        float expected_temp;
    } test_cases[] = {
        {0, -40.0},
        {2048, 42.5},
        {4095, 125.0}
    };
    for (size_t i = 0; i < sizeof(test_cases)/sizeof(test_cases[0]); i++) {
        float temp;
        temperature_convert(test_cases[i].adc, &temp);
        assert_temperature_in_range(temp, test_cases[i].expected_temp, 0.1);
    }
}
Problem 2: Tests are hard to understand
Solution:
// Use a clear test structure and comments
void test_uart_communication_protocol(void) {
    // Arrange: prepare the test data
    uint8_t test_data[] = {0x01, 0x02, 0x03, 0x04};
    uint8_t expected_response[] = {0xAA, 0x01, 0x02, 0x03, 0x04, 0x55};
    // Act: exercise the code under test
    uart_send(test_data, sizeof(test_data));
    uint8_t *response = uart_receive();
    // Assert: verify the result
    TEST_ASSERT_EQUAL_UINT8_ARRAY(expected_response, response, sizeof(expected_response));
}
Advanced Topics¶
9.1 Test-Driven Development (TDD)¶
The TDD cycle:
1. 🔴 Red: write a failing test first
2. 🟢 Green: write the minimum code that makes it pass
3. 🔵 Refactor: clean up the code while keeping the tests green
Example:
// Step 1: write the test first
void test_calculate_average_with_three_numbers(void) {
    int numbers[] = {10, 20, 30};
    float result = calculate_average(numbers, 3);
    TEST_ASSERT_FLOAT_WITHIN(0.01, 20.0, result);
}
// Step 2: implement the simplest code that passes
float calculate_average(int *numbers, int count) {
    int sum = 0;
    for (int i = 0; i < count; i++) {
        sum += numbers[i];
    }
    return (float)sum / count;
}
// Step 3: add more tests, then refactor
void test_calculate_average_with_empty_array(void) {
    TEST_ASSERT_EQUAL_FLOAT(0.0, calculate_average(NULL, 0));
}
// Refactor: add error handling
float calculate_average(int *numbers, int count) {
    if (numbers == NULL || count <= 0) {
        return 0.0;
    }
    int sum = 0;
    for (int i = 0; i < count; i++) {
        sum += numbers[i];
    }
    return (float)sum / count;
}
9.2 Performance Testing¶
Measuring execution time:
#include <time.h>
void test_function_performance(void) {
    clock_t start = clock();
    // run the function under test repeatedly
    for (int i = 0; i < 1000; i++) {
        process_data(test_buffer, BUFFER_SIZE);
    }
    clock_t end = clock();
    double time_spent = (double)(end - start) / CLOCKS_PER_SEC;
    // verify the performance requirement
    TEST_ASSERT_TRUE(time_spent < 1.0); // should finish within 1 second
    printf("Performance: %.3f seconds for 1000 iterations\n", time_spent);
}
9.3 Memory Leak Detection¶
Using Valgrind:
# compile with debug information
gcc -g -o test_runner test_runner.c
# detect memory leaks with Valgrind
valgrind --leak-check=full --show-leak-kinds=all ./test_runner
Integrating Valgrind into CI:
memory-check:
stage: test
image: gcc:latest
before_script:
- apt-get update && apt-get install -y valgrind
script:
- make test
- valgrind --leak-check=full --error-exitcode=1 ./build/test_runner
allow_failure: false
9.4 Fuzz Testing (Fuzzing)¶
Fuzzing with AFL:
// fuzz_target.c
#include <stdint.h>
#include <stddef.h>
#include <unistd.h> // for read()
// function under test
int parse_packet(uint8_t *data, size_t len);
// AFL entry point
int main(int argc, char **argv) {
    (void)argc; (void)argv;
    uint8_t buffer[1024];
    ssize_t len = read(0, buffer, sizeof(buffer));
    if (len > 0) {
        parse_packet(buffer, (size_t)len);
    }
    return 0;
}
Compile and run:
# compile with AFL instrumentation
afl-gcc -o fuzz_target fuzz_target.c
# create input and output directories
mkdir -p testcases findings
# add an initial seed test case
echo "test" > testcases/test1
# run the fuzzer
afl-fuzz -i testcases -o findings ./fuzz_target
Summary and Best Practices¶
10.1 Key Takeaways¶
Testing strategy:
- ✅ Follow the test pyramid: roughly 70% unit, 20% integration, 10% system tests
- ✅ Tests should be fast, independent, and repeatable
- ✅ Isolate external dependencies with mocks
- ✅ Test boundary conditions and error handling
CI integration:
- ✅ Run tests automatically on every commit
- ✅ Generate test reports and coverage reports
- ✅ Configure quality gates that block low-quality code
- ✅ Deliver test feedback quickly
Quality assurance:
- ✅ Set a reasonable coverage target (80%+)
- ✅ Review and maintain tests regularly
- ✅ Track test metrics
- ✅ Continuously improve the testing process
10.2 Checklist¶
Writing tests:
- [ ] Test names are clear and descriptive
- [ ] Each test runs independently
- [ ] setUp/tearDown manage the test environment
- [ ] Tests cover normal, boundary, and error cases
- [ ] Test code is concise and readable
CI configuration:
- [ ] Tests trigger automatically
- [ ] JUnit-format test reports are generated
- [ ] Code coverage reports are generated
- [ ] Test failure notifications are configured
- [ ] Quality gates are in place
Continuous improvement:
- [ ] Review test coverage regularly
- [ ] Remove outdated tests
- [ ] Refactor duplicated test code
- [ ] Track test trends
- [ ] Optimize test execution time
10.3 Recommended Resources¶
Books:
- "Test Driven Development for Embedded C" by James Grenning
- "Working Effectively with Legacy Code" by Michael Feathers
- "xUnit Test Patterns" by Gerard Meszaros
Tools:
- Unity: https://github.com/ThrowTheSwitch/Unity
- CppUTest: https://cpputest.github.io/
- CMock: https://github.com/ThrowTheSwitch/CMock
- lcov: http://ltp.sourceforge.net/coverage/lcov.php
Online resources:
- Embedded Artistry: https://embeddedartistry.com/
- Interrupt Blog: https://interrupt.memfault.com/
- Test Driven Development for Embedded Systems: https://pragprog.com/titles/jgade/
10.4 Next Steps¶
After finishing this tutorial, consider studying:
- Advanced testing techniques
  - Property-based testing
  - Contract testing
  - Mutation testing
- Performance testing
  - Load testing
  - Stress testing
  - Benchmarking
- Security testing
  - Fuzzing
  - Static security analysis
  - Penetration testing
- DevOps practices
  - Continuous deployment (CD)
  - Infrastructure as code (IaC)
  - Monitoring and alerting
Exercises¶
Exercise 1: Basic Test Writing¶
Write a complete set of test cases for the following function:
// Find the maximum value in an array
int find_max(int *array, int size) {
if (array == NULL || size <= 0) {
return 0;
}
int max = array[0];
for (int i = 1; i < size; i++) {
if (array[i] > max) {
max = array[i];
}
}
return max;
}
Requirements:
- Test the normal case
- Test boundary conditions
- Test error handling
- Reach 100% test coverage
Exercise 2: CI Configuration¶
Configure a complete CI pipeline for your embedded project:
- Automated build
- Unit test execution
- Coverage report generation
- Quality gate configuration
- Test result notifications
Exercise 3: Mock Implementation¶
Create a mock for the following hardware interface:
// SPI interface
void spi_init(void);
void spi_write(uint8_t data);
uint8_t spi_read(void);
void spi_transfer(uint8_t *tx_data, uint8_t *rx_data, size_t len);
Requirements:
- Implement complete mock functionality
- Provide verification functions
- Write test cases that use the mock
Congratulations! You have completed the automated testing integration tutorial. You can now build a complete automated test pipeline for your embedded projects, improving code quality and development efficiency.
Remember: good tests do more than catch bugs; they improve the design of your code and give you the confidence to refactor and add new features. Keep refining your testing strategy until testing becomes a natural part of your development workflow!