Since the original deck is an English PPT, I have reposted the key points here.
SOFTWARE QUALITY IN 2010:
A SURVEY OF THE STATE OF THE ART
Author: Capers Jones
Founder and Chief Scientist Emeritus
Data sources:
Data collected from 1984 through 2010
•About 675 companies (150 clients in Fortune 500 set)
•About 35 government/military groups
•About 13,500 total projects
•New data = about 50-75 projects per month
•Data collected from 24 countries
•Observations during more than 15 lawsuits
Software industry types and their typical hazards (abridged):
INDUSTRY | HAZARD
Airlines | Safety hazards
Defense | Security hazards
Finance | Financial transaction hazards
Health Care | Safety hazards
Insurance | Liability, benefit hazards
State, Local Governments | Local economic hazards
Manufacturing | Operational hazards
National Government | Citizen record hazards
Public Utilities | Safety hazards
Telecommunications | Service disruption hazards
U.S. AVERAGES FOR SOFTWARE QUALITY
Defect Origins | Defect Potentials (per FP) | Removal Efficiency | Delivered Defects (per FP)
Requirements | 1.00 | 77% | 0.23
Design | 1.25 | 85% | 0.19
Coding | 1.75 | 95% | 0.09
Documents | 0.60 | 80% | 0.12
Bad Fixes | 0.40 | 70% | 0.12
TOTAL | 5.00 | 85% | 0.75
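The arithmetic behind this and the following tables is simple: delivered defects per function point equal the defect potential multiplied by the fraction of defects that removal activities miss, and the overall removal efficiency is weighted by each origin's potential. A minimal Python sketch (not from the slides; it merely restates the U.S.-average rows above) reproduces the TOTAL row:

```python
# Reproduce the U.S.-average table: delivered = potential * (1 - removal efficiency).
# Values are defects per function point, restated from the table above.
us_averages = {
    "Requirements": (1.00, 0.77),
    "Design":       (1.25, 0.85),
    "Coding":       (1.75, 0.95),
    "Documents":    (0.60, 0.80),
    "Bad Fixes":    (0.40, 0.70),
}

total_potential = sum(potential for potential, _ in us_averages.values())
total_delivered = 0.0
for origin, (potential, efficiency) in us_averages.items():
    delivered = potential * (1.0 - efficiency)
    total_delivered += delivered
    print(f"{origin:12s} {potential:5.2f} {efficiency:7.0%} {delivered:5.2f}")

# Overall removal efficiency is weighted by defect potential, not a simple average.
overall_efficiency = 1.0 - total_delivered / total_potential
print(f"{'TOTAL':12s} {total_potential:5.2f} {overall_efficiency:7.0%} {total_delivered:5.2f}")
```

Summing the per-origin delivered defects gives roughly 0.75 per function point and an overall removal efficiency of about 85%, matching the TOTAL row.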
BEST IN CLASS SOFTWARE QUALITY
Defect Origins | Defect Potentials (per FP) | Removal Efficiency | Delivered Defects (per FP)
Requirements | 0.40 | 85% | 0.08
Design | 0.60 | 97% | 0.02
Coding | 1.00 | 99% | 0.01
Documents | 0.40 | 98% | 0.01
Bad Fixes | 0.10 | 95% | 0.01
TOTAL | 2.50 | 96% | 0.13
Most often found in systems software > SEI CMM Level 3
POOR SOFTWARE QUALITY - MALPRACTICE
Defect Origins | Defect Potentials (per FP) | Removal Efficiency | Delivered Defects (per FP)
Requirements | 1.50 | 50% | 0.75
Design | 2.20 | 50% | 1.10
Coding | 2.50 | 80% | 0.50
Documents | 1.00 | 70% | 0.30
Bad Fixes | 0.80 | 50% | 0.40
TOTAL | 8.00 | 62% | 3.05
Most often found in large client-server projects (> 5000 FP).
(Function points show all defect sources - not just coding defects)
SOFTWARE QUALITY BY APPLICATION TYPE
Measure | System Software | Commercial Software | Information Software | Military Software | Outsource Software
Defect Potentials (per FP) | 6.0 | 5.0 | 4.5 | 7.0 | 5.2
Defect Removal Efficiency | 94% | 90% | 73% | 96% | 92%
Delivered Defects (per FP) | 0.4 | 0.5 | 1.2 | 0.3 | 0.4
First Year Discovery Rate | 65% | 70% | 30% | 75% | 60%
First Year Reported Defects | 0.26 | 0.35 | 0.36 | 0.23 | 0.30

Measure | Web Software | Embedded Software | SEI-CMMI3 Software | SEI-CMM1 Software | Overall Average
Defect Potentials (per FP) | 4.0 | 5.5 | 3.0 | 5.5 | 5.1
Defect Removal Efficiency | 72% | 95% | 95% | 73% | 86.7%
Delivered Defects (per FP) | 1.1 | 0.3 | 0.15 | 1.5 | 0.68
First Year Discovery Rate | 95% | 90% | 60% | 35% | 64.4%
First Year Reported Defects | 1.0 | 0.27 | 0.09 | 0.52 | 0.43
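The last two rows of each table are linked the same way: users discover only part of what ships during the first year, so first-year reported defects are delivered defects multiplied by the first-year discovery rate. A small Python check (the figures below simply restate three columns from the table above) illustrates the relationship:

```python
# First-year reported defects = delivered defects * first-year discovery rate.
# A few columns restated from the table above (defects per function point).
by_type = {
    "System Software":      (0.4, 0.65),   # table reports 0.26
    "Information Software": (1.2, 0.30),   # table reports 0.36
    "SEI-CMM1 Software":    (1.5, 0.35),   # table reports 0.52
}

for app_type, (delivered, discovery_rate) in by_type.items():
    first_year_reported = delivered * discovery_rate
    print(f"{app_type:22s} {first_year_reported:.2f}")
```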
GOOD QUALITY RESULTS > 90% SUCCESS RATE
•Formal Inspections (Requirements, Design, and Code)
• Static analysis (for about 25 languages out of 2,500 in all)
• Joint Application Design (JAD)
• Software Six-Sigma methods (tailored for software projects)
• Quality Metrics using function points
• Quality Metrics using IBM’s Orthogonal classification
• Defect Removal Efficiency Measurements
• Automated Defect tracking tools
• Active Quality Assurance (> 5% SQA staff)
• Utilization of TSP/PSP approaches
• Level 3 or higher on the SEI capability maturity model (CMMI)
• Virtualization for reuse and debugging
• Quality Estimation Tools
• Automated Test Support Tools + testing specialists
• Root-Cause Analysis
A PRACTICAL DEFINITION OF SOFTWARE QUALITY
• Low Defect Potentials (< 2.5 per Function Point)
• High Defect Removal Efficiency (> 95%)
• Unambiguous, Stable Requirements (< 2.5% change)
• Explicit Requirements Achieved (> 97.5% achieved)
• High User Satisfaction Ratings (> 90% "excellent"):
  - Installation
  - Ease of learning
  - Ease of use
  - Functionality
  - Compatibility
  - Error handling
  - User information (screens, manuals, tutorials)
  - Customer support
  - Defect repairs
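Purely as an illustration, the numeric thresholds in this definition can be expressed as a simple checklist; the field names and sample figures in the sketch below are hypothetical, not taken from the survey:

```python
# Hypothetical example: check a project against the numeric thresholds above.
# The sample figures are made up for illustration.
project = {
    "defect_potential_per_fp": 2.1,     # defects per function point
    "defect_removal_efficiency": 0.96,  # fraction removed before release
    "requirements_change": 0.02,        # fraction of requirements that changed
    "requirements_achieved": 0.98,      # fraction of explicit requirements met
    "excellent_ratings": 0.92,          # fraction of users rating it "excellent"
}

checks = [
    ("Low defect potential (< 2.5 per FP)",      project["defect_potential_per_fp"] < 2.5),
    ("High removal efficiency (> 95%)",          project["defect_removal_efficiency"] > 0.95),
    ("Stable requirements (< 2.5% change)",      project["requirements_change"] < 0.025),
    ("Explicit requirements achieved (> 97.5%)", project["requirements_achieved"] > 0.975),
    ("High user satisfaction (> 90% excellent)", project["excellent_ratings"] > 0.90),
]

for name, passed in checks:
    print(f"{'PASS' if passed else 'FAIL'}  {name}")
```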
Quality Measurements Have Found:
•Individual programmers -- Less than 50% efficient in finding bugs in their own software
•Normal test steps -- often less than 75% efficient (1 of 4 bugs remain)
•Design Reviews and Code Inspections -- often more than 65% efficient; have topped 90%
•Inspections, static analysis, virtualization, plus formal testing -- often more than 95% efficient; have hit 99% (the compounding is sketched after this list)
•Reviews, Inspections, static analysis, and virtualization -- lower costs and schedules by as much as 30%
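The jump from individual activities in the 50-75% range to combined efficiencies above 95% comes from stacking: each removal step only sees the defects that earlier steps missed, so cumulative efficiency is 1 minus the product of the per-step miss rates. A minimal Python sketch with illustrative per-step figures (assumptions, not survey data) shows the compounding:

```python
# Why stacking removal activities reaches 95%+ cumulative efficiency:
# each step only sees the defects the previous steps missed, so
# combined DRE = 1 - product(1 - DRE_i), assuming roughly independent steps.
# The per-step efficiencies below are illustrative, not survey figures.
steps = {
    "design/code inspections": 0.65,
    "static analysis":         0.55,
    "formal testing":          0.75,
}

remaining = 1.0
for step, efficiency in steps.items():
    remaining *= 1.0 - efficiency
    print(f"after {step:25s} {1.0 - remaining:.1%} removed cumulatively")
```

Three steps that individually remove only 55-75% of defects already push the cumulative figure past 95%.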
Defect origins and their characteristics:
1) Requirements: Hardest to prevent and repair
2) Design: Most severe and pervasive
3) Code: Most numerous; easiest to fix
4) Documentation: Can be serious if ignored
5) Bad Fixes: Very difficult to find
6) Bad Test Cases: Common and troublesome
7) Data quality: Common but hard to measure
8) Web content: Unmeasured to date
Severity 1: TOTAL FAILURES -- 1% at release
Severity 2: MAJOR PROBLEMS -- 20% at release
Severity 3: MINOR PROBLEMS -- 35% at release
Severity 4: COSMETIC ERRORS -- 44% at release
INVALID: USER OR SYSTEM ERRORS -- 15% of reports
DUPLICATE: MULTIPLE REPORTS -- 30% of reports
ABEYANT: CAN'T RECREATE ERROR -- 5% of reports
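Taken together, the report-handling percentages imply that only about half of incoming customer reports correspond to valid, unique, reproducible defects. A short Python sketch of that arithmetic follows; the "valid and unique" remainder is an inference from the three percentages above, not a figure stated in the slides:

```python
# Of every 100 incoming defect reports, how many are actionable?
# Percentages restated from the breakdown above.
total_reports = 100
invalid   = 0.15 * total_reports   # user or system errors
duplicate = 0.30 * total_reports   # multiple reports of the same defect
abeyant   = 0.05 * total_reports   # cannot be recreated

valid_unique = total_reports - invalid - duplicate - abeyant
print(f"invalid: {invalid:.0f}, duplicate: {duplicate:.0f}, "
      f"abeyant: {abeyant:.0f}, valid & unique: {valid_unique:.0f}")
```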