Types of tests

 

1. BVT – build verification test

A brief test run on daily builds to determine if anything major is broken

Does not do deep functionality checking

Gets more stringent as the dev cycle goes on

If the BVT passes, the build is accepted into testing

If the BVT fails, the build is declared “Failed” and is not accepted for testing

Purpose is to automatically figure out if the build is worth testing
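As an illustration, a BVT gate can be sketched as a handful of shallow "smoke" checks run against each daily build. This is a hypothetical sketch: the check names and the `build_passes_bvt` helper are made up for illustration, not taken from any real build system.

```python
# Hedged sketch of a BVT gate: shallow smoke checks, no deep
# functionality testing. All check names here are hypothetical.

def build_passes_bvt(checks):
    """Accept the build into testing only if every smoke check passes."""
    results = {name: check() for name, check in checks.items()}
    return all(results.values()), results

# Imaginary smoke checks for a daily build.
smoke_checks = {
    "binary_launches": lambda: True,
    "main_window_opens": lambda: True,
    "can_open_document": lambda: False,  # a major break: build is rejected
}

accepted, results = build_passes_bvt(smoke_checks)
print("Build accepted:", accepted)
```

Because any single failed check rejects the whole build, the gate stays cheap to run while still answering the one question it exists for: is this build worth testing?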

 

2. Acceptance test

A variable length test run to determine if a minimum level of functionality is available after changes by dev

Between brief BVT and deep functionality

If it passes, code can be checked in

If it fails, dev needs to do more work before checking in

 

3. Functionality test

A longer comprehensive test of all functionality available in a test area

Goal is:

100% function coverage

>75% branch coverage

But NOT 100% code coverage

Boundary conditions and equivalence class partitioning may be used

These tests often form the basis of stress tests
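To illustrate boundary conditions and equivalence class partitioning, here is a minimal sketch using a hypothetical `validate_age` function (the function and its 0–120 valid range are assumptions made for the example):

```python
# Hypothetical unit under test: accepts ages 0..120 inclusive.
def validate_age(age):
    return 0 <= age <= 120

# Equivalence classes: below range, in range, above range.
# Boundary values: on, and just outside, each edge of the valid class.
cases = [
    (-1, False),   # just below the lower boundary
    (0, True),     # on the lower boundary
    (60, True),    # representative value from the valid class
    (120, True),   # on the upper boundary
    (121, False),  # just above the upper boundary
]

for value, expected in cases:
    assert validate_age(value) == expected
```

One representative per equivalence class plus the boundary values gives high fault-finding power without attempting exhaustive (100% code coverage) testing.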

 

4. Unit testing

Unit = smallest functional block of code

Functionality testing applied at the unit level
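A minimal sketch of a unit test, where the "unit" is a single function tested in isolation (`word_count` is a made-up example):

```python
# Hypothetical unit under test: count whitespace-separated words.
def word_count(text):
    return len(text.split())

def test_word_count_empty():
    assert word_count("") == 0

def test_word_count_simple():
    assert word_count("hello world") == 2

# A runner such as pytest would discover these automatically;
# they are called directly here so the sketch is self-contained.
test_word_count_empty()
test_word_count_simple()
```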

 

5. Component testing

Component = units of code interacting

Also called integration testing

 

6. System testing

System = components interacting; also called end-to-end testing

 

7. Stress test

A test that runs until the code fails

Does not necessarily simulate realistic user behavior

Scalable; stress level from easy to very difficult

Scopable; can be used from unit to system level

A standard, essential metric for judging ship quality

Metric is checked over time to spot stability degradation

A 72-hour run is a common benchmark
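The scalable, scopable nature of a stress test can be sketched as a time-budgeted loop. This is a hedged sketch: the duration here is a fraction of a second so the example runs instantly, whereas a real run would use hours, and the `churn` operation is invented for illustration.

```python
import time

def stress(operation, duration_seconds, level=1):
    """Repeat `operation` until the time budget expires or it fails.
    Returns (iterations_completed, failure_or_None)."""
    deadline = time.monotonic() + duration_seconds
    iterations = 0
    while time.monotonic() < deadline:
        try:
            for _ in range(level):   # "level" scales the work per iteration
                operation()
        except Exception as exc:
            return iterations, exc   # code failed: record where and why
        iterations += 1
    return iterations, None

# Hypothetical workload: allocate and shrink a list repeatedly.
def churn():
    data = list(range(1000))
    del data[::2]

done, failure = stress(churn, duration_seconds=0.05, level=2)
```

The `level` knob makes the stress scalable (easy to very difficult), and because `operation` can be anything from a single function to a full end-to-end scenario, the same loop is scopable from unit to system level.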

 

8. Configuration test

A test to check compatibility with hardware and software

Uses a test matrix to manage large numbers of systems, devices and target platforms

Designed to catch major classes of incompatible devices
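A configuration test matrix is, at its core, a cross product of the dimensions under test. A minimal sketch (the platform names are placeholders; real matrices are pruned to a covering subset because the full product grows multiplicatively):

```python
from itertools import product

os_versions = ["Windows 11", "Ubuntu 22.04", "macOS 14"]
browsers = ["Chrome", "Firefox"]
architectures = ["x64", "arm64"]

# Full cross product: 3 OSes * 2 browsers * 2 architectures = 12 configs.
matrix = list(product(os_versions, browsers, architectures))
```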

 

9. Regression testing

Re-verify all critical bug fixes at critical milestones, to see the impact of new changes
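One way to organize this is a registry that ties each regression test to the bug it re-verifies, so every critical bug can be regressed at a milestone. This is a hypothetical sketch; the `regression` decorator and the bug id are invented for illustration.

```python
# Hypothetical registry mapping bug ids to their regression checks.
regression_suite = {}

def regression(bug_id):
    """Register a check against the bug it re-verifies."""
    def register(fn):
        regression_suite[bug_id] = fn
        return fn
    return register

@regression(bug_id=1234)   # made-up bug number
def test_sort_order():
    assert sorted([3, 1, 2]) == [1, 2, 3]

# At a milestone, re-run every critical bug's check and collect failures.
failures = []
for bug, check in regression_suite.items():
    try:
        check()
    except AssertionError:
        failures.append(bug)
```

An empty `failures` list means the new changes did not reintroduce any of the registered bugs.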

 

10. Documentation and help file test

 

11. Security test

 

12. Installation test

 


 

### Pytest Framework Mark Usage and Examples

In the pytest framework, `mark` decorators provide a way to categorize tests or modify their behavior. This allows developers to selectively run specific subsets of tests based on custom criteria.

#### Basic Syntax

Apply marks to test functions with decorator syntax:

```python
import pytest

@pytest.mark.<marker_name>   # replace <marker_name> with an actual marker
def test_example():
    assert True
```

Marks can be applied directly above individual test functions without parentheses when no parameters are needed.

#### Common Use Cases

##### Skipping Tests Conditionally

Tests marked with `skipif` will not execute when the condition passed to the marker is true:

```python
import sys
import pytest

@pytest.mark.skipif(sys.version_info < (3, 7),
                    reason="requires python3.7 or higher")
def test_python_version():
    pass
```

##### Expected Failures

Marking tests that are expected to fail helps track known issues over time until they are eventually fixed:

```python
import pytest

@pytest.mark.xfail(reason="known issue")
def test_known_issue():
    raise Exception("This is an expected failure.")
```

##### Custom Categories

User-defined markers group similar tests together for easier management during development. Register them in `conftest.py` so pytest does not warn about unknown marks:

```python
# conftest.py -- register the custom marker here.
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "slow: mark test as slow."
    )
```

```python
# test_example.py -- apply the marker in a test module.
import time
import pytest

@pytest.mark.slow
def test_long_running_process():
    """A hypothetical long-running process."""
    time.sleep(5)
    assert True
```

From the command line, filter out all slow tests:

```bash
pytest -v -m 'not slow'
```

Or run only the tests tagged as slow:

```bash
pytest -v -m 'slow'
```

Related questions:

1. How does one register additional markers beyond what comes pre-installed?
2. Can multiple markers coexist on a single test definition simultaneously?
3. What mechanisms exist inside pytest for handling parameterized testing alongside marking features?
4. Is there any impact from applying too many different kinds of marks across large projects?