WebEx Ph 408.435.7000
307 W Tasman Fax 408.435.7004
San Jose, CA 95134
QAForum Test Cases Guideline Document
Author: Vincyye Chan
Last updated: 05/11/2004
Revision: 2.1
Revision History
Date      Author        Section(s)  Description of Change(s)
04/29/04  Vincyye Chan  All         Initial draft
05/03/04  Vincyye Chan  All         Revised based on feedback
05/03/04  Vincyye Chan  3.1         Add test Description template
                        5.2         Add JavaClient and Agent to Table 5.2
                        3.2.1       Add note for unused options in Priority
05/07/04  Vincyye Chan  All         Add Priority 1, 2, 3 to modules
05/11/04  Vincyye Chan  All         Add Summary and TA information
05/12/04  Vincyye Chan  3.3.2       Add Integration Suite OS/Browser selection
                        4.2         Add Spec verification
                        3.2.1.1     Update Priority 1, 2, 3 definitions and give examples
PREFACE
Summary: The QAForum Test Cases Guideline Document gives overall guidance for creating test cases, test modules, test suites, test plans and test runs in QAForum, and defines how they correspond to the test phases defined in the QA Test Strategy document. The definitions in this document serve as the fundamental QAForum test case guideline and should be well understood and followed when creating test cases in QAForum.
Document Version Control: It is the reader's responsibility to ensure they have the latest version of this document. Questions should be directed to the owner of this document, or the project manager.
PREFACE
1. Introduction
1.1 Purpose
1.2 Overview
2. QAForum Structure Overview
3. QAForum Test Cases Guideline
3.1 Creating Test Cases
3.2 Creating Test Modules
3.3 Creating Test Suites
3.4 Creating Test Plan/Run
4. QAForum Test Run and Test Cycle Phases Correlation
4.1 Phase I Feature and Functional Interoperability
4.2 Phase II System Benchmarking
4.3 Phase III User Acceptance
4.4 Bug Validation
4.5 Test Automation
4.6 Spec Verification
4.7 Phase I, II, III Test Strategy Summary
5. Appendix
5.1 QAForum Structure Example – Phase I Run
5.2 Naming Convention Reference Table
5.3 Choosing the Objective
6. Reference
7. Glossary
1. Introduction
The QAForum Test Cases Guideline Document standardizes the QAForum test case structure and clarifies the definitions and processes from test cases through test runs.
1.1 Purpose
The main purpose of this document is to provide clear instructions to the QA team on creating test cases, test modules, test suites, test plans and test runs. It also defines all the available options and provides examples to guide the QA engineer through the process.
Another important purpose of this document is to define the correlation between the Phase I, II, III testing cycles and the QAForum test runs.
1.2 Overview
For each Phase of a release, QA needs to create a corresponding test Run in QAForum for tracking. Each Run is based on a Plan; a Plan consists of test Suites; a Suite groups multiple test Modules; and each test Module is composed of tens of test cases. Every test Run QA creates should be associated with a particular test Phase.
2. QAForum Structure Overview
Test Run
  Test Plan
    Test Suite A [OS/Browser combinations]
      Test Module AA [Priority 1, Priority 2, Priority 3]
        Test Case 1 [Precondition, Steps, Expected Result], [Objective]
        Test Case 2
        ...
      Test Module BB
    Test Suite B
      Test Module XX
      Test Module YY
      Test Module AA
    Test Suite C
      Test Module DD
      Test Module EE
      ...
3. QAForum Test Cases Guideline
3.1 Creating Test Cases
3.1.1 Test Case Objectives
For each test case entered, an objective must be selected to identify the purpose of the test case. The possible options for objectives are as follows:
§ Functionality
Test Cases with this Objective are used to test the fundamental functionality of the feature.
E.g. Send a Chat message; create a Poll
§ Functional Interoperability
Test Cases with this Objective are used to test the behavior when combining two or more features together. This is the horizontal level of testing for the product.
E.g. Chat+Video
§ Product Interface (System/Product Integration)
Test Cases with this Objective are used to test the integration from page to client and the flow of the product (end-to-end testing). This is considered the vertical level of testing for a product.
E.g. Host Schedules a meeting -> Host Starts the meeting -> Attendees Join meeting -> Host Loads some document -> Host Starts Polling -> Attendee Chat with Host -> Host End meeting
§ Error Handling
Test Cases with this Objective are used to test negative, exception and out-of-boundary cases.
E.g. Input a Character for a numeric field
§ Backward Compatibility
Test Cases with this Objective are used to verify that files saved with an older version still work in the newer version.
E.g. Polling files created from older version can still be loaded into the newer version
§ Upgrade
Test Cases with this Objective are used to verify that a site still functions normally after the Super Admin and the site are upgraded to a later version.
E.g. T19 -> T20 upgrade
§ Migration
Test Cases with this Objective are Migration-specific test cases beyond the general site testing.
E.g. SA default value matches site behavior
§ Use Cases
Test Cases with this Objective are used to test user scenarios provided by PM, tech support, etc. China team creates the general use cases and US leads will provide customer use cases.
§ Performance
Test Cases with this Objective are used to establish the performance Benchmark for our product.
E.g. Performance data on Load tests and Stress tests
§ Security
Test Cases with this Objective are used to test the Product Security.
E.g. JMA test cases, login, change meeting ID to join meeting
3.1.2 Test Case Structure
To construct test cases in a clear and consistent way, we must specify the following in the description:
1. [Precondition] – defines the entry criteria for the test case (e.g. super-admin option, lab computer setup)
2. [Steps] – clearly lists each operation needed for this test case, entered in the correct order
3. [Expected Result] – defines what the output should be, based on the spec
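As an illustration, a hypothetical Chat test case following this structure might read as follows (the feature, options and values here are examples only, not taken from an actual module):
[Precondition] Chat option is enabled in Super Admin; Host and one Attendee have joined a meeting
[Steps] 1. Host opens the Chat panel; 2. Host selects All Attendees as the recipient; 3. Host types a short message and clicks Send
[Expected Result] The message appears in the Attendee's Chat window with the Host's name, as defined in the spec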
3.1.3 Guideline
When creating test cases, the following guidelines apply:
§ Select the correct Objective for each test case
§ Choose the right test Category (see the next section for the available Categories)
§ Mark whether the test case can be automated
§ Make the description of the test case precise and clear
Note: In all descriptions, please avoid ambiguous wordings such as "some" or "may be", and make sure the meaning of the wordings used is clear.
3.2 Creating Test Modules
A test Module consists of many individual test cases that either cover a particular feature (e.g. Chat) or share a common test objective (e.g. Performance).
3.2.1 Module Priority
3.2.1.1 Priority Definition
There are three priorities to choose from when creating a test case for a module:
1. Priority 1
Test cases in this priority cover the core cases for the feature module (the main functionality of the feature). These test cases will be executed in the Regression test cycle or for Certification tests, and they are also considered the priority test cases for automation.
E.g. In Polling, creating a new Poll with Single/Multiple/Short answers and starting/ending a Poll are Priority 1 cases
(Consider what Polling does: the main functionality of Polling)
E.g. File Browse: Transfer a file from remote to local and from local to remote, abort a file transfer
2. Priority 2
The majority of the test cases for a module should belong to this priority. Test cases in this priority test the secondary functionality of the feature (secondary functionality means that even without these functions, the main functionality of the feature can still work).
E.g. Polling: Change type, Options, Save Poll, etc. are Priority 2 cases
(Priority 2 cases are secondary functionality of Polling; without these functions working, Polling can still run. For example, even if Save Poll does not work, we can still create/start/end a Poll)
E.g. File Browse: Transfer multiple files, abort multiple transfers, transfer a large file
3. Priority 3
Test cases in this priority cover low-usage functionality, corner cases and exception cases. In addition, most of the error handling cases should belong to this priority.
E.g. Polling: Disable Timer, a Poll question containing more than 20 multiple choices, moving up and down between choices, a multiple choice Poll question without any choices, etc. are all Priority 3 cases
(Priority 3 cases are corner cases or seldom-used functionality, as well as exception and error handling cases)
E.g. File Browse: Transfer different types of files, transfer a file that already exists, close the transfer window while a transfer is in progress
3.2.1.2 Test Cases Ratios
Priority 1   30% of all cases
Priority 2   40% of all cases
Priority 3   30% of all cases
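As a worked example (the module size here is purely illustrative), a feature module containing 50 test cases would target roughly 15 Priority 1 cases, 20 Priority 2 cases and 15 Priority 3 cases.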
3.2.2 Naming Convention
All test Modules must follow the same naming convention in order to distinguish between Page, Client, and their sublevel features.
Definition: [Product Name]-[Product Layer]-[Feature Name]-{Feature Subset}
Example:
SC-Page-SiteAdmin-EditUser
SC-Client-Chat
SC-Integration-Performance-Stress
SC-Page-FI
SC-Client-FI
SC-Integration-PI
Note: Refer to Appendix table 5.2 for Product/Product Layer/Feature names
3.2.3 Guideline
When creating test Modules, the following guidelines apply:
§ Add all test cases in the Module into the correct Branch (e.g. T20)
§ Use the correct Naming Convention when creating the Module name
3.3 Creating Test Suites
A Suite is used to organize the test Modules based on different intentions. It also defines on which platform(s) these Modules will be executed.
3.3.1 Suites Categories
A test Suite can be categorized in one of these categories:
1. Feature Level
The Feature Level test Suites are organized into Level I Suite, Level II Suite and Level III Suite. The Level I, II and III test Suites are run against Level 1, 2 and 3 OS/Browser combinations respectively.
2. Integration Level (FI, PI, MT, Performance, etc.)
Focuses on integration-type testing of modules and the product, as well as performance testing.
3. TA
Test Automation cases for detecting high-level issues in packages and for regression checking during bug fixes.
3.3.2 Basic Test Suites for Creating Run
For each release, QA leads need to create 9 test suites: 3 for feature level, 3 for integration level and 3 for TA.
Level I Feature Suite
§ Includes ALL Feature modules* and selects Level 1 Browser/OS combinations to run on
§ Priority 1&2&3 test cases for New Features
§ Priority 1&2 test cases for Old Features
Level II Feature Suite
§ Includes ALL Feature modules* and selects Level 2 Browser/OS combinations
§ Priority 1&2 test cases for both New and Old Features
Level III Feature Suite
§ Includes minimum function Feature modules* and selects Level 3 Browser/OS combinations
§ Only covers basic test cases such as join/leave meeting and verifying that the major components can start (all cases are covered by the SVT suite)
Phase I Integration Suite
§ Select Level 1 Browser/OS combinations
§ Includes Priority 1&2 test cases in Functional Interoperability module
§ Includes Priority 1&2 test cases in Product Interface module
Phase II Integration Suite
§ Select Level 1 Browser/OS combinations
§ Select all Priority 3 test cases in Functional Interoperability module
§ Select all Priority 3 test cases in Product Interface module
§ Include all Use Cases in all modules including the CustomerUseCases module
§ Include all Backward Compatibility test cases in each module
§ Include all test cases in Upgrade module
§ Include all test cases in Performance and Stress test module
§ Include all test cases in Product Installation module
Phase III Integration Suite
§ Select Level 1 Browser/OS combinations
§ Select all Priority 1&2 test cases in Functional Interoperability module
§ Select all Priority 1&2 test cases in Product Interface module
§ Select all test cases in MT and Priority 1 test cases in all Modules
SVT Suite
§ The very basic functionality test cases being automated by TA
BVT Suite
§ A larger scope of basic functionality test cases being automated by TA
URL API Suite
§ All automated test cases in URL API module
* Selection should be based on the Level 1, 2, 3 features in the Functional Priority Document
3.4 Creating Test Plan/Run
3.4.1 Test Plan
A test Plan defines which test Suites are to be executed in a test Run. QA should create one test Plan for each testing cycle.
Phase I Run – 100% Product walkthrough, find 70% bugs**
Includes
§ Level I Feature Suite
§ Level II Feature Suite
§ Level III Feature Suite
§ Phase I Integration Suite
Phase II Run – Focus on the System Benchmarking and use cases testing**
Includes
§ Phase II Integration Suite
Phase III Run – Focus on finding any regression issue caused by bug fix**
Includes
§ Phase III Integration Suite
TA Run
Includes
§ SVT Suite
§ BVT Suite
§ URL API Suite
** For the detailed definition of each Phase, please refer to the QA Test Strategy document.
3.4.2 Test Run
A test Run is created based on a Plan; a Plan defines which Suites to run. QA needs to create one test Run for each of the test cycles (Phase I, II, III) as well as the TA Run. The next section explains the relationship between test Runs and the three test Phases.
4. QAForum Test Run and Test Cycle Phases Correlation
4.1 Phase I Feature and Functional Interoperability
Phase I will execute test cases in each of the following areas:
1. Feature test
New Features: Priority 1&2&3 test cases (Level I, II, III Feature Suite)
Old Features: All Priority 1&2 test cases (Level I, II, III Feature Suite)
2. Functional Interoperability test
Phase I Integration Suite
3. Product Interface test
Phase I Integration Suite
4.2 Phase II System Benchmarking
Phase II will execute test cases in each of the following areas:
1. Package Installation
Phase II Integration Suite
2. Use Case
Phase II Integration Suite
3. Performance Benchmark and Stress Test
Phase II Integration Suite
4. Backward Compatibility and Upgrade Test
Phase II Integration Suite
5. Functional Interoperability and Product Interface Subset
Phase II Integration Suite
4.3 Phase III User Acceptance
1. Regression Test
Phase III Integration Suite; Feature Suites (including any feature module(s); each team can decide if this is needed based on the bug fix quality and scope at CF)
2. Migration Test (MT)
Phase III Integration Suite
4.4 Bug Validation
During each execution Phase, by default QA should validate all fixed bugs and run regression tests on the affected areas to detect regressions.
4.5 Test Automation
TA will be part of every test Phase. All TA scripts will be run multiple times, depending on their purpose.
SVT Suite – Run against every All-In-One package to ensure the package quality
BVT Suite – Run during every build to detect regression
URL API Suite – Run periodically or during each milestone
4.6 Spec Verification
Spec verification does not have a separate suite but should be included in the test phases.
UI and Overall Layout Test – should be included in Phase I testing. It can either have a separate module or be verified with the spec in hand.
Wordings and Text Layout – should be included in Phase II testing by going word by word and paragraph by paragraph through all the messages and text, based on the spec.
4.7 Phase I, II, III Test Strategy Summary
Test Case Priority ratios: Priority 1 = 30%, Priority 2 = 40%, Priority 3 = 30% of all cases
OS/Browser Levels 1, 2, 3 are defined in WBX005726

Phase I
  Feature Suite (FS)
    Level I     All feature modules   NEW: Priority 1&2&3; OLD: Priority 1&2   Level 1 OS/Browsers
    Level II    All feature modules   NEW and OLD: Priority 1&2                Level 2 OS/Browsers
    Level III   Minimum modules       SVT cases                                Level 3 OS/Browsers
  Integration Suite (IS)
    Phase I Integration   FI, PI      Priority 1&2                             Level 1 OS/Browsers
  TA
    SVT, BVT and URL API
CF
Phase II
  Integration Suite (IS)
    Phase II Integration  FI, PI      Priority 3                               Level 1 OS/Browsers
                          Use Cases, BC, Upgrade, Performance, Stress, Installation
  TA
    SVT, BVT and URL API
Phase III
  Integration Suite (IS)
    Phase III Integration FI, PI      Priority 1&2                             Level 1 OS/Browsers
                          MT
    Selected Feature Modules          NEW/OLD
  TA
    SVT, BVT and URL API
ER
5. Appendix
5.1 QAForum Structure Example – Phase I Run
Run: Phase I Test Run
  Plan: Phase I Test Plan
    Level I Suite
      TC-Page-Calendar
        Case 1: Session information of a single training session
        Case 2: Listing of multiple training sessions
        Case 3: ...
      TC-Page-SiteAdmin-Add/EditUser
        Case 1: Add a new user
        Case 2: Modify user information
        Case 3: ...
      TC-Client-BO
        Case 1: Host allows a BO session
        Case 2: Host invites an attendee to a BO session
        Case 3: ...
      ...
    Level II Suite
      TC-Page-Calendar
        Case 1: Session information of a single training session
        Case 2: ...
      TC-Page-SiteAdmin-Add/EditUser
        Case 1: Add a new user
        Case 2: ...
      TC-Client-BO
        Case 1: Host allows a BO session
        Case 2: ...
      ...
    Level III Suite
      TC-Page-InstantSession
        Case 1: Start an instant session
      TC-Page-Start/Join Meeting
        Case 1: Join a session from Live Sessions
      TC-Client-AS
        Case 1: Presenter shares an MS Word application
      ...
    Phase I Integration Suite
      TC-Client-FI
        Case 1: Start Chat while doing FT with a large file (>100K)
        Case 2: In a BO session but continue main session telephony
        Case 3: ...
      TC-Integration-PI
        Case 1: Set the site to allow only 5 attendees in each session -> Host starts a session containing Test -> Host starts telephony -> Host launches Test
        Case 2: Site uses custom session type and attendee e-Commerce -> Host starts a recorded session and a registered attendee joins -> Host does App Share and Doc Share -> Host ends meeting -> Attendee accesses the recorded session
        Case 3: ...
      ...
5.2 Naming Convention Reference Table
Level          Names That Can Be Specified        Remarks
Product        MC
               TC
               EC
               SC
               SMT                                SMARTtech is a standalone product
               $C
               CC
               EE                                 Includes MyWebex, login, etc.
               WCC                                DB, etc.
Product Layer  Page
               Client
               JavaClient
               Agent                              Used by SC only
               PE                                 Used by MC only
               Integration
Feature Name   Performance                        Only used by Integration Product Layer
               CustomerUseCases                   Only used by Integration Product Layer
               Functional Interoperability (FI)   Only used by Integration Product Layer
               Product Interface (PI)             Only used by Integration Product Layer
               Migration Test (MT)                Only used by Integration Product Layer
               Installation                       Only used by Integration Product Layer
               Upgrade                            Only used by Integration Product Layer
               I18n
               L10N
               ...
Examples:
MC-Client-i18n
MC-Page-L10N
MC-PE-Email
MC-Integration-Performance-Load
MC-Integration-FI
MC-Integration-FI-CrossPlatform
MC-Integration-PI
MC-Integration-PI-CrossPlatform
MC-JavaClient-AppShare
MC-Client-ClientDownload
5.3 Choosing the Objective
Module Type                                  Objectives That Can Be Used
Feature Module                               Functionality
(e.g. MC-Client-Chat)                        Use Cases
                                             Error Handling
                                             Backward Compatibility
                                             Security
Functional Interoperability Module           Functional Interoperability
(e.g. MC-Integration-FI)                     Error Handling
                                             Use Cases
                                             Security
Product Interface                            Product Interface
(e.g. MC-Integration-PI)                     Error Handling
                                             Use Cases
                                             Security
Performance                                  Performance
(e.g. MC-Integration-Performance-Stress)
Upgrade                                      Upgrade
(e.g. MC-Integration-Upgrade)                Error Handling
                                             Use Cases
                                             Security
Installation                                 Functionality
(e.g. MC-Integration-Installation)           Error Handling
                                             Use Cases
                                             Security
Migration                                    Migration
(e.g. MC-Integration-Migration)              Error Handling
6. Reference
§ QA Test Strategy – defines the QA test life cycle and the detailed meaning of each cycle during the whole release.
§ Functional Priority Document – details which features belong to each Level of testing (Level 1, 2 and 3) and lists the OS/Browser combinations in each Level. Each Service should have its own Functional Priority Document.
§ Systems and Browsers (WBX005726) – defines the OS/Browser combinations that need to be tested for each release.
7. Glossary
FI   Functional Interoperability
PI   Product Interface
MT   Migration Test
BC   Backward Compatibility