Scrum + Manual Testing Methodology (for Web Apps)


I believe there are two wrong ways to add formal QA to Scrum: either give up manual testing skills and switch to automation, or give up the agile principle of quick feedback and test one iteration behind. There is a better way, and I have seen some articles circling around it with generic phrases like collaboration, optimization and exploratory testing, but none describing the methodology.

What follows is theoretical research (based on different resources) trying to define such a methodology. The main idea is to add a new role (besides the team, the Scrum master and the product owner) called the tester. The methodology sounds very logical. Why has nothing like this been published before? Am I missing a lot of issues with this methodology? Is anyone willing to apply it? Please e-mail ojnjars@inbox.lv if you are willing to contribute to this research.


Context – Assumptions 

To make things easier, we make several assumptions that clarify the specifics of the development project:

1. Development follows Scrum with two-week sprints.

2. The software is web based, with data stored in a database. The tester's role includes testing the software in a "QA environment", while developers continue to do unit testing in their own environments, which are not as close to production.

3. It is decided that deployment to the QA environment should follow the same manual installation process that will take place in the production environment. It also optionally requires a smart decision on whether to redeploy the data in the database. The process may take up to 30 minutes and is never done this way by developers in their own environments.

4. The process of creating the installation package (the build process) may also take up to 30 minutes, so it is executed once per night (a nightly build), as sketched below.
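
The nightly build could be as simple as a scheduled packaging script. Below is a minimal sketch, assuming a Python script run once per night by the operating system's scheduler; the paths and the "webapp" project name are hypothetical and not part of the original methodology.

```python
# Hypothetical nightly-build sketch: packages the web application once per night.
# SOURCE_DIR, PACKAGE_DIR and the "webapp" name are illustrative assumptions.
import shutil
from datetime import date
from pathlib import Path

SOURCE_DIR = Path("/srv/webapp/src")    # assumed checkout of the web application
PACKAGE_DIR = Path("/builds/nightly")   # assumed drop folder for installation packages

def nightly_build() -> Path:
    """Create tonight's installation package as a zip archive."""
    PACKAGE_DIR.mkdir(parents=True, exist_ok=True)
    package_name = PACKAGE_DIR / f"webapp-{date.today().isoformat()}"
    # make_archive appends ".zip" and returns the resulting path as a string.
    return Path(shutil.make_archive(str(package_name), "zip", root_dir=str(SOURCE_DIR)))

if __name__ == "__main__":
    print(f"Nightly package created: {nightly_build()}")
```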


Day 1. Sprint Planning 

During the first day of a sprint, the sprint planning meeting, the tester participates in order to help define acceptance criteria (by asking "what if" questions of the product owner) and to estimate the testing effort for stories. It may turn out that some stories require little development effort but a lot of testing effort. Story selection may be changed to avoid allowing too many such stories into one sprint. Alternatively, part of the testing activities may be postponed by creating a separate story (adding what I would call "quality debt") and adding it to the backlog.
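
To make test-heavy stories visible at planning, the acceptance criteria and effort estimates could be captured in a simple structure. The following is an illustrative sketch only, assuming a hypothetical "password reset" story; the effort-point rule of thumb is my assumption, not part of the methodology.

```python
# Hypothetical planning artifact: a story with acceptance criteria and a
# testing-effort estimate, so test-heavy stories are visible at planning.
from dataclasses import dataclass, field

@dataclass
class Story:
    title: str
    acceptance_criteria: list[str] = field(default_factory=list)
    dev_effort_points: int = 0
    test_effort_points: int = 0

    @property
    def test_heavy(self) -> bool:
        # Assumed rule of thumb: flag stories whose testing effort
        # exceeds their development effort.
        return self.test_effort_points > self.dev_effort_points

password_reset = Story(
    title="Password reset by e-mail",   # hypothetical story
    acceptance_criteria=[
        "A reset link is e-mailed to a registered address",
        "The link expires after 24 hours",
        "An unregistered address does not reveal whether the account exists",
    ],
    dev_effort_points=2,
    test_effort_points=5,
)
print(password_reset.test_heavy)  # True -> limit such stories or record quality debt
```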


Day 2. Test Case Drafting 

The tester works "one day behind" the team. The second day of a sprint is dedicated by the tester to drafting an initial set of test cases for each feature included in the sprint.
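
The day-2 draft could be as lightweight as a set of named but empty test skeletons to be completed during later exploratory testing. A minimal sketch, assuming pytest and reusing the hypothetical password-reset story from above:

```python
# Hypothetical day-2 draft test cases: skeletons only, to be completed and
# extended during exploratory testing on later days.
import pytest

@pytest.mark.skip(reason="draft: feature not yet deployed to the QA environment")
def test_reset_link_sent_to_registered_address():
    ...

@pytest.mark.skip(reason="draft: expiry behaviour to be explored with the developer")
def test_reset_link_expires_after_24_hours():
    ...

@pytest.mark.skip(reason="draft: negative case from a 'what if' question at planning")
def test_unregistered_address_does_not_reveal_account_existence():
    ...
```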


Day 3-8. Normal days

Each morning, starting on day 3 of the sprint, the tester installs the latest nightly build into the QA environment (manually, using the installation procedure/instructions supplied by the development team) and runs fast smoke tests; if they fail, emergency fixes are applied by the team.
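
A fast smoke test could be a short script that checks a handful of key pages on the freshly installed build. Below is a minimal sketch, assuming a Python script using the requests library; the QA URL and the endpoint list are hypothetical.

```python
# Hypothetical morning smoke test for the freshly installed nightly build.
import sys
import requests

QA_BASE_URL = "https://qa.example.com"               # assumed QA environment URL
SMOKE_ENDPOINTS = ["/", "/login", "/api/health"]     # assumed key pages/endpoints

def run_smoke_tests() -> bool:
    ok = True
    for path in SMOKE_ENDPOINTS:
        try:
            response = requests.get(QA_BASE_URL + path, timeout=10)
            passed = response.status_code == 200
        except requests.RequestException:
            passed = False
        print(f"{'PASS' if passed else 'FAIL'}  GET {path}")
        ok = ok and passed
    return ok

if __name__ == "__main__":
    # A non-zero exit signals the team that an emergency fix is needed
    # before the day's feature testing continues.
    sys.exit(0 if run_smoke_tests() else 1)
```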

Features developed yesterday are tested by the tester today. As needed, the tester communicates with each developer to transfer knowledge of the feature's functionality. The tester must also update (complete) the test cases in the process of testing, as more test ideas may come up during exploratory testing. If testing a feature requires too much effort, or the tester discovers too many ideas and cannot finish the testing within the day, he creates a new story ticket for further investigation of the feature. If that ticket is not closed by the end of the sprint, it becomes "quality debt" and must be announced to the product owner for prioritization.

Because developers do unit testing, most features should pass the acceptance criteria. However, if for any reason they do not pass in the QA environment, the tester may return the ticket to the team. If the acceptance criteria are met but there are additional bugs around, the tester reports them separately and sends the initial story on to end-to-end testing. The tester also has to do a wider investigation (based on the project's quality goals) and report any usability suggestions, performance, security or other potential threats. Such bugs must be assigned to the product owner if they do not directly break the acceptance criteria of a story. Bugs that the (developer) team disagrees with, or that appear to require too much effort, are also assigned to the product owner. They have to be addressed during the sprint retrospective meeting. So during the sprint retrospective no feature can be demonstrated if it has an open bug against it assigned to anyone but the product owner.


Day 9. The QA day

Because the tester works one day behind, day 8 of the sprint is the last day of new feature development. That is why day 9 can be dedicated by the team to bug fixing, refactoring and analysis of new features which might be included in the next sprint, i.e. preparation for the next sprint.


Day 10. Retrospective meeting

In order to demonstrate a feature at the retrospective meeting, it should be implemented and tested, with all bugs either fixed or assigned to the product owner for clarification (otherwise the feature is postponed to the next sprint). The tester should demonstrate that features meet the acceptance criteria and present the issues assigned to the product owner, if any. The product owner should decide on and prioritize those features/defects. The team participates in the meeting to explain their performance and, if necessary, counter the tester's negative opinion. The tester also describes changes to the "quality debt": what was not tested due to lack of time. The product owner may prioritize the postponed tests or even allow skipping them.


More QA stuff after the sprint

Test cases created during the sprint shall be peer-reviewed by another tester at any later time, for example by an end-to-end tester if there is one. Ideally the review should be done during, or by the end of, day 10 so that the tester has the feedback before the next planning meeting, but this may not be possible. In any case, the tester is not forced to respond to the review feedback (fix test cases and run additional tests, if identified) until the end of the sprint, so that the tester's commitment made during sprint planning is not compromised.

If more tests are identified during the review, they must be executed later, and any defects discovered should be assigned to the product owner (added to the backlog).


Testing outside sprints 

Still, there are types of tests that are not directly related to the functionality implemented in sprints, for example non-functional tests to validate that the chosen architecture satisfies high-availability requirements. The tester may not have enough skills to do performance testing himself, so a separate team or person may be assigned to do performance testing on an agreed schedule.
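
As an illustration only (the methodology does not prescribe a tool), a scheduled performance test might start from a small Locust scenario run against a dedicated environment; the host and endpoints below are hypothetical.

```python
# Hypothetical load-test sketch using Locust, saved as locustfile.py and run e.g.:
#   locust -f locustfile.py --host=https://perf.example.com
from locust import HttpUser, task, between

class WebAppUser(HttpUser):
    wait_time = between(1, 3)   # seconds between simulated user actions

    @task
    def open_dashboard(self):
        self.client.get("/dashboard")                       # assumed high-traffic page

    @task
    def search(self):
        self.client.get("/search", params={"q": "test"})    # assumed endpoint
```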

Moreover, it is highly desirable to develop system-level automated regression tests (based on, but not necessarily repeating, the manual test cases) alongside the developer-created unit tests. These tests target a different type of regression bug. This activity may also happen with a significant delay after feature implementation.
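
Such a system-level regression test could, for example, drive the deployed web application through a browser. Below is a minimal sketch, assuming Selenium and pytest; the QA URL and the login-form field names are assumptions made for illustration.

```python
# Hypothetical browser-level regression test, derived from (not copying) a manual test case.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

QA_BASE_URL = "https://qa.example.com"   # assumed QA environment URL

@pytest.fixture
def browser():
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")   # run without a visible browser window
    driver = webdriver.Chrome(options=options)
    yield driver
    driver.quit()

def test_login_page_shows_credentials_form(browser):
    browser.get(f"{QA_BASE_URL}/login")
    # Field names are assumptions; adjust to the real application.
    assert browser.find_element(By.NAME, "username").is_displayed()
    assert browser.find_element(By.NAME, "password").is_displayed()
```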

The tester should make a smart decision to nominate a build that was especially successful (without critical bugs) to be installed into a special environment, such as the performance-test or test-automation environment.
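
Nominating a build could be as simple as copying the chosen nightly package into a folder from which the performance-test or automation environment is installed. A minimal sketch, with hypothetical paths and package names:

```python
# Hypothetical build-promotion helper: copies a nominated nightly package into a
# "promoted" folder used to feed the performance-test / automation environments.
import shutil
from pathlib import Path

NIGHTLY_DIR = Path("/builds/nightly")     # assumed location of nightly packages
PROMOTED_DIR = Path("/builds/promoted")   # assumed location read by other environments

def promote_build(build_name: str) -> Path:
    """Copy a known-good nightly build so it can be installed elsewhere."""
    source = NIGHTLY_DIR / build_name
    if not source.exists():
        raise FileNotFoundError(f"No such nightly build: {source}")
    PROMOTED_DIR.mkdir(parents=True, exist_ok=True)
    target = PROMOTED_DIR / build_name
    shutil.copy2(source, target)
    return target

if __name__ == "__main__":
    promote_build("webapp-2024-01-15.zip")   # hypothetical package name
```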
