How Google Tests Software - Crawl, walk, run.

This article describes how Google ensures product quality through staged releases and continuous iteration. Gmail, for example, only shed its beta tag after reaching the goal of 99.99% uptime for users' email data. For the Chrome browser, rigorous testing across multiple channels such as Canary, Dev, and Test is used to gather feedback and refine the product step by step.

One of the key ways Google achieves good results with fewer testers than many companies is that we rarely attempt to ship a large set of features at once. In fact, the exact opposite is often the goal: build the core of a product and release it the moment it is useful to as large a crowd as feasible, then get their feedback and iterate. This is what we did with Gmail, a product that kept its beta tag for four years. That tag was our warning to users that it was still being perfected. We removed the beta tag only when we reached our goal of 99.99% uptime for a real user’s email data. Obviously, quality is a work in progress!
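For a sense of scale (the arithmetic below is my own illustration, not the original author's), a 99.99% uptime goal leaves only about 52 minutes of allowable downtime per year:

    # Rough downtime budget implied by a 99.99% uptime target (illustrative only).
    minutes_per_year = 365.25 * 24 * 60                     # ~525,960 minutes
    allowed_downtime = (1 - 0.9999) * minutes_per_year      # downtime budget in minutes
    print(f"{allowed_downtime:.1f} minutes of downtime per year")  # ~52.6 minutes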

It’s not as cowboy a process as I make it out to be. In fact, in order to make it to what we call the beta channel release, a product must go through a number of other channels and prove its worth. For Chrome, a product I spent my first two years at Google working on, multiple channels were used depending on our confidence in the product’s quality and the extent of feedback we were looking for. The sequence looked something like this:

Canary Channel is used for code we suspect isn’t fit for release. Like a canary in a coal mine, if it failed to survive then we had work to do. Canary channel builds are only for the ultra-tolerant user running experiments and not depending on the application to get real work done.

Dev Channel is what developers use in their day-to-day work. All engineers on a product are expected to pick up this build and use it for real work.

Test Channel is the build used for internal dog food and represents a candidate beta channel build given good sustained performance.

The Beta Channel or Release Channel builds are the first ones that get external exposure. A build only gets to the release channel after spending enough time in the prior channels that it gets a chance to prove itself against a barrage of both tests and real usage.
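To make the ladder concrete, here is a minimal sketch of the channel sequence described above. The channel order comes from the text; the promotion gates (soak time, blocker count) are hypothetical placeholders, not Chrome's actual release criteria:

    # Hypothetical model of the Canary -> Dev -> Test -> Beta/Release ladder.
    # The channel order is from the post; the gating thresholds are invented.
    from dataclasses import dataclass

    CHANNELS = ["canary", "dev", "test", "beta"]   # least to most stable

    @dataclass
    class Build:
        version: str
        channel: str = "canary"
        days_in_channel: int = 0
        open_blockers: int = 0

    def ready_to_promote(build: Build) -> bool:
        # Placeholder gate: enough soak time in the current channel, no blockers.
        return build.days_in_channel >= 7 and build.open_blockers == 0

    def promote(build: Build) -> Build:
        # Move a build one channel closer to release if it passes the gate.
        idx = CHANNELS.index(build.channel)
        if idx + 1 < len(CHANNELS) and ready_to_promote(build):
            build.channel = CHANNELS[idx + 1]
            build.days_in_channel = 0
        return build

    nightly = Build(version="1.0.0.1", days_in_channel=9)
    promote(nightly)
    print(nightly.channel)   # "dev": promoted one step, not straight to beta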

This crawl, walk, run approach gives us the chance to run tests and experiment on our applications early and obtain feedback from real human beings, in addition to all the automation we run in each of these channels every day.

There are analytical benefits to this process as well. If a bug is found in the field, a tester can create a test that reproduces it and run it against builds in each channel to determine if a fix has already been implemented.
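In code, that cross-channel check might look something like the sketch below; fetch_build and run_repro_test are hypothetical stand-ins for whatever tooling actually installs a channel's build and executes the reproducing test, not real Chrome infrastructure:

    # Sketch of checking a field bug against every channel's latest build.
    # fetch_build and run_repro_test are stubs to be wired to real tooling.
    def fetch_build(channel: str) -> str:
        # Placeholder: return a path/identifier for that channel's latest build.
        return f"/builds/{channel}/latest"

    def run_repro_test(test_cmd: str, build_path: str) -> bool:
        # Placeholder: return True if the bug still reproduces on this build.
        raise NotImplementedError("hook this up to the real test runner")

    def channels_with_fix(test_cmd: str, channels=("canary", "dev", "test", "beta")):
        # Channels where the reproducing test no longer fails already carry the fix.
        fixed = []
        for channel in channels:
            build = fetch_build(channel)
            if not run_repro_test(test_cmd, build):
                fixed.append(channel)
        return fixed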


Reposted from: https://www.cnblogs.com/scios/p/5945557.html
