Machine Learning week 6 quiz: Machine Learning System Design

This post reviews key concepts in machine learning system design, including labeling positive and negative examples, computing recall, the conditions under which training on very large datasets pays off, how adjusting the prediction threshold affects classifier performance, and how to evaluate a model's effectiveness across different datasets.


Machine Learning System Design

5 questions

1. 

You are working on a spam classification system using regularized logistic regression. "Spam" is the positive class (y = 1) and "not spam" is the negative class (y = 0). You have trained your classifier and there are m = 1000 examples in the cross-validation set. The chart of predicted class vs. actual class is:

                     Actual Class: 1   Actual Class: 0
Predicted Class: 1         85                890
Predicted Class: 0         15                 10

For reference:

  • Accuracy = (true positives + true negatives) / (total examples)
  • Precision = (true positives) / (true positives + false positives)
  • Recall = (true positives) / (true positives + false negatives)
  • F1 score = (2 * precision * recall) / (precision + recall)

What is the classifier's recall (as a value from 0 to 1)?

Enter your answer in the box below. If necessary, give at least two digits after the decimal point.
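
To make the reference formulas concrete, here is a minimal Python sketch (not part of the quiz; the variable names are mine) that plugs the counts from the chart above into the four formulas:

```python
# Counts taken from the chart above (cross-validation set, m = 1000).
tp = 85    # predicted 1, actual 1 (true positives)
fp = 890   # predicted 1, actual 0 (false positives)
fn = 15    # predicted 0, actual 1 (false negatives)
tn = 10    # predicted 0, actual 0 (true negatives)

accuracy  = (tp + tn) / (tp + tn + fp + fn)   # (85 + 10) / 1000 = 0.095
precision = tp / (tp + fp)                    # 85 / 975 ≈ 0.087
recall    = tp / (tp + fn)                    # 85 / 100 = 0.85
f1        = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.2f} F1={f1:.3f}")
```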

2. 

Suppose a massive dataset is available for training a learning algorithm. Training on a lot of data is likely to give good performance when two of the following conditions hold true.

Which are the two?

The classes are not too skewed.

A human expert on the application domain can confidently predict y when given only the features x (or more generally, if we have some way to be confident that x contains sufficient information to predict y accurately).

Our learning algorithm is able to represent fairly complex functions (for example, if we train a neural network or other model with a large number of parameters).

When we are willing to include high order polynomial features of x (such as x₁², x₂², x₁x₂, etc.).

3. 

Suppose you have trained a logistic regression classifier which outputs hθ(x).

Currently, you predict 1 if hθ(x) ≥ threshold, and predict 0 if hθ(x) < threshold, where the threshold is currently set to 0.5.

Suppose you decrease the threshold to 0.1. Which of the following are true? Check all that apply.

The classifier is likely to now have higher recall.

The classifier is likely to have unchanged precision and recall, but higher accuracy.

The classifier is likely to now have higher precision.

The classifier is likely to have unchanged precision and recall, but lower accuracy.
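
A minimal Python sketch of the effect, using made-up probabilities and labels (none of these numbers come from the quiz): lowering the threshold turns more examples into positive predictions, which typically raises recall and can lower precision.

```python
import numpy as np

def predict(h, threshold):
    """Predict 1 where h_theta(x) >= threshold, else 0."""
    return (h >= threshold).astype(int)

# Made-up classifier outputs h_theta(x) and true labels y.
h = np.array([0.95, 0.60, 0.40, 0.30, 0.15, 0.05])
y = np.array([1,    1,    1,    0,    1,    0])

for t in (0.5, 0.1):
    p = predict(h, t)
    tp = np.sum((p == 1) & (y == 1))
    fp = np.sum((p == 1) & (y == 0))
    fn = np.sum((p == 0) & (y == 1))
    print(f"threshold={t}: precision={tp / (tp + fp):.2f}, "
          f"recall={tp / (tp + fn):.2f}")
```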

4. 

Suppose you are working on a spam classifier, where spam emails are positive examples (y=1) and non-spam emails are negative examples (y=0). You have a training set of emails in which 99% of the emails are non-spam and the other 1% is spam. Which of the following statements are true? Check all that apply.

If you always predict non-spam (output y=0), your classifier will have 99% accuracy on the training set, and it will likely perform similarly on the cross validation set.

If you always predict non-spam (output y=0), your classifier will have an accuracy of 99%.

A good classifier should have both a high precision and high recall on the cross validation set.

If you always predict non-spam (output y=0), your classifier will have 99% accuracy on the training set, but it will do much worse on the cross validation set because it has overfit the training data.
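
A minimal Python sketch, using an illustrative 99%/1% label split, of the accuracy obtained by a classifier that always predicts non-spam:

```python
import numpy as np

# Illustrative skewed label set: 99% non-spam (y = 0), 1% spam (y = 1).
y = np.array([0] * 990 + [1] * 10)

# A "classifier" that ignores the features and always outputs non-spam.
predictions = np.zeros_like(y)

accuracy = np.mean(predictions == y)
print(f"accuracy = {accuracy:.2%}")  # 99.00%, yet it never catches a single spam email
```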

5. 

Which of the following statements are true? Check all that apply.

It is a good idea to spend a lot of time collecting a large amount of data before building your first version of a learning algorithm.

If your model is underfitting the training set, then obtaining more data is likely to help.

On skewed datasets (e.g., when there are more positive examples than negative examples), accuracy is not a good measure of performance and you should instead use the F1 score based on the precision and recall.

After training a logistic regression classifier, you must use 0.5 as your threshold for predicting whether an example is positive or negative.

Using a very large training set makes it unlikely for the model to overfit the training data.
