Coursera Machine Learning Week 8 Quiz



1. Consider the following 2D dataset:

Which of the following figures correspond to possible values that PCA may return for u(1)(the first eigenvector / first principal component)? Check all that apply (you may have to check more than one figure).

Answer: figures 1 and 4.
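Both checked figures show the same line through the data's direction of maximum variance, just with opposite arrows: the first principal component is an eigenvector of the covariance matrix, and an eigenvector is only determined up to sign, so PCA may return either u(1) or -u(1). A minimal NumPy sketch (the toy data and variable names are my own) that makes the sign ambiguity concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1)) @ np.array([[2.0, 1.0]])  # points spread mostly along one direction
X += 0.1 * rng.normal(size=X.shape)                     # small noise off that direction
X -= X.mean(axis=0)                                     # mean normalization

U, _, _ = np.linalg.svd((X.T @ X) / len(X))             # columns of U are the principal components
u1 = U[:, 0]
print(u1, -u1)  # either vector is a valid first principal component
```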

2. Which of the following is a reasonable way to select the number of principal components k?

(Recall that n is the dimensionality of the input data and m is the number of input examples.)

Choose k to be the smallest value so that at least 99% of the variance is retained.

Choose k to be the largest value so that at least 99% of the variance is retained.

Choose k to be 99% of m (i.e., k=0.99m, rounded to the nearest integer).

Use the elbow method.

Answer: option 1.
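In the course this is done by taking the SVD of the covariance matrix and summing its singular values until the desired fraction of variance is reached. A minimal NumPy sketch of that recipe (the function name and variables are my own):

```python
import numpy as np

def choose_k(X, variance_to_retain=0.99):
    """Return the smallest k that retains at least the requested fraction of variance."""
    X_norm = X - X.mean(axis=0)                # mean normalization
    Sigma = (X_norm.T @ X_norm) / X.shape[0]   # n x n covariance matrix
    _, S, _ = np.linalg.svd(Sigma)             # singular values = eigenvalues, largest first
    retained = np.cumsum(S) / S.sum()          # variance retained by the first k components
    return int(np.argmax(retained >= variance_to_retain)) + 1
```

The quantity retained[k-1] is exactly the "variance retained" that question 3 below expresses in terms of the average squared projection error.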

3. Suppose someone tells you that they ran PCA in such a way that "95% of the variance was retained." What is an equivalent statement to this? Answer: option 3 (the third formula below).

$\dfrac{\frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)} - x^{(i)}_{approx}\right\|^2}{\frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)}\right\|^2} \ge 0.05$

$\dfrac{\frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)}\right\|^2}{\frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)} - x^{(i)}_{approx}\right\|^2} \ge 0.95$

$\dfrac{\frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)} - x^{(i)}_{approx}\right\|^2}{\frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)}\right\|^2} \le 0.05$

$\dfrac{\frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)} - x^{(i)}_{approx}\right\|^2}{\frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)}\right\|^2} \ge 0.95$
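The third option is the equivalent statement because "variance retained" is defined as one minus the ratio of the average squared projection error to the total variation in the data, so retaining at least 95% of the variance is the same as keeping that ratio at or below 0.05:

$$
1-\frac{\frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)}-x^{(i)}_{approx}\right\|^2}{\frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)}\right\|^2}\ge 0.95
\quad\Longleftrightarrow\quad
\frac{\frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)}-x^{(i)}_{approx}\right\|^2}{\frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)}\right\|^2}\le 0.05
$$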

4. Which of the following statements are true? Check all that apply. Answer: options 1 and 3 (the first and third statements).

Given input data $x \in \mathbb{R}^n$, it makes sense to run PCA only with values of k that satisfy $k \le n$. (In particular, running it with $k = n$ is possible but not helpful, and $k > n$ does not make sense.)

PCA is susceptible to local optima; trying multiple random initializations may help.

Even if all the input features are on very similar scales, we should still perform mean normalization (so that each feature has zero mean) before running PCA.

Given only $z^{(i)}$ and $U_{reduce}$, there is no way to reconstruct any reasonable approximation to $x^{(i)}$.
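The reconstruction statement is the one that trips people up: given $z^{(i)}$ and $U_{reduce}$, the course's formula $x_{approx} = U_{reduce}\, z$ recovers a perfectly reasonable approximation of $x^{(i)}$, which is why the last statement is false. A minimal NumPy sketch of projection and reconstruction (function and variable names are my own):

```python
import numpy as np

def pca_project_reconstruct(X, k):
    """Project X onto its top-k principal components and reconstruct the approximation."""
    mu = X.mean(axis=0)                        # mean normalization: every feature gets zero mean
    X_norm = X - mu
    Sigma = (X_norm.T @ X_norm) / X.shape[0]   # covariance matrix
    U, _, _ = np.linalg.svd(Sigma)
    U_reduce = U[:, :k]                        # first k principal components
    Z = X_norm @ U_reduce                      # compressed representation z(i)
    X_approx = Z @ U_reduce.T                  # reconstruction x_approx = U_reduce * z
    return Z, X_approx + mu                    # add the mean back to compare with the original X
```

Mean normalization is part of the recipe even when the features are on similar scales, since the principal components are directions through the centroid of the data, not through the origin.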

5. Which of the following are recommended applications of PCA? Select all that apply. Answer: options 1 and 4 (the two data-compression uses).

Data compression: Reduce the dimension of your data, so that it takes up less memory / disk space.

As a replacement for (or alternative to) linear regression: For most learning applications, PCA and linear regression give substantially similar results.

Data visualization: To take 2D data, and find a different way of plotting it in 2D (using k=2).

Data compression: Reduce the dimension of your input data $x^{(i)}$, which will be used in a supervised learning algorithm (i.e., use PCA so that your supervised learning algorithm runs faster).
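As an illustration of the compression-for-supervised-learning use, here is a sketch with scikit-learn (my own choice of library and dataset, not part of the course assignment); the key points are that the PCA mapping is fit on the training set only and that n_components=0.99 keeps just enough components to retain 99% of the variance:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                  # 64-dimensional digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pca = PCA(n_components=0.99)                         # keep enough components to retain 99% variance
Z_train = pca.fit_transform(X_train)                 # fit the mapping on training data only
Z_test = pca.transform(X_test)                       # reuse the same mapping on the test set

clf = LogisticRegression(max_iter=5000).fit(Z_train, y_train)
print(pca.n_components_, clf.score(Z_test, y_test))  # reduced dimension and test accuracy
```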

