AI-009: Study Notes on Professor Andrew Ng's Machine Learning Course, Videos 38-47

This article covers the neural network material from Andrew Ng's machine learning course: why non-linear hypotheses matter, neurons and mimicking the brain, model representation, multi-class classification, the cost function, the back propagation algorithm, unrolling parameters, gradient checking, random initialization, and an autonomous driving example.

These are study notes for Andrew Ng's machine learning course series. The video lectures are available at:

https://study.163.com/course/introduction.htm?courseId=1004570029#/courseDetail?tab=1

38. Neural Networks - Representation - Non-linear hypotheses

Why neural networks?

Simple linear or logistic regression, even with added quadratic or cubic features, is not a good way to learn complex non-linear hypotheses when n is large, because you just end up with too many features.

For example: visual recognition.

It would be computationally very expensive to find and represent all of these features.
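To see the blow-up concretely, here is a rough count (my own illustration, not from the lecture) of the quadratic cross-terms you would need for a modest grayscale image:

```python
# Rough count of quadratic features for n raw pixel inputs.
# The number of pairwise products x_i * x_j with i <= j is n*(n+1)/2,
# which is already in the millions for a 50x50 image.

def quadratic_feature_count(n):
    """Number of distinct quadratic terms x_i * x_j with i <= j."""
    return n * (n + 1) // 2

n = 50 * 50  # 2500 raw pixel features
print(quadratic_feature_count(n))  # over three million quadratic features
```

A logistic regression over millions of features is impractical, which is the motivation for neural networks.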

39. Neural Networks - Representation - neurons and the brain
 

Mimic: the brain can develop different functions depending on the signals wired into it; for example, if the auditory cortex is fed visual signals, it learns to "see".

In neuro-rewiring experiments, if the nerve from the ear or hand is cut and the optic nerve is connected to that region instead, that part of the brain will learn to see.

The goal is to discover the brain's learning algorithm.

You can plug almost any sensor into the brain, and the brain's learning algorithm will figure out how to learn from that data and deal with it.

40. Neural networks: Representation - model representation I
 

Parts of a neuron:

Nucleus

Dendrite (input wire)

Cell body

Node of Ranvier

Axon (output wire)

Myelin sheath

Schwann cell

Axon terminal

One neuron sends a little pulse of electricity via its axon to another neuron's dendrite.

Next, the video shows the computational steps represented by this diagram.

forward propagation
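The forward propagation steps from the lecture can be sketched in a few lines. This is my own minimal NumPy version for a 3-layer network; the weight values and shapes are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_propagate(x, Theta1, Theta2):
    """One forward pass through a 3-layer network.
    Theta1: (hidden, n+1), Theta2: (outputs, hidden+1)."""
    a1 = np.insert(x, 0, 1.0)            # input layer plus bias unit a0 = 1
    z2 = Theta1 @ a1
    a2 = np.insert(sigmoid(z2), 0, 1.0)  # hidden activations plus bias
    z3 = Theta2 @ a2
    return sigmoid(z3)                   # hypothesis h(x)

# Tiny example with made-up weights: 2 inputs, 2 hidden units, 1 output
Theta1 = np.array([[0.1, 0.2, -0.3],
                   [-0.4, 0.5, 0.6]])
Theta2 = np.array([[0.7, -0.8, 0.9]])
h = forward_propagate(np.array([1.0, 2.0]), Theta1, Theta2)
print(h)
```

Each layer's activations feed the next layer, which is exactly the "forward" direction of the computation.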

41. Neural Networks - Representation - examples and intuitions
 

To negate a variable, just put a large negative weight in front of it.

This ends up producing a non-linear decision boundary.

Each layer computes progressively more complex functions, so neural networks can handle complex problems.
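The lecture's logic-gate intuition can be reproduced directly. Using the standard weights from the course (large magnitudes push the sigmoid toward 0 or 1, and a large negative weight negates an input), a hidden layer computing AND and NOR, combined by OR, yields XNOR, a function with a non-linear decision boundary:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weight vectors [bias, w1, w2] from the lecture's logic-gate examples
AND_w = np.array([-30.0, 20.0, 20.0])
OR_w  = np.array([-10.0, 20.0, 20.0])
NOR_w = np.array([ 10.0, -20.0, -20.0])  # (NOT x1) AND (NOT x2)

def unit(w, x1, x2):
    """A single sigmoid neuron with a bias input of 1."""
    return sigmoid(w @ np.array([1.0, x1, x2]))

def xnor(x1, x2):
    # Hidden layer computes AND and NOR; the output layer ORs them.
    a1 = unit(AND_w, x1, x2)
    a2 = unit(NOR_w, x1, x2)
    return sigmoid(OR_w @ np.array([1.0, a1, a2]))

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, round(float(xnor(x1, x2))))
```

Stacking one extra layer turns three linear units into a function no single unit could compute, which is the point of the section.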

41. Neural Networks - Representation - Multi-class classification

42. Neural Networks - Learning - Cost function
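There are no notes for this section; as a reminder, the course's regularized cost function generalizes logistic regression's cost over K output units plus a penalty on all non-bias weights. A minimal sketch (the array shapes are my own convention):

```python
import numpy as np

def nn_cost(H, Y, Thetas, lam):
    """Regularized neural-network cost.
    H: (m, K) network outputs, Y: (m, K) one-hot labels,
    Thetas: list of weight matrices with the bias weight in column 0."""
    m = Y.shape[0]
    # Cross-entropy summed over all examples and all K output units
    cost = -np.sum(Y * np.log(H) + (1 - Y) * np.log(1 - H)) / m
    # Regularization over every weight except the bias columns
    reg = sum(np.sum(T[:, 1:] ** 2) for T in Thetas) * lam / (2 * m)
    return cost + reg

# One example, one output unit, no regularization: cost is -log(0.5)
print(nn_cost(np.array([[0.5]]), np.array([[1.0]]), [], 0.0))
```

Minimizing this cost over all the Theta matrices is what the following sections on back propagation are about.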

42. Neural Networks - Learning - back propagation algorithm
 

back propagation algorithm

The key is how to compute these partial derivative terms.
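The partial derivatives are computed by propagating "error" terms backward: delta3 = a3 - y at the output, delta2 = (Theta2' * delta3) .* g'(z2) at the hidden layer, and each gradient is an outer product of a delta with the previous layer's activations. A sketch for one training example, with made-up shapes and weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_single(x, y, Theta1, Theta2):
    """Unregularized gradients for one example in a 3-layer network.
    Theta1: (hidden, n+1), Theta2: (K, hidden+1)."""
    # Forward pass
    a1 = np.insert(x, 0, 1.0)
    z2 = Theta1 @ a1
    a2 = np.insert(sigmoid(z2), 0, 1.0)
    a3 = sigmoid(Theta2 @ a2)

    # Backward pass: per-layer error terms
    delta3 = a3 - y                                # output layer error
    g_prime = sigmoid(z2) * (1 - sigmoid(z2))      # derivative g'(z2)
    delta2 = (Theta2[:, 1:].T @ delta3) * g_prime  # skip the bias column

    # Partial derivatives dJ/dTheta for this example
    grad2 = np.outer(delta3, a2)
    grad1 = np.outer(delta2, a1)
    return grad1, grad2

Theta1 = np.array([[0.1, 0.2, -0.3], [-0.4, 0.5, 0.6]])
Theta2 = np.array([[0.7, -0.8, 0.9]])
g1, g2 = backprop_single(np.array([1.0, 2.0]), np.array([1.0]), Theta1, Theta2)
print(g1.shape, g2.shape)  # gradients match the weight matrix shapes
```

In the full algorithm these per-example gradients are accumulated over the training set and divided by m.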

43. Neural Networks - Learning - Implementation note: unrolling parameters 
 

How to convert back and forth between the matrix representation of the parameters and the vector representation of the parameters.

The advantage of the matrix representation is that it is more convenient for forward propagation and back propagation, and it makes it easier to take advantage of vectorized implementations.

The advantage of the vector representation (thetaVec or DVec) shows up when you use the advanced optimization algorithms, which tend to assume that all of your parameters are unrolled into one big long vector.

44. Neural Networks - Learning - Gradient checking
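There are no notes for this section; the idea from the lecture is to approximate each partial derivative with a two-sided difference and compare it against what back propagation produced. A minimal sketch, checked here against a function whose analytic gradient is known:

```python
import numpy as np

def numerical_gradient(J, theta, eps=1e-4):
    """Two-sided difference approximation (J(t+eps) - J(t-eps)) / (2*eps)
    for each component of theta."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        bump = np.zeros_like(theta)
        bump[i] = eps
        grad[i] = (J(theta + bump) - J(theta - bump)) / (2 * eps)
    return grad

# Sanity check: for J = sum(theta^2) the gradient is 2 * theta
theta = np.array([1.0, -2.0, 3.0])
approx = numerical_gradient(lambda t: np.sum(t ** 2), theta)
print(approx)
```

In practice you run this once to verify your back propagation code, then turn it off, since it is far too slow to use during training.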

45. Neural Networks - Learning - Random Initialization

Initializing all the weights to 0 will not work:

every hidden unit then computes the same function, so we effectively get only one feature (the symmetry is never broken).

The epsilon here has no relationship to the epsilon used in gradient checking.
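Symmetry breaking is done by drawing each weight uniformly from [-epsilon, epsilon]. A sketch (0.12 is the initialization value used in the course's programming exercises, but any small epsilon works):

```python
import numpy as np

def rand_initialize(rows, cols, eps_init=0.12):
    """Weights drawn uniformly from [-eps_init, eps_init] to break symmetry."""
    return np.random.rand(rows, cols) * 2 * eps_init - eps_init

# e.g. 25 hidden units over 400 inputs plus a bias column
Theta1 = rand_initialize(25, 401)
print(Theta1.min(), Theta1.max())  # both within [-0.12, 0.12]
```

With different random starting weights, each hidden unit learns a different feature.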

46. Neural Network - Learning - Putting it together
 

47. Neural Networks - Learning - Autonomous driving example
 

The top left of the video shows the human driver's steering direction and the neural network's result.

The bottom left shows the image of the road.

ALVINN

It watches a human drive, and after about two minutes of training it can drive on its own.

It can automatically switch between the weights for one-lane and two-lane roads.
