Usage of np.random.seed(), the np.random.random() family of functions, and np.squeeze()

This post covers a few commonly used functions from Python's numpy library: np.random.seed() controls random number generation so that the same numbers are produced on every run; the np.random.random() family generates random numbers of various kinds, such as uniformly distributed floats and integers; and np.squeeze() removes dimensions of size 1 from an array.


I ran into these functions in my recent studies, so here is a detailed look at how each of them is used:

 

1. np.random.seed():

This function controls random number generation. Once you set the seed to a fixed value, the sequence produced by the generation functions under np.random is always the same. Concretely, if you call np.random.seed(0), then on every run of your code the first np.random.rand() call returns 0.5488135039273248, the second returns 0.7151893663724195, and so on. See the following code:

import numpy as np

np.random.seed(0)
for i in range(6):
    print(np.random.rand())

0.5488135039273248
0.7151893663724195
0.6027633760716439
0.5448831829968969
0.4236547993389047
0.6458941130666561


np.random.seed(0)
for i in range(3):
    print(np.random.rand())

0.5488135039273248
0.7151893663724195
0.6027633760716439

As the output shows, when the same code is run a second time after re-seeding, the first three random numbers are unchanged.

It is worth mentioning numpy.random.RandomState(), another function that appears frequently in data-processing code. Like numpy.random.seed(), it makes random number generation reproducible; the difference is that RandomState(seed) returns an independent generator object rather than seeding NumPy's global state.
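To make that difference concrete, here is a minimal sketch (the printed values assume seed 0, matching the example above):

import numpy as np

# Seed the global state: all subsequent np.random.* calls are affected.
np.random.seed(0)
print(np.random.rand())                  # 0.5488135039273248

# An isolated generator: reproducible on its own, global state untouched.
rs = np.random.RandomState(0)
print(rs.rand())                         # 0.5488135039273248 as well

# Re-creating the generator with the same seed replays the same sequence.
print(np.random.RandomState(0).rand())   # 0.5488135039273248 again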


2. The np.random.random() family of functions (each is demonstrated in the sketch after this list):

(1) np.random.rand(): generates uniformly distributed random floats in [0, 1)

(2) np.random.randint() and np.random.random_integers(): generate uniformly distributed integers. Given the same arguments, the former covers [low, high) while the latter covers [low, high] (random_integers() is deprecated in recent NumPy releases)

(3) np.random.random(), np.random.sample(), np.random.random_sample(), and np.random.ranf(): generate random floats in the range [0, 1)

(4) np.random.randn(): generates random floats drawn from the standard normal distribution N(0, 1)
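A quick sketch exercising each member of the family (only randint() appears for the integer case, since random_integers() is deprecated; the exact values printed depend on the seed):

import numpy as np

np.random.seed(0)
print(np.random.rand(3))            # 3 uniform floats in [0, 1)
print(np.random.randint(1, 10, 3))  # 3 integers in [1, 10)
print(np.random.random(3))          # same distribution as rand(); alias family
print(np.random.randn(3))           # 3 draws from the standard normal N(0, 1)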


3. np.squeeze():

Removes dimensions of size 1 from an array. Code example:

X = np.arange(15).reshape(-1,1)
print(X.shape)

>>>(15, 1)


y = X.squeeze()
print(y.shape)

>>>(15,)

While using the scatter() method from matplotlib.pyplot, I found that if its parameter c is given an array of shape (15, 1) it raises an error, whereas shape (15,) runs fine. That is why I used the squeeze() function here.
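A minimal sketch of that scatter() situation (the plotted data here is made up purely for illustration):

import numpy as np
from matplotlib import pyplot as plt

X = np.arange(15).reshape(-1, 1)    # shape (15, 1)
c = X.squeeze()                     # shape (15,), accepted by scatter's c
# Passing c=X directly (shape (15, 1)) can raise an error, since scatter
# tries to interpret a 2-D c as rows of RGB/RGBA colors.
plt.scatter(X, X ** 2, c=c)
plt.show()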

 

 

These functions look simple, but as a beginner they genuinely caused me a lot of trouble. I am summarizing them here today, and I hope using Python will feel more natural from now on!
