The tf.nn.conv1d (or tf.compat.v1.nn.conv1d) function

This article explains the purpose, parameters, and return value of TensorFlow's tf.nn.conv1d function and how to use it to compute 1-D convolutions, covering key concepts such as the input tensor, filters, stride, and padding mode.

tf.nn.conv1d

Also available as tf.compat.v1.nn.conv1d

1. Purpose

Computes a 1-D convolution given a 3-D input tensor and a filter tensor.
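As a quick illustration, here is a minimal sketch (assuming TensorFlow 2.x; the tensor values are made up) that convolves a width-5, single-channel signal with a width-3 filter:

```python
import tensorflow as tf

# Input: 1 sample, width 5, 1 channel -> shape [batch, in_width, in_channels]
x = tf.constant([[[1.], [2.], [3.], [4.], [5.]]])   # shape (1, 5, 1)

# Filter: width 3, 1 input channel, 1 output channel
# -> shape [filter_width, in_channels, out_channels]
w = tf.constant([[[1.]], [[0.]], [[-1.]]])          # shape (3, 1, 1)

y = tf.nn.conv1d(x, w, stride=1, padding="VALID")
print(y)  # shape (1, 3, 1); values [1-3, 2-4, 3-5] = [-2, -2, -2]
```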

2. Parameters

```python
tf.nn.conv1d(
    value=None,
    filters=None,
    stride=None,
    padding=None,
    use_cudnn_on_gpu=None,
    data_format=None,
    name=None,
    input=None,
    dilations=None
)
```
Parameter meanings:

- value: the 3-D input tensor, with shape [batch, in_width, in_channels] (NWC format). batch is the sample dimension (how many samples), in_width is the width dimension (the width of each sample), and in_channels is the channel dimension (how many channels each sample has, e.g. 1 for grayscale, 3 for RGB). Each sample can also be viewed as a flattened 2-D array, so value is like [batch, rows, cols]. Its dtype must be one of float16, float32, or float64.
- filters: a 3-D tensor of the same dtype as value, with shape [filter_width, in_channels, out_channels]. Under the second view of value above, filter_width is the number of rows convolved with value at each step, in_channels is the number of columns of value (it must match value's in_channels), and out_channels is the number of output channels, i.e. the number of convolution kernels.
- stride: how many entries the filter moves to the right at each step; an int, or a list of 1 or 3 ints.
- padding: how the boundary is handled. 'SAME' zero-pads the input so the output width is ceil(in_width / stride) (the same width as the input when stride is 1); 'VALID' adds no padding, so the filter stays entirely inside the input and the output width is ceil((in_width - filter_width + 1) / stride). See the shape sketch after this list.
- use_cudnn_on_gpu: whether to use cuDNN acceleration; bool, defaults to True.
- data_format: "NWC" or "NCW"; defaults to "NWC", which stores data as [batch, in_width, in_channels]; "NCW" stores data as [batch, in_channels, in_width].
- name: an optional name for the operation.
- input: an alias for value.
- dilations: the dilation factor for each dimension; an int, or a list of ints of length 1 or 3; defaults to 1. If the dilation factor k in a dimension is greater than 1, k-1 cells are skipped between filter elements along that dimension. The dilation factor must be 1 in the batch and depth dimensions.
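The following shape-checking sketch (TensorFlow 2.x assumed; all sizes are made up for illustration) makes the padding, stride, and dilations rules above concrete:

```python
import tensorflow as tf

x = tf.random.normal([4, 10, 3])   # [batch=4, in_width=10, in_channels=3]
w = tf.random.normal([3, 3, 8])    # [filter_width=3, in_channels=3, out_channels=8]

# 'SAME': output width = ceil(in_width / stride)
print(tf.nn.conv1d(x, w, stride=1, padding="SAME").shape)   # (4, 10, 8)
print(tf.nn.conv1d(x, w, stride=2, padding="SAME").shape)   # (4, 5, 8)

# 'VALID': output width = ceil((in_width - filter_width + 1) / stride)
print(tf.nn.conv1d(x, w, stride=1, padding="VALID").shape)  # (4, 8, 8)

# dilations=2: effective filter width = 3 + (3 - 1) * (2 - 1) = 5
print(tf.nn.conv1d(x, w, stride=1, padding="VALID", dilations=2).shape)  # (4, 6, 8)
```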

Note:
If data_format is "NWC", the input tensor has shape [batch, in_width, in_channels]; if data_format is "NCW", it has shape [batch, in_channels, in_width]. The filter/kernel tensor has shape [filter_width, in_channels, out_channels]. Internally, this op reshapes its arguments and passes them to conv2d, which performs the equivalent convolution.
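A sketch of that equivalence (TensorFlow 2.x assumed): inserting a dummy height-1 dimension and calling conv2d reproduces conv1d's result.

```python
import tensorflow as tf

x = tf.random.normal([2, 9, 4])   # [batch, in_width, in_channels]
w = tf.random.normal([3, 4, 6])   # [filter_width, in_channels, out_channels]

y1 = tf.nn.conv1d(x, w, stride=2, padding="SAME")

# Same computation via conv2d: treat width as W and add a dummy H=1 dimension.
x4 = tf.expand_dims(x, axis=1)    # [batch, 1, in_width, in_channels] (NHWC)
w4 = tf.expand_dims(w, axis=0)    # [1, filter_width, in_channels, out_channels]
y2 = tf.nn.conv2d(x4, w4, strides=[1, 1, 2, 1], padding="SAME")
y2 = tf.squeeze(y2, axis=1)       # back to [batch, out_width, out_channels]

print(tf.reduce_max(tf.abs(y1 - y2)).numpy())  # ~0.0
```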

3. Returns

A Tensor with the same type as the input.

4. Possible exceptions

ValueError: if data_format is invalid.
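For illustration, a short sketch (TensorFlow 2.x assumed; "CWN" is just an arbitrary invalid string chosen to trigger the error):

```python
import tensorflow as tf

x = tf.random.normal([1, 5, 1])
w = tf.random.normal([3, 1, 1])
try:
    # Only "NWC" and "NCW" are accepted data formats.
    tf.nn.conv1d(x, w, stride=1, padding="SAME", data_format="CWN")
except ValueError as e:
    print("ValueError:", e)
```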

