tf.argmax() and tf.argmin()

This article describes how to use TensorFlow's tf.argmax and tf.argmin functions, including how to set the axis parameter to obtain the index of the maximum or minimum value along a given dimension, for data of different ranks.

1. tf.argmax(input, axis, name=None): returns the indices of the maximum values in input.

        With axis=0, the elements within each column are compared, and the row index of the largest element is returned (the index runs along the first dimension);

        with axis=1, the elements within each row are compared, and the column index of the largest element is returned (the index runs along the second dimension). A concrete example follows below.
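
A minimal sketch of both axis settings (assuming TensorFlow 2.x with eager execution; under the TensorFlow 1.x API signature shown above, the same ops would be evaluated inside a tf.Session):

    import tensorflow as tf

    # A 2x3 example matrix.
    x = tf.constant([[1, 5, 3],
                     [4, 2, 6]])

    # axis=0: compare down each column; one row index per column.
    print(tf.argmax(x, axis=0).numpy())  # [1 0 1]

    # axis=1: compare across each row; one column index per row.
    print(tf.argmax(x, axis=1).numpy())  # [1 2]

The returned indices are int64 by default; the optional output_type argument can request int32 instead.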

         

2. tf.argmin(input, axis, name=None): returns the indices of the minimum values in input.

        With axis=0, the elements within each column are compared, and the row index of the smallest element is returned (the index runs along the first dimension);

        with axis=1, the elements within each row are compared, and the column index of the smallest element is returned (the index runs along the second dimension). See the sketch below.
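
tf.argmin follows the same axis convention, only selecting the minimum. Under the same TensorFlow 2.x assumption:

    import tensorflow as tf

    x = tf.constant([[1, 5, 3],
                     [4, 2, 6]])

    # axis=0: row index of each column's minimum.
    print(tf.argmin(x, axis=0).numpy())  # [0 1 0]

    # axis=1: column index of each row's minimum.
    print(tf.argmin(x, axis=1).numpy())  # [0 1]

A common use of tf.argmax in classification code is converting a batch of logits or one-hot labels into class ids, e.g. predictions = tf.argmax(logits, axis=1), which compares the scores within each row (one sample per row) and returns the predicted class index.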
