Testing externally linked images: Higher order tensor structure

This post mainly tests whether externally linked images display correctly, focusing on image testing in the information-technology domain.

Test whether the images below display correctly:







The Python code produced the problem below. Explain why it occurs, and modify the code so that it runs and still performs its original function.

C:\ProgramData\Miniconda3\python.exe D:\@biancheng@\pythonProject1\main.py
2025-05-26 10:35:10.096077: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Data loading complete: 5000 data points in total
True parameters: alpha=0.12, beta=0.08, noise level=0.02
2025-05-26 10:35:14.058990: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
WARNING:tensorflow:Calling GradientTape.gradient on a persistent tape inside its context is significantly less efficient than calling it outside the context (it causes the gradient ops to be recorded on the tape, leading to increased CPU and memory usage). Only call GradientTape.gradient inside the context if you actually want to trace the gradient in order to compute higher order derivatives. (warning repeated several times)
Traceback (most recent call last):
  File "D:\@biancheng@\pythonProject1\main.py", line 169, in <module>
    loss = total_loss(model, batch_inputs, batch_outputs)
  File "D:\@biancheng@\pythonProject1\main.py", line 131, in total_loss
    physics_loss = model.get_physics_loss(inputs)
  File "D:\@biancheng@\pythonProject1\main.py", line 108, in get_physics_loss
    u_xx = tape.gradient(u_x, x)
  File "C:\ProgramData\Miniconda3\lib\site-packages\tensorflow\python\eager\backprop.py", line 1023, in gradient
    raise TypeError("Argument `target` should be a list or nested structure"
TypeError: Argument `target` should be a list or nested structure of Tensors, Variables or CompositeTensors to be differentiated, but received None.
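The traceback says the second `tape.gradient` call received `None` as its target: the first derivative `u_x` was itself `None`. With `tf.GradientTape`, `gradient(u, x)` returns `None` whenever `x` is a plain tensor that was never watched (or `u` was computed outside the tape), so the follow-up call `tape.gradient(u_x, x)` fails. Since the original `main.py` is not shown, the sketch below is a hypothetical reconstruction of `get_physics_loss` (the names `model` and `x` are assumptions); it uses nested tapes and explicit `watch` calls so both derivatives exist, which also avoids the "gradient on a persistent tape inside its context" warning:

```python
import tensorflow as tf

def get_physics_loss(model, x):
    # Hypothetical sketch: `model` maps x -> u. Nested tapes: the outer
    # tape records the computation of u_x so it can differentiate it again.
    with tf.GradientTape() as outer:
        outer.watch(x)                      # x is a plain tensor, so watch it
        with tf.GradientTape() as inner:
            inner.watch(x)
            u = model(x)                    # forward pass inside both tapes
        u_x = inner.gradient(u, x)          # first derivative; no longer None
    u_xx = outer.gradient(u_x, x)           # second derivative du_x/dx
    # Example physics residual; the real PDE residual depends on the
    # original (unshown) equation, so this is only illustrative.
    return tf.reduce_mean(tf.square(u_xx))
```

The key points are that `x` must be watched on every tape that later differentiates with respect to it, and each `gradient` call should happen after its own tape's `with` block has closed (the inner call is outside the inner context but inside the outer one, which is exactly the higher-order-derivative pattern the warning alludes to).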