A. Red Versus Blue

https://codeforces.com/contest/1659/problem/A

input

3
7 4 3
6 5 1
19 13 6

output

RBRBRBR
RRRBRR
RRBRRBRRBRRBRRBRRBR

input

6
3 2 1
10 6 4
11 6 5
10 9 1
10 8 2
11 9 2

output

RBR
RRBRBRBRBR
RBRBRBRBRBR
RRRRRBRRRR
RRRBRRRBRR
RRRBRRRBRRR

Problem
There are T queries. Each query gives n, r, and b with n = r + b. Construct a string of length n consisting of r characters 'R' and b characters 'B' so that the longest run of consecutive 'R' characters is as short as possible. For example, the longest 'R' run in RRB has length 2, while in RBR it has length 1.

Approach
Think of each character 'B' as a divider: the b dividers split the string into b + 1 segments, and the r characters 'R' have to be distributed among those segments.
For example, three 'B' characters create 4 segments:

[Figure: a string with three 'B' dividers, giving 4 segments to hold the 'R' characters]
So we can construct the answer as follows:

  • Distribute the 'R' characters as evenly as possible among the b + 1 segments: every segment gets ⌊r / (b + 1)⌋ of them, and the remaining r mod (b + 1) characters are handed out as one extra 'R' to that many segments (see the sketch below).
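As a sanity check, here is a minimal sketch of just that distribution step; the helper name segmentSizes is my own and not part of the original solution. It reproduces the segment sizes behind the third sample above (n = 19, r = 13, b = 6).

#include <iostream>
#include <vector>
using namespace std;

// Hypothetical helper: how many 'R' characters each of the b + 1 segments receives.
vector<int> segmentSizes(int r, int b)
{
    vector<int> seg(b + 1, r / (b + 1)); // every segment gets the base amount
    int rest = r % (b + 1);              // leftover 'R' characters
    for (int i = 0; i < rest; i++)       // hand them out, one per segment
        seg[i]++;
    return seg;
}

int main()
{
    // r = 13, b = 6: prints 2 2 2 2 2 2 1, matching the answer RRBRRBRRBRRBRRBRRBR.
    for (int s : segmentSizes(13, 6)) cout << s << ' ';
    cout << '\n';
}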

AC Code
My code is admittedly a bit convoluted...

yu: how many "remainder" 'R' characters are still left to hand out
k: how many 'R' characters each segment gets when the remainder is ignored, i.e. r / (b + 1)
string q(k + (yu ? 1 : 0), 'R'); builds a string of length k + (yu ? 1 : 0) consisting entirely of 'R' (a small standalone example follows)
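For reference, that line uses the standard std::string fill constructor, string(count, ch); a minimal standalone example:

#include <iostream>
#include <string>

int main()
{
    int k = 2, yu = 1;
    std::string q(k + (yu ? 1 : 0), 'R'); // length 3, every character is 'R'
    std::cout << q << '\n';               // prints RRR
}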

#include <bits/stdc++.h>
#define endl '\n'
#define AC return 0;
using namespace std;
//#define ll long long
//#define int long long

void solve()
{
    int n, r, b;
    cin >> n >> r >> b;
    int k = r / (b + 1);             // base number of 'R' per segment
    int yu = r % (b + 1);            // this many segments get one extra 'R'
    string t(k + (yu ? 1 : 0), 'R'); // first segment
    yu = (yu ? yu - 1 : 0);
    cout << t;
    for (int i = 1; i <= b; i++)     // each 'B' divider is followed by the next segment
    {
        cout << "B";
        string q(k + (yu ? 1 : 0), 'R');
        yu = (yu ? yu - 1 : 0);
        cout << q;
    }
    cout << endl;
}

signed main()
{
    ios::sync_with_stdio(false); cin.tie(0); cout.tie(0);
    int T; cin >> T;
    while (T--)
        solve();
    AC
}
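If you want to verify an output by hand, a tiny checker like the one below can help; it is only a sketch, and the helper name maxRunOfR is mine. A constructed answer is optimal exactly when its longest run of 'R' equals ⌈r / (b + 1)⌉.

#include <algorithm>
#include <iostream>
#include <string>

// Hypothetical checker (not part of the original solution): length of the
// longest run of 'R' in an answer string.
int maxRunOfR(const std::string& s)
{
    int best = 0, cur = 0;
    for (char c : s)
    {
        cur = (c == 'R') ? cur + 1 : 0;
        best = std::max(best, cur);
    }
    return best;
}

int main()
{
    // Third sample: r = 13, b = 6, so the optimum is ceil(13 / 7) = 2.
    std::cout << maxRunOfR("RRBRRBRRBRRBRRBRRBR") << '\n'; // prints 2
}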