Performing User-Managed Database Recovery: 18.7 Performing Complete User-Managed Media Recovery

This article walks through Oracle's media recovery procedures, covering both closed-database recovery and open-database recovery, and demonstrates the concrete steps with a worked example.

18.7 Performing Complete User-Managed Media Recovery
Recovering the database all the way to the current SCN is the best possible outcome. You can recover the whole database, a single tablespace, or individual datafiles. A complete recovery does not require opening the database with RESETLOGS; an incomplete recovery does. See Backup and Recovery Basics for background on media recovery.

18.7.1 Performing Closed Database Recovery
You can recover all damaged datafiles in a single operation, or recover each damaged datafile in a separate operation.

18.7.1.1 Preparing for Closed Database Recovery
(1) Shut down the database and examine the media device that caused the problem.
(2) If the cause of the media failure is temporary and no data was corrupted (for example, a disk or controller power failure), media recovery is not required: simply start the database and resume operation. If the problem cannot be repaired, proceed with the following steps.

18.7.1.2 Restoring Backups of the Damaged or Missing Files
(1) Determine which datafiles need to be restored.
(2) Locate the most recent backup of each damaged datafile. Restore only the damaged datafiles: do not restore undamaged datafiles or any redo log files. If no backup exists at all, the datafile can only be re-created (provided the necessary archived logs are available):
alter database create datafile 'xxx' as 'xxx' size xxx reuse;
(3) Use operating system commands to restore the datafiles to their default locations or to new locations. If a file is restored to a new location, update the control file:
alter database rename file 'xxx' to 'xxx';
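A minimal sketch of steps (2) and (3), assuming a backup copy under /backup and a hypothetical datafile users01.dbf restored to a different disk (all paths are illustrative, not from this article):

```sql
-- Restore the backup copy with an OS command, run from the shell:
--   cp /backup/users01.dbf /disk2/oradata/boss/users01.dbf
-- Then, with the database mounted, point the control file at the new location:
alter database rename file '/disk1/oradata/boss/users01.dbf'
                        to '/disk2/oradata/boss/users01.dbf';
```

Note that ALTER DATABASE RENAME FILE only updates the control file; it does not move or copy the file itself.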

18.7.1.3 Recovering the Database
(1) Connect to the database with administrator privileges and start the instance in MOUNT mode.
(2) Query V$DATAFILE to obtain datafile names and statuses.
(3) Every datafile being recovered must be online, except datafiles of tablespaces taken offline with OFFLINE NORMAL and datafiles of read-only tablespaces. The following query generates the statements to bring files online:
select 'alter database datafile ''' || name || ''' online;' from v$datafile;
(4) Issue RECOVER DATABASE, RECOVER TABLESPACE xxx, or RECOVER DATAFILE 'xxx' as appropriate.
(5) Without automatic recovery, you must accept or reject each suggested log; with automatic recovery, the database applies the logs automatically.
(6) When media recovery completes, the database returns: Media recovery complete.
(7) alter database open;
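The steps above condense into a session like the following (a sketch, assuming the damaged files have already been restored; output abbreviated):

```sql
SQL> connect / as sysdba
SQL> startup mount;
SQL> select file#, name, status from v$datafile;   -- check for files that must be brought online
SQL> recover automatic database;                   -- AUTOMATIC applies suggested logs without prompting
Media recovery complete.
SQL> alter database open;
```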

18.7.2 Performing Datafile Recovery in an Open Database
When a media failure occurs while the database is open: if a datafile cannot be written to, the database returns an error and, for a non-SYSTEM tablespace, takes only the damaged datafile offline; if the datafile merely cannot be read, queries against it return an error but the datafile is not taken offline.
This procedure cannot be used for complete media recovery of the SYSTEM tablespace while the database is open. If a datafile of the SYSTEM tablespace is damaged, the database shuts down automatically.

18.7.2.1 Preparing for Open Database Recovery
(1) With the database open, once you determine that recovery is needed, take the tablespace containing the damaged datafiles offline.
(2) If the cause of the media failure is temporary and no data was corrupted (for example, a disk or controller power failure), media recovery is not required: simply bring the tablespace back online and resume operation. If the problem cannot be repaired, proceed with the following steps.
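Because OFFLINE NORMAL checkpoints the tablespace's datafiles and fails if one of them is damaged, the tablespace usually has to be taken offline with the IMMEDIATE option (the tablespace name here is just an example; IMMEDIATE requires ARCHIVELOG mode):

```sql
alter tablespace testtbs04 offline immediate;
```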

18.7.2.2 Restoring Backups of the Inaccessible Datafiles
(1) Determine which datafiles need to be restored.
(2) Locate the most recent backup of each damaged datafile. Restore only the damaged datafiles: do not restore undamaged datafiles or any redo log files. If no backup exists at all, the datafile can only be re-created (provided the necessary archived logs are available):
alter database create datafile 'xxx' as 'xxx' size xxx reuse;
SQL> alter database create datafile '/oracle/oradata/boss/testtbs04_01.dbf' as '/oracle/oradata/boss/testtbs04_01.dbf' size 10m reuse;
(3) Use operating system commands to restore the datafiles to their default locations or to new locations. If a file is restored to a new location, update the control file:
alter database rename file 'xxx' to 'xxx';

18.7.2.3 Recovering Offline Tablespaces in an Open Database
(1) Issue RECOVER TABLESPACE xxx or RECOVER DATAFILE 'xxx' as appropriate (RECOVER DATABASE cannot be used while the database is open).
(2) Without automatic recovery, you must accept or reject each suggested log; with automatic recovery, the database applies the logs automatically.
(3) When media recovery completes, the database returns: Media recovery complete.
SQL> recover automatic tablespace testtbs04; 
(4) Bring the recovered tablespace back online: alter tablespace xxx online; (the database is already open, so there is no ALTER DATABASE OPEN step here).
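For a non-SYSTEM tablespace the whole sequence runs with the database open. A sketch, reusing the testtbs04 tablespace name from the simulation in this article:

```sql
SQL> alter tablespace testtbs04 offline immediate;
-- restore or re-create the damaged datafile here
SQL> recover automatic tablespace testtbs04;
Media recovery complete.
SQL> alter tablespace testtbs04 online;
```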

Simulation 1: create tablespace testtbs04 and a table in it, delete the underlying datafile, then perform closed-database recovery.
(1)
SQL> create tablespace testtbs04
  2    datafile '/oracle/oradata/boss/testtbs04_01.dbf' size 10m
  3    autoextend on next 1m maxsize unlimited
  4    logging
  5    extent management local autoallocate
  6    blocksize 8k
  7    segment space management auto
  8    flashback on;

(2)
SQL> create table test04(id number, name varchar2(30)) tablespace testtbs04;
SQL> insert into test04 values(1, 'xxxxx');
SQL> insert into test04 values(2, 'yyyyy');
SQL> commit;

(3)
SQL> select group#,members,sequence#,archived,status,first_change# from v$log;

    GROUP#    MEMBERS  SEQUENCE# ARC STATUS           FIRST_CHANGE#
---------- ---------- ---------- --- ---------------- -------------
         1          1          0 YES UNUSED                       0
         2          1          0 YES UNUSED                       0
         3          1          1 NO  CURRENT                 697986

SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> alter system switch logfile;
SQL> select group#,members,sequence#,archived,status,first_change# from v$log;

    GROUP#    MEMBERS  SEQUENCE# ARC STATUS           FIRST_CHANGE#
---------- ---------- ---------- --- ---------------- -------------
         1          1          2 YES INACTIVE                707835
         2          1          3 YES INACTIVE                707837
         3          1          4 NO  CURRENT                 707840
(4)
$ rm -rf testtbs04_01.dbf

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup open;
(because the datafile is missing, the open step fails with ORA-01157 and the instance stays in MOUNT mode, which is where the recovery below is performed)

SQL> col "Tablespace" for a10
SQL> col "File Name" for a40
SQL> set linesize 150
SQL> select
  2    ts.name "Tablespace"
  3    , df.file# "File#"
  4    , df.checkpoint_change# "Checkpoint"
  5    , df.name "File Name"
  6    , df.status "Status"
  7    , rf.error "Error"
  8    , rf.change# "Change#"
  9    , rf.time
 10    from v$tablespace ts, v$datafile df, v$recover_file rf
 11  where ts.ts#=df.ts# and df.file#=rf.file#
 12  order by df.file#;

Tablespace      File# Checkpoint File Name                                Status  Error                 Change# TIME
---------- ---------- ---------- ---------------------------------------- ------- ------------------ ---------- ------------
TESTTBS02           8     652783 /oracle/oradata/boss/testtbs02_01.dbf    OFFLINE OFFLINE NORMAL              0
TESTTBS04          10     707840 /oracle/oradata/boss/testtbs04_01.dbf    ONLINE  FILE NOT FOUND              0

(5)
SQL> alter database create datafile '/oracle/oradata/boss/testtbs04_01.dbf' as '/oracle/oradata/boss/testtbs04_01.dbf' size 10m reuse;

SQL> select file#,name,status,CHECKPOINT_CHANGE#,recover from v$datafile_header where file#=10;

     FILE# NAME                                     STATUS  CHECKPOINT_CHANGE# REC
---------- ---------------------------------------- ------- ------------------ ---
        10 /oracle/oradata/boss/testtbs04_01.dbf    ONLINE              707602 YES

(6)
SQL> recover automatic tablespace testtbs04; 
Media recovery complete.

SQL> alter database open;

SQL> select * from test04;

        ID NAME
---------- ----------------------------------------
         1 xxxxx
         2 yyyyy
