Microsoft Azure data migration: expdp fails with ORA-01110: data file 201: '/home/oradata/powerdes/temp01.dbf'

How to resolve this Oracle expdp error on Azure

1. expdp reports an error:

Imports and exports ran fine on the Oracle server in the IDC, but after migrating to the Azure cloud the same export fails:

[oracle@pldb1 oracle]$ expdp  \'powerdesk/testcrmfile\' directory=DIR_DUMP tables=bis_floor,bis_shop,bis_shop_conn,bis_store,bis_cont dumpfile=zhengyin.qu_bak_$(date +%Y%m%d)_02.dmp


Export: Release 11.2.0.1.0 - Production on Sat Feb 6 20:00:00 2016


Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-01187: cannot read from file  because it failed verification tests
ORA-01110: data file 201: '/home/oradata/powerdes/temp01.dbf'
ORA-06512: at "SYS.DBMS_LOB", line 664
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3901
ORA-06512: at line 1


[oracle@pldb1 oracle]$

Check the tempfile temp01.dbf: it shows as ONLINE, so at first glance it does not look damaged:

SQL> select name,bytes/1024/1024,status from v$tempfile;


NAME
--------------------------------------------------------------------------------
BYTES/1024/1024 STATUS
--------------- -------
/home/oradata/powerdes/temp01.dbf
	     68 ONLINE




SQL>
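A side note (not part of the original session): the "data file 201" in ORA-01110 is simply how Oracle numbers tempfiles. Their reported absolute file numbers are offset by the DB_FILES parameter (200 by default), so file 201 maps to tempfile #1, which is temp01.dbf. A minimal check, assuming DB_FILES is still at its default:

-- reported number in ORA-01110 = DB_FILES + v$tempfile.file#
show parameter db_files
select file#, file# + 200 as reported_file#, name, status from v$tempfile;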

2. Google results suggest dropping temp01.dbf

SQL> alter tablespace tempfile "/home/oradata/powerdes/temp01.dbf" drop;
alter tablespace tempfile "/home/oradata/powerdes/temp01.dbf" drop
                          *
ERROR at line 1:
ORA-00972: identifier is too long




SQL> 


SQL> ALTER TABLESPACE tempfile '/home/oradata/powerdes/temp01.dbf' drop;
ALTER TABLESPACE tempfile '/home/oradata/powerdes/temp01.dbf' drop
                          *
ERROR at line 1:
ORA-02142: missing or invalid ALTER TABLESPACE option




SQL> 

Both drop attempts fail: ALTER TABLESPACE does not accept a bare "tempfile ... drop" clause, and searching does not turn up a more effective solution. (For reference, the documented drop syntax is sketched below.)
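A quick sketch, not tried in this session, of the syntax Oracle documents for dropping a tempfile; the two attempts above stumble on syntax rather than on the file itself, although dropping a tempfile that fails verification can still raise its own errors:

-- documented tempfile drop syntax (sketch only, not run here)
ALTER DATABASE TEMPFILE '/home/oradata/powerdes/temp01.dbf' DROP INCLUDING DATAFILES;
-- or, equivalently on 11g:
ALTER TABLESPACE temp DROP TEMPFILE '/home/oradata/powerdes/temp01.dbf';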


3. Try adding a new tempfile, temp02.dbf, and see whether that helps.

SQL> 
SQL> 
SQL> alter tablespace temp add tempfile '/home/oradata/powerdes/temp02.dbf' size 60M reuse;


Tablespace altered.


SQL> 


[oracle@pldb1 oracle]$ expdp  \'powerdesk/testcrmfile\' directory=DIR_DUMP tables=bis_floor,bis_shop,bis_shop_conn,bis_store,bis_cont dumpfile=zhengyin.qu_bak_$(date +%Y%m%d)_02.dmp


Export: Release 11.2.0.1.0 - Production on Sat Feb 6 20:00:00 2016


Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-01187: cannot read from file  because it failed verification tests
ORA-01110: data file 201: '/home/oradata/powerdes/temp01.dbf'
ORA-06512: at "SYS.DBMS_LOB", line 664
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3901
ORA-06512: at line 1
[oracle@pldb1 oracle]$ 

The export still fails with the same error after adding the new tempfile, so this approach does not work.
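To confirm where the Data Pump session's temporary segments actually live, one optional diagnostic (my own addition, not from the original post) is to query V$TEMPSEG_USAGE while the job is running; SEGFILE# is the absolute file number, so 201 would mean the session is still pinned to temp01.dbf:

-- optional diagnostic while expdp is running (hypothetical, not from the original session)
SELECT username, tablespace, segtype, segfile#, blocks
  FROM v$tempseg_usage;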


4. Take temp01.dbf offline so that expdp uses temp02.dbf instead; this works

Set the old tempfile offline:

SQL> alter database tempfile '/home/oradata/powerdes/temp01.dbf' offline;


Database altered.


SQL> 
SQL> select name,bytes/1024/1024,status from v$tempfile;


NAME
--------------------------------------------------------------------------------
BYTES/1024/1024 STATUS
--------------- -------
/home/oradata/powerdes/temp01.dbf
	      0 OFFLINE


/home/oradata/powerdes/temp02.dbf
	     60 ONLINE




SQL> 

Run expdp again; the backup dump is exported successfully:

[oracle@pldb1 oracle]$ expdp  \'powerdesk/testcrmfile\' directory=DIR_DUMP tables=bis_floor,bis_shop,bis_shop_conn,bis_store,bis_cont dumpfile=zhengyin.qu_bak_$(date +%Y%m%d)_02.dmp


Export: Release 11.2.0.1.0 - Production on Sat Feb 6 20:13:00 2016


Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "POWERDESK"."SYS_EXPORT_TABLE_01":  "powerdesk/********" directory=DIR_DUMP tables=bis_floor,bis_shop,bis_shop_conn,bis_store,bis_cont dumpfile=zhengyin.qu_bak_20160206_02.dmp 
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 76.25 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "POWERDESK"."BIS_SHOP"                      25.62 MB   49633 rows
. . exported "POWERDESK"."BIS_CONT"                      17.53 MB   19836 rows
. . exported "POWERDESK"."BIS_STORE"                     9.833 MB   25941 rows
. . exported "POWERDESK"."BIS_SHOP_CONN"                 5.708 MB   30758 rows
. . exported "POWERDESK"."BIS_FLOOR"                     186.6 KB    1082 rows
Master table "POWERDESK"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for POWERDESK.SYS_EXPORT_TABLE_01 is:
  /oracle/dump/zhengyin.qu_bak_20160206_02.dmp
Job "POWERDESK"."SYS_EXPORT_TABLE_01" successfully completed at 20:13:14


[oracle@pldb1 oracle]$ 

OK, the export finally succeeded.
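As a possible follow-up (not covered in the original post): once the export is safely done, the offline, unverifiable temp01.dbf can usually be dropped with the syntax sketched in step 2 and then recreated, leaving the TEMP tablespace in a clean state, assuming no other sessions depend on it. A sketch, with an arbitrary size:

-- cleanup sketch only; verify no sessions are using TEMP before running
ALTER DATABASE TEMPFILE '/home/oradata/powerdes/temp01.dbf' DROP INCLUDING DATAFILES;
ALTER TABLESPACE temp ADD TEMPFILE '/home/oradata/powerdes/temp01.dbf' SIZE 68M REUSE;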
