[tips] T-SQL @ SQL Server 2000 and 2005


If you can't create database diagrams on SQL Server, execute the following T-SQL:

USE [master]
GO
EXEC dbo.sp_dbcmptlevel @dbname = 'CMDB', @new_cmptlevel = 90
GO
ALTER AUTHORIZATION ON DATABASE::CMDB TO sa
GO


Cursor

-- Declare a cursor
DECLARE custom_cursor CURSOR
FOR SELECT EmployeeID, LastName, FirstName FROM Employees

-- Declare temporary variables
DECLARE @tmpEmployeeID INT
DECLARE @LastName varchar(50)
DECLARE @FirstName varchar(50)

-- Open the cursor
OPEN custom_cursor

-- Fetch the first record
FETCH NEXT FROM custom_cursor INTO @tmpEmployeeID, @LastName, @FirstName

-- Loop while the fetch succeeds
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT CONVERT(VARCHAR(20), @tmpEmployeeID) + ' ' + @LastName + '.' + @FirstName
    FETCH NEXT FROM custom_cursor INTO @tmpEmployeeID, @LastName, @FirstName
END

-- Close and deallocate the cursor
CLOSE custom_cursor
DEALLOCATE custom_cursor

 

-- List all stored procedures in a database
select * from sysobjects where type='P'     -- returns the name and other details of every stored procedure
sp_helptext [procedure_name]                -- shows the statement that defines the procedure

select * from sysobjects where type='V'     -- returns the name and other details of every view; sp_helptext also shows the SQL that defines a view

 

-- Linked servers: move data between servers of the same or different kinds
Use DBNAME
GO
-- add a linked server
Exec sp_addlinkedserver @server = 'linkedservername', @srvproduct = N'SQL Server'
GO
-- list linked servers
Exec sp_linkedservers
GO
-- delete existing data before the insert
delete from tablename
GO
-- insert data from the linked server into the local table
insert tablename select distinct * from linkedservername.DBNAME.dbo.tablename
GO
-- drop the linked server
Exec sp_dropserver 'linkedservername'


When an insert fails, check whether the target table has an IDENTITY column: remove the IDENTITY property first, insert the data, and then change it back.
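Alternatively, SQL Server can accept explicit values for an IDENTITY column without altering the table. A minimal sketch, assuming a hypothetical table Orders with an IDENTITY column OrderID and a column CustomerName:

```sql
-- Allow explicit values for the IDENTITY column (one table per session at a time)
SET IDENTITY_INSERT Orders ON;

-- An explicit column list is required while IDENTITY_INSERT is ON
INSERT INTO Orders (OrderID, CustomerName)
VALUES (1001, 'Acme');

-- Turn it back off so the column auto-generates values again
SET IDENTITY_INSERT Orders OFF;
```

This avoids schema changes, which matters on tables referenced by foreign keys.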

 

 

--- Controlling FOR XML output

When query results are returned as XML, what should happen to NULL values? The XSINIL option makes SQL Server emit an element with xsi:nil="true" instead of omitting the element entirely:

select * from tablename
for xml auto, elements xsinil

 

 

--pivot

 

select * from
(
    select Currency, sum(cost) as CurCount
    from dataTable
    group by Currency
) AS H
PIVOT
(
    Sum(CurCount) for Currency in ([RMB],[USD])
) AS C

 

 

For more detail, add the TYPE directive or the XMLSCHEMA option to the FOR XML clause.
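For example (the table name is a placeholder):

```sql
-- TYPE returns the result as the xml data type instead of nvarchar
select * from tablename
for xml auto, elements xsinil, type

-- XMLSCHEMA prepends an inline XSD schema describing the result
select * from tablename
for xml auto, elements xsinil, xmlschema
```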

 

 

 

---index: view the internal and external index fragmentation, and then rebuild or reorganize the index

 

-- Determine index fragmentation for all tables in a database
SELECT OBJECT_NAME(dt.object_id), si.name,
       dt.avg_fragmentation_in_percent, dt.avg_page_space_used_in_percent
FROM
(
    SELECT object_id, index_id, avg_fragmentation_in_percent, avg_page_space_used_in_percent
    FROM sys.dm_db_index_physical_stats (DB_ID('DB_NAME'), NULL, NULL, NULL, 'DETAILED')
    WHERE index_id <> 0  -- does not return information about heaps
) as dt
INNER JOIN sys.indexes si
    ON si.object_id = dt.object_id
    AND si.index_id = dt.index_id


alter index index_name on table_name reorganize  -- or: rebuild

 

 

To update a table without using a cursor:

1. Move the current table's data into a temporary table.

2. Select, according to your conditions, the rows that need to be updated.

3. Combine steps 1 and 2 in a single SELECT, then insert the result into the target table.
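A minimal sketch of these steps, assuming hypothetical tables TargetTable and SourceTable that share columns ID and Amount:

```sql
-- 1. Move the current data into a temporary table
SELECT ID, Amount
INTO #tmp
FROM TargetTable;

-- 2/3. Combine the temp copy with the rows that carry new values,
--      then insert the merged result back into the target
TRUNCATE TABLE TargetTable;

INSERT INTO TargetTable (ID, Amount)
SELECT t.ID,
       COALESCE(s.Amount, t.Amount)   -- take the updated value where one exists
FROM #tmp t
LEFT JOIN SourceTable s ON s.ID = t.ID;

DROP TABLE #tmp;
```

A set-based pass like this is usually much faster than a row-by-row cursor update on large tables.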

-------- 2009 updates --------

If you rename a SQL Server machine, error 14262 can appear later, for example when setting up log shipping.

Check whether the name SQL Server has recorded matches the server's current name:

select * from sys.sysservers

If they differ, run the following two stored procedures and then restart Windows:

sp_dropserver 'servername'

go

sp_addserver 'servername'

 

That fixes the problem.

 

A further update on the issue above:

SELECT @@SERVERNAME

USE master
GO

--Drop old server name
EXEC sp_dropserver 'currentservername'
GO

--Add correct/ current server name
EXEC sp_addserver 'newservername', 'local'
GO

--restart the sql server service

SELECT @@SERVERNAME

 

 

Reposted from: https://www.cnblogs.com/yang_sy/archive/2008/03/25/1121215.html
