SQL Server DO's and DON'Ts


Even if you are not using SQL Server, most of these design guidelines apply to other DBMSs too: Sybase is a very similar environment for the programmer, and Oracle designs may benefit from them as well. I won't show specific T-SQL tricks here, nor will I give you miracle solutions for your SQL Server problems. This is by no means a complete, closed list. What I intend to do is give you some advice for a sound design, based on lessons learned over the last years of my life, watching the same design errors being made again and again.

DO know your tools

Please don't underestimate this tip. It is the best of all of those you'll see in this article. You'd be surprised at how many SQL Server programmers don't even know all the T-SQL commands and the effective tools SQL Server offers. "What? I need to spend a month learning all those SQL commands I'll never use???" you might say. No, you don't. But spend a weekend at MSDN and browse through all the T-SQL commands: the mission here is to learn what can and what can't be done. Then, in the future, when designing a query, you'll remember, "Hey, there's a command that does exactly what I need", and you'll go back to MSDN for its exact syntax. In this article I'll assume that you already know T-SQL syntax or can look it up on MSDN.

DON'T use cursors

Let me say it again: DON'T use cursors. They are the surest way to kill the performance of an entire system. Most beginners use cursors without realizing the performance hit they cause. Cursors use memory, they lock tables in unpredictable ways, and they are slow. Worst of all, they defeat most of the performance optimizations your DBA can make. Did you know that every FETCH executed costs about the same as executing a SELECT? That means a cursor over 10,000 records executes roughly 10,000 SELECTs! If you can do the same work with a couple of SELECT, UPDATE or DELETE statements, it will be much faster.

Beginner SQL programmers find cursors a comfortable, familiar way of coding. Unfortunately, this leads to bad performance. The whole purpose of SQL is to specify what you want, not how it should be done. I once rewrote a cursor-based stored procedure, substituting a pair of traditional SQL queries for the loop. The table had only 100,000 records, and the stored procedure used to take 40 minutes to run. You should have seen the face of the poor programmer when the new stored procedure finished in 10 seconds! Sometimes it's even faster to write a small application that fetches all the data, processes it, and updates the server: T-SQL was not designed with loop performance in mind. There is no good use for cursors; I have never seen cursors being put to good use, except in DBA work. And good DBAs, most of the time, know what they are doing. But if you are reading this, you are probably not a DBA, right?
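To make the cursor advice concrete, here is a minimal sketch of a typical row-by-row update rewritten as a single set-based statement. The table and column names (Orders, OrderID, Freight) are hypothetical, invented for illustration:

    -- Cursor version (slow): fetches and updates one row at a time.
    DECLARE @OrderID int;
    DECLARE c CURSOR FOR SELECT OrderID FROM Orders WHERE Freight IS NULL;
    OPEN c;
    FETCH NEXT FROM c INTO @OrderID;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        UPDATE Orders SET Freight = 0 WHERE OrderID = @OrderID;
        FETCH NEXT FROM c INTO @OrderID;
    END
    CLOSE c;
    DEALLOCATE c;

    -- Set-based version (fast): one statement does all the work.
    UPDATE Orders SET Freight = 0 WHERE Freight IS NULL;

The set-based version lets the optimizer pick a plan for the whole operation instead of paying the SELECT-like cost of every FETCH.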
DO normalize your tables

There are two common excuses for not normalizing databases: performance and pure laziness. You'll pay for the second one sooner or later; and as for performance, don't optimize what's not slow. I often see programmers de-normalizing databases because "this will be slow". And, more often than not, the resulting design is slower. DBMSs were designed to work with normalized databases, so design with normalization in mind.

DON'T SELECT *

This habit is hard to break, I know. And I confess: I often do it myself. But try to specify only the columns you'll need. This will:

1. Reduce memory consumption and network bandwidth.
2. Ease your security design.
3. Give the query optimizer a chance to read all the needed columns from the indexes.

DO know how your data will be/is being accessed

A robust index design is one of the best things you can do for your database, and doing it well is almost an art form. Every time you add an index to a table, SELECTs get faster, but INSERTs and DELETEs get much slower, because there is a lot of work in building and maintaining indexes. If you add several indexes to a table to speed up your SELECTs, you'll soon notice locks being held for a long time while the indexes are updated. So the question is: what is being done with this table, reading or updating data? The question is tricky, especially with DELETE and UPDATE, because they often involve a SELECT for the WHERE clause before they touch the table.

DON'T create an index on the "Sex" column

This is useless. To see why, consider how indexes speed up table access: you can think of an index as a way of quickly partitioning a table based on some criterion. An index on a column like "Sex" gives you only two partitions, Male and Female. What optimization does that buy you on a table with 1,000,000 rows? Remember, maintaining an index is slow. Always design your indexes with the most selective columns first and the least selective last, e.g., Name + Province + Sex.

DO use transactions

Especially on long-running queries. They will save you when things go wrong. After working with data for some time, you'll discover some unexpected situation that makes your stored procedure crash.

DO beware of deadlocks

Always access your tables in the same order. When working with stored procedures and transactions, you may run into this quickly: if you lock table A and then table B, always lock them in that same order in every stored procedure. If another procedure happens to lock table B and then table A, some day you'll have a deadlock. Deadlocks can be hard to track down if the lock sequence is not carefully designed.

DON'T open large recordsets

A common request on programming forums is: "How can I quickly fill this combo with 100,000 items?" Well, you can't, and you shouldn't. Your users will hate browsing through 100,000 records to find the right one. A better UI is needed here: ideally you should show no more than 100 or 200 records to your users.

DON'T use server-side cursors

Unless you know what you are doing. Client-side cursors often (not always) put less load on the network and on the server, and reduce locking time.

DO use parameterized queries

I sometimes see questions on programming forums like: "My queries are failing on some characters, e.g. quotes. How can I avoid this?" A common answer is: "Replace them with double quotes". Wrong. That is only a workaround: it will still fail on other characters, and it introduces serious security bugs. Besides that, it trashes the SQL Server caching system, which will cache several similar queries instead of caching only one. Learn how to use parameterized queries (in ADO, through the Command object; in ADO.NET, the SqlCommand) and never have these problems again.
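As a server-side illustration of the same idea, here is a minimal sketch using sp_executesql, which passes the value separately from the SQL text, so quotes in the data cannot break the statement and the cached plan can be reused. The table and column names (Customers, CustomerID, Name) are hypothetical:

    -- Dangerous: string concatenation breaks on quotes and invites SQL injection.
    -- SET @sql = N'SELECT * FROM Customers WHERE Name = ''' + @name + '''';

    -- Safe: the value travels as a parameter, not as part of the SQL text.
    DECLARE @name nvarchar(50);
    SET @name = N'O''Brien';  -- a value containing a quote
    EXEC sp_executesql
        N'SELECT CustomerID, Name FROM Customers WHERE Name = @p',
        N'@p nvarchar(50)',
        @p = @name;

From client code, the equivalent is adding parameters to the ADO Command object or the ADO.NET SqlCommand rather than concatenating strings.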
DO always test with large databases

It's a common pattern: programmers develop against a small test database while end users work with large ones. This is an error: disk space is cheap, and performance problems will only be noticed when it's too late.

DON'T import bulk data with INSERT

Unless strictly necessary. Use DTS or the BCP utility instead, and you'll have a solution that is both flexible and fast.

DO beware of timeouts

When querying a database, the default timeout is often low, such as 15 or 30 seconds. Remember that report queries may run longer than this, especially as your database grows.

DON'T ignore simultaneous editing

Sometimes two users will edit the same record at the same time. On write, the last writer wins and some of the updates are lost. It's easy to detect this situation: create a timestamp column and check it before you write. If possible, merge the changes; if there is a conflict, prompt the user for a decision. (A sketch of this pattern closes the article.)

DON'T do SELECT max(ID) from Master when inserting into a Detail table

This is another common mistake, and it will fail when two users are inserting data at the same time. Use one of SCOPE_IDENTITY, IDENT_CURRENT or @@IDENTITY instead, and avoid @@IDENTITY if possible, because it can introduce some nasty bugs when triggers are involved. (See the closing sketch.)

DO avoid NULLable columns

When possible. NULLable columns consume an extra byte per NULLable column in each row and carry extra overhead when querying data. The DAL will be harder to code, too, because you'll need an extra check every time you access such a column. I'm not saying that NULLs are the evil incarnate, as some people claim. They can have good uses and can simplify coding when "missing data" is part of your business rules. But sometimes NULLable columns are used in situations like this:

CustomerName1, CustomerAddress1, CustomerEmail1, CustomerName2, CustomerAddress2, CustomerEmail2, CustomerName3, CustomerAddress3, CustomerEmail3

This is horrible. Please don't do it: normalize your table. The result will be more flexible and faster, and will have fewer NULLable columns.

DON'T use the TEXT datatype

Unless you are using it for really large data. The TEXT datatype is inflexible to query, is slow, and wastes a lot of space if used incorrectly. A VARCHAR will often handle your data better.

DON'T use temporary tables

Unless strictly necessary. Often a subquery can substitute for a temporary table. Temporary tables add overhead, and they will give you a big headache when programming under COM+, because it uses a database connection pool and temporary tables can last forever. In SQL Server 2000 there are alternatives, like the TABLE data type, which can provide in-memory storage for small tables inside stored procedures.

DO learn how to read a query execution plan

The SQL Server Query Analyzer is your friend: through it you'll learn a lot about how the engine works and how query and index design affect performance.

DO use referential integrity

This can be a great time saver. Define all your keys, unique constraints and foreign keys. Every validation you create on the server will save you time in the future.

Conclusion

As I said before, this is by no means a complete SQL Server performance and best practices guide; that would take an entire book to cover. But I really believe this is a good start, and if you follow these practices, you will surely have much less trouble in the future.
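To close with something concrete, here is a minimal sketch tying together two of the tips above: retrieving identity values without SELECT max(ID), and detecting simultaneous edits with a timestamp (rowversion) column. All table and column names (Master, Detail, MasterID, RowVer) are hypothetical; the pattern, not the schema, is the point:

    -- Getting the new key: never SELECT max(ID) from Master.
    INSERT INTO Master (Name) VALUES (N'New master row');
    DECLARE @masterID int;
    SET @masterID = SCOPE_IDENTITY();  -- safe under concurrent inserts
    INSERT INTO Detail (MasterID, Qty) VALUES (@masterID, 10);

    -- Detecting simultaneous edits with a timestamp column.
    -- @oldVer is assumed to hold the RowVer value read when the record
    -- was first loaded; here it is left unset, so the sketch compiles
    -- but the UPDATE simply matches nothing.
    DECLARE @oldVer binary(8);
    UPDATE Master
    SET Name = N'Edited name'
    WHERE MasterID = @masterID AND RowVer = @oldVer;
    IF @@ROWCOUNT = 0
        PRINT 'Conflict: another user changed this row; reload and merge.';

If the UPDATE touches zero rows, someone else wrote first: reload the record, merge the changes if possible, or prompt the user for a decision.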