Stephanie wrote in with two questions:
- Is there a gain in having two LDF (log) files for one database? My opinion is no, because log file access is sequential, so no performance gain can be had, and it complicates maintenance.
- Is there a gain in separating the data and indexes onto two separate filegroups? My opinion is no, because the way SQL Server handles indexes (as opposed to Oracle) means splitting them from the data doesn’t help.
First things first, multiple log files - I agree that you don’t want two LDF files for a single database unless there are special circumstances. At one shop, we had a pre-grown, full-size log file for each database (with autogrow turned off), but we kept an “emergency” log file for each database on an empty drive. The “emergency” log files were 1 MB each and set to autogrow. If a database completely ran out of log space in its normal pre-grown log file, it would start using the “emergency” log file and growing it. That was better than the database grinding to a halt, and it gave us enough time to kill transactions and roll things back.
That certainly wasn’t a best practice, and it didn’t buy us any performance, but we had to do it based on the unpredictable nature of those applications and the way they would sometimes load huge amounts of data in a single transaction. In that case, multiple LDF files served a purpose. (When our Microsoft contact found out what we were doing, they hit the roof and told us to just go buy enough disk to be done with it, and the business guys finally decided to shell out the dough.)
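If you’re curious what that setup looks like, here’s a minimal sketch - the database name, drive letters, and growth sizes are all made up for illustration:

```sql
-- Hypothetical example: add a tiny "emergency" log file on a separate,
-- mostly-empty drive. Names, paths, and sizes are illustrative only.
ALTER DATABASE SalesDB
ADD LOG FILE
(
    NAME = SalesDB_EmergencyLog,
    FILENAME = 'E:\EmergencyLogs\SalesDB_EmergencyLog.ldf',
    SIZE = 1MB,
    MAXSIZE = UNLIMITED,
    FILEGROWTH = 256MB   -- autogrow on, so it can expand in a pinch
);

-- The normal, pre-grown log file keeps autogrow turned off:
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_log, FILEGROWTH = 0);
```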
Adding a second file adds complexity, like you said, and the same holds true whether it’s a log file or a data file - which segues into your second question. Adding a second data file for indexes adds more design overhead: suddenly the DBA has to pay attention to where they’re placing objects.
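To make that overhead concrete, here’s roughly what the split involves - again, hypothetical names and paths:

```sql
-- Hypothetical example: a dedicated filegroup for indexes.
ALTER DATABASE SalesDB ADD FILEGROUP [INDEXES];

ALTER DATABASE SalesDB
ADD FILE
(
    NAME = SalesDB_Indexes1,
    FILENAME = 'F:\Data\SalesDB_Indexes1.ndf',
    SIZE = 10GB
)
TO FILEGROUP [INDEXES];

-- Now every index has to be created ON the right filegroup.
-- Leave off the final ON clause and the index lands on the
-- table's own filegroup instead.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    ON [INDEXES];
```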
In the days of my youth (ahh, for the sweet days of 2004), I liked this idea for my 1 TB data warehouse because I could rebuild indexes faster with less impact on the end users. Now that I’m old (okay, maybe just older) and SQL Server 2005 Enterprise Edition lets us rebuild most indexes online, I don’t bother with that anymore.
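That online rebuild, by the way, is a one-liner. A minimal sketch, reusing the hypothetical index from above (ONLINE = ON requires Enterprise Edition in 2005, and indexes containing LOB columns still have to be rebuilt offline):

```sql
-- Rebuild an index while keeping it available for reads and writes.
ALTER INDEX IX_Orders_CustomerID
    ON dbo.Orders
    REBUILD WITH (ONLINE = ON);
```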
Plus, once I started managing storage area networks, I found another reason to skip the split. Storage is getting faster and faster every year, but DBAs aren’t getting any more hours in their day. Keeping data and indexes on different filegroups requires careful attention during development. All it takes is one knucklehead adding an index without paying attention, and bam, you’ve got indexes on the data filegroup. That means introducing a new task into your daily/weekly/monthly routine: running a query to identify which objects are on which filegroups.
What a pain.
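If you do get stuck with that chore, the audit itself is just a catalog query - something along these lines, using the SQL Server 2005+ catalog views:

```sql
-- Which indexes live on which filegroups?
SELECT  o.name  AS table_name,
        i.name  AS index_name,
        ds.name AS filegroup_name
FROM    sys.indexes AS i
JOIN    sys.objects AS o
        ON o.object_id = i.object_id
JOIN    sys.data_spaces AS ds
        ON ds.data_space_id = i.data_space_id
WHERE   o.is_ms_shipped = 0   -- skip system objects
ORDER BY ds.name, o.name, i.name;
```

Running that isn’t hard - remembering to run it, week in and week out, is the part that wears on you.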
I’ve struggled with finding and hiring good DBAs, and if I have a choice between making my database 5% faster or making my DBAs 5% faster, I choose the latter option. I can get 5% more storage performance by adding a few more drive spindles, but it’s harder - and more expensive - to get 5% more good DBA time.
On the other hand, if I was in a shop where my DBAs had plenty of time (or worked for peanuts, whatever), then I would take a sandbox system, set it up the same way as production, and do performance benchmarking to see how much of a difference a separate index filegroup made on my exact hardware, storage, and application.
You’re asking me, though, so I bet you don’t have the time to do that research either. In that case, keep it simple!