Shrinking the DB file after externalisation

Nov 10, 2011 at 9:59 AM

We have a large existing SP content DB that has now been externalised to our blob storage platform.

The problem is that the DB files remain large. Clearly we should now perform a DBCC SHRINKDATABASE or a DBCC SHRINKFILE; however, this MS whitepaper suggests that we cannot use the TRUNCATEONLY option. This means that we cannot release the freed space to the OS.

Is there a recommended alternative? Do we just do it and pray? For what reason is the TRUNCATEONLY option "not recommended" for SHRINKDATABASE and "not supported" for SHRINKFILE?
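
For reference, the operation in question would look something like this (a sketch only; the database and logical file names are placeholders for your own environment):

    USE [WSS_Content]
    GO
    -- TRUNCATEONLY releases free space at the end of the file to the OS
    -- without moving any pages (so it does not fragment indexes).
    -- 'WSS_Content' here is a placeholder logical file name; check yours with:
    --   SELECT name FROM sys.database_files;
    DBCC SHRINKFILE (N'WSS_Content', TRUNCATEONLY)
    GO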

Nov 10, 2011 at 2:06 PM
Edited Nov 10, 2011 at 2:06 PM

DBCC SHRINKDATABASE is clearly not something that we should be doing on a regular basis.  However, there are times when its use is warranted, and this is one of those times.

Most likely you have a single database MDF file.  If this is the case, then the most efficient way to reduce your database file size while optimizing performance would be this:

  1. Add 2 additional data files to the database file group.  Pre-size each one to half of the total USED space in your database.  (The USED space should be much smaller than the total database storage size now that you've externalized.)
  2. Use the SQL Mgmt Studio UI to Shrink the primary database (mdf) file.  Use the "Empty file by migrating the data to the other files in the same filegroup" option.  Or if you prefer SQL Script, execute this script:

    USE [DatabaseName]
    GO
    -- Empty the primary data file by migrating its contents to the
    -- other files in the same filegroup. The logical file name often
    -- matches the database name, but verify it first with:
    --   SELECT name FROM sys.database_files;
    DBCC SHRINKFILE (N'DatabaseName', EMPTYFILE)
    GO

    Either way, this operation could still take a long time to complete; how quickly it finishes depends largely on the number of rows in your database.

    NOTE: You will likely get an error that says "Cannot move all contents to other places to complete the emptyfile operation".  This is because the MDF file can't be completely emptied.  But most of the data will have been moved, and the file will be almost entirely free space.
  3. You'll then need to go back through the UI and run one more Shrink on the primary MDF file.  Just select "Release unused space" and click "OK".  The operation will be very quick, and the MDF file will now be very small.
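
If you prefer to script steps 1 and 3 as well, a sketch would look something like this (the file names, paths, and sizes are all placeholders you'd adjust for your environment):

    USE [master]
    GO
    -- Step 1 (sketch): add two NDF files, each pre-sized to roughly half
    -- of the USED space in the database.
    ALTER DATABASE [DatabaseName]
    ADD FILE (NAME = N'DatabaseName_1',
              FILENAME = N'D:\Data\DatabaseName_1.ndf',
              SIZE = 10GB, FILEGROWTH = 1GB);
    ALTER DATABASE [DatabaseName]
    ADD FILE (NAME = N'DatabaseName_2',
              FILENAME = N'D:\Data\DatabaseName_2.ndf',
              SIZE = 10GB, FILEGROWTH = 1GB);
    GO
    -- Step 3 (sketch): after the EMPTYFILE operation completes, release
    -- the unused space in the nearly empty MDF back to the OS. This is
    -- what the "Release unused space" UI option does.
    USE [DatabaseName]
    GO
    DBCC SHRINKFILE (N'DatabaseName', TRUNCATEONLY)
    GO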

After the process is complete, you will have a very small MDF file and 2 evenly loaded NDF files in your file group.  It is also wise to kick off a database maintenance job (or run the SP2010 health rule) to defragment your indexes and update statistics.
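
That maintenance step can also be scripted; a minimal sketch (note that sp_MSforeachtable is an undocumented but commonly used procedure, and in practice you'd target only fragmented indexes via sys.dm_db_index_physical_stats rather than rebuilding everything):

    USE [DatabaseName]
    GO
    -- Sketch: rebuild all indexes on every user table, then refresh statistics.
    EXEC sp_MSforeachtable N'ALTER INDEX ALL ON ? REBUILD';
    GO
    EXEC sp_updatestats;
    GO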

In review, there are very few scenarios where you want to shrink a database, because of the tremendous fragmentation that can occur.  This is one of those times.  But this method does not perform a standard shrink, which reorganizes and compresses content (and fragments indexes).  Instead, it redistributes the content into new files, which will probably also help reduce I/O contention on your database.

There is a bit more background on some of these concepts mentioned in a SharePoint 2007 whitepaper I wrote which can be found here:

Russ Houberg
SharePoint 2007/2010 MCM

Nov 14, 2011 at 9:49 AM

Thanks Russ,

A very useful answer. The next problem then comes when we try to explain this to our internal & external customers, some of whom will only go by MS canon... Without supporting whitepapers / technet articles etc. it is a hard sell. After a search (and a read through your whitepaper's references) I cannot find any MS documentation that supports your recommendation (although I do agree it's a good solution). Have I overlooked something? If not, how do we go about getting MS to update their literature?