A Faster Alternative to sp_helpdb

Sometimes the hidden things we find are very good: Easter eggs in video games, the unexpected $20 bill in the dryer, or even the pot of gold at the end of the rainbow.  And then there are the times we find something hidden that isn't as good as we would like.  This is where the hidden processing in the sp_helpdb system stored procedure lands.

On the surface sp_helpdb looks great, and for the most part it is.  It returns some great information with one line of code.  However, when you look at the actual execution plan, things start to get ugly.

Let's first take a look at the data sp_helpdb returns.  It includes some pretty useful information, such as the database name, when it was created, and the owner.  For me, one of the most important data points it returns is the database size.  Although you don't see it in the image below, sp_helpdb also returns the compatibility level.
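
For reference, sp_helpdb can be run with no parameter to return one row per database on the instance, or with a database name to return the details (including the file layout) for just that database:

EXEC sp_helpdb;

EXEC sp_helpdb 'master';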

All columns return a single data point except for the status column, which packs in a number of important data points.  The text below is an example of what you might find in this column.

Status=ONLINE, Updateability=READ_WRITE, UserAccess=MULTI_USER, Recovery=SIMPLE, Version=904, Collation=SQL_Latin1_General_CP1_CI_AS, SQLSortOrder=52, IsAutoCreateStatistics, IsAutoUpdateStatistics, IsFullTextEnabled
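
As a quick aside, most of these individual data points can also be pulled one at a time with DATABASEPROPERTYEX.  Here is a minimal sketch, using master as the example database:

SELECT DATABASEPROPERTYEX('master', 'Status')        AS [Status]
     , DATABASEPROPERTYEX('master', 'Updateability') AS [Updateability]
     , DATABASEPROPERTYEX('master', 'UserAccess')    AS [UserAccess]
     , DATABASEPROPERTYEX('master', 'Recovery')      AS [Recovery]
     , DATABASEPROPERTYEX('master', 'Collation')     AS [Collation]
     , DATABASEPROPERTYEX('master', 'Version')       AS [Version];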

I was curious how long this process would take, so I ran the query below to capture its duration.  The temporary table is needed because the GO just before EXEC sp_helpdb ends the batch, which would wipe out any local variables declared earlier.

CREATE TABLE #variables
(
StartDate DateTime
)
GO
INSERT INTO #variables
SELECT GETDATE();

GO
EXEC sp_helpdb

DECLARE @t2 DATETIME;
SET @t2 = GETDATE();
DECLARE @t1 DATETIME;

SELECT @t1 = StartDate FROM #variables

SELECT DATEDIFF(MILLISECOND, @t1, @t2) AS 'TotalTime';

DROP TABLE #variables

When I run this on my computer, it usually takes between 500 and 1,000 milliseconds. More on this later.
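
If you would rather not juggle the temporary table, another simple option is SET STATISTICS TIME; this is just a rough sketch, and the elapsed times are reported per statement in the Messages tab:

SET STATISTICS TIME ON;
EXEC sp_helpdb;
SET STATISTICS TIME OFF;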

Now let's take a look at what is happening behind the scenes with sp_helpdb. The first step is to populate a temporary table, #spdbdesc, with the database name, owner, creation date, database ID, and compatibility level.  The code for the first step is below.

if not exists (select * from master.dbo.sysdatabases
where (@dbname is null or name = @dbname))
insert into #spdbdesc (dbname, owner, created, dbid, cmptlevel)
select name, isnull(suser_sname(sid),'~~UNKNOWN~~'), convert(nvarchar(11), crdate),
dbid, cmptlevel from master.dbo.sysdatabases
where (@dbname is null or name = @dbname)
fetch ms_crs_c1 into @name

Then you will see a bunch of statements similar to the one below.  This step calculates the size of each database and updates the temporary table.

update #spdbdesc
/*
** 8 KB pages is 128 per MB. If we ever change page size, this
** will be variable by DB or file or filegroup in some manner
** unforseeable now so just hard code it.
*/
set dbsize = (select str(sum(convert(dec(17,2),size)) / 128,10,2)
+ N' MB' from [master].dbo.sysfiles)
WHERE current of ms_crs_c1
fetch ms_crs_c1 into @name

Taking a more in-depth look at the execution plan, there are a few things that could be contributing to the performance of this stored procedure.  You will see an update of the temporary table and then a FETCH from the cursor for the next database.  With the FETCH you will also see an index scan on the temporary table.  The combination of multiple index scans and the cursor contributes to the less-than-ideal performance of this stored procedure.

The final step is a SELECT from the temporary table, #spdbdesc.  Notice that this is the 111th query in the execution plan, even though there are only 32 databases on the server.
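
If you want to dig into this yourself, the procedure's source, including the ms_crs_c1 cursor and the #spdbdesc temporary table, can be displayed with sp_helptext:

EXEC sp_helptext 'sp_helpdb';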

Now let's take a look at an alternative to sp_helpdb.  This alternative utilizes the sys.databases and sys.master_files catalog views along with a number of server and database properties.  Below is a comparison of sp_helpdb and the alternative.  The alternative was consistently and significantly faster, up to 88% faster in my tests.  Below you will find the results of a few tests on different servers, each with a different number of databases.  The times are in milliseconds.

Below is the code for the alternative.  As with the sp_helpdb test, I included the timing code so you can capture the duration of the query; since there is no GO in this batch, simple variables are enough.  This query returns all the data the stored procedure returns, as well as a number of additional data points, and it eliminates the cursor and the row-by-row updates that sp_helpdb performs.

DECLARE @t1 DATETIME;
DECLARE @t2 DATETIME;

SET @t1 = GETDATE();

CREATE TABLE #DBSize
( DatabaseName VARCHAR(200)
, Size BIGINT)

-- Gather the total size of each database in MB (size is stored in 8 KB pages)
INSERT INTO #DBSize
SELECT d1.name
, SUM(CAST(m.size AS BIGINT)) * 8 / 1024 AS 'Total disk space'
FROM sys.databases d1
INNER JOIN sys.master_files m
ON d1.database_id = m.database_id
GROUP BY d1.name
ORDER BY d1.name

-- One row per database: size from #DBSize, owner from syslogins, plus assorted database properties
SELECT CONVERT(VARCHAR(50),d.name) AS 'Name'
, s.size AS 'DatabaseSize(MB)'
, d.create_date
, d.compatibility_level
, CASE WHEN d.is_auto_create_stats_on = 1 THEN 'True' ELSE 'False' END AS 'AutoStatsOn'
, CASE WHEN d.is_auto_update_stats_on = 1 THEN 'True' ELSE 'False' END AS 'AutoUpdateStatsOn'
, b.name AS 'DBOwner'
, CASE WHEN d.state = 0 THEN 'ONLINE'
WHEN d.state = 1 THEN 'RESTORING'
WHEN d.state = 2 THEN 'RECOVERING'
WHEN d.state = 3 THEN 'RECOVERY_PENDING'
WHEN d.state = 4 THEN 'SUSPECT'
WHEN d.state = 5 THEN 'EMERGENCY'
WHEN d.state = 6 THEN 'OFFLINE'
WHEN d.state = 7 THEN 'COPYING'
WHEN d.state = 10 THEN 'OFFLINE_SECONDARY'
ELSE 'Unknown State'
END AS 'State'
, SERVERPROPERTY('ProductMajorVersion') AS 'ProductMajorVersion'
, ISNULL(DB_NAME(d.source_database_id), 'Not A Snapshot') AS 'SourceDBName'
, d.collation_name
, d.user_access_desc
, CASE WHEN d.is_read_only = 1 THEN 'True' ELSE 'False' END AS 'IsReadOnly'
, CASE WHEN d.is_auto_close_on = 1 THEN 'True' ELSE 'False' END AS 'IsAutoCloseOn'
, CASE WHEN d.is_auto_shrink_on = 1 THEN 'True' ELSE 'False' END AS 'IsAutoShrinkOn'
, d.state_desc
, DATABASEPROPERTYEX(d.name, 'Recovery') AS 'RecoveryModel'
, d.log_reuse_wait_desc
, d.containment_desc --This column will need to be removed for older versions.
, d.delayed_durability_desc
, CASE WHEN d.is_memory_optimized_enabled = 1 THEN 'True'
ELSE 'False' END AS 'IsMemoryOptimizedEnabled' --This column will need to be removed for older versions.
, DATABASEPROPERTYEX(d.name, 'Updateability') AS 'UpdateAbility'
, DATABASEPROPERTYEX(d.name, 'SQLSortOrder') AS 'SQLSortOrder'
, CASE WHEN DATABASEPROPERTYEX(d.name, 'IsFulltextEnabled') = 1 THEN 'True' ELSE 'False' END AS 'IsFulltextEnabled'
, DATABASEPROPERTYEX(d.name, 'Version') AS 'Version'
FROM sys.databases d
INNER JOIN sys.syslogins b
ON d.owner_sid = b.sid
INNER JOIN #DBSize s
ON d.name = s.DatabaseName

DROP TABLE #DBSize

SET @t2 = GETDATE();
SELECT DATEDIFF(millisecond,@t1,@t2) AS elapsed_ms;
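
As a side note, if all you need is the name and size, even the #DBSize temporary table could be dropped in favor of a correlated subquery against sys.master_files.  This is just a minimal sketch of that idea, not part of the timed comparison above:

-- Size per database computed inline; size is stored in 8 KB pages, so * 8 / 1024 converts to MB
SELECT d.name AS 'Name'
, (SELECT SUM(CAST(m.size AS BIGINT)) * 8 / 1024
   FROM sys.master_files m
   WHERE m.database_id = d.database_id) AS 'DatabaseSize(MB)'
FROM sys.databases d
ORDER BY d.name;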

While the raw time saved here is small, I think the lessons learned are really the key.  Even though this post was about a system stored procedure, the same thought process can be applied to any query.  As far as troubleshooting goes, these are only a few of the important steps.

      1. Look for hidden processing. In this case I looked at the execution plan of sp_helpdb and found the cursor, mostly because I was curious.  My first thought was that there has to be a better way.
      2. Many times, cursors can and should be avoided. I knew there had to be a better way using T-SQL; I just needed to find it.  We should always be looking for ways to improve our code.
      3. Review the execution plans for problem queries; you never know what you might find.  This applies even to system stored procedures sometimes.  Of course, this is how I found the potential issues.

Performance tuning is a critical responsibility of the DBA.  Years ago a student told me that milliseconds don't matter, and of course the entire class kind of shook their heads after that statement.  In reality, they do matter. A few milliseconds saved on a small query can add up to a really noticeable improvement for larger queries, or for queries that run thousands of times a day.

Thanks for visiting my blog!!!

100th Blog Post: What I learned along the Way

I have now been blogging about SQL Server for almost 18 months.  Along the way I have learned a great deal and, I hope, helped a few folks as well.  From time to time I go back and look at older posts, and when I did that recently I realized that I had just published my 100th blog post.

I learned so many things along the way.  Of course this grew my SQL Server skill set, along with a few other skills.  Below are my lessons learned from the past 100 blog posts.

    • There is always more to learn – While writing each of these blog posts, I realized that there is so much more to learn. Given the vast landscape that is SQL Server, it is very easy to be humbled in this way.  Oddly enough, I have the same feeling when I attend a SQL Saturday event.  There are so many great speakers that just sitting in the speaker room is a great time to learn. The more I learn, the more excited I get about learning more.
    • It is important to keep up with what the industry is doing. It is very easy to get comfortable and use just what we know.  DBAs are no different from any other IT professional in the sense that we have a ton of things we are working on at the same time.  Blogging and presenting help me keep current, in addition to seeing what sessions are offered at SQL Saturday events.  I personally think the sessions offered at SQL Saturdays are a good indication of where the industry is headed.
    • It is great to be part of the SQL Server community.  As I wrote sessions and blog posts, I naturally researched the topic at hand.  While doing so, I realized just how giving the SQL community really is.  There are so many great blogs and articles written by other members of the community, some of which I visit on a regular basis, especially when researching a new post.
    • Face challenges head on.  This is something I have always tried to do in my career.  By facing challenges head on, things can never get boring!  This brings up another great part of the SQL community: you are never alone!!!  There is always someone willing to take a few moments to help.  Just use the #SQLHelp hashtag on Twitter and you will see how many folks are willing to help.  There are many posts that I have written and never published.  Many times these were topics I wanted to learn, and I decided that writing a blog post was a good way to force myself to learn them.
    • Embrace change – Change in the technology industry is a constant.  As new versions of SQL Server come out, things will change!  There are usually some great new features that should be reviewed and, if appropriate, utilized in production.  While we all resist change at some point, embracing it makes life just a little easier and allows us to grow our skill set.  Blogging has really helped me embrace change by exposing me to the advantages of newer technology.
    • Blog on what you want!  This is some advice I received very early in my blogging career.  It came out of a conversation I was having with another presenter at the SQL Saturday in Wausau, WI.  I had expressed my concern about blogging on topics that so many others had already covered.  He said “Blog what you want to blog about”.  He suggested that I might say something in a completely different way, and it might be the way that clicks with a reader like no other blog post has.  So that is what I did: blogged about what I wanted.

If you are thinking about starting a blog, my suggestion is to do it.  It is a great way to learn and help others at the same time.  There are many great blogs in the SQL community that would be great role models.

Here are a few that I find myself visiting quite often.

Pinal Dave – https://blog.sqlauthority.com

Monica Rathbun – https://sqlespresso.com/

Kevin Feasel – curatedsql.com

www.sqlshack.com

www.sqlskills.com

www.sqlperformance.com

Kendra Little – www.littlekendra.com

Bert Wagner – This is more of a video blog: https://www.youtube.com/channel/UC6uYLOg8N_NzjkikbJ_vrtg

Denny Cherry – https://www.dcac.co/author/admin

David Klee – https://www.davidklee.net/blog/

Brent Ozar – www.brentozar.com

www.sqlservercentral.com – There are a number of great blogs here.

There are many great blogs in the SQL community, and the ones above are just some of them.  I am honored to have my blog selected as one of the Top 50 SQL Server blogs (#27) at https://blog.feedspot.com/sql_blogs/.  That list is a great place to discover many more great blogs!

My only hope is that with each blog post, I am able to help at least one person.   While I really enjoy blogging, helping others is something I enjoy even more.  If each post helps a single person…then it was worth the time to write it.

Thanks for visiting my blog!!!