SSRS and SSMS Part 2

This is a blog post about some additions I made to the reports I covered in an earlier blog post.

That blog post can be found here:  https://davebland.com/sql-server-assessment-using-ssms-and-ssrs

I added a number of new reports as well as monitoring functionality.  These reports can be found by following the link below.

Reports and Monitoring

While this is short, I will be adding more detail in the next few days.  I wanted to get this uploaded quickly for the folks that attended my session at SQL Saturday Cleveland.  Speaking there was so much fun, it has been way too long!!!

Summary of 2020

As everyone knows, 2020 was a tough year.  However, there were still many good things that happened, although sometimes it was a bit more challenging to remember them.  I thought I would do a simple review of my year.

Here are the numbers:

C# Corner Blog Posts – 7

C# Corner Page Views – 19,000

Blog Posts – 23

Blog Page Views – 89,493

Total Page views – 108,483

SSRS Reports created to simplify gathering information about a server – 139

Speaking Engagements – 17

6 SQL Saturdays

10 User Groups, including 1 PASS virtual group

1 Idera Geek Sync

1st time presenting at 3 SQL Saturdays, Nashville, Chattanooga and Albany.

2 new sessions added to my library of presentations.

1 Case of COVID defeated!!!!!!

During 2020 I did not speak as often as I have in past years.  COVID-19 had much to do with that.  Not only were many speaking opportunities cancelled, I had a bout with COVID myself that prevented me from speaking for a few months.  In addition to speaking less than I would have liked, I also did not blog as much as I did the previous year.  I completed 23 blog posts this year, which is down from over 60 the year before; however, I feel that is still a pretty good amount.  I would like to blame COVID for the reduced number of blog posts as well, but I can’t.  In isolation, I chose to spend more time with my family and away from the computer.  I even found a campground in southern Wisconsin where my personal and work phones do not work!!!  I was very excited and hope to make more trips there this year.

While many of the items in the list are, in my mind, good things, there is one very large bad thing that isn’t listed above.  That is the demise of PASS.  PASS has been a great organization for me personally, as well as for many other folks.  The amount of opportunity it presented for people to learn and grow professionally was simply amazing.  In addition to all the learning, I had the opportunity to meet many great people along the way, which to me is the best part.  As PASS winds down, I am very encouraged by how the SQL Server community is working together to find a way for many of the learning opportunities to continue.  Once the dust has settled, I think the SQL community will be stronger, and I look forward to seeing how things evolve over the coming year.

During the year I also saw a number of well-deserving SQLFamily members become first-time MVPs.  It was great to see their hard work and dedication to the community be rewarded.  The community would not be as strong as it is without them.

Speaking of the SQLFamily, I was also very encouraged by the dedication of many folks to improving diversity and inclusion in the SQL community.  While we are not perfect, I think the best part was seeing folks recognizing where we fell short and taking steps to improve.  A community that takes steps to help people feel included is a community I want to continue to be part of.

As I battled COVID for the last two and a half months of 2020, I was constantly reminded of how great the SQL community is.  While I was in the hospital recovering, I received many tweets and emails of encouragement from members of the SQL community.  COVID is a very lonely illness and these small gestures meant so much to me.  I am very thankful to all that sent messages, especially David Klee, Rafael Colon and Pat Phalen.  In my first presentation back, my voice was still not what it should be, and the organizers of the event could not have been more understanding.  They said that if I got too tired and couldn’t speak anymore they would wrap it up early.  I was determined to make it to the end, but knowing that the organizers were willing to end the event early just shows how strong this community is.  I did make it to the end, but needed a long nap after!!!

In the end, I think the SQL community will be stronger and the opportunities for ALL will continue.  I feel very fortunate to be a part of this community.  While my contributions were small compared to some, I do hope that somewhere along the way, I helped out a few folks.


A Faster Alternative to sp_helpdb

Sometimes the hidden things we find are very good: Easter eggs in video games, the unexpected $20 bill in the dryer, or even the pot of gold at the end of the rainbow.  And then there are the times we find something hidden that isn’t as good as we would like.  This is where the hidden processing in the sp_helpdb system stored procedure lands.

On the surface sp_helpdb looks great, and for the most part it is.  It returns some great information with one line of code.  However, when looking at the actual execution plan, this is where it starts to get ugly.

Let’s first take a look at the data sp_helpdb does return.  It returns some pretty useful information, such as the database name, when it was created and the owner.  For me, one of the most important data points it returns is the database size.  Although you don’t see it in the image below, sp_helpdb also returns the compatibility level.
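
For reference, sp_helpdb can be run for the whole instance or, with its optional @dbname parameter, for a single database.  The database name below is just the sample database used elsewhere on this blog.

-- All databases on the instance
EXEC sp_helpdb;

-- A single database (optional @dbname parameter)
EXEC sp_helpdb 'AdventureWorks2014';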

All columns return a single data point except for the status column.  This column returns a number of important data points.  The text below is an example of what you might find in this column.

Status=ONLINE, Updateability=READ_WRITE, UserAccess=MULTI_USER, Recovery=SIMPLE, Version=904, Collation=SQL_Latin1_General_CP1_CI_AS, SQLSortOrder=52, IsAutoCreateStatistics, IsAutoUpdateStatistics, IsFullTextEnabled

I was curious how long it would take to run this process.  To get the duration I ran the query below.  The reason for the temporary table is the GO just before the EXEC sp_helpdb; GO ends the batch, so a local variable would have gone out of scope.

CREATE TABLE #variables
(
StartDate DATETIME
);
GO

INSERT INTO #variables
SELECT GETDATE();
GO

EXEC sp_helpdb;

DECLARE @t2 DATETIME;
SET @t2 = GETDATE();
DECLARE @t1 DATETIME;

SELECT @t1 = StartDate FROM #variables;

SELECT DATEDIFF(MILLISECOND, @t1, @t2) AS 'TotalTime';

DROP TABLE #variables;

When I run this on my computer, it usually takes between 500 and 1,000 milliseconds.  More on this later.
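
As a side note, if you only need a rough duration and don’t mind reading the Messages tab, SET STATISTICS TIME avoids the temporary table entirely:

SET STATISTICS TIME ON;
GO
EXEC sp_helpdb;
GO
SET STATISTICS TIME OFF;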

Now let’s take a look at what is happening behind the scenes with sp_helpdb.  The first step is to populate a temporary table, #spdbdesc, with the database name, owner, creation date, database ID and compatibility level.  The code for the first step is below.

if not exists (select * from master.dbo.sysdatabases
where (@dbname is null or name = @dbname))
insert into #spdbdesc (dbname, owner, created, dbid, cmptlevel)
select name, isnull(suser_sname(sid),'~~UNKNOWN~~'), convert(nvarchar(11), crdate),
dbid, cmptlevel from master.dbo.sysdatabases
where (@dbname is null or name = @dbname)
fetch ms_crs_c1 into @name

Then you will see a bunch of statements similar to below.  This step calculates the size of the database and completes an update of the temporary table.

update #spdbdesc
/*
** 8 KB pages is 128 per MB. If we ever change page size, this
** will be variable by DB or file or filegroup in some manner
** unforseeable now so just hard code it.
*/
set dbsize = (select str(sum(convert(dec(17,2),size)) / 128,10,2)
+ N' MB' from [master].dbo.sysfiles)
WHERE current of ms_crs_c1
fetch ms_crs_c1 into @name

With a more in-depth look at the execution plan, there are a few things that could be leading to the poor performance of this stored procedure.  You will see an update of the temporary table and then a FETCH from the cursor for the next database.  With the FETCH you will also see an index scan on the temporary table.  The combination of multiple index scans and the cursor contributes to the less than ideal performance of this stored procedure.

The final step is a SELECT from the temporary table, #spdbdesc.  Notice that this is the 111th query in the execution plan, even though there are only 32 databases on the server.

Now let’s take a look at an alternative to sp_helpdb.  This alternative utilizes the sys.databases and sys.master_files catalog views and a number of server properties.  Below you will see a comparison of sp_helpdb and the alternative.  Consistently the alternative performed significantly faster, up to 88% faster.  Below you will find the data from a few tests on different servers, each with a different number of databases.  The times are in milliseconds.

Below is the code for the alternative.  Like the timing harness for sp_helpdb, I included the code to capture the duration of the query.  This query returns all the data the stored procedure returns, as well as a number of additional data points.  Most importantly, it removes the need for the cursor.

DECLARE @t1 DATETIME;
DECLARE @t2 DATETIME;

SET @t1 = GETDATE();

CREATE TABLE #DBSize
( DatabaseName VARCHAR(200)
, Size BIGINT);

INSERT INTO #DBSize
SELECT d1.name
, CONVERT(VARCHAR, SUM(m.size) * 8 / 1024) AS 'Total disk space'
FROM sys.databases d1
INNER JOIN sys.master_files m
ON d1.database_id = m.database_id
GROUP BY d1.name
ORDER BY d1.name;

SELECT CONVERT(VARCHAR(50), d.name) AS 'Name'
, s.Size AS 'DatabaseSize(MB)'
, d.create_date
, d.compatibility_level
, CASE WHEN d.is_auto_create_stats_on = 1 THEN 'True' ELSE 'False' END AS 'AutoStatsOn'
, CASE WHEN d.is_auto_update_stats_on = 1 THEN 'True' ELSE 'False' END AS 'AutoUpdateStatsOn'
, b.name AS 'DBOwner'
, CASE WHEN d.state = 0 THEN 'ONLINE'
WHEN d.state = 1 THEN 'RESTORING'
WHEN d.state = 2 THEN 'RECOVERING'
WHEN d.state = 3 THEN 'RECOVERY_PENDING'
WHEN d.state = 4 THEN 'SUSPECT'
WHEN d.state = 5 THEN 'EMERGENCY'
WHEN d.state = 6 THEN 'OFFLINE'
WHEN d.state = 7 THEN 'COPYING'
WHEN d.state = 10 THEN 'OFFLINE_SECONDARY'
ELSE 'Unknown State'
END AS 'State'
, SERVERPROPERTY('ProductMajorversion') AS 'ProductMajorVersion'
, ISNULL(DB_NAME(d.source_database_id), 'Not A Snapshot') AS 'SourceDBName'
, collation_name
, user_access_desc
, CASE WHEN is_read_only = 1 THEN 'True' ELSE 'False' END AS 'IsReadOnly'
, CASE WHEN is_auto_close_on = 1 THEN 'True' ELSE 'False' END AS 'IsAutoCloseOn'
, CASE WHEN is_auto_shrink_on = 1 THEN 'True' ELSE 'False' END AS 'IsAutoShrinkOn'
, state_desc
, DATABASEPROPERTYEX(d.name, 'Recovery') AS 'RecoveryModel'
, log_reuse_wait_desc
, containment_desc  -- This column will need to be removed for older versions.
, d.delayed_durability_desc
, CASE WHEN is_memory_optimized_enabled = 1 THEN 'True'
ELSE 'False' END AS 'IsMemoryOptimizedEnabled'  -- This column will need to be removed for older versions.
, DATABASEPROPERTYEX(d.name, 'Updateability') AS 'UpdateAbility'
, DATABASEPROPERTYEX(d.name, 'SQLSortOrder') AS 'SQLSortOrder'
, CASE WHEN DATABASEPROPERTYEX(d.name, 'IsFulltextEnabled') = 1 THEN 'True' ELSE 'False' END AS 'IsFulltextEnabled'
, DATABASEPROPERTYEX(d.name, 'Version') AS 'Version'
FROM sys.databases d
INNER JOIN sys.syslogins b
ON d.owner_sid = b.sid
INNER JOIN #DBSize s
ON d.name = s.DatabaseName;

DROP TABLE #DBSize;

SET @t2 = GETDATE();
SELECT DATEDIFF(MILLISECOND, @t1, @t2) AS 'elapsed_ms';

While the absolute improvement in performance is small, the lessons learned are, I think, really the key.  Even though this post was about a system stored procedure, the same thought process can be applied to any query.  As far as troubleshooting goes, these are only a few of the important steps.

      1. Look for hidden processing.  In this case I looked at the execution plan of sp_helpdb and found the cursor, mostly because I was curious.  My first thought was there has to be a better way.  One quick way to see this hidden processing is shown after this list.
      2. Many times, cursors can and should be avoided.  I knew there had to be a better way using T-SQL, I just needed to find it.  We should always be looking for ways to improve our code.
      3. Review the execution plans for problem queries, you never know what you might find.  This applies even for system stored procedures sometimes.  Of course, this is how I found the potential issues.
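
Since sp_helpdb ships with SQL Server, you can also read its definition directly and see the ms_crs_c1 cursor for yourself.  sp_helptext is one way to do that:

-- Returns the T-SQL definition of sp_helpdb, cursor and all
EXEC sp_helptext 'sp_helpdb';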

Performance tuning is a critical responsibility of the DBA.  I had a student years ago tell me that milliseconds don’t matter.  Of course the entire class kind of shook their head after that statement.  In reality, they do. A few milliseconds saved on a small query can equate to a really noticeable time improvement for larger queries or queries that might run thousands of times a day.
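
To put numbers on that: a query that runs 10,000 times a day and saves just 500 milliseconds per execution saves 10,000 x 0.5 seconds, or roughly 83 minutes of cumulative processing time every day.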

Thanks for visiting my blog!!!


100th Blog Post: What I learned along the Way

I have now been blogging about SQL Server for almost 18 months.  Along the way I have learned a great deal and hopefully helped a few folks as well.  From time to time I will go back and look at older posts.  When I did that recently, I realized that I had just published my 100th blog post.

I learned so many things while writing those posts.  Of course this grew my SQL Server skill set, along with a few other skills.  Below are my lessons learned from the past 100 blog posts.

    • There is always more to learn – While writing each of the blog posts, I realized that there is so much more to learn. Given the vast landscape that is SQL Server, it is very easy to get humbled in this way.  Oddly enough, I have the same feeling when I attend a SQL Saturday event.  There are so many great speakers, just sitting in the speaker room is a great time to learn. The more I learn, the more I get excited about learning more.
    • It is important to keep up with what the industry is doing.  It is very easy to get comfortable and use just what we know.  Also, DBAs are no different than any other IT professional in the sense that we have a ton of things that we are working on at the same time.  Blogging and presenting help me keep current, in addition to seeing what sessions are offered at SQL Saturday events.  I personally think the sessions offered at SQL Saturdays are a good indication of where the industry is headed.
    • It is great to be part of the SQL Server community.  As I wrote sessions and blog posts, of course I researched the topic at hand.  While doing so, I realized how giving the SQL community really is.  There are so many great blogs and articles written by other members of the community, some of which I visit on a regular basis, especially when researching for a new post.
    • Face challenges head on.  This is something I have always tried to do in my career.  By facing challenges head on, things can never get boring!  This brings up another great part of the SQL community: you are never alone!!!  There is always someone willing to take a few moments and help.  Just use the hashtag #SQLHelp on Twitter and you will see how many folks are willing to help.  There are many posts that I have written and never published.  Many times these are topics I wanted to learn, and I decided that writing a blog post was a good way to force myself to learn them.
    • Embrace change.  Change in the technology industry is a constant.  As new versions of SQL Server come out, things will change!  There are usually some great new features that should be reviewed and, if appropriate, utilized in production.  While we all resist change at some point, embracing it makes life just a little easier and allows us to grow our skill set.  Blogging has really helped me embrace change by exposing me to the advantages of newer technology.
    • Blog on what you want!  This is some advice I received very early in my blogging career.  It came out of a conversation I was having with another presenter at the SQL Saturday in Wausau, WI.  I had expressed my concern about blogging about something that so many others had already blogged about.  He said “Blog what you want to blog about”.  He suggested that I might say something in a completely different way, and it might be the way that clicks with a reader like no other blog post has.  So that is what I did, blogged about what I wanted.

If you are thinking about starting a blog, my suggestion is to do it.  It is a great way to learn and help others at the same time.  There are many great blogs in the SQL community that would be great role models.

Here are a few that I find myself visiting quite often.

Pinal Dave – https://blog.sqlauthority.com

Monica Rathbun – https://sqlespresso.com/

Kevin Feasel – curatedsql.com

www.sqlshack.com

www.sqlskills.com

www.sqlperformance.com

Kendra Little – www.littlekendra.com

Bert Wagner – This is more of a video blog, https://www.youtube.com/channel/UC6uYLOg8N_NzjkikbJ_vrtg

Denny Cherry – https://www.dcac.co/author/admin

David Klee – https://www.davidklee.net/blog/

Brent Ozar – www.brentozar.com

www.sqlservercentral.com – There are a number of great blogs here.

There are many great blogs in the SQL community.  Above are just some of them.  I am honored to have my blog selected as one of the Top 50 SQL Server blogs (#27) at https://blog.feedspot.com/sql_blogs/.  This is a great place to see many great blogs!

My only hope is that with each blog post, I am able to help at least one person.   While I really enjoy blogging, helping others is something I enjoy even more.  If each post helps a single person…then it was worth the time to write it.

Thanks for visiting my blog!!!

SQL Server Assessment Using SSMS and SSRS

Over the years I have been asked to complete assessments on SQL Servers many times.  The need for these assessments has varied, from gathering information needed to address a current performance issue, to gathering evidence for an audit, to just wanting to know more about a particular server.

I have found some great information at a number of really amazing blogs, in addition to getting a great deal of information from Microsoft documentation.  I collected this information into several scripts that I used for a while to gather information.  However, I was looking for something just a little bit easier to work with.  With this in mind, I took the scripts and developed a number of SSRS reports that can be opened as Custom Reports in SQL Server Management Studio.

Why did I do this?  Well my initial reason was to help my team get information needed to answer the questions that typically come when there is a performance issue. The questions I typically get are below.

        • Is there blocking?
        • Any deadlocks?
        • Any issues with CPU utilization?
        • Are there any open transactions?
        • What does memory utilization look like?
        • Have there been any recent changes that could be causing the issue?
        • Are any jobs running?
        • What is the state of Index Fragmentation?
        • Any issues with Statistics?

Once I created the initial page I thought, well, more information can be added.  So I just kept creating reports, and before I knew it there were over 100 reports.  Some you may never use, but hopefully some will be very useful.  Creating the reports also gave me a great opportunity to learn.

While my initial motivation was to make my job easier, by sharing these I am hoping they may help other DBA professionals as well.

Reports covering Security Policies were added on July 15, 2020.  You can download the reports here: Custom Reports.

Once you download them, put them on your computer and open them in SSMS.  To do so, find the server or database you are looking to get information on, right click, and go to Custom Report, just as in the image below.

You then navigate to the _LaunchingPage.rdl file.  Once opened, it will detect the version of SQL Server and enable buttons based on that version.  While _LaunchingPage.rdl is the intended starting point, you can also open any of the reports directly.  Just keep in mind not all reports will work on all versions.

In the image below there are four buttons.  The top three will take you to different locations depending on the version.  There are three options, 2012 or newer, 2008r2 or older and Azure SQL Database.

When you click the SQL Server 2012 or newer button, you will be taken to a report that returns a great deal of information that is useful when doing an assessment.  This is very similar to the report you will be taken to if you click the 2008r2 or older button.  Since many of these data points are not available in an Azure SQL DB, that button will take you to the Report Library.

Below are the 21 data points that will be returned in just a few seconds.  A number of these will be helpful when doing an assessment for performance reasons.  The boxes will also change color to yellow or red if issues are found, just like the image above.

      • Date of Last restart of the service and the number of days since.  If more than 60 days, the color will change to either yellow or red.
      • The server name and database name
      • Current CPU utilization
      • CPU utilization for the past 30 minutes
      • Current Page Life Expectancy
      • Percent of memory being used
      • Amount of available memory
      • Drive size and space left for all the drives on the system that have SQL Server data or log files on them
      • Is the server in a windows cluster?
      • Is the server part of an availability group?
      • Are there any backups missing?
      • How many job failures in the past 24 hours
      • Are there any jobs running at this time?
      • Amount of memory on the server
      • Is there a pending restart of the service
      • Number of encrypted databases
      • Number of blocked processes
      • Number of open transactions
      • Current settings for Cost Threshold for Parallelism and Max Degree of Parallelism

In the upper right corner, you will see “Click to see Report Library”.  By clicking this you will be taken to the Report Library.  An example of what it looks like is below.  As you can see there are a number of reports, 138 to be exact.  A complete list can be seen at the end of this post.  These links will be enabled or disabled based on the version of SQL Server.

When you click a report link, the report will look like this.  You can sort the data by most of the columns.  In addition, the report can easily be exported to Excel by right clicking anywhere on the report.  There are also options to export to Word or PDF, but the reports do not export nicely to those two formats.

Every report will have a set of links near the top.  These links will take you to one of three places.  The names are pretty descriptive of where the links will take you.

The “Click Here to see the query” will take you to another report that will show you the query that was used to run the report.  An example is below.

Sadly, SSMS does not allow for the copying of the code.  To work around this, export the report to Word and you can copy the code from there.

A few of the reports, Index Fragmentation and Statistics Information, could take a long time to run against a large database.  Because of this I have added a warning page.
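
I won’t break down the rdl files here, but fragmentation numbers like these generally come from sys.dm_db_index_physical_stats, which has to read pages to do its work, hence the warning.  A minimal sketch of that kind of check:

-- LIMITED is the cheapest scan mode; SAMPLED and DETAILED read more pages and take longer
SELECT OBJECT_NAME(ips.object_id) AS 'TableName'
, ips.index_id
, ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips;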

This is a list of all the reports, including a description and rdl file name for each.  It can be found on the Report Library page at the bottom of the third column.

I have tested these on SQL Server 2008 or newer, and some reports will also work on 2000 or 2005.  The version of SSMS just needs to be one of the later versions.  As for database compatibility level, it must be set to 90 (SQL Server 2005) or higher.
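
If you want to confirm the compatibility level of your databases before running the reports, a quick check looks like this:

SELECT name
, compatibility_level
FROM sys.databases;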

In order to run these reports you will need to be a member of the sysadmin server role, although some will still work if you are not.  Ideally you would also be a member of the local Administrators group in Windows; however, there are only a few reports that may not work without it.

The database you are connected to cannot be changed once the report is open.  If you are looking for information about a specific database, you will need to right click on that database then open the _LaunchingPage.rdl file.

I have had several people ask if these can be used as a monitoring tool.  The answer is, not really.  These were not designed to be a monitoring tool, just something you can use to capture information about your server.  There is a refresh setting in SSRS that I tried; it just didn’t work as I had hoped.  There are many great tools out there for monitoring, and not all cost money.

An unexpected benefit of these reports is training.  These reports, I think, point out things that a DBA might look at.  This is especially true for the MainReport.  If you have a new team member, or you are new to the DBA world yourself, these reports could be a guide for training.  My suggestion is to research each of the data points in the reports.  Remember, there are 21 data points on the main report.  This is where I would start.

One very important thing to remember, TEST these in a non-production environment first.

Also, you are free to make any changes as you see fit to make these reports more useful in your environment.

Thanks for visiting my blog and I hope you find these reports useful.

Report Name Purpose Of The Report
Additional Memory Counters More memory related counters such as Free List Stalls and Lazy Writes
Alert Information General information about the Alerts, including the last occurrence.
All Permissions On The Database Returns the permissions on the database.  This excludes the system database roles, the Public role and the Guest account
Authentication Information Windows or Mixed authentication.  How is the server configured?
Auto Delete Jobs Are there any jobs configured to delete automatically? This report will tell you.
Availability Group Information What Availability Groups exist and what databases are involved
Azure Database Resource Stats Returns Performance related information on an Azure SQL DB
Azure SQL DB Service Tiers You can use this report to find the service level for each Azure SQL DB on your instance.
Backup (All Other Types) Information History of all other types of backups
Backup (TLog Only) Information History of transaction log backups
Backup Speed Rate The MB\Sec for each backup
Backup(Full and Differential) Information History of Full and Differential backups, including the device backed up to
Buffer Pool Extension Information Returns the file path, size and number of pages in the bufferpool extension
CDC and Change Tracking Status Returns the status of CDC and Change Tracking
CDC Tables What tables are using Change Data Capture
Change Tracking Tables (D) What tables are configured for Change Tracking
CheckDB Status Returns the date of the last successful CheckDB
Cluster Node Information Is the server in a cluster?
Column List for Current Databases  (D) Simple list of all the columns in each table
Column Store Index Information Returns information about the Column Store indexes includes type, size and row count
Columns with BLOB data types Lists all columns in the current database that use XML, VARCHAR(MAX), NVARCHAR(MAX), VARBINARY(MAX), IMAGE and TEXT data types
Compressed Objects   (D) Compression status of each object
Computed Columns in Database (D) Location and definition of each computed column in the current database
Connection Count and Memory Used Many connections to the server and how much memory is being consumed
Connection IP Addresses IP addresses of existing connections
CPU Utilization CPU Utilization by minute for the last few days.  Returns CPU usage for both “SQL Server” and “Other Processes”
Credential Information Are there any credentials on the server?
Currently Running Jobs What jobs are currently running.
Custom Error Messages Looks for custom error messages, anything with a number above 50,000
Database Audit Specification Information Database Audit information, including what Server Audit each is using
Database Auto Settings Returns setting status for “Auto Update Stats”, “Auto Create Stats” and a few others
Database Auto Tune Settings Returns current status of the Auto Tune settings
Database File Information Location of each database related file
Database Filegroup Information Filegroup information for each database
Database Functions   (D) Any user defined functions?
Database Information(Stats Settings\Compat Level) Compatibility level and addition settings for each database
Database Mail Status and History Returns information about the history of database mail, including who it was sent to
Database Role Members Returns the members of each database role for all the databases
Database Settings Settings on the databases that might be useful to know
Database Snapshot Information Are there any snapshots?
Database Triggers   (D) Find those hidden triggers with this report
Database Users Who is in what databases
Database Virtual Log Files Virtual Log File count for each database.  Will provide a warning if the count is too high
Databases Not in Offline\Online State State of each database is on Offline or Online
Default Constraint Info.  (D) Default constraints and where they are being used
Duplicate Indexes Returns all indexes that have a duplicate on the same table
Dynamic Masked Columns Lists the columns that are masked
Email In Job Steps Email addresses found in job steps.
Enabled Trace Flags Currently enabled trace flags
Encrypted Database Information Any encrypted databases?
Error Log File Size Returns the size of each of the archived error log files, as well as the current error log
Extended Events Information What Extended Event Sessions exist
Extended Events Session Status Returns the running status for each of the Extended Event sessions
Failed Logins What failed logins are happening.  Only looks at the current log
File Size\Space Used The name, size and free space of each file for all the databases.
Forwarded Records   (D) The number of forwarded records since the service was last started and the number of forwarded records in tables
Full Text Index Info.  (D) Full text indexes….are there any?
General Information General information about the SQL Server and Windows Server
Guest Account Permissions and Status (D) Status of the Guest account for the current database
Guest Account Status All Databases Status of the Guest account for each database
Index Fragmentation   (D) Index fragmentation percentage
Index Fragmentation for an Object   (D) This is the code if you want to check the fragmentation of indexes on a specific object
Index Information  (D) Name, type and location of each index in the database
Index To Column Ratio by Table  (D) The ratio of Indexes to Columns.  Could help us identify if there are too many indexes
Index Usage Returns user seeks, scans, lookups and updates for all indexes in the database
Indexes That are unused Returns indexes where there are not any user seeks, scans and lookups in the database
Job Failure Rate Past 90 Days This returns the percentage of failures for each job for the past 90 days.
Job Failures Last 30 Days Returns information about job failures for the past 30 days.
Job Information General information about each job, including schedules and notifications.
Job Step Information Provides some detail about each jobs step, including what type of step
Jobs Using Schedules What schedules are being used and what jobs are using them
Jobs with No Configured Notification If there are jobs that do not have any notifications configured, this report will help you find them.
Linked Server Information Information about all the linked servers, most importantly is it configured to use Self credentials
Log Shipping Databases Databases with log shipping configured
Login Information Information about each login on the server
Mail Configuration Information about the mail configuration including profiles and mail servers
Mail Configuration Parameters Returns the parameter configuration for database mail
Memory Dump File Information Location, when created and size of each memory dump file.
Memory Grants Pending Returns the SPIDs with pending memory grants with the query
Memory Optimized Tables Returns information about all the memory optimized tables in all the databases.  This also includes the allocated and used sizes
Memory Settings and Counters Configuration of important memory settings
Mirrored Database Information Mirrored Databases, do they exist?
Missing Backups Any database without a full backup in past few days or no log backup as well
Nonsargable Searches in Stored Procedures Returns the list of stored procedures that have Nonsargable searches.  This include functions in the WHERE clause and leading wildcard searches
Objects with Schemabinding Returns a list of all the objects, with code, that have schemabinding
Open Transactions Information about open transactions on the server.
Operator Information Information about each operators, including email address
Orphan Database Users Returns database user accounts that do not have a matching login, either by SID or by name
Partitioned Tables Report on what tables are partitioned
Plan Guides Returns the query defined in the plan guide, if any exist
Plans In Cache with Implicit Convert Returns the TOP 100 query plans based on execution count that have an Implicit Convert
Procedures With Execute As   (D) Returns any procedure that has the keywords “Execute As” in the definition
Processes Lists all the user processes and identifies blocking information.
Processor Information How many processors on the server?
Proxy Information General information about each proxy
Public Role Permissions  (D) Permissions for the public role
Queries with High Logical Reads  (D) Returns the top 10 queries based on the number of logical reads
Query Store Status and Settings Query store status
Read Write Latency Returns Read\Write latency for each file for all databases.  This includes both data and log files.
Replication Information Any replication?
Restore Information Any restores of databases?
SA Audit and Password Review Will tell you if any accounts have security issues, passwords same as account name or no password.  Will also return settings of the SA account
Security Policy Information Returns information about the security policies in the database
Server Audit Specification Information Server Audit information
Server Permissions other than Connect Logins that have permissions at the server level other than connect
Server Role Information and Membership Who is in each server role
Server Triggers Returns information about server triggers if any exist
Service Account Information What account is being used to run the services.  Also will return the status of the service
Spills To TempDB Executes sys.dm_exec_query_stats to get the number of spills to TempDB for completed queries that still have a plan in the cache
Spinlock Information Returns information about spinlocks including backoffs
Start up Parameters Definition of each of the start up parameters
Statistics Information  (D) Information about statistics, including last updated date, sample rate and modification counts
Stored Procedures that have a “Select *” Returns all stored procedures that have a “Select *” in the code somewhere
Stored Procedures that have CURSOR Returns all stored procedures that have a CURSOR in the code somewhere
Stored Procedures that start with SP_ Returns any stored procedure that starts with SP_.  Procs with this naming convention can present issues if MS comes out with a proc of the same name.
Stored Procs Created or Modified Last 90 Days  (D) Stored procs created or modified in past 90 days
Stored Procs with Deadlock Priority Set Returns the list of stored procedures that have the DEADLOCK_PRIORITY set
Stored Procs with NOCOUNT Set to OFF Returns the list of stored procedures that have NOCOUNT set to OFF or not set at all
Stored Procs with Recompile Returns all the stored procedures that have the keyword recompile in the definition
Stretch Database Information Returns information about the stretch tables in the database
Synonym Information  (D) Checks for the presence of Synonyms and the definition of each
Sys.Configurations Returns the contents of the Configurations table.  Will identify which settings are causing a pending reboot
Table Row Counts and Size  (D) Returns row counts for all tables in the database as well as the size
Tables Created or Modified Last 90 Days  (D) List of tables that were created or modified in the past 90 days
Tables That are Heaps   (D) Where are those heaps….use this report to find them
TempDB  Information How many files in the TempDB including file size
Temporal Table Information Returns all temporal tables as well as the create date and retention period
Trace Flags Being Used This report reviews the past three logs looking for any trace flag that was turned on.  This does not return current status of enabled trace flags.
Trace Information Any traces running?
User Defined Data Type Usage Returns the columns that are using User Defined Data Types
User Defined Data Types   (D) What user defined data types exist in the database and where are they used
Users DMLRights other than SELECT (D) Who can change data or object definition
Views Created or Modified Last 90 Days   (D) Views created or modified in the past 90 days
Wait Information Information about some of the common important wait types.
What is Using the TempDB with Code Returns what processes are currently using the TempDB.


Ambiguous column name error when only using one Table

Every once in a while I see something in SQL Server that has what I think are “Unexpected Results”.  That was the case recently: I received the dreaded ambiguous column error when using only one table.  My first thought was, how can that be when I am only using one table?  Just a bit of background: ambiguous column errors are usually something we see when joining two tables that each have a column with the same name.

If you run the below code against the AdventureWorks2014 database it will run just fine.

SELECT Name
, ProductNumber
, Color
, ReorderPoint
FROM production.product

And when I run this, it also works just fine.  No issues.

SELECT Name
, ProductNumber
, Color
, ReorderPoint
FROM production.product
ORDER BY name

However, during the development process I needed to get the list of columns to help me determine if there were any additional columns I needed to add.  I know there are several ways to get the columns, but sometimes I find it easier to just add the “*” to return all columns.  Just a note: this was in development on a development server.  The “*” will not make its way into production code.

When I added the “*”, this is where I received unexpected results.  All I did was add the “*”.  Looking at the code below, you can see SQL Server is having issues with the Name column in the ORDER BY.
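
The query in the screenshot is not reproduced in the text, but based on the description it was essentially the earlier SELECT with the “*” added, something like this:

SELECT Name
, ProductNumber
, Color
, ReorderPoint
, *
FROM production.product
ORDER BY name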

And sure enough, when I executed the code I did indeed get an error.  Below is that error.

Msg 209, Level 16, State 1, Line 8
Ambiguous column name 'name'.

I found this to be a bit odd.  When I run the code without the ORDER BY, it runs just fine, although I do get two columns with the same name, Name.  Well, I still needed to get the list of columns.  So I had a choice: I could use one of the other methods to get the list of columns, Object Explorer or T-SQL using the system objects.
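
For reference, the system-objects route might look something like this, a minimal sketch using sys.columns:

-- List the columns of production.product from the system catalog
SELECT c.name AS 'ColumnName'
FROM sys.columns c
WHERE c.object_id = OBJECT_ID('production.product')
ORDER BY c.column_id;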

I decided to try something before going to those, something just a bit easier.  I decided to try using a table alias.  Similar code is below.

SELECT p.Name
, p.ProductNumber
, p.Color
, p.ReorderPoint
, p.*
FROM production.product p
ORDER BY p.name

Once the alias was added, all was good.  The query successfully ran and I was able to get the information I was looking for.  Of course, the “*” never made it into production code.

Just a small tidbit I ran across today.

Thanks for visiting my blog.

SSRS: Action setting To a URL or Report

Over the years I have created a number of SQL Server Reporting Services reports.  I remember when SSRS first came out, I believe somewhere around 2002, I was thinking that this will be a great product to work with.  And indeed, it has turned out to be a great tool when used in the right situation.

From time to time I run across something I have never done or seen before.  That is the case with a project I am currently working on.  This project is just a series of reports that will make use of the “Go to Report” functionality.  In this case, a set of criteria will be evaluated to determine whether to jump to another report or simply do nothing.

The criteria was based on the version of SQL Server and whether or not the connection is to an Azure SQL DB.  The requirement was that if the criteria is met, allow for navigation to another report. On the other hand, if the criteria is not met do nothing.  In addition to doing nothing, another requirement is to not have the cursor change when hovering over the link, in this case it is actually a text box.  While this post is working with a text box on a report, the same process can be applied to text that will be used to jump to another report.

If you look at the bit of code below, you will see how to determine the version.

SELECT SERVERPROPERTY('ProductMajorversion') AS 'ProductMajorVersion'

This will return a number that represents the major version of SQL Server.  For example, 10 will be returned for both SQL Server 2008 and 2008 R2, while running the same code on SQL Server 2019 returns 15.

The requirement is: if the SQL Server is 2012 or newer AND the EngineEdition is not equal to 5, enable the link.  An Engine Edition of 5 means an Azure SQL Database; any other number represents a different edition of SQL Server.
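
If you want to see the edition value for yourself, the server property below returns it; on an Azure SQL Database it comes back as 5:

SELECT SERVERPROPERTY('EngineEdition') AS 'EngineEdition'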

Below is the expression that I used.  In this case, if the version is greater than 10 (2012 or newer) AND the Engine Edition is not equal to 5, the report will navigate to another report.  The report in the block of code is MainReport.

IIF(Fields!ProductMajorVersion.Value > 10 and Fields!EngineEdition.Value <> 5, "MainReport", Nothing)

Just a reminder: the IIF function in SQL Server Reporting Services can be used to make a choice based on some type of criteria.  This function accepts three arguments.  The first is the expression to be evaluated.  The second is what is used if the expression evaluates to True; here, the report will navigate to the “MainReport”, and when you float your cursor over the link, the arrow will turn into a hand showing that there is a link.  The third argument is what will be used if the expression evaluates to False.

Using the above example, if the major version is 14 and the Engine Edition is 4, then the link will allow the user to navigate to the “MainReport”.  If both criteria are not met, it will use the keyword NOTHING.

It is this keyword NOTHING that will cause the Go To action to do nothing.  The cursor won’t change and when clicked, nothing will happen.

Thanks for visiting my blog!!!


Auditors are at your desk, Now what? – Sysadmin

This is the third in a series on how to help the DBA survive an audit.  This post will focus on how to get information about who has administrative rights on the server.

The first two can be found here.

Job Failures

Database Backups

Having worked for both a casino and a publicly traded company, audits have been a big part of my job for quite a few years.  These audits come in many different shapes and sizes; however, there is one thing they all have in common: the auditors want evidence!!!  And lots of it!

Surviving an audit and passing an audit are two distinctly different things.  While this blog post is more about surviving, many of these same tools can also be used to help you pass an audit. To me surviving an audit is more about keeping your sanity and being as relaxed as possible.

Please keep this in mind as you read this post.  This is solely based on my experience of working with auditors.  You may have different experiences that might be a bit outside what I am going to talk about here.

Before getting into these useful tools, I want to cover a few basics that I have followed for a number of years.

Get auditors out ASAP

The longer the auditors are in your environment, the more opportunity there is for them to find something.  While we all work very hard to do things the right way, we are human and we do make mistakes.  We just don’t want the auditors to find our mistakes.  You might be wondering, how can you accomplish this?  The best way is to have the evidence readily available.

Don’t volunteer information

Only provide the evidence they are asking for, nothing more.  You want to avoid giving them things that they do not ask for.  If you do, you might expose something that you don’t want them to see.  The last thing you want to do is give them a reason for documenting a finding.  Another thing to remember is that audits have a defined scope.

Be honest

Don’t attempt to hide something from the auditors.  They will find whatever the issue is eventually.  Most importantly, we don’t want them thinking we are hiding something from them.  An auditor that doesn’t trust your work is NEVER a good thing.

Auditors don’t want to hear what you are going to do, they want to see what you have done

If you find something that you feel might be an issue during an audit, start the steps to rectify the situation.  Sometimes if you can show the auditors that you have already taken steps to address the issue, they may be more understanding and things might work out a bit better for you and your company.

Do the right thing every day.

Sadly, doing this might make your job a bit more difficult on a daily basis, but it will make audits much easier to get through.  When the auditors walk in the door, it is too late.  Anything that might need to be fixed will more than likely not be able to be addressed before they find the issue.  However, if we do the right thing daily, there won’t be much for them to find.

Address Issues in a timely manner

Things fail, that is a reality that we must accept.  In many of the audits I have been part of, the auditors not only look for documentation that the issue was addressed, but also that it was addressed in a timely manner.  It is important to document what you do, including timestamps.

Remember they are just doing their jobs

Just like you, auditors are held accountable by their management. While audits can be challenging, it is important to remember that the auditors are just simply doing their jobs.

Over the years auditors have asked for lots of information. Sometimes this information is very easy to get, sometimes it might be a bit more challenging.  What I have found that works for me is to have a process already defined and the code ready to go.

So what types of things have auditors asked for?  Below are a few of the common items I have been asked to present to the auditors over the years.  Of course there are a few more items, but these are the main ones that seem to be part of most of the audits I have participated in.  I will go over each of these a bit more in depth in this post and in future posts.

      • Key SQL Server job failures
      • Backup history
      • Orphaned Logins
      • Orphaned database users
      • Who has sysadmin rights
      • Who has db_owner, DDLAdmin and DML rights on key databases
      • Separation of duties

There are a number of tools that you can utilize to capture the required information. However, the focus of this series of blog posts is to utilize native SQL Server functionality.  Below are some of the tools that are available in the SQL Server environment.

      • SQL Server Management Studio
      • Azure Data Studio
      • Azure Data Studio Notebooks
      • Transact SQL
      • Dynamic Management Views
      • SQL Server Audit

Elevated Server Permissions

This is a topic that has come up many times in audits.  These requests usually are about who has server administrative rights and who has the equivalent of db_owner rights for each of the databases.  Usually what we have offered to the auditors has been the list of members of the sysadmin server role and the db_owner database role.

There are a few ways to get this information.  The first is maybe the easiest: screenshots.

To do this we will need to use SQL Server Management Studio.  Open Object Explorer and connect to the server.  Once you do, navigate through the tree, Security – Server Roles.  It is here that you will find the server roles, including the one we are looking for, sysadmin.

If you right click on the role and go to properties, you will see a screen similar to below.

Notice in the screenshot the date and time in the lower right hand corner are included.  Auditors usually require that the time stamp of when the screenshot was taken be included.

Another way to capture the requested information is to use T-SQL. To do this we will query a few system tables, sys.server_role_members and sys.server_principals.

SELECT CONVERT(VARCHAR(150), p.name) AS 'RoleName'
, ISNULL(s.name, 'No Members') AS 'MemberName'
FROM sys.server_role_members r
RIGHT OUTER JOIN sys.server_principals AS p
ON r.role_principal_id = p.principal_id
LEFT OUTER JOIN sys.server_principals AS s
ON r.member_principal_id = s.principal_id
WHERE CONVERT(VARCHAR(150), p.name) = 'SysAdmin'

There is only one item in the WHERE clause, rolename.  In this case we are looking for the SysAdmin role. Just like the screenshot mentioned earlier in this post, the timestamp should be included.  In this case I have included three things.

      • The query
      • The results
      • The row counts
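
The db_owner side mentioned earlier can be captured much the same way, one database at a time.  The sketch below is one way to do it, run in the database being audited; the GETDATE() column is just one way to bake the required timestamp into the results.

SELECT dp.name AS 'MemberName'
, GETDATE() AS 'CapturedAt'
FROM sys.database_role_members drm
INNER JOIN sys.database_principals rp
ON drm.role_principal_id = rp.principal_id
INNER JOIN sys.database_principals dp
ON drm.member_principal_id = dp.principal_id
WHERE rp.name = 'db_owner';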

The context in which auditors might ask questions about this revolves around “who can make changes to objects” and “who can make changes to jobs”.

Another thing to think about is the scroll bars.  If you look at the image below, you will see the vertical scroll bar from SSMS.  Notice in the red box there is room to scroll down.  Several times over the years I have had auditors ask what is further down.  So ideally, if you can design the query so that a scroll bar is not needed, that will make things much easier.

You really have nothing to fear in audits as long as you are prepared and do the right thing daily.  As mentioned earlier, this is the third post in the series.

Thanks for visiting my blog!!!


Auditors are at your desk, Now what? – Backup History

This is the second in a series of posts about how to survive an audit.  These posts will include some basic guidelines I have learned over the years as well as how to get the information I have been asked to provide.

Having worked for both a casino and a publicly traded company, audits have been a big part of my job for quite a few years.  These audits come in many different shapes and sizes; however, there is one thing they all have in common: the auditors want evidence!!!  And lots of it!

Surviving an audit and passing an audit are two distinctly different things.  While this blog post is more about surviving, many of these same tools can also be used to help you pass an audit. To me surviving an audit is more about keeping your sanity and being as relaxed as possible.

Please keep this in mind as you read this post.  This is solely based on my experience of working with auditors.  You may have different experiences that might be a bit outside what I am going to talk about here.

Before getting into these useful tools, I want to cover a few basics that I have followed for a number of years.

Get auditors out ASAP

The longer the auditors are in your environment, the more opportunity there is for them to find something.  While we all work very hard to do things the right way, we are human and we do make mistakes.  We just don’t want the auditors to find our mistakes.  You might be wondering, how can you accomplish this?  The best way is to have the evidence readily available.

Don’t volunteer information

Only provide the evidence they are asking for, nothing more.  You want to avoid giving them things that they do not ask for.  If you do, you might expose something that you don’t want them to see.  The last thing you want to do is give them a reason for documenting a finding.  Another thing to remember is that audits have a defined scope.

Be honest

Don’t attempt to hide something from the auditors.  They will find whatever the issue is eventually.  Most importantly, we don’t want them thinking we are hiding something from them.  An auditor that doesn’t trust your work is NEVER a good thing.

Auditors don’t want to hear what you are going to do, they want to see what you have done

If you find something that you feel might be an issue during an audit, start the steps to rectify the situation.  Sometimes if you can show the auditors that you have already taken steps to address the issue, they may be more understanding and things might work out a bit better for you and your company.

Do the right thing every day.

Sadly, doing this might make your job a bit more difficult on a daily basis, but it will make audits much easier to get through.  When the auditors walk in the door, it is too late.  Anything that might need to be fixed will more than likely not be able to be addressed before they find the issue.  However, if we do the right thing daily, there won’t be much for them to find.

Address Issues in a timely manner

Things fail, that is a reality that we must accept.  In many of the audits I have been part of, the auditors not only look for documentation that the issue was addressed, but also that it was addressed in a timely manner.  It is important to document what you do, including timestamps.

Remember they are just doing their jobs

Just like you, auditors are held accountable by their management. While audits can be challenging, it is important to remember that the auditors are just simply doing their jobs.

Over the years auditors have asked for lots of information. Sometimes this information is very easy to get, sometimes it might be a bit more challenging.  What I have found that works for me is to have a process already defined and the code ready to go.

So what types of things have auditors asked for?  Below are a few of the common items I have been asked to present to the auditors over the years.  Of course there are a few more items, but these are the main ones that seem to be part of most of the audits I have participated in.  I will go over each of these a bit more in depth in this post and in future posts.

      • Key SQL Server job failures
      • Backup history
      • Orphaned Logins
      • Orphaned database users
      • Who has sysadmin rights
      • Who has db_owner, DDLAdmin and DML rights on key databases
      • Separation of duties

There are a number of tools that you can utilize to capture the required information.  However, the focus of this series of blog posts is to utilize native SQL Server functionality.  Below are some of the tools that are available in the SQL Server environment.

      • SQL Server Management Studio
      • Azure Data Studio
      • Azure Data Studio Notebooks
      • Transact SQL
      • Dynamic Management Views
      • SQL Server Audit

Backup History

On a number of occasions, auditors have asked for evidence that the financially sensitive databases were being backed up daily.  Since an Agent job is usually what is used to back up the databases, logic would say we could use the history of that job to demonstrate that the databases are being backed up.  In some cases, that would be very accurate.  However, if there are 100 databases on the server and only one is under audit, it might be a bit challenging to use job history to capture the requested evidence for the auditor.  It would all depend on the design of the job.

This is where T-SQL comes in handy.  There are two tables in the MSDB database that we can utilize, backupset and backupmediafamily.

The backupset table has a single record for each successful backup.  The key word is successful.  This table contains some great information, such as the following:

        • Is it a COPY ONLY backup
        • Backup Start time
        • Backup End time
        • Type of Backup
        • Is backup password protected
        • Backup size
        • As well as many more

For our purposes, while much of the above is great information, our goal is to find the backup history.  Having said that, many of the columns will not need to be added to our result set.  However, you are more than welcome to add them if you like.

The second table we need is the backupmediafamily table.  While this table is not technically needed, it does have a data point that could be useful.  This is where we can find the destination of the backup.

SELECT s.database_name
, m.physical_device_name AS 'DestinationLocation'
, s.backup_start_date
, CASE s.[type]
WHEN 'D' THEN 'Full'
WHEN 'I' THEN 'Differential'
WHEN 'L' THEN 'Transaction Log'
END AS 'BackupType'
FROM msdb.dbo.backupset s
INNER JOIN msdb.dbo.backupmediafamily m
ON s.media_set_id = m.media_set_id
WHERE s.backup_start_date >= '04/01/2020'
AND s.backup_start_date <= '06/30/2020'
AND s.type IN ('D', 'I')
ORDER BY s.database_name, s.backup_start_date

Keeping in line with the suggestion not to volunteer information to the auditors, there are a number of columns not included here that could be useful in other situations.  These situations might include reviewing the duration of backups, looking for missing backups, and checking the destination of backups.
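As an example of those other situations, the sketch below reuses the same backupset table to review backup durations and to look for databases with no recent full backup.  This is only a minimal sketch; the 24-hour gap threshold is an assumed example value you would adjust to match your own backup schedule.

-- Example 1: duration of each full backup, longest first.
SELECT s.database_name
     , s.backup_start_date
     , DATEDIFF(SECOND, s.backup_start_date, s.backup_finish_date) AS [DurationInSeconds]
FROM msdb.dbo.backupset s
WHERE s.type = 'D'
ORDER BY [DurationInSeconds] DESC

-- Example 2: databases with no full backup in the last 24 hours (or none at all).
-- The 24-hour threshold is an example value; adjust it for your environment.
SELECT d.name AS [DatabaseName]
     , MAX(s.backup_finish_date) AS [LastFullBackup]
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset s
    ON s.database_name = d.name
   AND s.type = 'D'
WHERE d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(s.backup_finish_date) IS NULL
    OR MAX(s.backup_finish_date) < DATEADD(HOUR, -24, GETDATE())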

Sometimes the above data might not be enough for the auditors.  They may want to see the history of your backup jobs, successes and failures.  I have been able to show this in a couple of additional ways.  The first is the code below, which returns all the executions of the backup job, success or failure; it is simply the job failure query without the failed-status criteria.  You will have to enter the name of your backup job and change the two dates.

-- All executions (success and failure) of a specific backup job for a date range.
-- Run this from the msdb database; replace the job name placeholder and the dates.
SELECT dbo.agent_datetime(h.run_date, h.run_time) AS 'Execution Date'
     , j.name AS 'Job Name'
     , h.message AS 'Message'
FROM msdb.dbo.sysjobs j
INNER JOIN msdb.dbo.sysjobhistory h
    ON h.job_id = j.job_id
WHERE h.step_id = 0
  AND dbo.agent_datetime(h.run_date, h.run_time) >= '04/01/2020'
  AND dbo.agent_datetime(h.run_date, h.run_time) <= '06/30/2020'
  AND j.name = '<<Insert name of Backup job>>'
ORDER BY dbo.agent_datetime(h.run_date, h.run_time) DESC

And even that might not be enough, so I have had to go to a third method of showing backup history.  In this case, using SQL Server Management Studio, right-click on the job and select View History.

What you might end up giving the auditors will look something like the image above.  Notice that the time of the screenshot is included but the date is not.  This might not be enough; auditors have usually asked me to include the date as well.  Something else to be careful of is an accidental capture of information. Notice that there is some code to the right of the job history.  While this particular screenshot might not reveal anything that would catch the eye of the auditors, the potential exists to do just that.  So be careful and review the screenshot before giving it to them.

Thanks for visiting my blog!!!

Auditors are at your desk, Now what? – Job Failures

I have worked for both a casino and a publicly traded company, so audits have been a big part of my job for quite a few years.  These audits come in many different shapes and sizes; however, there is one thing they all have in common: the auditors want evidence!!!  And lots of it!

Surviving an audit and passing an audit are two distinctly different things.  While this blog post is more about surviving, many of these same tools can also be used to help you pass an audit. To me, surviving an audit is about keeping your sanity and being as relaxed as possible.

Please keep in mind as you read this post that it is based solely on my experience of working with auditors.  You may have experiences that fall a bit outside of what I am going to talk about here and in future posts.

Before getting into these useful tools, I want to cover a few basics that I have followed for a number of years.

Get auditors out ASAP

The longer the auditors are in your environment, the more opportunity there is for them to find something.  While we all work very hard to do things the right way, we are human and we do make mistakes.  We just don't want the auditors to find our mistakes. You might be wondering how you can accomplish this.   The best way is to have the evidence readily available.

Don’t volunteer information

Only provide the evidence they are asking for, nothing more.  You want to avoid giving them things they did not ask for; by doing so, you might expose something that you don't want them to see.  The last thing you want to do is give them a reason to document a finding. Another thing to remember is that audits have a defined scope.

Be honest

Don't attempt to hide something from the auditors.  They will find whatever the issue is eventually.  Most importantly, we don't want them thinking we are hiding something from them.  If the auditor doesn't trust your work, it is NEVER a good thing.

Auditors don’t want to hear what you are going to do, they want to see what you have done

If you find something that you feel might be an issue during an audit, start the steps to rectify the situation.  Sometimes if you can show the auditors that you have already taken steps to address the issue, they may be more understanding and things might work out a bit better for you and your company.

Do the right thing every day

Sadly, doing this might make your job a bit more difficult on a daily basis, but it will make audits much easier to get through.  When the auditors walk in the door, it is too late; anything that needs to be fixed will more than likely not be addressed before they find the issue.  However, if we do the right thing daily, there won't be much for them to find.

Address issues in a timely manner

Things fail; that is a reality we must accept. In many of the audits I have been part of, the auditors not only look for documentation that an issue was addressed, but also that it was addressed in a timely manner.  It is important to document what you do, including timestamps.

Remember they are just doing their jobs

Just like you, auditors are held accountable by their management. While audits can be challenging, it is important to remember that the auditors are simply doing their jobs.

Over the years auditors have asked me for lots of information. Sometimes this information is very easy to get; sometimes it is a bit more challenging.  What I have found works for me is to have a process already defined and the code ready to go.

So what types of things have auditors asked for?  Below are a few of the common items I have been asked to present to the auditors over the years.  Of course there are a few more, but these are the main ones that seem to be part of most of the audits I have participated in.  I will go over each of these a bit more in depth in this post and in future posts.

      • Key SQL Server job failures
      • Backup history
      • Orphaned Logins
      • Orphaned database users
      • Who has sysadmin rights
      • Who has db_owner, DDLAdmin and DML rights on key databases
      • Separation of duties

There are a number of tools you can utilize to capture the required information. However, the focus of this series of blog posts is native SQL Server functionality.  Below are some of the tools available in the SQL Server environment.

      • SQL Server Management Studio
      • Azure Data Studio
      • Azure Data Studio Notebooks
      • Transact SQL
      • Dynamic Management Views
      • SQL Server Audit

Key SQL Server job failures

So exactly what is a key job?  In most of the audits I have been a part of, a key job is defined as any job that could impact the financial reporting or financial decision making of the business.  As DBAs, we might have a number of jobs as part of our daily maintenance routine.  For the most part, these are not considered key jobs from an audit perspective, with the sole exception of database backups.  We will cover backups a bit more in depth in a later post.

On numerous occasions the auditors have not only asked for job failures, they have also looked for automated notifications that a job failed.  This is where operators can help.  Having the job email the DBA team when it fails has been very helpful.  We have a PowerShell script that automatically creates an incident in ServiceNow; however, sometimes that will fail because of something beyond our control.  In those rare cases, having the email that was automatically generated when the job failed was critical to meeting the requests of the auditors.  It is also important to be aware of the company's email retention policy.
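If you need to set that notification up, below is a minimal sketch of wiring a job failure email to an operator.  The operator name, email address, and job name are hypothetical placeholders, and Database Mail must already be configured on the instance for the email to actually be sent.

USE msdb;
GO

-- Hypothetical operator; replace the name and address with your DBA team's.
EXEC dbo.sp_add_operator
    @name = N'DBA Team',
    @enabled = 1,
    @email_address = N'dbateam@example.com';

-- Have a key job (hypothetical name) email the operator when it fails.
EXEC dbo.sp_update_job
    @job_name = N'Nightly Financial DB Backup',
    @notify_level_email = 2,    -- 2 = notify on failure
    @notify_email_operator_name = N'DBA Team';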

We can use Transact-SQL to gather this information. However, before getting to the T-SQL, we need to take a moment to go over the job history retention settings.

These settings can be viewed on the History page of the SQL Server Agent properties in SSMS.  Another method you can use is to query the registry; to do this you will need the xp_instance_regread extended stored procedure.  The maximum for the job history log size is 99,999 rows, and the minimum for the job history rows per job is 2.  Here is a very good post on this topic.

-- Read the SQL Server Agent job history retention settings from the registry.
DECLARE @JobHistMaxTotalRows INT = NULL,
        @JobHistMaxTotalRowsPerJob INT = NULL

EXECUTE master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
    N'JobHistoryMaxRows',
    @JobHistMaxTotalRows OUTPUT

EXECUTE master.dbo.xp_instance_regread N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
    N'JobHistoryMaxRowsPerJob',
    @JobHistMaxTotalRowsPerJob OUTPUT

SELECT @JobHistMaxTotalRows AS 'Max Rows'
     , @JobHistMaxTotalRowsPerJob AS 'Max Rows Per Job'

This code will return a data set similar to what is below.

Just a couple of gotchas on this.  If you have a maintenance plan that cleans up just the job history, it may delete what could be essential data for an audit.  Also, if you set the values too high and you have a lot of job executions, looking at the history using the GUI could time out.  You will need to adjust these settings to values that are appropriate for your environment.  My suggestion is to establish a standard and use it on all of your servers.
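If you want to script that standard rather than set it through the GUI on every server, msdb has a procedure named sp_set_sqlagent_properties that SSMS calls when you change these settings.  Keep in mind that, like agent_datetime below, it is undocumented, so use it with the usual caveats; the row counts here are example values only.

-- Undocumented msdb procedure; the row counts below are example values.
EXEC msdb.dbo.sp_set_sqlagent_properties
    @jobhistory_max_rows = 10000,
    @jobhistory_max_rows_per_job = 100;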

Now let's get to the actual job failures.  In order to pull this information using T-SQL you will need the sysjobs table, the sysjobhistory table, and the agent_datetime function. All three of these objects can be found in the MSDB database.

Most of the audits I have been involved in have been for a specific quarter, so we have only had to pull job failures for that quarter. The date and time of the job execution are stored in separate columns; if you look at the run_date and run_time columns you will see that they are both the INT data type.  The agent_datetime function converts them from INT to DATETIME. Since it is an undocumented function, it is recommended not to use it in production code in case it changes in a future version of SQL Server. Pinal Dave has a nice alternative you might want to check out as well; click here.  It is always good to have more than one option.
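If you would rather stay on documented ground, one common alternative is to build the datetime yourself from the two INT columns, since run_date is stored as yyyymmdd and run_time as hhmmss.  A minimal sketch of that approach, using only the documented columns:

-- Convert run_date (yyyymmdd) and run_time (hhmmss) to DATETIME without agent_datetime.
SELECT j.name AS [Job Name]
     , CONVERT(DATETIME,
           CONVERT(CHAR(8), h.run_date) + ' '
           + STUFF(STUFF(RIGHT('000000' + CONVERT(VARCHAR(6), h.run_time), 6),
                         5, 0, ':'), 3, 0, ':')) AS [RunDateTime]
FROM msdb.dbo.sysjobs j
INNER JOIN msdb.dbo.sysjobhistory h
    ON h.job_id = j.job_id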

Below is the code you can use to return job failures for a quarter.

-- Job failures for a quarter (Q2 2020 in this example); adjust the dates as needed.
SELECT dbo.agent_datetime(h.run_date, h.run_time) AS 'Date Of Failure'
     , j.name AS 'Job Name'
     , h.message AS 'Error'
FROM msdb.dbo.sysjobs j
INNER JOIN msdb.dbo.sysjobhistory h
    ON h.job_id = j.job_id
WHERE h.run_status = 0      -- 0 = Failed
  AND h.step_id = 0         -- the overall job outcome, not an individual step
  AND dbo.agent_datetime(h.run_date, h.run_time) >= '04/01/2020'
  AND dbo.agent_datetime(h.run_date, h.run_time) <= '06/30/2020'
ORDER BY dbo.agent_datetime(h.run_date, h.run_time) DESC

In the WHERE clause you will see that both run_status and step_id are used as part of the criteria.  The run_status column is in the sysjobhistory table. Notice that we are looking for a value of 0, which is the value for a failure.  Below are the other options; more information can be found here.

0 = Failed
1 = Succeeded
2 = Retry
3 = Canceled
4 = In Progress

This brings us to the step_id column.  Like run_status, this column can also be found in the sysjobhistory table.  The values in this column range from 0 to the number of steps in the job.  When a job executes, SQL Server places a record in sysjobhistory for each step of the job plus one record for the overall execution of the job, so a job with 10 steps will add 11 rows every time it runs.  It is the overall execution record that we are looking for, and it will have a step_id value of 0.  If you look below you will see three job failures; you can also see step ID 0 for each of the jobs.

Notice that the ClassDemo job also has two rows, but only has step IDs of 0 and 2.  This is because the job actually has two steps, and the first step was skipped during this execution of the job.  Remember that we are only looking for the jobs that have failed, not necessarily which step they failed on.  Because of this, we only need the overall status, step_id 0.
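That said, if an auditor does ask which step a job failed on, the same two tables can answer that as well.  A quick sketch that returns the individual failed step rows instead of the overall record:

-- Failed steps (step_id > 0) rather than the overall job outcome (step_id = 0).
SELECT j.name AS [Job Name]
     , h.step_id
     , h.step_name
     , h.message AS [Error]
FROM msdb.dbo.sysjobs j
INNER JOIN msdb.dbo.sysjobhistory h
    ON h.job_id = j.job_id
WHERE h.run_status = 0    -- 0 = Failed
  AND h.step_id > 0
ORDER BY j.name, h.step_id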

While I prefer to use T-SQL to get the job failures, some auditors may still want a screenshot from SQL Server Management Studio.  To get this, simply right-click on the job and go to Job History.  You will probably need to apply a filter to the results.  When you do, it will look similar to what is below.

Occasionally auditors don't just look for the job failures; they have also looked for incidents in our ticketing system showing that each failure was properly addressed. So you should be prepared to show them those as well.

This is the first of several posts on “How to Survive an Audit using Native SQL Server Tools”.  I will cover these topics in future posts:

            • Backup history
            • Orphaned Logins
            • Orphaned database users
            • Who has sysadmin rights
            • Who has db_owner, DDLAdmin and DML rights on key databases
            • Separation of duties

Audits are not as painful if we are prepared!

Thanks for visiting my blog!!!