Azure Data Studio – Explain Button

One of the first things that jumped out at me in Azure Data Studio was the “Explain” button.  After you make your connection, you will see the Explain button in the upper right-hand corner of the query window.

At first I thought maybe it was a shortcut to help.  Nope, it is actually an easy way to get the estimated execution plan. The key word is estimated. In SQL Server Management Studio we have been able to easily get either the estimated or the actual execution plan.

Just as a reminder, the actual execution plan requires the query to actually run, while the estimated plan uses statistics to generate the plan.  Generally, if the actual and estimated plans are captured very close to each other, you are more than likely not going to see a difference between the two.

Now let’s talk about the Explain button.  It will return two things, the XML of the execution plan and a graphical representation of the execution plan.

You will notice the Results tab in the above image.  Just as in SSMS, this is where you can find the results of the query.  In the image above I simply clicked the Run button and the results were returned.

When you click the Explain button, the look of what is returned will be a bit different.  You will still see the Results and Messages items, but there are now two additional items, Query Plan and Top Operators.

The Query Plan tab will be the item that is selected, revealing the estimated execution plan.  There are two ways to show that it is indeed the estimated plan.  The first is that under the Results tab you will not see the results of the query.  Rather, you will see the XML of the plan, no data.  Remember, SQL Server does not execute the query when returning the estimated execution plan.

Another area to look at to show that the plan is an estimated plan rather than actual is the properties of some of the operators.  Notice in the image below there are a number of Estimated properties but no Actual.  If this were the Actual plan, you would see both the estimated number of rows and the actual number of rows.

Another feature that I like is the Top Operators item.  This does exactly what you expect it to do: return the top operators.  If you have used SentryOne Plan Explorer this is not new to you.  But for the others who have not used Plan Explorer, this will be a welcome feature.  And if you haven’t used the SentryOne Plan Explorer…you SHOULD. It is a great tool and can be downloaded for free. To make it easy for you, here is a link to download it: Click Here.

What if I want the Actual execution plan and not the estimated plan?  There is not a button for this, but there is a shortcut.  Just hit Ctrl + M.  This will execute the query and return the actual plan.  As you can see below, we have both the actual number of rows and the estimated number of rows.

When running this I did find something interesting.  If I have two queries such as the ones below in SSMS and I capture the actual execution plan, I will see both plans.

SELECT * FROM Production.Product
SELECT * FROM Production.ProductCategory
Both plans returned in SSMS.
This is not how it works in Azure Data Studio.  While the results for both queries are returned, as well as the XML for the plans, the graphical plan is returned only for the first query.
As you can see…both statements have the XML of the plan.  However, only one graphical plan is returned, for the first query in the batch.
Another item that I noticed is different in Azure Data Studio is the percent of batch in the plan.  In the image below from SSMS you can clearly see which of the two queries was the most costly. In the other image, for Azure Data Studio, notice there is no percent for each statement in the batch.  I really liked using this in SSMS.
I think overall, Azure Data Studio appears to be a pretty cool tool.  Thank you for stopping to visit my blog!!

Azure Data Studio Introduction

I recently downloaded and installed Azure Data Studio and so far it looks pretty good.  As I learn more about it, I will blog about the features and what is different from SQL Server Management Studio.  Azure Data Studio is a free offering from Microsoft and the install is very simple.

Once installed, you will find the menu item in a slightly different location.  You will not find it under any of the SQL Server related groupings.  As you can see below, there is a new menu grouping, “Azure Data Studio”.  Under this is where you will find the menu item to launch the application.

Once you open Azure Data Studio, obviously you do not have a connection to any data source.  At this point you will be asked to establish a connection.

You are asked to provide much of the same information you would in SSMS.  This includes the server name, the type of authentication, as well as the database.  You can also provide a Server Group if you like.  This provides nothing more than a logical grouping of servers for easier management.

Once you connect you will then see the dashboard, similar to below. Here you are able to confirm the connection information.

Here you will see a list of all the objects in the database.  You will also see that there are widgets.  The default widgets perform such tasks as database backup, database restore, new query and new notebook.  The first three of these have been around for years.  What might be new to some is the notebook; we will get into that later in this post. This dashboard also includes information about the connection, including the compatibility level of the database.

If you click the server name in the breadcrumbs across the top, as below, you will see a few widgets that apply to the server level.

As in the screenshot below, you can see the backup status of all the databases and their current size.

If you click the upper left corner of the backup status, the current state of all the databases will be returned.
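
If you want to see roughly the same information from a query window, something along these lines should work.  This is my own query, not what the widget itself runs:

-- last backup date per database, roughly what the backup status widget summarizes
SELECT d.name AS database_name,
       MAX(bs.backup_finish_date) AS last_backup_finish
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS bs
       ON bs.database_name = d.name
GROUP BY d.name
ORDER BY d.name;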

One of the biggest reasons why we might be using Azure Data Studio is to read data.  This is very similar to SQL Server Management Studio.  Click the “New Query” item on the dashboard or New Query under the File menu.  When you do, it is similar to SSMS.  There are tabs across the top and you can also change databases.  Both of these are the same as SSMS, although the database list is in a slightly different location.  You will also see the connect\disconnect button and the change connection button. These do exactly what you would expect them to do.

Results and Messages are still there, just as in SSMS.  Now let’s take a quick look at a few of the things that are different from SSMS.  The most obvious difference is the “Explain” button. This will simply return the estimated execution plan of the query you are looking to run.

When you click the “Explain” button you will see what is above.  The query plan and messages as well as the Top Operators.  The image below is what you will see on the Top Operators tab.

You might think that the Results tab would have results in it.  The answer is yes and no.  It does return results, just not data. You will find the XML of the execution plan.  If you click it, a new tab will open with the XML in it.

Another item that is different is on the far right of the results tab, as below.

The top four buttons are for saving the data.  In order, they are CSV, Excel, JSON and XML.  The bottom two are for visualizing the data using charts.

Now we will move to the upper right corner.  The icon in the box below is for connections.

This is where we can add new connections.  In the upper right corner you will see three buttons.  The one farthest to the left is the new connection button.  The middle button is for creating server groups.  This is what you see below in the red bar.

 

When adding a server group you are asked only to provide a group name, description and a color.  When you create a new connection, you will be able to pick a server group to add the new server to.

All the items covered above are also in SSMS in some form or another.  What is really cool about Azure Data Studio is something that cannot be found in SSMS.  That is SQL Notebook.  So far I am a big fan of this new feature.  It basically allows you to store notes and executable T-SQL code in the same document.

To add a new Notebook, you can find the Add Notebook item under the file menu.  There are two types of items that can be added to a notebook, code and text.  To add either one, you can click the buttons in the upper left or the links in the middle.

When Text is clicked you will then have the opportunity to enter notes.  Below is an image of what you will see. It looks like there are duplicate lines, but in reality there are not. Once you click away from the note, the top one will disappear, leaving only the note.

If you click Code, you will then be able to enter T-SQL code just as you would in SSMS.

Notice the icon on the left side that looks like a play button.  Well, that is exactly what it does: it runs the query.

In the image above, you can see the text as well as the results of the query.  Once you are done with the results, you can clear them by clicking the ellipsis in the upper right corner of the results.  The bottom option is Clear Output.

Once you have everything in the Notebook you can save it.  The Notebook file extension is .ipynb.

In the coming weeks I will be doing more posts on Azure Data Studio.  Although so far I really like it, there are a few things that I have not been able to find yet, like Actual Execution plans.


How to capture an Execution Plan

Since in the coming weeks I will be presenting a session on Execution plans at a few SQL Saturday events, I thought I would do a post on how to capture execution plans.  Execution plans are a critical tool when attempting to troubleshoot a performance issue.  SQL Server Management Studio is one of the common tools we use to get the plan, if we have the query. You can also use Extended Events and Query Store.

Using SSMS

In SSMS there are two options for capturing the execution plan, the Actual and the Estimated plan. The biggest difference between the two is that the Actual plan needs to execute the query, while the Estimated plan uses statistics to create the plan and does not execute the query.

To get the Estimated Execution plan, you can use the Query menu. There you will find the Display Estimated Execution Plan menu item.

There are two ways to get the actual execution plan.  One is on the toolbar and the other is on the Query menu.

When looking to capture the estimated execution plan, as soon as you click the menu item, the plan will be returned.  If you are looking for the actual execution plan, SQL Server will execute the query and when it is finished the plan will be returned.  For simple queries the plan will be returned very fast, however for more complex queries it might take some time.
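
If you prefer T-SQL over the menus, the same two behaviors can be reproduced with SET options.  This is just a quick sketch using an AdventureWorks table as an example; run each batch separately:

-- estimated plan: the query is compiled but not executed
SET SHOWPLAN_XML ON;
GO
SELECT * FROM Production.Product;
GO
SET SHOWPLAN_XML OFF;
GO
-- actual plan: the query runs and the plan comes back with the results
SET STATISTICS XML ON;
GO
SELECT * FROM Production.Product;
GO
SET STATISTICS XML OFF;
GO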

When in SSMS, how can I tell if the plan is actual or estimated? There are a few ways. The easiest way is to check the number of tabs.  As you can see below there are two tabs, one for Messages and another for Execution Plan.  There is a missing tab, Results.

Estimated Execution Plan:  Two tabs

Actual Execution Plan: Three tabs

Another way to determine if the plan is actual or estimated will require us to dig a little deeper.  We will need to look at the properties of a few of the operators.

When you float the cursor over the Index Scan you will see this popup.  The items in the green boxes are for the actual execution while the items in the red boxes are for the estimated execution.  Although it shows estimated values, that does not make it the estimated plan.

When both the estimated and actual number of rows can be found in the popup, this means that it is an actual execution plan.  If it were an estimated execution plan, you would not see the actual items, as you can see in the image below.

If the estimated and actual plans are captured very close to each other, you will not see much difference.  However, if there is a long period of time between them, there could be a number of differences.

This will work with both an on-prem instance of SQL Server as well as an Azure SQL Database.

Extended Events

Using SSMS works great if we have the query, however it is a manual process.  What if you want to capture the plans after hours or automatically?  This is where Extended Events comes into play.  Just a word of warning: when capturing execution plans with Extended Events, you could actually cause a performance problem, so be careful.

This is what Microsoft has placed in the description of the events related to Execution Plans. I think it is a pretty clear warning about using Extended Events to capture Execution plans.

“Using this event can have a significant performance overhead so it should only be used when troubleshooting or monitoring specific problems for brief periods of time.”

When working with an On-Prem instance of SQL Server, Extended Events can be found under “Management”.  This is a server level setting, although you can limit the activity captured to a single database.

The location is slightly different for an Azure SQL Database.  Extended Events can be found under the database.

You can right click on the “Sessions” folder under Extended Events and then click “New Session”.

When creating the session there are three events we will want to consider. These events can be found in both an on-prem instance as well as Azure SQL Database.

query_pre_execution_showplan – this event is the equivalent of an estimated execution plan

query_post_execution_showplan – this event can be used to capture the actual Execution plan.

query_post_compilation_showplan – Occurs after a SQL statement is compiled.

To create an Extended Event session you can either use SQL Server Management Studio by right clicking “Session” under the  Extended Events item or using the CREATE SESSION statement.

If you use the SSMS graphical tool, you can right click on Sessions and then click “New Session”.  You will see something similar to this.

Once you give the session a name, then you are ready to move on to picking the proper events.  Remember, there are three options for events: query_pre_execution_showplan, query_post_execution_showplan and query_post_compilation_showplan.

If you enter “showplan” into the Event Library search box, you will see four events: the three mentioned earlier and a fourth related to Query Store.  We will discuss this one later.

Move the events over by clicking the “>” button and then you are ready to add the target.  Once you do, save the session, start it and you will now be collecting execution plans.
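
If you would rather script those last two steps, a sketch like the one below should do it, assuming the session was saved with the name ExecutionPlan used later in this post:

-- start collecting
ALTER EVENT SESSION [ExecutionPlan] ON SERVER STATE = START;
-- and stop it again when you are done
ALTER EVENT SESSION [ExecutionPlan] ON SERVER STATE = STOP;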

You can also use the CREATE EVENT SESSION statement. The statement below collects all three events, which again is NOT recommended.  Notice that this statement also has a filter for the AdventureWorks2014 database.  If you do decide to attempt what even Microsoft says you should use with caution, try to reduce the activity by limiting the collection to a single database.

CREATE EVENT SESSION [ExecutionPlan] ON SERVER
ADD EVENT sqlserver.query_post_compilation_showplan,
ADD EVENT sqlserver.query_post_execution_showplan,
ADD EVENT sqlserver.query_pre_execution_showplan(SET collect_database_name=(1)
    ACTION(sqlserver.sql_text)
    WHERE ([database_name]=N'AdventureWorks2014'))
ADD TARGET package0.event_file(SET filename=N'ExecutionPlan'),
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY=4096 KB, EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS, MAX_DISPATCH_LATENCY=30 SECONDS, MAX_EVENT_SIZE=0 KB, MEMORY_PARTITION_MODE=NONE, TRACK_CAUSALITY=OFF, STARTUP_STATE=ON)
GO

Now that we are collecting the plans all we have to do is right click on the target and then click view data.  You will see something similar to below.  Notice the Query Plan tab highlighted.

If you right click on the plan you will see this context menu.  You can now save the plan, view the XML and even open the SentryOne Plan Explorer, if you have it installed.  If you don’t have it installed…..go get it.  It is a great tool.

 

Extended Events are a great tool to collect information about what is going on in your server.  As stated several times, if you are looking to Extended Events to collect your execution plans, do so very cautiously.  Also, limit the time the session is running and limit the activity to only the database where you are trying to solve a performance problem.

Query Store

The last option we will cover is Query Store.  This is a great tool!!!  Query Store came out with SQL Server 2016.  What I like about Query Store is that it allows me to collect the needed execution plans without the overhead of Extended Events.

To enable Query Store, simply go to the properties of the database and select Query Store.  The Operation Mode property is Off by default. Change this to either Read Only or Read Write.  Now Query Store is enabled.

Notice that there are two Operation Mode properties.  One is Actual and the other is Requested.  If you change Requested to something different than Actual, once you click OK, the requested setting will become the actual setting.
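
The same change can be made with T-SQL instead of the properties dialog.  A minimal sketch, using AdventureWorks2014 as an example database:

-- enable Query Store and let it both capture and serve plans
ALTER DATABASE [AdventureWorks2014]
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);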

Per Microsoft, here are a number of scenarios of why you might want to utilize Query Store.

Once you enable it and are collecting data, there are a number of reports that you can use to view the plans. Under the database, in this case AdventureWorks2014, you will now see an option for Query Store.  When you expand it, you will see the reports that are available to you.

When you right click on one of the reports you will see something similar to this. By clicking on the bar in the chart on the upper left, the other two panels will update.  As you can see, the execution plan is in the bottom panel.

Once you have the plan visible, it works just like all the other methods of collecting the plan.  You can view the properties of each operator as well as save the plan for analysis later.
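
The reports sit on top of the Query Store catalog views, so you can also pull the plans out yourself.  The query below is only a rough sketch of that idea, not what the reports actually run:

-- ten plans with the highest average duration, with the plan XML ready to click on
SELECT TOP (10)
       qt.query_sql_text,
       CONVERT(xml, p.query_plan) AS query_plan,
       rs.avg_duration
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
ORDER BY rs.avg_duration DESC;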

If you utilize Query Store, there is an event in Extended Events you might want to consider.  This event is query_store_generate_showplan_failure and it will fire when Query Store has an error creating the plan.

Here is the description from the event:

“Fired when Query Store failed to store a query plan because the showplan generation failed”

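A minimal session to watch for that event might look like the sketch below.  The session name is my own choice, and I believe the event lives in the qds package, so adjust the prefix if your build disagrees:

-- a sketch: ring buffer session for Query Store showplan failures
CREATE EVENT SESSION [QueryStoreShowplanFailures] ON SERVER
ADD EVENT qds.query_store_generate_showplan_failure
ADD TARGET package0.ring_buffer
WITH (STARTUP_STATE = OFF);
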
This post is about how to collect execution plans, including with Query Store.  In this post, I am not going to go in depth on all the options of Query Store.  There is some really great information out there, including some great videos.  Erin Stellato has some great content on Query Store in a number of locations, including SQLskills.com, SQLPerformance.com and yes, even YouTube.

Query Store on SQLPerformance

There are a number of options for collecting execution plans.  My preference would be to utilize Query Store, mostly because it captures the plans automatically, has really good reporting and has very low overhead.

Thanks for visiting my blog!!!

Extended Events – Reading Multiple Session Files

Extended Events are a very useful tool when trying to troubleshoot performance issues. When setting up an Extended Event session there are multiple options for the storage of the data.  One of the most common targets for storage is the event_file target.  This type of storage can be used for both an on-prem instance of SQL Server as well as an Azure SQL database.

This post will go over how to read from both locations, on-prem and an Azure storage account.  They are similar, but different enough to cover here. The biggest difference, as you will see, is setting everything up properly so you will be able to read multiple files with one T-SQL statement.

When setting up an event_file target there are a few options that can be set.  Of course the most logical is the path to the location where the files will be placed.  You will also have the opportunity to set the size of the files, the maximum number of files, as well as whether or not to allow file rollover.
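
Scripted out, those options look roughly like this; the session name, path and values are only placeholders:

-- add an event_file target with a 256 MB file size and up to 4 rollover files
ALTER EVENT SESSION [ExecutionTimeOut] ON SERVER
ADD TARGET package0.event_file(
    SET filename = N'C:\temp\ExecutionTimeOut.xel',
        max_file_size = (256),        -- in MB
        max_rollover_files = (4));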

Since you might have more than one file, you may want to read from multiple files with the same query; that is the focus of this blog post.  Of course, in order to read from a file, the file name is important, even if reading from multiple files.  If you look at the image below, you will see that there is a large number attached to the end of the file name.  Just a side note: I like to name the file the same as the Extended Event session name.  This makes it easier to match the files to the session.

According to Microsoft the number at the end of the file name is calculated using the following method:

“The integer value is calculated as the number of milliseconds between January 1, 1601 and the date and time the file is created.”

Reading On-Prem Files

When reading from an event_file target, the fn_xe_file_target_read_file function is used.  The first parameter of this function is the file path, which includes the file name.

Using the two files in the above image as an example, if I want to read from a single file, I would include the full path including the name of the file.

SELECT event_data = CONVERT(xml, event_data)
INTO #eeTable
FROM sys.fn_xe_file_target_read_file(N'C:\temp\ExecutionTImeOut_0_131920765279670000.xel', null, null, null);

Notice that in order to read from the file, I included the full name of the file.  However, there are two files due to the rollover.  What do I need to do to read from both files in one query?

In order to do this, there is one simple change that needs to be made.  Notice in the code below, the file name has been modified to use the “*” following the name.  The number at the end of the file is no longer needed.

SELECT event_data = CONVERT(xml, event_data)
INTO #eeTable
FROM sys.fn_xe_file_target_read_file(N'C:\temp\ExecutionTImeOut*.xel', null, null, null);

This will now return the data from all the XEL files in the location specified that start with “ExecutionTimeOut”.

Reading from Azure Storage Account

This works great for an on-prem instance of SQL Server, however there are some changes that must be made when querying the file targets of an Azure SQL database.

Before we get into the change that needs to be made, let’s go over a few prerequisites for using Extended Events with an Azure SQL Database and an Azure storage account.  This blog will not get very deep into a few of the prerequisites, however they do need to be mentioned.

The first prerequisite is having an Azure Storage account.  When you create one, you will be given a few options for the type of storage.  This can be done in your Azure portal.  You will need to decide on a few things as well, including what subscription you will use for the storage account and what region it is in.

To create the storage account, simply click the “Storage Account” menu item in the Azure portal.  You will need to fill in all the requested information.

Now that you have provided all the requested information, you can now utilize the storage.  When picking a target type in the Extended Events “wizard”, my preference is the event_file target type.

In order to read the Extended Event files, you will also need to get the Endpoint for the storage account.  As you can see above, you will need to go to the properties of the storage account.  When you do, you will then have to capture the Endpoint for the type of storage account you chose.  In this case, the option would be between a File storage account or a Blob storage account. Notice that the two Endpoints have only one small difference: right before the word “core”, you will see either File or Blob.  You just need to copy the path to the clipboard.  You will need this path when reading the Extended Event files.

The next item you will need is a Shared Access Signature for the storage account. This can be obtained a few different ways.  One is to use the Azure Storage Explorer and the other is to use the Azure portal.

To obtain this signature using the Azure portal, navigate to the storage account and then click “Shared access signature”.  The screen on the right will then be populated with the settings of the signature.  One important setting you want to be aware of is the End option under the “Start and expiry date\time” section.  Make sure you pick the appropriate duration. When this date\time has passed you will no longer be able to read the files in the storage account. When you try to read an Extended Event file and the End date has passed, you will not receive an error, you will just see no records returned.

Towards the bottom of the above screen, after you scroll down a bit, you will see a button that looks like the blue button at the top of the image below.  When you click that, the SAS token will appear.  This is what we will need to read the Extended Event files.  To capture this, just click the blue button to the right of the box under SAS token.  In the blue box below, you will see two dates; these are the start and end dates for the token.  When the end date has passed, the token will no longer work and your queries to read from the files will not return any data.  In addition, you will also not get an error.

We are almost done with the prerequisites. Once you have configured the Extended Event session to use a storage account and the session is running, you are ready to read the files.  There is a lot of really good information out there on how to create an Extended Event session.

The last requirement is to create a database scoped credential in your Azure SQL Database.  To do this, you will need the SAS token mentioned above.  However, notice that there is a “?” at the front of it.  This “?” needs to be removed prior to creating the credential.

Although not a requirement, I would also highly recommend that you download the Azure Storage Explorer.  It is a very nice tool that can make our lives much easier when working with storage accounts.

To create the credential, you will use SQL Server Management Studio.  Below you will see the code used to create the credential.  The first thing to remember is that you must run this code while connected to your Azure SQL Database.  Notice the path right after the CREATE line.  This is the path to your storage account and must start with HTTPS.  When you read this code, you will find the “SECRET”; this is where you place the SAS token you captured earlier.  Please remember to take the “?” out before executing the code.

CREATE DATABASE SCOPED CREDENTIAL [https://MyStorageAccountName.blob.core.windows.net/eefiles]
  -- this name must match the container path, start with https and must not contain a forward slash at the end; in this case, the container is eefiles
WITH IDENTITY = 'SHARED ACCESS SIGNATURE'
  -- this is a mandatory string and should not be changed
 , SECRET = 'sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-01T09:53:56Z&st=2019-07-01T01:53:56Z&spr=tldodr43465kadfuJRsCYUmi1OmYywUjw%3D'
   -- this is the shared access signature key that you obtained from the portal; it has been changed for security reasons
GO

You can query the sys.database_credentials view to confirm the credential was actually created.  Make sure to be connected to your Azure SQL Database.

SELECT * FROM sys.database_credentials

Finally we have all the prerequisites done, now we can move on to actually reading the files.

As you can see in the image below, the file names follow the same pattern as an on-prem instance of SQL Server.

Now that we are ready to read the files, one change you will need to make is to put in the path to your storage account.  The code below will read just from the file hard coded in the path.

SELECT event_data = CONVERT(xml, event_data)
INTO #eeTable
FROM sys.fn_xe_file_target_read_file(
    'https://MyStorageAccountName.blob.core.windows.net/eefiles/efiles_0_132044728780700000.xel',
                null, null, null
            );

However, we need to read from all the files.  When reading multiple files with an on-prem instance of SQL Server, we simply removed the number at the end of the file name and replaced it with an “*”.  Logic would say that should work when reading from an Azure Storage account, but it doesn’t.  You will not get an error either; it will just not return any rows.

The path below will run, but it will not generate an error either, just no rows.

'https://MyStorageAccountName.blob.core.windows.net/eefiles/efiles*.xel',

The only change we need to make to the above code is to remove the numbers at the end of the file name.  Make sure to include the front part of the name and remove the file extension as well.

'https://MyStorageAccountName.blob.core.windows.net/eefiles/efiles',

Now, you can read from all the files with one query.  However, you need to make sure you are not pulling too much data; it could time out, as it has for me a few times.

Using SSMS Menu items

Another way to read from multiple target files is inside SSMS.  However, it only works for an on-prem instance of SQL Server.  On the File menu, under Open, you will see a flyout menu with the “Merge Extended Event Files…” item.

When you click, you will have the option to navigate to the location of the files you wish to merge and display the data.  You will see the form like the one below.

You will then click the Add button and select the desired files.  You can use Ctrl+click to select more than one file.  Once you do, the files will be listed and you just click OK for the data to be displayed in SSMS.  Remember, this does not work for an Azure SQL Database.

The files don’t have to be from the same session to merge them, although many times they will be.  As you can see below there are files from two different sessions, the deadlock session and the ExecutionTimeout session.

Once the data is merged and displayed in SSMS, you can group the data by event. This can be done by right clicking on the column in the top half of the results and going to “Group by this column”.  The results will then look similar to the image below.  Notice that the top four rows are from the Deadlock session, while the bottom row, the Attention event, is from the QueryTimeout session.

 

Additional Code to Read the XML Data

While this post is about how to read multiple files, I also want to include the code to parse through the resulting XML from the above code.  If you look, the above code simply takes the data and converts it to XML, while the code below will parse the XML into a more readable format. You will need to make a few changes: you will need to use the path to your storage account and you will need to update the columns you want to extract from the XML.

If you look at the code below, there are three steps.  Well, actually only two, since the middle step is optional.

--Extract the XML
SELECT event_data = CONVERT(xml, event_data)
INTO #eeTable
 FROM sys.fn_xe_file_target_read_file(
                -- Fill in the Storage Account name and the associated Container name.
                'https://Mystorageaccount.blob.core.windows.net/eefiles/efiles',
                null, null, null
            );

--SELECT statement just to confirm there are rows, this is not needed
SELECT * FROM #eeTable

--Parse the XML and put it into the temporary table
SELECT
  ts       = event_data.value(N'(event/@timestamp)[1]', N'datetime'),
  [sql]    = event_data.value(N'(event/action[@name="sql_text"]/value)[1]', N'nvarchar(max)'),
  duration = event_data.value(N'(event/data[@name="duration"]/value)[1]', N'nvarchar(max)'),
  spid     = event_data.value(N'(event/action[@name="session_id"]/value)[1]', N'int'),
  user_nm  = event_data.value(N'(event/action[@name="username"]/value)[1]', N'nvarchar(max)'),
  dbname   = event_data.value(N'(event/action[@name="database_name"]/value)[1]', N'nvarchar(max)'),
  explan   = event_data.value(N'(event/data[@name="showplan_xml"]/value)[1]', N'nvarchar(max)')
FROM #eeTable
ORDER BY ts

DROP TABLE #eeTable

 

You can get the column names from the Extended Events GUI.  When you are looking at an action, you would use 'event/action'; when you want to pull a column from the “Event Fields” tab, you would use 'event/data'.  Something to remember: if you misspell a column name, you will not get an error when you run the query.  The column will just be NULL for every row.

Here is an example of where to get the Action column names. Notice that they can be found on the Global Fields (Actions) tab.

Below is an example of the “Data” fields.  Notice that the “Event Fields” tab is the active tab.

Now you can read from multiple Extended Event files with a single statement.  This will allow you to take the results, dump the data into a temporary table and then use T-SQL to find the data you are looking for.

Thank you for visiting my blog!!!!

Extended Events File Target – File Rollover and Max File Size settings

Extended Events has a number of options for the storage of the events.  These include ring_buffer, pair_matching, event_file and several others.  As expected, each of these has different options for configuration.  This post is about the event_file target.  As a side note, while Extended Event sessions will usually have only one target, it is possible to have more than one.  The system_health session has two: event_file and ring_buffer.

Below you will find a screenshot of the options for the event_file target.  These are the same type of options we have had in Profiler for many years.

File name just identifies what the file name will start with.  As you can see in the image below, SQL Server adds a large number to the file name.  This number is calculated as described below.

“The integer value is calculated as the number of milliseconds between January 1, 1601 and the date and time the file is created.”

The other options are pretty straightforward. The Maximum file size has two options, MB and GB.

If I only have a few sessions, I could easily go and look at each session.  However, what if I have a large number of sessions and I want to review these settings for all sessions.  It would be nice to be able to pull this information out of the database using T-SQL.

When using Extended Events, there are a number of DMVs that can help us gather information about the sessions that exist on the server. If you do a Google search on “DMV Extended Events” you will get a number of links that point to one of the views in the list below.  Notice that they all start with “dm_xe_”.  These are very useful views, however the data we are looking for cannot be found in any of these.

To get this information we need to look in a different set of views.  These views start with “server_event” for an on-prem instance and “database_event” for an Azure SQL database.  Below is a list of this set of views.

To get the information we are looking for, we need to use two of these: server_event_session_fields and server_event_sessions.  We will need to join these two views on the event_session_id column in both views.  If you look at the image below, you will notice that SQL Server stores the values as a row in the view rather than a column for each option.  Even though the GUI offers us two options for the maximum file size, MB and GB, the actual value is stored in MB.

The maximum file size and the maximum number of rollover files will not appear in these views unless they are changed from the default. If you look at the image below, one of the options is missing, maximum rollover files.  This is because it is set to the default value of 5.

Just a little note about the max_rollover_files option.  This has a default of 5, as stated earlier, however if you see that it has a value of 0, then “Enable File Rollover” is set to false, as in the image below.  If you are wondering why all the options are not active, this is because the target has already been created.

To get the defaults we need to run the query below.

SELECT    name
  , object_name
  , type_name
  , column_value
  , description
FROM sys.dm_xe_object_columns
WHERE object_name = 'event_file'

When you run the above query, your result set will look like what is below.  Notice the Max File Size and Max Number of files.

To get the data we are looking for we need to use the query below.

SELECT
 s.name,
 f.name,
 f.value
FROM
 sys.server_event_session_fields AS f
INNER JOIN
 sys.server_event_sessions AS s
ON
 f.event_session_id = s.event_session_id
WHERE f.name IN ('filename', 'max_file_size', 'max_rollover_files')
ORDER BY s.name

I have taken the above code and used PIVOT to hopefully make the results more readable.  This code also takes into account the default values, which as you may remember are in a different location.

 

Here is the complete code for an on-prem instance of SQL Server, including the PIVOT.

DECLARE @cols  AS NVARCHAR(MAX)
DECLARE @query AS NVARCHAR(MAX);

CREATE TABLE #EEOptions
  ( OptionName       sql_variant,
    Filename         sql_variant,
    MaxFileSize      sql_variant,
    MaxRolloverFiles sql_variant)

SET @cols = STUFF((SELECT DISTINCT ',' + QUOTENAME(name)
      FROM sys.server_event_session_fields
      WHERE name IN ('filename', 'max_file_size', 'max_rollover_files')
      FOR XML PATH(''), TYPE
      ).value('.', 'NVARCHAR(MAX)')
    , 1, 1, '')

SET @query = 'INSERT #EEOptions SELECT SessionName AS ''OptionName'', ' + @cols + ' FROM
            (
            SELECT
             s.name AS ''SessionName'',
             f.name AS ''OptionName2'',
             f.value
            FROM
             sys.server_event_session_fields AS f
            INNER JOIN
             sys.server_event_sessions AS s
            ON
             f.event_session_id = s.event_session_id
            WHERE f.name IN (''filename'', ''max_file_size'', ''max_rollover_files'')
            ) x
            PIVOT
            (
                  MAX(value)
            FOR OptionName2 IN (' + @cols + ')
            ) p '

EXECUTE(@query)

SELECT OptionName
  , Filename
  , ISNULL(MaxFileSize, 1024) AS 'MaxFileSize'
  , ISNULL(MaxRolloverFiles, 5) AS 'MaxRolloverFiles'
FROM #EEOptions

DROP TABLE #EEOptions

This code will not work for an Azure SQL Database.  I will hopefully complete a post on how to do this in Azure.  One thing I can mention is that the properties are slightly different. Notice that the Maximum file size is still an option, however Maximum Rollover Files is not.
Thanks for taking the time to read my blog.  It is always appreciated.

Pending Restart for Authentication Type Change

In a previous post, I talked about how to determine if your SQL Server needs a restart (click here to read it).  In that post I talk about how to use the sys.configurations view to determine if a restart is needed.  This just looks at the running vs configured values of the configuration settings. After talking about this with a co-worker, he had a very interesting question.

What about the server settings that are not in the configuration view?

Some server level settings, when changed require a reboot to take effect.  For example, if you change the Authentication type, a restart of the SQL Server service is needed for the change to take effect.

You will see this pop up box when you change a server property and a restart is needed.

In order to determine if a restart is needed, we need to look in two places, the registry and a server property.  There are a number of server properties; because this post is about the Authentication type, the property we are looking for is “IsIntegratedSecurityOnly”.

When you change the value of the Authentication type in SQL Server Management Studio, it actually only changes it in the registry.  For the Server Property to change and match the registry, a restart of the SQL Server service is needed.

The code below can be used to complete the comparison. It looks at both places and compares the values.

USE master
GO
DECLARE @authmodeServerProp TINYINT = 0
DECLARE @Restart            VARCHAR(100)

SELECT @authmodeServerProp = CASE SERVERPROPERTY('IsIntegratedSecurityOnly')
            WHEN 0 THEN 2  --Set to Mixed, Windows and SQL authentication
            WHEN 1 THEN 1  --Set to Windows Only
       END

DECLARE @AuthMode INT = NULL
EXEC xp_instance_regread N'HKEY_LOCAL_MACHINE',
                         N'Software\Microsoft\MSSQLServer\MSSQLServer',
                         N'LoginMode', @AuthMode OUTPUT,
                         N'no_output'

IF @authmodeServerProp = @AuthMode
 BEGIN
  SELECT @Restart = 'No pending restart of the service due to ' +
                    'a change in the Authentication Mode.'
 END
ELSE
 BEGIN
  SELECT @Restart = 'The Authentication Mode has changed, look into ' +
                    'restarting the service'
 END

SELECT    CASE @authmodeServerProp
      WHEN  1 THEN 'Windows Only'
      WHEN  2 THEN 'Windows and SQL Authentication'
      ELSE  'Something went wrong' END AS 'ServerProperty'
    , CASE @AuthMode
      WHEN  1 THEN 'Windows Only Authentication'
      WHEN  2 THEN 'Mixed Authentication'
      ELSE 'There was a problem' END AS 'Registry value'
    , @Restart   AS 'RestartNeeded?'

 

The code above simply looks at the registry and the server property.  If they do not match, then a restart of the service is needed.  One thing to notice: the server property works with values of 0 and 1, while the registry works with values of 1 and 2.  That is why the code changes the 0 and 1 of the server property to a 1 or a 2.

When you run the above code, you will get a result set similar to the one below.  The first two columns return the current value of each of the locations, while the last column tells us if a restart is needed.

This will be added to my “How to complete a Server Assessment” SQL Saturday presentation.

Thanks for stopping by my blog.

Is there a pending reboot on my SQL Server?

As a DBA we make every effort to avoid bringing systems offline, however there are times where that is exactly what needs to happen.  From a Windows and SQL Server perspective there are a number of changes that require a reboot.  These reasons range from the application of Windows updates, to the changing of Windows policies, to changes to SQL Server settings.  Over the years there have been a number of times where I have connected to a server and found a message stating that there is a pending reboot.  When changes are made, of course we want the changes to take effect as soon as possible.  In some cases a change will not take effect until a reboot happens. In order to determine if there is a pending reboot, we need to look at a number of places, including both Windows and SQL Server.

Starting with Windows, there are some changes that require a reboot, including the application of Windows updates.  Checking for a pending reboot in Windows can be done using one of the newest friends of a DBA, PowerShell.  The information that needs to be queried is in the registry.

This is the popup that I have seen a number of times showing that there is a pending update.  What makes this the less than ideal option is that I need to logon to the server.

Brian Wilhite has a very nice PowerShell module that you can use to check for pending reboots in Windows.  It can be found on GitHub here. It can also be found on PowerShellGallery.com here. You can also download the ps1 file here.

On August 20, 2018 he stated on Microsoft’s Script Center web site that he is no longer making updates to this on the Script Center site, you will need to get it from GitHub instead, link is above.  According to the Script Center site here are the versions of the OS it has been tested on.  I have tested it on Windows 10 and it appears to work just fine.

The script below will install the module and complete the check.

# Install
Install-Module -Name PendingReboot
# Run
Test-PendingReboot -Detailed -ComputerName localhost
In order to run this script, you must run the PowerShell application as Administrator.  If you do not, you will see this error.
Below are the results of running the PendingReboot PowerShell script.  By adding the -Detailed switch you will see more information about what might have caused the need for the restart.  In addition, you can add the -ComputerName switch to check a remote computer.
What I like about this is that it allows me to test a remote computer.  Just as a precautionary measure, make sure to thoroughly research and exercise the appropriate caution prior to running any code obtained on the internet on production servers.
Of course knowing if there is a pending reboot in Windows is important, however there are a number of SQL Server settings that will also cause a need for a restart of the SQL Server service in order to take effect.  The above PowerShell script will not detect this.  There are a few ways using SQL Server Management Studio that can tell us if the service needs to be restarted.
The first method is to look at the settings of the SQL Server.  At the bottom you will see “Configured Value” and “Running Value”.  If you look at the images below, you can see that the Default index fill factor is different.  The configured value is 0, while the running value is 85.  This is a sign that the SQL Server service needs to be restarted.

The drawback of this type of check is that I need to manually go to each setting and do a compare.  Luckily, Microsoft has added a view for us to use to check this programmatically.  The view is sys.configurations.  There are a number of columns that are important for what we are looking for.  Below I have listed the columns that are important for the purpose of this post.  Of course there are more columns than these.

is_dynamic is the column that will tell us if a restart is needed when the value of the setting is changed. If this is true, a restart of the service is not needed for the change to take effect.

is_advanced identifies if it is an advanced option.

value_in_use is the column that tells us the running value.

value identifies the configured value.

When looking at this, if value_in_use and value are different, then a service restart is needed.  In this case, the fill factor (%) is different.  The running value in use is 85% while the configured value is 0.

 

This is the code used to get the above result set.

SELECT    name
  , is_dynamic
  , is_advanced
  , value_in_use
  , value
FROM sys.configurations
WHERE is_dynamic = 0

In order to determine if a restart is needed, we need to add more to the WHERE clause.  We just need to do a simple comparison of the value_in_use and value columns.  Of course we are also only looking at the settings where is_dynamic is set to FALSE or 0.  If the query below returns any rows, a restart of the service is needed in order for the change to take effect.

SELECT    name
  , is_dynamic
  , is_advanced
  , value_in_use
  , value
FROM sys.configurations
WHERE is_dynamic = 0
  AND value <> value_in_use
IF @@ROWCOUNT >= 1
 BEGIN
  SELECT 'Restart is needed' AS 'Restart Needed?'
 END
ELSE
 BEGIN
  SELECT 'NO restart is needed' AS 'Restart Needed?'
 END
Hopefully you will find this information useful.
Thank you for visiting my blog.

How Often does SQL Server look for Deadlocks

Although life would be much easier for every DBA if deadlocks never happened, we all realize that they do happen.  There is a great deal of information out there on the causes of and solutions for deadlocks.  The focus of this post is more about how often SQL Server checks to see if a deadlock is happening.

Prior to getting into how SQL Server detects deadlocks, let’s take a few minutes to quickly go over exactly what a deadlock is.

Microsoft states this on their website:

“A deadlock occurs when two or more tasks permanently block each other by each task having a lock on a resource which the other tasks are trying to lock”

Because deadlocks happen when two tasks permanently block each other, without deadlock detection both processes would simply block forever. Of course this could never be good in a production system.  It is important that these situations be identified and dealt with in some manner.  This is where the SQL Server database engine steps in; it frequently searches the lock manager looking for deadlocks.

Initially the deadlock check takes place every 5 seconds. This interval may decrease depending on whether it finds a deadlock.  If a deadlock is not found, the check will remain at every 5 seconds.  As mentioned, 5 seconds is the default interval; it is also the maximum number of seconds between checks. If SQL Server finds a deadlock, then the interval will be decreased and checks will happen more frequently.  The minimum interval is 100 ms.  Finally, if deadlocks were found at one point and they no longer exist, the interval will return to 5 seconds.

While there are many options that can be changed in SQL Server, the frequency of the deadlock checks is not among them.  This option is not configurable.  These checks are very lightweight and should not cause any issues on the system.  If SQL Server is checking more frequently than every 5 seconds, that means it is finding deadlocks and your time and energy should be focused on addressing that.

In addition to the interval for the deadlock check, there are a few other items related to deadlocks that I feel are important to know as a DBA.  The most obvious is what some of the causes of deadlocks are. If you Google what causes deadlocks you will find a ton of information. I also feel that it is important to know how to get information about the deadlock as well as how SQL Server determines which process will be the victim.  I have a blog post on one of the methods SQL Server uses to determine the victim; it can be found here.  That post is about the DEADLOCK_PRIORITY option.  SQL Server will also use the cost to roll back as one of the criteria to determine the victim.

A logical question would be: how do I capture information about deadlocks?  There are a number of ways to do this within SQL Server.  I like to use Extended Events.  I have a blog post on how to use Extended Events to capture the deadlock graph; it can be found here. You can either create your own Extended Event session or use the system_health session that already exists.  You can also use trace flags 1222 and 1204.  In older versions of SQL Server, the trace flags were the only way to capture this information.
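
As a quick sketch, this is roughly how you can pull deadlock graphs that the system_health session has already captured.  It assumes the .xel files are in the default LOG directory, so add a full path if yours are elsewhere:

-- read the deadlock graphs captured by the built-in system_health session
SELECT CONVERT(xml, event_data).value('(event/@timestamp)[1]', 'datetime2') AS event_time,
       CONVERT(xml, event_data).query('(event/data[@name="xml_report"]/value/deadlock)[1]') AS deadlock_graph
FROM sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL)
WHERE object_name = 'xml_deadlock_report';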

You can also use a third party monitoring tool such as SQL Monitor by Red-Gate.  This is a great tool!  Here is a link to a demo.

Here are some really good links on deadlocks

 How to Fix SQL Server deadlocks – Grant Fritchey at Red-Gate

 Overview of Deadlocks – www.sqlshack.com

 Analyzing Deadlocks – Microsoft

I will also be doing future blog posts on deadlocks from time to time.  Thank you for visiting my blog!!!


Extended Events – Database Restore

Many of us have restored production databases.  Of course no one wants to do it, but we all acknowledge that there are times when that is exactly what we need to do: restore a database.  Have you ever wondered what steps SQL Server actually goes through to complete the restore?  Extended Events offers an event that will allow us to see inside the restore process.

When I did a search of what events are available for restore operations, I found two, backup_restore_progress_trace and databases_backup_restore_throughput. Backup_restore_progress_trace will be the focus of this post.

Backup_restore_progress_trace does exactly what you might think it would do, documents the internal steps of the restore process.
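
For reference, here is a minimal sketch of the kind of session I used to capture the event on a recent build; the session name and file name are my own choices:

CREATE EVENT SESSION [RestoreTrace] ON SERVER
ADD EVENT sqlserver.backup_restore_progress_trace
ADD TARGET package0.event_file(SET filename = N'RestoreTrace')
WITH (STARTUP_STATE = OFF);
GO
ALTER EVENT SESSION [RestoreTrace] ON SERVER STATE = START;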

Below are all the steps that backup_restore_progress_trace picked up during a restore of a very small database.  These are also in the order in which the steps took place.  These steps are for a restore of a database that does not exist yet.

RESTORE DATABASE started
Opening the backup set
Processing the leading metadata
Planning begins
Effective options: Checksum=0, Compression=0, Encryption=0, BufferCount=6, MaxTransferSize=64 KB
Planning is complete
Beginning OFFLINE restore
Attached database as DB_ID=5
Preparing containers
Containers are ready
Restoring the backup set
Estimated total size to transfer = 2838528 bytes
Transferring data
FileHandleCache: Initial CacheSize: 48
BackupStream(0): Processing MSDA of size 43 extents
6 percent (196608/2838528 bytes) processed
11 percent (327680/2838528 bytes) processed
16 percent (458752/2838528 bytes) processed
20 percent (589824/2838528 bytes) processed
25 percent (720896/2838528 bytes) processed
30 percent (851968/2838528 bytes) processed
36 percent (1048576/2838528 bytes) processed
41 percent (1179648/2838528 bytes) processed
46 percent (1310720/2838528 bytes) processed
50 percent (1441792/2838528 bytes) processed
55 percent (1572864/2838528 bytes) processed
60 percent (1703936/2838528 bytes) processed
66 percent (1900544/2838528 bytes) processed
71 percent (2031616/2838528 bytes) processed
76 percent (2162688/2838528 bytes) processed
80 percent (2293760/2838528 bytes) processed
85 percent (2424832/2838528 bytes) processed
90 percent (2555904/2838528 bytes) processed
96 percent (2752512/2838528 bytes) processed
BackupStream(0): Completed MSDA
Waiting for log zeroing to complete
Log zeroing is complete
BackupStream(0): Processing MSTL (FID=2, VLFID=36, size=65536 bytes)
100 percent (2838528/2838528 bytes) processed
Data transfer is complete
Backup set is restored
Offline roll-forward begins
Processing 8 VLF headers
Processing VLF headers is complete
First LSN: 36:4232:37, Last LSN: 36:4272:1
Stop LSN: 36:4272:1
Offline roll-forward is complete
Database fixup is complete
Transitioning database to ONLINE
Restarting database for ONLINE
PostRestoreContainerFixups begins
PostRestoreContainerFixups is complete
PostRestoreReplicationFixup begins
PostRestoreReplicationFixup is complete
Database is restarted
Resuming any halted Fulltext crawls
Writing history records
Writing history records is complete (elapsed = 58 ms)
MSDB maintenance is complete
RESTORE DATABASE finished

Many of these steps are pretty straightforward.  They can also easily be grouped into phases.

The first few steps simply prepare for the restore.  Once SQL Server is done preparing, it then starts the restore.  You can see there are a number of steps in this process, including the assigning of a database ID.  Then you can see the increments in percent of where the restore stands.  Once the restore is complete, SQL Server moves on to the steps in which it rolls forward transactions, brings the database online and starts the database.  After this, SQL Server writes the history into the MSDB system tables.
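
Those history records end up in msdb, so you can confirm that final step with a query along these lines (a sketch, nothing more):

-- the most recent restores recorded in msdb
SELECT TOP (5)
       rh.destination_database_name,
       rh.restore_date,
       rh.restore_type
FROM msdb.dbo.restorehistory AS rh
ORDER BY rh.restore_date DESC;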

While these steps are for a new database, many of the same steps will take place for the restore of an existing database.  The most noticeable additions to the steps are the acquiring of database locks.  As you can see below, these are Exclusive locks.

Acquiring X lock on the database

Acquired X lock on the database

SQL Server will spend varying amounts of time on each step.  As expected, the most time is spent on the actual transfer of data to the restored database. Below you will see a table with two columns, the timestamp and the step name, for the restore I completed for this test.  While this process only took a few seconds, a larger database would obviously take much more time.

Time Stamp At Start of Step Operation
00:11:38.457 RESTORE DATABASE started
00:11:38.457 Opening the backup set
00:11:38.473 Processing the leading metadata
00:11:38.473 Planning begins
00:11:38.490 Effective options: Checksum=0, Compression=0, Encryption=0, BufferCount=6, MaxTransferSize=64 KB

00:11:38.490 Planning is complete
00:11:38.490 Beginning OFFLINE restore
00:11:38.490 Attached database as DB_ID=5
00:11:38.490 Preparing containers
00:11:38.577 Containers are ready
00:11:38.640 Restoring the backup set
00:11:38.640 Estimated total size to transfer = 2838528 bytes

00:11:38.640 Transferring data
00:11:38.640 FileHandleCache: Initial CacheSize: 48
00:11:38.640 BackupStream(0): Processing MSDA of size 43 extents

00:11:38.653 6 percent (196608/2838528 bytes) processed
00:11:38.657 11 percent (327680/2838528 bytes) processed
00:11:38.660 16 percent (458752/2838528 bytes) processed
00:11:38.667 20 percent (589824/2838528 bytes) processed
00:11:38.687 25 percent (720896/2838528 bytes) processed
00:11:38.687 30 percent (851968/2838528 bytes) processed
00:11:38.700 36 percent (1048576/2838528 bytes) processed
00:11:38.713 41 percent (1179648/2838528 bytes) processed
00:11:38.713 46 percent (1310720/2838528 bytes) processed
00:11:38.713 50 percent (1441792/2838528 bytes) processed
00:11:38.740 55 percent (1572864/2838528 bytes) processed
00:11:38.740 60 percent (1703936/2838528 bytes) processed
00:11:38.763 66 percent (1900544/2838528 bytes) processed
00:11:38.763 71 percent (2031616/2838528 bytes) processed
00:11:38.763 76 percent (2162688/2838528 bytes) processed
00:11:38.790 80 percent (2293760/2838528 bytes) processed
00:11:38.790 85 percent (2424832/2838528 bytes) processed
00:11:38.797 90 percent (2555904/2838528 bytes) processed
00:11:38.817 96 percent (2752512/2838528 bytes) processed
00:11:38.837 BackupStream(0): Completed MSDA
00:11:38.840 Waiting for log zeroing to complete
00:11:38.877 Log zeroing is complete
00:11:38.877 BackupStream(0): Processing MSTL (FID=2, VLFID=36, size=65536 bytes)
00:11:38.880 100 percent (2838528/2838528 bytes) processed
00:11:38.897 Data transfer is complete
00:11:38.910 Backup set is restored
00:11:38.957 Offline roll-forward begins
00:11:38.957 Processing 8 VLF headers
00:11:38.960 Processing VLF headers is complete
00:11:38.963 First LSN: 36:4232:37, Last LSN: 36:4272:1
00:11:38.963 Stop LSN: 36:4272:1
00:11:38.967 Offline roll-forward is complete
00:11:38.990 Database fixup is complete
00:11:38.993 Transitioning database to ONLINE
00:11:38.993 Restarting database for ONLINE
00:11:39.093 PostRestoreContainerFixups begins
00:11:39.097 PostRestoreContainerFixups is complete
00:11:39.100 PostRestoreReplicationFixup begins
00:11:39.267 PostRestoreReplicationFixup is complete
00:11:39.277 Database is restarted
00:11:39.297 Resuming any halted Fulltext crawls
00:11:39.307 Writing history records
00:11:39.367 Writing history records is complete (elapsed = 58 ms)
00:11:39.367 MSDB maintenance is complete
00:11:39.370 RESTORE DATABASE finished

Hidden within the steps are some very useful tidbits of information.

    • Database ID
    • Number of Extents
    • Number of bytes restored
    • Beginning and Ending LSN for the roll forward
    • Total time to write history records
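
The "Writing history records" step is what populates the backup and restore history tables in msdb, so you can cross-check a few of these tidbits after the fact.  Below is a small sketch; the database name RestoreDemo is a placeholder for whatever database you restored.

-- Placeholder database name; swap in the database you restored.
SELECT TOP (5)
       rh.restore_date,
       rh.destination_database_name,
       rh.restore_type,   -- D = database, L = log, etc.
       bs.backup_size     -- size of the backup set that was restored
FROM msdb.dbo.restorehistory AS rh
JOIN msdb.dbo.backupset AS bs
  ON bs.backup_set_id = rh.backup_set_id
WHERE rh.destination_database_name = N'RestoreDemo'
ORDER BY rh.restore_date DESC;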

While the very small test database took about 60 steps to complete, I was wondering what would happen if the database was bigger.  Using the AdventureWorks2014 database, I completed a restore and there were many more steps, a total of 114.  Below are the steps in order.

backup_restore_progress_trace 2019-06-18 11:17:01.5974587 +00:00 BACKUP LOG started

backup_restore_progress_trace 2019-06-18 11:17:01.5975148 +00:00 Acquiring U lock on the database

backup_restore_progress_trace 2019-06-18 11:17:01.5975471 +00:00 Synchronizing with other operations on the database is complete

backup_restore_progress_trace 2019-06-18 11:17:01.5988528 +00:00 Halting Fulltext crawls

backup_restore_progress_trace 2019-06-18 11:17:01.5994759 +00:00 Acquiring X lock on the database

backup_restore_progress_trace 2019-06-18 11:17:01.5995329 +00:00 Acquired X lock on the database

backup_restore_progress_trace 2019-06-18 11:17:01.6135276 +00:00 Writing a failover checkpoint

backup_restore_progress_trace 2019-06-18 11:17:01.6753875 +00:00 Failover checkpoint is complete

backup_restore_progress_trace 2019-06-18 11:17:01.6755993 +00:00 Failover FlushCache is complete

backup_restore_progress_trace 2019-06-18 11:17:01.6756137 +00:00 Opening the backup media set

backup_restore_progress_trace 2019-06-18 11:17:01.6801075 +00:00 The backup media set is open

backup_restore_progress_trace 2019-06-18 11:17:01.6801651 +00:00 Preparing the media set for writing

backup_restore_progress_trace 2019-06-18 11:17:01.6811968 +00:00 The media set is ready for backup

backup_restore_progress_trace 2019-06-18 11:17:01.6812025 +00:00 Effective options: Checksum=0, Compression=0, Encryption=0, BufferCount=7, MaxTransferSize=1024 KB

backup_restore_progress_trace 2019-06-18 11:17:01.6812297 +00:00 Start LSN: 500:27608:118, SERepl LSN: 0:0:0

backup_restore_progress_trace 2019-06-18 11:17:01.6812385 +00:00 First LSN: 630:28048:1

backup_restore_progress_trace 2019-06-18 11:17:01.6812477 +00:00 Estimated total size = 542920704 bytes (data size = 0 bytes, log size = 542920704 bytes)

backup_restore_progress_trace 2019-06-18 11:17:01.6812508 +00:00 Work estimation is complete

backup_restore_progress_trace 2019-06-18 11:17:01.6812559 +00:00 Last LSN: 663:7624:1

backup_restore_progress_trace 2019-06-18 11:17:01.6812606 +00:00 Scanning filestream data

backup_restore_progress_trace 2019-06-18 11:17:01.6819216 +00:00 Scanning filestream data is complete

backup_restore_progress_trace 2019-06-18 11:17:01.6819294 +00:00 Writing the leading metadata

backup_restore_progress_trace 2019-06-18 11:17:01.6819931 +00:00 BackupStream(0): Writing leading metadata to the device C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Backup\AdventureWorks2014_LogBackup_2019-06-18_06-16-59.bak

backup_restore_progress_trace 2019-06-18 11:17:01.6821581 +00:00 Copying filestream data

backup_restore_progress_trace 2019-06-18 11:17:01.6905789 +00:00 Copying filestream data is complete

backup_restore_progress_trace 2019-06-18 11:17:01.6905984 +00:00 Copying transaction log

backup_restore_progress_trace 2019-06-18 11:17:01.6907865 +00:00 MediaFamily(0): FID=2, VLFID=630, DataStreamSize=2424832 bytes

backup_restore_progress_trace 2019-06-18 11:17:01.7037592 +00:00 MediaFamily(0): FID=2, VLFID=631, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:01.7704909 +00:00 MediaFamily(0): FID=2, VLFID=632, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:01.8050246 +00:00 5 percent (27590656/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:01.8396719 +00:00 MediaFamily(0): FID=2, VLFID=633, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:01.9147575 +00:00 MediaFamily(0): FID=2, VLFID=634, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:01.9238444 +00:00 10 percent (54853632/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:01.9810017 +00:00 MediaFamily(0): FID=2, VLFID=635, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:02.0323014 +00:00 15 percent (82116608/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:02.0498153 +00:00 MediaFamily(0): FID=2, VLFID=636, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:02.1195952 +00:00 MediaFamily(0): FID=2, VLFID=637, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:02.1464469 +00:00 20 percent (109379584/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:02.1877744 +00:00 MediaFamily(0): FID=2, VLFID=638, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:02.2570845 +00:00 25 percent (136642560/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:02.2573724 +00:00 MediaFamily(0): FID=2, VLFID=639, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:02.3263001 +00:00 MediaFamily(0): FID=2, VLFID=640, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:02.3702739 +00:00 30 percent (163905536/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:02.3941585 +00:00 MediaFamily(0): FID=2, VLFID=641, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:02.4635771 +00:00 MediaFamily(0): FID=2, VLFID=642, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:02.4768947 +00:00 35 percent (190119936/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:02.5315044 +00:00 MediaFamily(0): FID=2, VLFID=643, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:02.5872516 +00:00 40 percent (217382912/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:02.6020868 +00:00 MediaFamily(0): FID=2, VLFID=644, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:02.6732336 +00:00 MediaFamily(0): FID=2, VLFID=645, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:02.7043113 +00:00 45 percent (244645888/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:02.7416025 +00:00 MediaFamily(0): FID=2, VLFID=646, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:02.8128799 +00:00 MediaFamily(0): FID=2, VLFID=647, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:02.8199368 +00:00 50 percent (271908864/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:02.8834042 +00:00 MediaFamily(0): FID=2, VLFID=648, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:02.9338510 +00:00 55 percent (299171840/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:02.9552501 +00:00 MediaFamily(0): FID=2, VLFID=649, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:03.0292680 +00:00 MediaFamily(0): FID=2, VLFID=650, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:03.0524488 +00:00 60 percent (326434816/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:03.0999388 +00:00 MediaFamily(0): FID=2, VLFID=651, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:03.1645396 +00:00 65 percent (353697792/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:03.1683370 +00:00 MediaFamily(0): FID=2, VLFID=652, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:03.2374707 +00:00 MediaFamily(0): FID=2, VLFID=653, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:03.2742992 +00:00 70 percent (380960768/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:03.3044229 +00:00 MediaFamily(0): FID=2, VLFID=654, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:03.3723517 +00:00 MediaFamily(0): FID=2, VLFID=655, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:03.4062676 +00:00 75 percent (408223744/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:03.4598903 +00:00 MediaFamily(0): FID=2, VLFID=656, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:03.5105134 +00:00 80 percent (434438144/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:03.5290138 +00:00 MediaFamily(0): FID=2, VLFID=657, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:03.6010196 +00:00 MediaFamily(0): FID=2, VLFID=658, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:03.6293332 +00:00 85 percent (461701120/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:03.6878991 +00:00 MediaFamily(0): FID=2, VLFID=659, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:03.7633069 +00:00 90 percent (488964096/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:03.7636287 +00:00 MediaFamily(0): FID=2, VLFID=660, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:03.8362761 +00:00 MediaFamily(0): FID=2, VLFID=661, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:03.8837162 +00:00 95 percent (516227072/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:03.9154381 +00:00 MediaFamily(0): FID=2, VLFID=662, DataStreamSize=16777216 bytes

backup_restore_progress_trace 2019-06-18 11:17:03.9883764 +00:00 MediaFamily(0): FID=2, VLFID=663, DataStreamSize=3932160 bytes

backup_restore_progress_trace 2019-06-18 11:17:04.0073959 +00:00 100 percent (542920704/542920704 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:04.0187093 +00:00 Copying transaction log is complete

backup_restore_progress_trace 2019-06-18 11:17:04.0187231 +00:00 Writing the trailing metadata

backup_restore_progress_trace 2019-06-18 11:17:04.0188013 +00:00 BackupStream(0): Writing trailing metadata to the device C:\Program Files\Microsoft SQL Server\MSSQL14.MSSQLSERVER\MSSQL\Backup\AdventureWorks2014_LogBackup_2019-06-18_06-16-59.bak

backup_restore_progress_trace 2019-06-18 11:17:04.0188753 +00:00 Writing the end of backup set

backup_restore_progress_trace 2019-06-18 11:17:05.1136445 +00:00 Resuming any halted Fulltext crawls

backup_restore_progress_trace 2019-06-18 11:17:05.1145570 +00:00 Writing history records

backup_restore_progress_trace 2019-06-18 11:17:05.1570406 +00:00 Writing history records is complete (elapsed = 43 ms)

backup_restore_progress_trace 2019-06-18 11:17:05.1571172 +00:00 BACKUP LOG finished

backup_restore_progress_trace 2019-06-18 11:17:05.1846587 +00:00 Acquiring U lock on the database

backup_restore_progress_trace 2019-06-18 11:17:05.1846855 +00:00 RESTORE DATABASE started

backup_restore_progress_trace 2019-06-18 11:17:05.1846983 +00:00 Acquiring X lock on the database

backup_restore_progress_trace 2019-06-18 11:17:05.1847096 +00:00 Acquired X lock on the database

backup_restore_progress_trace 2019-06-18 11:17:05.1848243 +00:00 Opening the backup set

backup_restore_progress_trace 2019-06-18 11:17:05.1929402 +00:00 Processing the leading metadata

backup_restore_progress_trace 2019-06-18 11:17:05.1941467 +00:00 Planning begins

backup_restore_progress_trace 2019-06-18 11:17:05.2423481 +00:00 Effective options: Checksum=0, Compression=0, Encryption=0, BufferCount=6, MaxTransferSize=1024 KB

backup_restore_progress_trace 2019-06-18 11:17:05.2423558 +00:00 Planning is complete

backup_restore_progress_trace 2019-06-18 11:17:05.2426545 +00:00 Beginning OFFLINE restore

backup_restore_progress_trace 2019-06-18 11:17:05.8613121 +00:00 Preparing containers

backup_restore_progress_trace 2019-06-18 11:17:05.9191974 +00:00 Preparing FileStream container: c:\data\imoltp_mod1

backup_restore_progress_trace 2019-06-18 11:17:06.1012537 +00:00 Prepared FileStream container: c:\data\imoltp_mod1

backup_restore_progress_trace 2019-06-18 11:17:06.1013026 +00:00 Containers are ready

backup_restore_progress_trace 2019-06-18 11:17:06.1842403 +00:00 Restoring the backup set

backup_restore_progress_trace 2019-06-18 11:17:06.1842624 +00:00 Estimated total size to transfer = 2909552640 bytes

backup_restore_progress_trace 2019-06-18 11:17:06.1855198 +00:00 Transferring data

backup_restore_progress_trace 2019-06-18 11:17:06.1856031 +00:00 FileHandleCache: Initial CacheSize: 48

backup_restore_progress_trace 2019-06-18 11:17:06.1883666 +00:00 BackupStream(0): Processing MSDA of size 8288 extents

backup_restore_progress_trace 2019-06-18 11:17:07.5410761 +00:00 5 percent (145752064/2909552640 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:08.9161089 +00:00 10 percent (291504128/2909552640 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:10.3224426 +00:00 15 percent (437256192/2909552640 bytes) processed

backup_restore_progress_trace 2019-06-18 11:17:11.4572357 +00:00 BackupStream(0): Completed MSDA

backup_restore_progress_trace 2019-06-18 11:17:11.4580438 +00:00 BackupStream(0): Processing PH6(filestream) data

backup_restore_progress_trace 2019-06-18 11:17:11.5794020 +00:00 BackupStream(0): Completed PH6(filestream) data

backup_restore_progress_trace 2019-06-18 11:17:11.5894867 +00:00 Waiting for log zeroing to complete

There are a lot more steps to a restore than I ever thought there would be.  If you are interested, here is the code I used to create the Extended Event session.  If you decide to use this, please make sure you change the name to something that works for you and change the path for the target files.

CREATE EVENT SESSION [RestoreDatabase] ON SERVER
ADD EVENT sqlserver.backup_restore_progress_trace(
ACTION(sqlos.worker_address,sqlserver.client_hostname,sqlserver.nt_username,sqlserver.sql_text))
ADD TARGET package0.event_file(SET filename=N'RestoreDatabase')
WITH (MAX_MEMORY=4096 KB, EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS, MAX_DISPATCH_LATENCY=30 SECONDS, MAX_EVENT_SIZE=0 KB, MEMORY_PARTITION_MODE=NONE, TRACK_CAUSALITY=OFF, STARTUP_STATE=ON)
GO
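
One note on the options: STARTUP_STATE=ON only means the session starts automatically when the instance restarts.  To begin capturing right away, start the session manually, and stop it once you have traced the restore you care about.

ALTER EVENT SESSION [RestoreDatabase] ON SERVER STATE = START;
-- ...run the backup or restore you want to trace, then:
ALTER EVENT SESSION [RestoreDatabase] ON SERVER STATE = STOP;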

To read the data, you can use SSMS or TSQL.  If you use SSMS, right-click the target and view the data, then go to the Extended Events menu item; at the bottom you will see an option to export the data.  With this, you can export the data to a table so you can then use TSQL to view it.

Or you can use the code below.  If you place the .xel file in a different location than the default, you will need to update the path to the files in the function, fn_xe_file_target_read_file.

SELECT event_data = convert(xml, event_data)
INTO #eeTable
FROM sys.fn_xe_file_target_read_file(N'RestoreDatabase*.xel', null, null, null);

SELECT * FROM #eeTable

SELECT
ts = event_data.value(N'(event/@timestamp)[1]', N'datetime'),
[StepDescription] = event_data.value(N'(event/data[@name="trace_message"]/value)[1]', N'nvarchar(max)')
FROM #eeTable
ORDER BY ts

DROP TABLE #eeTable
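
If you want a rough idea of how long each step took, the same XML can be turned into per-step durations.  This is just a sketch that builds on the temp table above, so run it before the DROP TABLE (or re-create #eeTable first); the gap between one trace message and the next is roughly the time spent on that step.

;WITH steps AS
(
    SELECT
        ts   = event_data.value(N'(event/@timestamp)[1]', N'datetime2(3)'),
        step = event_data.value(N'(event/data[@name="trace_message"]/value)[1]', N'nvarchar(max)')
    FROM #eeTable
)
SELECT
    ts,
    step,
    -- time until the next trace message, i.e. roughly how long this step took
    ms_until_next_step = DATEDIFF(MILLISECOND, ts, LEAD(ts) OVER (ORDER BY ts))
FROM steps
ORDER BY ts;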

 

Thanks for stopping by my blog!!!

 

Extended Events Session Filters in SSMS

Recently, during a demo at a SQL Saturday, the query to pull the Extended Event session data didn't return the expected results.  The session I used for the demo captured the CREATE DATABASE statement.

Prior to the session, I had deleted the Create Database session but did not delete the target files, because they are part of the demo.  Then I recreated the session, just as I had done before.  However, this time there was a difference when I attempted to read the target data.  The entry for the newly created database was not showing up when I used the GUI, but it was showing up when I read the XML.  During the session, I was not able to figure out why that was the case.

When I got home, I did some more testing.  As it turns out, when I had presented the same session the previous week at a different SQL Saturday, I had created a filter for the data, and that filter persisted even though I had deleted the session and created it again.

So I decided to do some testing.  I made sure there were no longer any filters on the session data, then deleted the Create Database session.  Just to have a clean slate, I then restarted the SQL Server service.
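
For reference, deleting a session in TSQL looks something like the sketch below; the session name here is a stand-in for whatever you called yours.  Dropping a session does not remove any .xel files already written to the target folder, so the old target data stays behind unless you delete the files yourself.

-- Placeholder session name; dropping the session leaves the existing
-- .xel target files on disk.
IF EXISTS (SELECT 1 FROM sys.server_event_sessions WHERE name = N'CreateDatabase')
    DROP EVENT SESSION [CreateDatabase] ON SERVER;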

Then I created two databases, test4 and test5.  As you can see below, they are both showing up when reading the event file using the GUI.

Then I added a filter to show only the test5 database by right-clicking the test5 value and then clicking “Filter by this value”.

This resulted in only the test5 database showing in the grid.

Now that the session was up and running, I created another database, test6.  I restarted the Extended Events session to force the new database event to be written to the target.  Then I read the target again.  As you can see below, not only is test4 not showing up, but the newly created test6 is also not appearing.

I right-clicked the test5 value and chose “Filter by this value” again.  What I saw surprised me a bit.  Below is a screenshot of what I saw: two entries for the test5 value.  This means that one had to already exist before I tried to filter by the value again.

I clicked the Clear All button, and sure enough, all three databases now appear, just as in the image below.

What I found interesting is that the filter persists even after deleting the session and creating it again.  Of course, deleting a session is not a common task, and recreating it with the same name while storing the target files in the same location is even less likely to happen.  However, I did find this interesting.

 

Thanks for stopping by my blog!!!!