Extended Events Session Filters in SSMS

Recently, during a demo at a SQL Saturday, the query to pull the Extended Events session data didn’t return the expected results. The session I used for the demo captures the CREATE DATABASE statement.

Prior to the session, I deleted the Create Database session; however, I did not delete the target files because they are part of the demo.  Then I recreated the session, just as I had done before.  However, this time there was a difference when I attempted to read the target data.  The entry for the newly created database was not showing up when I used the GUI, but it was showing up when I read the XML.  During the session, I was not able to figure out why that was the case.
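For reference, here is roughly how the target file can be read as XML with T-SQL. This is a minimal sketch; the file path is hypothetical, and sys.fn_xe_file_target_read_file returns one row per event.

-- Read the event file target directly (the path is hypothetical)
SELECT CAST(event_data AS XML) AS event_xml
FROM sys.fn_xe_file_target_read_file(
    N'C:\XEvents\CreateDatabase*.xel', NULL, NULL, NULL);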

When I got home I did some more testing.  As it turns out, when I had presented the same session the previous week at a different SQL Saturday, I created a filter on the data, and that filter persisted even though I had deleted the session and created it again.

So I decided to do some testing.  I made sure there were no longer any filters on the session data, then deleted the Create Database session.  Just to have a clean slate, I then restarted the SQL Server service.
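If you would like to follow along, here is a minimal sketch of a Create Database session. The session name, the database_created event, and the file path are my assumptions, not necessarily exactly what I used in the demo.

-- Drop and recreate the demo session (name and path are hypothetical)
IF EXISTS (SELECT 1 FROM sys.server_event_sessions WHERE name = N'CreateDatabase')
    DROP EVENT SESSION CreateDatabase ON SERVER;

CREATE EVENT SESSION CreateDatabase ON SERVER
ADD EVENT sqlserver.database_created
ADD TARGET package0.event_file (SET filename = N'C:\XEvents\CreateDatabase.xel');

ALTER EVENT SESSION CreateDatabase ON SERVER STATE = START;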

Then I created two databases, test4 and test5.  As you can see below, they are both showing up when reading the event file using the GUI.

Then I added a filter to show only the test5 database by right-clicking the test5 value and then clicking “Filter by this value”.

This resulted in only the test5 database showing in the grid.

Now that the session was up and running, I created another database, test6.   I restarted the Extended Events session to force the new database event to be written to the target.  Then I read the target again.  As you can see below, not only is test4 not showing up, but the newly created test6 is not appearing either.

I right-clicked the test5 value and clicked “Filter by this value” again.  What I saw surprised me a bit.  Below is a screenshot of what I saw: two entries for the test5 value.  This means that one entry had to exist before I tried to filter by the value again.

I clicked the Clear All button, and sure enough, all three databases now appear, just as in the image below.

What I found interesting is that the filter persists, even after deleting the session and creating it again.  Of course, deleting a session is not a common task, and recreating it with the same name while storing the target files in the same location is even less likely.  Still, I did find this interesting.

 

Thanks for stopping by my blog!!!!

Create Database Event – When does it fire?

Recently, when presenting my Extended Events session at a SQL Saturday, an attendee asked if the Create Database session I used as a demo would pick up the tempdb database that is recreated at start up. The answer is that it will not.  But this question got me thinking: what about a restore or attaching a database, would the event pick that up?

I have to admit, while I suspected that it would not pick up the tempdb creation, I was not 100% certain whether the Create Database event fires on start up.  So I ran a quick test, and the answer is that the event does not fire when the SQL Server service is restarted.

What about when a database is attached?  When attaching a database, this event does indeed fire.  Which of course makes sense, at least to me: you are creating a new database when attaching the database files.

What about when a database is restored?  Although a restore is, in reality, a drop and recreate of the database, it does NOT fire the Create Database event.  I tried restoring the database both with the same name and with a different name, and neither one was picked up by the Create Database event.
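If you want to try this yourself, here is a rough sketch of the two tests. The database names and file paths are hypothetical.

-- Attaching a database DOES fire the Create Database event
CREATE DATABASE Test7
    ON (FILENAME = N'C:\Data\Test7.mdf'),
       (FILENAME = N'C:\Data\Test7_log.ldf')
    FOR ATTACH;

-- Restoring a database, with the same name or a new one, does NOT fire the event
RESTORE DATABASE Test8
    FROM DISK = N'C:\Backups\Test8.bak'
    WITH RECOVERY;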

Thanks for visiting my blog!!!

 

Create a New Dashboard in Azure Portal

Dashboards can be used in many different products and can be very helpful for many reasons.  Azure has a default dashboard, but it allows you to create your own dashboards to easily get the information that is important to you.

If you look at the screenshot of an Azure portal dashboard, you can see I get quite a bit of information quickly.  In this case I can see three reports on activity for my subscription, a list of my resources and the current cost.

To get to the dashboard you need to click the “Dashboard” menu item on the portal menu on the left side of the screen.  Now that we are at the “Main” dashboard we can either leave it or edit it.

When you look at the dashboard the first time, you will see the default dashboard. An example of this is in the image below.

Create A New Dashboard

If the default dashboard does not meet your needs, Azure provides an option to create your own. To create a new dashboard, simply click “New Dashboard” on the menu that appears at the top of the screen.

Once you click on “New Dashboard” you will see an empty dashboard as below.

The first item you should consider changing is the title.  Of course, this should reflect what the dashboard will display.

Add items to a Dashboard

Now we are at the point where we need to add the tiles that will display the information we are looking for. On the left side you will see the “Tile Gallery”.  These are some of the items you can add to your dashboard.

To add an item you can either drag and drop it onto the dashboard or click the “Add” link to the right of the tile title.

When you click the “Add” item, Azure will place the tile in the next available location, adding it to the right of the last tile in the top row.  In the image below, I added four items and Azure kept adding them to the right, with the exception of the “User Sign-ins” tile.  Because the clock and the “User Sign-ins” tiles are not as tall as the others, Azure placed “User Sign-ins” under the clock.  With most tiles, though, they will be placed to the right of the last tile on the top row.

Another method of adding tiles is to drag and drop the tiles from the Tile Gallery.  When you use this method, Azure will place a shadow outline on the screen that will show you where the tile will be placed, similar to the image below.

Once I have added the item, I can change the size if the default is not what I am looking for.  When you float your cursor over the upper right corner of the tile, you will see a garbage can and a box with three dots.  To change the size, click the box with three dots and a size context menu will appear.  You can then pick the size you would like for the tile.

Now that we have added a few tiles and have everything we need, we can click “Done Customizing” at the top of the screen.  When you do, Azure will take you to your dashboard.

Change Dashboards

Now that we have added a dashboard, we should have two: the one we created and the default dashboard.  To change which dashboard you are viewing, click the drop down and pick the dashboard you would like to view from the menu.

 

Edit an Existing Dashboard

If you have an existing dashboard you would like to edit, simply navigate to the dashboard and click the “Edit” menu item on the menu across the top.

Now you can add tiles, remove tiles, resize tiles, and change the order and placement of the tiles.

Add Metrics Reports

In a previous blog post, I talked about how to create a metrics report.  In that same post I mention how to add a metrics report to a dashboard.  Here is the link to that post.

 

Thanks for visiting my blog!!!

 

 

Azure Storage Explorer Lock

In the Azure Storage Explorer, you are able to view files that you have stored in your Azure storage account. As you can see by the image below, it looks very much like the Windows file explorer.

Of course you have the file name, last modified date, type, size, as well as a few other bits of information.  You can also see the little lock to the left of the second file.  This tells us that there is a “lease” on the file.

According to Microsoft this is the definition of a lease:

“manages a lock on a blob for write and delete operations.”

This is pretty much the same as a lock on a file in Windows.  Because of the lease, there are some limitations on what you can do with the file, like deleting it.  If you try to delete a leased file in Azure Storage Explorer you will see the error below, and you will not be able to delete the file until the lease is removed.

In this case I am using an Extended Event target as the file.

Here is another nice post on the leases of blob files. It is an older post on Red Gate’s Simple Talk site; however, I think much of it is still relevant today.

If you are using Azure Storage accounts and do not have the Azure Storage Explorer, I would recommend that you download it and take a look at it.  It could make life a bit easier.  Here is the link to download it.

Thanks for stopping by my blog!

Track Failed Logins with Extended Events

Many things in life have opposites.  There are stop and go, beginning and end, and of course in and out.  SQL Server is not any different: you have BEGIN TRAN…COMMIT TRAN, successful logins and failed logins, as well as a few others.  Using Extended Events, there is just a login event, not separate successful and failed login events.  If you look at the description of the login event, this is what you will see:

Occurs when a successful connection is made to the Server. This event is fired for new connection or when connections are reused from a connection pool

Notice it states successful connections.  What if I want to use Extended Events to track failed logins?  Logic would say that if there is an event for successful logins, there would be an event for failed logins.  In this case, logic would be wrong.

If you do a search for the word “fail” in the events, a number of events will be returned.  None of them are related to failed logins; most are related to column store.

When I think of Extended Events, I usually think about how I can use them to capture performance related information. However, there are some things, like failed logins, that could be helpful to capture as well.  So how can I capture failed logins if there is not an event to do so?

To do this, you have to use the error_reported event and use predicates (filters) to return just the failed logins.

As you can see in the above image, there are two filters we need to configure.

      • severity is set to equal 14 – this one is optional but might help eliminate unnecessary rows
      • error_number is set to equal 18456
      • state is set to greater than 8 – this can be set to 1 as well
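Here is a minimal T-SQL sketch of a session using these predicates. The session name, file path, and the actions I added are my own assumptions, not a definitive implementation.

-- Capture failed logins with error_reported (session name and path are hypothetical)
CREATE EVENT SESSION FailedLogins ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.client_hostname, sqlserver.username)
    WHERE severity = 14
      AND error_number = 18456
      AND state > 8)
ADD TARGET package0.event_file (SET filename = N'C:\XEvents\FailedLogins.xel');

ALTER EVENT SESSION FailedLogins ON SERVER STATE = START;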

Combined, these predicates allow you to capture failed logins. If you do a Google search on the error number, you will find that the Microsoft documentation states this:

“When a connection attempt is rejected because of an authentication failure that involves a bad password or user name, a message similar to the following is returned to the client: “Login failed for user ‘<user_name>’. (Microsoft SQL Server, Error: 18456)”.”

This might have you wondering why the other predicates are needed.  Well, it is to help eliminate similar rows. If you look at the image below, the two highlighted rows are the same failed login event.  One has a state of 1 and the other has a state of 8.  The other difference is the error message. As you can see, there is just a bit more information with state 8 than with state 1.

State 1 message: Login failed for user ‘testgrant’.

State 8 message: “Login failed for user ‘testgrant’. Reason: Password did not match that for the login provided. [CLIENT: <local machine>]”

Just a little side note: to read the event file for Extended Events, simply right click the target file in SQL Server Management Studio and click “View Target Data”.

 

This can be very useful information and, in my opinion, might be a bit easier to query than reading the log file.  It is always nice to have options. This works on an on-prem instance of SQL Server; I have not been able to get it to work on an Azure SQL Database yet.  This could be due to the fact that in an Azure SQL Database, Extended Events sessions are set up at the database level.  I am still working on that, so look for a future blog post.

Thanks for visiting my blog.

DEADLOCK_PRIORITY

When two processes are caught up in a deadlock, one will be chosen as the victim and terminated, while the other process will be allowed to complete.  SQL Server will, by default, pick the process that has the lowest cost to roll back.  This can present a potential issue if the terminated process is a business critical process.  The DEADLOCK_PRIORITY setting allows us to have more control over whether or not a process is chosen as the victim.

Before getting into how to set the DEADLOCK_PRIORITY, let’s quickly go over what the options are.  There are two ways to set the priority of a process. The first option is to use one of the keywords: LOW, NORMAL, or HIGH. The second approach is to use a numeric value between -10 and 10.  The process with the lowest value will be chosen as the victim.  For example, LOW will be the victim if the other process is HIGH, and -1 will be the victim if the other process has a priority greater than -1.

If you look at the chart below you will see the winners and the losers in the deadlock. These charts identify which process will be allowed to complete and not be the victim.  As you can see, when one process is LOW and the other is NORMAL, the LOW process will be the victim.  If there is a tie, SQL Server will use the cost to roll back the transaction.

The chart below is somewhat incomplete.  The allowable range for the priority as a numeric value is -10 to 10.  As you can see below, if one process is -9 and the other is -7, the -9 will be the victim.  Rather than take the chart all the way out to 10, I think this gives you the idea of how to figure it out beyond the numbers that are in the chart.

So now that we know how SQL Server determines the victim, let’s take a look at how we can determine what the priority was for the processes in a deadlock.

One way is to use the deadlock graph.  The two highlighted items are the deadlock priority for each of the processes.  As you can see, the priority 5 survived while the priority -5 was chosen as the victim.

You can also look at the XML to determine the priority of each of the processes.  In the XML you will look for “taskpriority” in the Process-List node.  As you can see below there are two of them, one for the process that succeeded and one for the victim. The “victimProcess id” identifies the process that was chosen as the victim.

To set the priority for a process, you add the code below at the beginning of the batch. The setting applies to the session.

SET DEADLOCK_PRIORITY 10

The code above uses a numeric priority.  Remember, the value must be between -10 and 10, inclusive.

This is what it would look like if you use the LOW, NORMAL, or HIGH option.
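SET DEADLOCK_PRIORITY LOW
-- For reference, LOW maps to -5, NORMAL to 0, and HIGH to 5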

Thanks for stopping by my blog!!!

 

Extended Events and Execution Plans

The ability to view execution plans is a critical part of any successful troubleshooting effort. It is in the execution plans that you can find a great deal of information.

Here is just a sample of what can be found in execution plans:

      • Data access type – scan or seek
      • Which indexes were used
      • Physical join type – Hash Match, Merge Join, or Nested Loops
      • Whether key lookups are being used

Of course this is a very small sample.  Even in this short list, you can see the value in execution plans.  Given the value of execution plans, it is important to understand how these can be captured.  There are a number of methods that can be used.

SQL Server Management Studio is an effective method for capturing a single plan.  When you have a query you would like to see the plan for, you have two options: estimated and actual.  The difference is pretty simple: the estimated plan is what SQL Server thinks it will do, while the actual execution plan is what it actually did to get the data you are requesting.

To get the estimated execution plan, click “Display Estimated Execution Plan” under the Query menu.  At this point, SQL Server will gather all the information it needs to create the plan; however, it will not actually execute the query. Statistics are a key part of this and should be kept up to date.

When getting the estimated plan, SQL Server does not actually execute the query.  However, when the actual plan is used, SQL Server does execute the query. To get the actual execution plan you can do one of two things: click the toolbar icon highlighted below and execute the query, or click the menu item under the Query menu and then execute the query.

As mentioned, this method works great if you have the query in hand, and it will capture the plan for a single query.  However, what if you want to capture execution plans over a period of time?

Within SQL Server you have two methods that can be used to accomplish this.  One is Query Store and the other is Extended Events.  Since this is a post about Extended Events, here is some good documentation on Query Store, click here.

When using Extended Events to capture the plan you have two options.

query_pre_execution_showplan – this is the estimated plan

query_post_execution_showplan – this is the actual plan

Either one of these should work.  Once you have chosen the event, the XML for the plan can be found in the “Event Fields” tab, in the showplan_XML column.
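Here is a minimal sketch of what such a session might look like. The session name and file path are hypothetical, and keep in mind the performance warning quoted later in this post.

-- Capture actual execution plans (name and path are hypothetical); use sparingly
CREATE EVENT SESSION CapturePlans ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan (
    ACTION (sqlserver.sql_text))
ADD TARGET package0.event_file (SET filename = N'C:\XEvents\CapturePlans.xel');

ALTER EVENT SESSION CapturePlans ON SERVER STATE = START;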

The easiest way to view the plan is in SSMS.  Navigate to the target and right click on it; in this case, right click on the event file and then click “View Target Data”.  When you do, you will see the screen below.  If you look towards the bottom you will see the Query Plan tab. Once you click it, you will see the plan.

What is missing?  If you look at the two plans below you will see a noticeable difference: the SELECT operator is missing from the top one.  The top plan is from Extended Events and the bottom one is from SSMS.

Missing the SELECT operator is a big deal.  There is a great deal of information to be found there. This alone would be a good reason not to use Extended Events to capture execution plans.

If that alone isn’t enough to avoid using Extended Events to capture execution plans, take a look at this warning that can be found in the description of the two events mentioned above.

“Using this event can have a significant performance overhead so it should only be used when troubleshooting or monitoring specific problems for brief periods of time.”

So if I shouldn’t use Extended Events to capture the execution plans, what should I use?  The answer is Query Store.

Thanks for stopping by my blog and I hope you learned something. More importantly, you learned what not to use!

All my Extended Event Options are Greyed out

Have you ever tried to change the options for an Extended Events session, only to see this?

There are a number of options that are greyed out and can’t be changed.  This is because the session is running.  There are some options that can be changed, just none on the Advanced tab.  This is an easy fix: if you need to change one of these options, simply stop the session and make your change.  Don’t forget to restart the session.
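You can do this in the GUI, or with T-SQL along these lines. The session name is hypothetical.

-- Stop the session, make the change, then start it again (name is hypothetical)
ALTER EVENT SESSION MySession ON SERVER STATE = STOP;
-- ... change the options in the GUI or with ALTER EVENT SESSION ...
ALTER EVENT SESSION MySession ON SERVER STATE = START;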

Something to keep in mind: if you are using the file target, stopping and restarting the session could delete your oldest file and will create a new session file.  This could potentially result in the loss of key data.

Thanks for stopping by my blog and I hope you learned something.

Extended Events Max Dispatch Latency

When you create an Extended Events session, there are a number of properties that you can set.  These properties are spread throughout the tabs in the Session Properties window.

Below you will see the four tabs: General, Events, Data Storage, and Advanced.

Just as a brief summary let’s go over a few of the key properties in each tab.

In the General tab, the obvious property is the “Session Name”.  But there is also another key property, “Start the event session at server start up”.  This is an important property that can assure the session starts in the event of a server restart.

The Events tab is exactly what you might think: it is where you will decide what events you want to capture and what data columns you would like to capture.

There are a number of options for storage, and you will set them in the Data Storage tab.  Where the collected data is stored is referred to as the target.  Two of the commonly used targets are the ring_buffer, which is memory, and the event file target.

Which brings us to the last tab, Advanced.  This is where the Max_Dispatch_Latency property can be found.  The max_dispatch_latency property is the maximum duration that an event, once captured, will reside in the buffer before being written to the target.  This doesn’t mean it will always take this long, just the maximum amount of time it can take.  The default for this property is 30 seconds.  For the most part, this is a good number and you should avoid changing it.

You would logically think that the minimum setting would be zero seconds.  If you think that way, you are correct.  However, 0 does not mean 0 seconds.  When this is set to 0, it means the event will stay in the buffer until the buffer becomes full.  This is the same as setting the “Unlimited” option you see below. Given this, the true minimum is one second.
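The same property can be set with T-SQL when creating the session. This is a minimal sketch; the session name, event, file path, and the five second value are just for illustration.

-- Setting the dispatch latency in T-SQL (names, path, and value are illustrative)
CREATE EVENT SESSION CreateDatabase ON SERVER
ADD EVENT sqlserver.database_created
ADD TARGET package0.event_file (SET filename = N'C:\XEvents\CreateDatabase.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);  -- 0 SECONDS means "unlimited", not instant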

When I present my Extended Events session I usually use the Create Database event in my demo.  Once I create the database, I go immediately to the target to see if the event has been written to it.  Usually that takes just a few seconds, yet I only see the event written to the target about 50% of the time.  This is because of the max_dispatch_latency.

Here is a really nice article by Jonathan Kehayias on the properties of an extended event session, click here.

Thanks for visiting my blog and I hope you learned something!

Capture Deadlock Graph with Extended Events

Over the past few years, I have presented a number of sessions on Extended Events.  As I prepare to present the topic a few more times in the coming weeks, I thought I would do a blog post on one particular event that has been very helpful to me when working with issues in a production environment.  That event is related to the Deadlock graph.

There are a number of ways you can capture this.  Profiler has been a very effective way of capturing the Deadlock graph; however, starting with SQL Server 2008, there is another option, Extended Events.  RedGate also puts out a great tool named SQL Monitor that can be used.

I am not going to get into the ins and outs of Extended Events in this post; I am simply going to cover how to use Extended Events to capture the deadlock graph.  There are a total of five events that have the word “deadlock” in the event name.

Let’s take a moment and quickly go over each of these five events.

database_xml_deadlock_report – This will be much the same as the xml_deadlock_report, with one noticeable exception: the xml_deadlock_report event has an additional tab, the Deadlock tab, while the database_xml_deadlock_report does not.  That tab is where we will find the graph.

xml_deadlock_report – This is very similar to the database_xml_deadlock_report, with the addition of the deadlock graph.  To see the deadlock graph, simply click the Deadlock tab.

lock_deadlock – “Occurs when an attempt to acquire a lock is canceled for the victim of a deadlock.”  This definition is from SQL Server. Of the events we have discussed so far, this event has the most data points.

 

lock_deadlock_chain – “Occurs when an attempt to acquire a lock generates a deadlock. This event is raised for each participant in the deadlock.” This definition is from SQL Server.

scheduler_monitor_deadlock_ring_buffer_recorded – “The session tracks other useful events which makes it easy to trace back the series of events which led to the deadlocked schedulers condition!” – From www.troubleshootingsql.com
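To capture the graph, a session really only needs the xml_deadlock_report event. Here is a minimal sketch; the session name and file path are hypothetical.

-- Capture deadlock graphs (name and path are hypothetical)
CREATE EVENT SESSION CaptureDeadlocks ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file (SET filename = N'C:\XEvents\Deadlocks.xel');

ALTER EVENT SESSION CaptureDeadlocks ON SERVER STATE = START;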

Since this post is about the deadlock graph, we will focus on the xml_deadlock_report event. As mentioned before, this is the event you will need to capture the deadlock graph.  There are two points to look at: the XML_report column and the graph itself. In the image below you will see that there are two places to look at the actual XML.  If you do not see this column in the grid, you can right click on it in the details pane at the bottom and then click “Show column in table”.  The XML might be very challenging to read due to formatting; if you are having a difficult time reading it, this online formatting tool can be very helpful.

This is the actual XML that is used to build the graph.  There is quite a bit of useful information in the XML; you can see the victim query as well as many other good data points.  The victim process can be found in the “victim-list” node.  In the image below, you will see two processes; the first one listed matches the victim process id.  The UPDATE just below it is the T-SQL that was chosen as the victim.

 

The deadlock graph, while it initially looks very useful, does have a significant limitation.  The query with the blue X on it is the victim.  If you float your cursor over it, the query will appear in a context pop up.

If you left click and hold one of the shapes, you can move it around. This can be useful when there are more shapes than you see above.  Now on to the limitation: you can’t zoom.  Without zooming, it can be difficult to see some of the details.

Here are just a few interesting deadlock graphs I have been able to capture.


While these are kind of interesting to look at, the scary part is what caused them.  These were all caused by a single stored procedure.  Now you have the graph; the fun is just beginning, trying to figure out what is causing this and finding a solution.

Thank you for stopping by my blog.