Capture Deadlock Graph with Extended Events

Over the past few years, I have presented a number of sessions on Extended Events.  As I prepare to present the topic a few more times in the coming weeks, I thought I would do a blog post on one particular event that has been very helpful to me when working with issues in a production environment.  That event is related to the Deadlock graph.

There are a number of ways you can capture this.  Profiler has been a very good and very effective way of capturing the Deadlock graph. However, starting with SQL Server 2008, there is now another option, Extended Events.  RedGate also puts out a great tool named SQL Monitor that can be used.

I am not going to get into the ins and outs of Extended Events in this post; I am simply going to cover how to use Extended Events to capture the deadlock graph. There are a total of five events that have the word “deadlock” in the event name.

Let’s take a moment and quickly go over each of these five events.

database_xml_deadlock_report – This is nearly the same as the xml_deadlock_report, with one noticeable exception: the xml_deadlock_report event has an additional tab, while the database_xml_deadlock_report does not. That tab is where we will find the graph.

xml_deadlock_report – This is very similar to the database_xml_deadlock_report with the addition of the deadlock graph.  To see the deadlock graph, simply click on the Deadlock tab.

lock_deadlock – “Occurs when an attempt to acquire a lock is canceled for the victim of a deadlock.”  This definition is from SQL Server. Of the events we have discussed so far, this event has the most data points.

 

lock_deadlock_chain – “Occurs when an attempt to acquire a lock generates a deadlock. This event is raised for each participant in the deadlock.” This definition is from SQL Server.

scheduler_monitor_deadlock_ring_buffer_recorded – “The session tracks other useful events which makes it easy to trace back the series of events which led to the deadlocked schedulers condition!” – From www.troubleshootingsql.com

Since this post is about the deadlock graph, we will focus on the xml_deadlock_report event. As mentioned before, this is the event you will need in order to capture the deadlock graph. There are two places to look: one is the “XML_report” column and the other is the graph itself. In the image below you will see where to find the actual XML. If you do not see this column in the grid, you can right click on it in the details pane at the bottom and then click “Show column in table”. The XML can be very challenging to read due to its formatting; if you are having a difficult time reading it, this online formatting tool can be very helpful.
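Before any of this can be viewed, a session that captures the event has to exist and be running. If you prefer T-SQL to the New Session wizard in SSMS, a minimal sketch might look like the following; the session name and file path are placeholders you would adjust for your environment.

CREATE EVENT SESSION [CaptureDeadlocks] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file
    (SET filename = N'C:\eefiles\CaptureDeadlocks.xel')
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [CaptureDeadlocks] ON SERVER STATE = START;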

This is the actual XML that is used to build the graph. There is quite a bit of useful information in the XML. You can see the victim query as well as many other good data points. The victim process can be found in the “victim-list” node. In the image below, you will see two processes; the first one listed matches the victim process id. The UPDATE just below it is the T-SQL that was chosen as the victim.
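For context, here is a heavily trimmed skeleton of how a deadlock report is typically laid out; the process ids and the UPDATE are placeholders, and a real report carries many more attributes along with a resource-list describing the locks involved.

<deadlock>
  <victim-list>
    <victimProcess id="process1234" />
  </victim-list>
  <process-list>
    <process id="process1234" ...>
      <inputbuf>UPDATE ...</inputbuf>
    </process>
    <process id="process5678" ...>
      <inputbuf>...</inputbuf>
    </process>
  </process-list>
  <resource-list>
    ...
  </resource-list>
</deadlock>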

 

While the deadlock graph initially looks very useful, it does have a significant limitation. The query with the blue X on it is the victim. If you float your cursor above it, the query will appear in a context pop up.

If you left click and hold one of the shapes, you can move it around. This can be useful when there are more shapes than you see above. Now on to the limitation: you can’t zoom in. Without the ability to zoom, it can be difficult to see some of the details.

Here are just a few interesting deadlock graphs I have been able to capture.


While these are kind of interesting to look at, the scary part is what caused them. These were all caused by a single stored procedure. Now you have the graph; the fun is just beginning as you try to figure out what is causing the deadlocks and what the solution is.

Thank you for stopping by my blog.

Events…No Events

Extended Events is a great way to capture information about what is happening on your server.  You can capture information about deadlocks, failed logins, wait stats and many more important performance data points.

One of the nice features of the Extended Events interface in SQL Server Management Studio is the search feature that can be used when setting up a session. In SQL Server 2017 there are over 1,500 events that can be utilized, and searching through all of them to find the event you are looking for can be a challenge.

Luckily Microsoft has given us some help.  If you look at the image below, you can see under the “Event Library” box the two letters “de”.  As expected, this returns any event name that has “de” in the name somewhere.  It could be at the beginning, middle or the end of the event name.

The events returned will also be based on two additional settings: category and channel. Both of these options help to limit the number of events you will see in your results.

A category is simply a group of related events. For example, in the database category you will find many events related to databases. These include the following:

        • Database Starting
        • Database attached
        • Database created
        • Many, many more

The channel is also a grouping of events, but this grouping is based more on who might use the events and why. Below is a sampling of what you will find in the Analytic channel.

I have never really changed either one of these and I have been able to find every event I needed.
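As a side note, the event library can also be searched with T-SQL instead of the GUI, since the event metadata is exposed through DMVs. A query along these lines (searching for “deadlock” in this sketch) lists the matching events and their descriptions.

SELECT p.name AS package_name,
       o.name AS event_name,
       o.description
FROM sys.dm_xe_objects AS o
INNER JOIN sys.dm_xe_packages AS p
    ON p.guid = o.package_guid
WHERE o.object_type = 'event'
  AND o.name LIKE '%deadlock%'
ORDER BY o.name;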

Now, what does it mean when I enter a value, for example “deadlock” into the Event Library search box and I get this:

Notice that no events are showing up. There are a few reasons why this can happen. The first thing to check is the Category and Channel settings to see if they are limiting the results.

Another reason is that all the events that contain that string are already in use. If you look at the above screenshot, you will see that there are five events related to deadlocks and they are all in the “Selected Events” pane.

Once those events are removed from the “Selected Events” pane, you can see that all five events show up to select from. This is a very short blog, but hopefully you learned something.

Thanks for stopping by my blog!!!

Reading Extended Event Files for an Azure SQL Database in a Storage Account

Extended Events are a great way to find information about what is happening on your on-prem SQL Server.  It is also a great way to find out what is happening in your Azure SQL Database. Setting up Extended Events in an Azure SQL DB is very similar to doing so in an on-prem SQL Server.

Extended Events can be used to capture quite a bit of information about your SQL Server. With SQL Server 2017 there are over 1,500 events that can be captured. These events include anything from an alter column event to query execution plans. When setting up an Extended Events session you will have an option of where you would like to store the collected data. One of the common destinations for storing the event data is a file. This file is an XML based file and can be read in a number of ways.

Extended Events can be found under Management and is a server level item.

For an Azure SQL Database it has been moved.  It is now a database level item.

File Target Basics

When you use Extended Events you will need to provide a target. A target is nothing more than a location that will store the event information. With both an on-prem instance of SQL Server and an Azure SQL Database you have several options to choose from. Below you will find the target options for both an Azure SQL database, on the left, and an on-prem instance of SQL Server, on the right. As you can see, you have more options with the on-prem instance. While the focus of this post is how to read an Extended Events file, it is good to know the different options that are available.

When storing the data in a file target, the file will have an .xel extension. Although this file type is XML based, you cannot simply open it in Notepad and read it the way you can with a normal XML document. As you can see by the image below, there really isn’t much that is readable in an XEL file.

Setting up an Azure Storage Account

Now that we have reviewed a bit about what Extended Events are and what the file target is, we need to go over how to create the actual target files. The first thing we need is a destination location. For an on-prem SQL Server this can be a local drive, a network share or a mapped SAN connection. These are not really options for an Azure SQL Database. This is where an Azure Storage Account is going to be our friend.

Once you are in the Azure portal, click “Storage Accounts”. This will bring you to the screen below.

When you click the “Add” button you will be taken to a page that will allow you to enter all the appropriate information to create the storage account.

As you can see by the above screenshot, you will be asked for much of the expected information. This includes the account name, subscription, resource group as well as location. It is important to choose the location carefully; picking a location different from your other resources could lead to additional unexpected charges. Please double check with your Microsoft representative to make sure this does not happen.

While this post really isn’t about how to create an Azure storage account, I did want to at least provide some basics. One thing I do want to point out is the Access tier you see at the bottom of the image above.

As you can see there are two options, Cool and Hot. Below you will find the definitions as stated in Microsoft’s documentation here.

    • Hot – Optimized for storing data that is accessed frequently.
    • Cool – Optimized for storing data that is infrequently accessed and stored for at least 30 days.

This MS site also gives a good, more detailed explanation on the differences between the two.

Setting up the File Target to an Azure Storage Account

Now that we have our storage account in Azure, we need to set everything up so it can be used as a target for the Extended Events session. With both the on-prem and Azure instances of SQL Server, there are multiple options when configuring an Extended Events target.
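As a rough sketch of what a file target pointed at Azure storage can look like for an Azure SQL Database session, the example below writes to an .xel file in a blob container. The session, account and container names are placeholders, and it assumes the database scoped credential covered later in this post already exists for that container URL.

CREATE EVENT SESSION [azure_sql_statements] ON DATABASE
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text, sqlserver.session_id,
             sqlserver.username, sqlserver.database_name))
ADD TARGET package0.event_file
    (SET filename = N'https://StorageAccountName.blob.core.windows.net/eefiles/efiles.xel')
WITH (STARTUP_STATE = ON);
GO

ALTER EVENT SESSION [azure_sql_statements] ON DATABASE STATE = START;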

How To Read the Data

As you can see above, the file target really isn’t readable with Notepad or any other text editor.  With that, we need another tool to read the data we have collected. For an on-prem SQL Server, you have several options.  You can use the GUI in SQL Server Management Studio or you can use T-SQL.

When you use the GUI in SSMS, navigate to the Extended Events session and simply right click on the file target. When you do, you will see an option to view the target data.

When you click it, you will see something similar to what is in the image above. In this GUI, you can sort, group and even export the data. I really like this for analyzing the event data. However, this is not an option for an Azure SQL Database. Below you can see the results of right clicking on the target in an Azure SQL database; notice that there is no “View Target Data” option.

The other option is to use T-SQL. This works for both an Azure SQL Database and an on-prem instance of SQL Server, and luckily the code requires very little modification to move from one to the other. Because of that, we will focus the rest of this blog post on how to use T-SQL to extract and review your event data.

When reading from the XEL file in an Azure Storage account, there are a few prerequisite items that we need to briefly cover as well.

fn_xe_file_target_read_file

The XEL file is an XML based file so when reading the data you will need to parse through the XML. In order to do this, we can use the sys.fn_xe_file_target_read_file function. This function has a total of four arguments. Here is the link to Microsoft’s documentation on this function.

      • Path – this is the path to the XEL file(s)
      • mdpath – this is the metadata file path. Starting with SQL Server 2012, this is no longer needed and NULL can be used
      • initial_file_name – this specifies the first file to read from, if NULL all files will be read
      • initial_offset – per Microsoft documentation this is “Used to specify last offset read previously and skips all events up to the offset (inclusive)”

Below is an example of how the initial step in reading the XEL file might look.  Notice the four arguments and that the only one without NULL as a value is the first one.

SELECT event_data = CONVERT(xml, event_data)
INTO #eeTable
FROM sys.fn_xe_file_target_read_file(
'C:\eefiles\efiles_0_132002087635590000.xel', null, null, null);

Hopefully you also noticed that the SELECT converts the event_data column to XML.  This is just the first step, later we will discuss how to parse the resulting XML into a more readable format.

Azure Storage Explorer

When reading from an Azure Storage account, instead of using the usual path to a drive or a UNC, you will use a URL instead.

SELECT event_data = CONVERT(xml, event_data)
FROM sys.fn_xe_file_target_read_file(
'https://StorageAccountName.blob.core.windows.net/eefiles/efiles_0_132002087635590000.xel', null, null, null);

When first looking at this, you would expect that, given the proper credentials, you would not have any issues reading from the file. Well, you don’t get an error, but you also do not get any data, even though the file you are trying to read from does indeed exist.

The reason for this is that you need to create a credential in the Azure SQL Database that the Extended Events session is in. This is where the Azure Storage Explorer will come in handy.

When you create the credential, you will need the “Secret”. This is a Shared Access Signature that will look something like this.

?st=2019-05-18T12%3A52%3A56Z&se=2019-05-19T12%3A52%3A56Z&sp=rl&sv=2018-03-28&sr=c&sig=rUriw3oYKU%2BtqK9Ji5oq6Ih34i0uHe34ldorjfnQgUM%3D

You can use the Azure Storage Explorer to get this. Azure Storage Explorer can be downloaded here. It also has a great price, FREE!! Per Microsoft, Azure Storage Explorer offers the following features.

Once you have installed and opened Azure Storage Explorer, you will then need to connect to your Azure Storage Account.  You can do this by expanding the tree and right clicking on “Storage Accounts” and then clicking “Connect to Azure Storage…”

You will then see the form below. As you can see, there are a number of methods you can use to connect to your Storage Account.

For this post, I used the “Use a connection string” option. In order to get the connection string you will need to navigate to the Azure Portal. Once there, proceed to your storage account and click “Access Keys”.

Once you can see the connection string, simply click the “Copy” button to the right of the string. As expected, this will place the connection string on your clipboard.

When you paste the connection string into the above form, Azure will automatically populate the Display Name box.  Then you click “Next” where you will then see a summary form.  At this point, you will review and click “Connect”.

You will need to create a Blob Container. This is pretty simple, just right click on “Blob Containers” and go to Create.

Now we have almost everything we need in order to utilize T-SQL to query the Extended Events .xel file. The last bit of information is the Shared Access Signature. You will need this to create the required Azure SQL Database credential.

Once you are connected to your storage account, navigate to the blob container and click on it. When you do, you will see “Actions” and “Properties” toward the bottom; click “Actions” and then click “Get Shared Access Signature…”.

As you can see, the form that pops up has a number of options. Given that this post is about how to read from the files, we will not change anything here. At this point, click “Create”. The following form will then appear. Notice there are three data points.

      • Container
      • URL – You will need this, well at least part of it
      • Query String: You will also need this to read the files

Now that we have completed the following prerequisites to utilize Azure Storage account to store the Extended Events files, we are ready to begin working on the T-SQL code.

These prerequisites included the following.

      • Created an Azure Storage account
      • Created a blob container
      • Downloaded and installed Azure Storage Explorer
      • Connected the Storage Explorer to the Azure Storage Account
      • Created the Shared Access Signature

Create Credential

The first step when using T-SQL to read Extended Events files that are stored in an Azure Storage Account is to create a database credential. The credential provides the security information needed to connect to the Azure Storage Account. The first data point you will need is the URL to the blob container in your storage account. If you look below, you can see where you would place your storage account name and the blob container name.

CREATE DATABASE SCOPED CREDENTIAL [https://StorageAccount.blob.core.windows.net/blobstorage]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE'
, SECRET = 'st=2019-04-19T22%3A15%3A13Z&se=2020-04-20T22%3A15%3rtieo34=rwdl&sv=2018-03-28&sr=c&sig=ubWCJk3RvUrkeitpe,wdig3h56F5CM%3D'
GO

Because this is a database level credential, it is important to execute the code in the context of the database from which you will be reading the files.

The “WITH IDENTITY” will always be “SHARED ACCESS SIGNATURE” for this type of connection. Finally we get to the SECRET; this is the Shared Access Signature we created earlier.

When you first create the Shared Access Signature it will look something like this. It will be much longer than what is here; I removed many of the characters for security reasons.

?st=2019-04-19T22%3A15%3A13Z&se=2020-………..

Prior to using this string, you will need to make one minor adjustment: remove the question mark from the beginning. You can keep it when you create the credential, and in fact the credential will be created. However, it will not work. When you attempt to read the file, no error will be generated, you just won’t see any results.

If you find that you need to delete the credential, you can use the code below, adjusting the storage account and container names to match yours.

DROP DATABASE SCOPED CREDENTIAL [https://storageAccountName.blob.core.windows.net/BlobContainerName]

If you attempt to read the data from the EE files and the credential does not exist, you will see the error below. Given this, it is vital that the credential gets created in the proper database.

Msg 262, Level 14, State 1, Line 1
CONTROL permission denied in database 'master'.
Msg 297, Level 16, State 1, Line 1
The user does not have permission to perform this action.

Now that the database credential exists, you can now run the code below to query the data.

SELECT event_data = CONVERT(xml, event_data)
INTO #eeTable
FROM sys.fn_xe_file_target_read_file(
-- TODO: Fill in the Storage Account name and the associated Container name.
'https://StorageAccountname.blob.core.windows.net/BlobContainerName/FileName.xel',
null, null, null
);

SELECT
ts = event_data.value(N'(event/@timestamp)[1]', N'datetime'),
[sql] = event_data.value(N'(event/action[@name="sql_text"]/value)[1]', N'nvarchar(max)'),
duration = event_data.value(N'(event/data[@name="duration"]/value)[1]', N'nvarchar(max)'),
spid = event_data.value(N'(event/action[@name="session_id"]/value)[1]', N'int'),
user_nm = event_data.value(N'(event/action[@name="username"]/value)[1]', N'nvarchar(max)'),
dbname = event_data.value(N'(event/action[@name="database_name"]/value)[1]', N'nvarchar(max)'),
explan = event_data.value(N'(event/data[@name="showplan_xml"]/value)[1]', N'nvarchar(max)')
FROM #eeTable
ORDER BY ts

DROP TABLE #eeTable

This code does a number of things.  First of all it reads the file and converts the results to a single XML data row for each row in the file.

After dumping the results into a temporary table, the code parses the XML and returns it in a more readable, relational table type format. When you look at the image below you will see two result sets. The top set, in the red box, is the converted XML. The second set, in the blue box, is the XML parsed into the more traditional columns we are used to working with.

Now, once the data is in columns, we can do whatever T-SQL will allow us to do. For example, insert the results into a disk based table for analysis at a later date.

One thing I would like to point out is the two bits of code below.

When the column name is taken from the Global Fields tab in the Extended Event session you will use “event/action” followed by the column name.

event/action[@name="sql_text"]

When the column name is taken from the Event Fields tab in the Extended Events sessions, you will use “event/data” followed by the column name.

event/data[@name="duration"]

One thing to keep in mind: if you spell the column name incorrectly, it will not throw an error. It will simply return NULL in every row for that column.

Question…

As I was writing this I realized that there are a lot of steps. The question that came to my mind was whether there is a way with fewer steps. The answer to that question is, there is.

If you right click on the file in Azure Storage Explorer, you can then click Open. When you do, you will see the file downloading as shown in the red box below.

Once it has completed downloading and is ready to open, you may see this warning.

If you do, proceed with caution. I believe you see this because of the file format. However, again, please proceed with caution.

When you click “Yes”, SQL Server Management Studio will then open and display the data as below.

Once you have the data in SSMS, all the functionality that comes with it is now available.  This includes the ability to export the data to a table, group the data, apply filters and sort the data as well.

If I was just reading the data, this is the way to go. However, if I needed to automate the collection of the data, I would need to use the T-SQL method covered in most of this post.

Another thing to keep in mind is the amount of data you will be pulling out of your Azure Storage Account. As you can see in the image below, the Extended Events files can get quite large.

In order to avoid large bills on your Azure subscription, it is important to understand the impact of opening large files.  It can get expensive if not managed carefully.

Hopefully you are still here……it was a long post.  By far the longest I have written.  If you are still here….thank you for stopping by my blog!!!

 

Azure SQL DB and @@version

Many of us are used to using @@version to get the version of SQL Server we are using.  This also works when you are utilizing an Azure SQL database, although the results are slightly different. However, before getting to @@version, I would like to go over a few other methods to obtain version information.

How to Get Version

There are many reasons why it is important to know the version of a SQL Server.  The reasons include everything from licensing to upgrades.  In this section, I will cover a few of the common methods used to get the version of SQL Server.

With an on-prem instance of SQL Server one method that can be used to get the version is to right click on the server in SQL Server Management Studio.  Once you do, you can click on the properties menu item and the window below will appear.  As you can see, in the boxes you are able to see both the version and edition of the SQL Server.

However, when you attempt to do this in an instance of Azure SQL database, the properties menu item is not available.  Below is what you will see when you right click on the instance of an Azure SQL database in SQL Server Management Studio.

Another method to get the version of SQL Server is to utilize the SERVERPROPERTY function.  The SERVERPROPERTY function can be used to view server level properties, in this case the product version.

SELECT SERVERPROPERTY('ProductVersion');

When you execute the above statement you will then see the version. It will look like the number below.  The number represents the version, in this case, SQL Server 2017.

14.0.1000.169

As mentioned above, the SERVERPROPERTY function can be used to return a number of other bits of information about the server. For more detailed information, click here.
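For example, a few other documented property names round out the picture nicely; the EngineEdition value of 5 indicates an Azure SQL Database.

SELECT SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel')   AS product_level,
       SERVERPROPERTY('Edition')        AS edition,
       SERVERPROPERTY('EngineEdition')  AS engine_edition;  -- 5 = Azure SQL Database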

The final method I would like to talk about is using @@version.  This is a global variable in SQL Server that will return not only the version of SQL Server, but also the OS version.

When reading from this global variable, you include it in a SELECT statement as illustrated below.

SELECT @@Version

The results will vary from an on-prem instance of SQL Server to an Azure SQL Database.

This is what you will see for an on-prem instance.

Here is the complete text of the results.

“Microsoft SQL Server 2017 (RTM) – 14.0.1000.169 (X64) Aug 22 2017 17:04:49 Copyright (C) 2017 Microsoft Corporation Developer Edition (64-bit) on Windows 10 Home 10.0 <X64> (Build 17134: ) “

As you can see, it returns some good information, including the version of SQL Server, the service pack status, 32-bit or 64-bit, and the OS version.

However, the results change quite a bit when this is run against an Azure SQL database.

The complete text is just as you see it above.  Notice the lack of OS version.

What is next?

Now that we have covered a number of ways to get the version information, let’s take a look at some of the differences from an on-prem instance to an Azure SQL database.

When looking at the results, it is easy to identify an Azure instance; within the first few words it clearly states that it is an Azure instance.

If you look at the results below you will see that I used both @@version and ServerProperty to get the information I am looking for.  You will also notice that I included a simple query using the sys.databases object. This will return the compatibility level for each database, which as you will see is going to be important.

Looking above you will see that it is an Azure instance, has a product version of 12.0.2000.8 and both databases have a compatibility level of 140. The compatibility level of 140 is SQL Server 2017.

What….12.0? Isn’t that SQL Server 2014? If you said yes, you are correct. Is it really possible that Azure SQL Database is SQL Server 2014? The answer is no. That number means something different than it does for an on-prem instance of SQL Server. According to what I have read, version 12.0 is the most current version for Azure SQL Database. Given that both the Azure instance and SQL Server 2014 have a product version of 12.0, it comes down to the compatibility level of the databases in Azure.

As you can see, both databases have a compatibility level of 140, which is SQL Server 2017, not 2014. Since I didn’t change this when I created the databases, this tells me that the instance of Azure SQL Database was on SQL Server 2017 when I created them. As with an on-prem instance of SQL Server, the compatibility level of an Azure SQL database can be changed.

To change the compatibility level of a database you have to run an Alter Database command.

ALTER DATABASE Advent
SET COMPATIBILITY_LEVEL = 160

When I ran the above code, I got an error.

Msg 15048, Level 16, State 1, Line 1
Valid values of the database compatibility level are 100, 110, 120, 130, 140 or 150.

Notice it states that 150 is an option, while 160 is not. Compatibility level 150 is SQL Server 2019, the latest version. The available options go back quite a bit, to version 100, which is SQL Server 2008.

Since the compatibility level of the database is key when investigating versions in Azure SQL Database, below you will find the code to retrieve the compatibility level of all databases.

SELECT name, compatibility_level FROM sys.databases;

All of the above code works on both an Azure SQL database and an on-prem SQL Server.

Notice that the Advent database has a compatibility level of 150, SQL Server 2019. When investigating version information, it is important to review the compatibility level of the database you are looking at as well as the server version.

Thanks for stopping by, I hope you learned something.

Azure Metrics Reporting

A few days ago, I did a post on DTUs of an Azure SQL Database.  In the post, I mentioned a report that you can use to see how many DTUs your subscription has used.  This is an important number to know and measure over time so you don’t get slapped with an unexpectedly high bill or have performance issues because you have exceeded your DTU limit.

Azure has quite a bit of reporting that can be used to get the current state of your Azure subscription. Although many of the metrics are targeted more at the subscription than at database performance counters, there are a few database-related metrics, such as Deadlocks. Below is a small list of the items that can be viewed; of course there are more than are included in the list:

      • Azure SQL Database
        • Data space allocated
        • Data space used
        • Deadlocks
        • In-Memory OLTP usage
        • DTU used
          • What is a DTU? It stands for Database Transaction Unit. Here is a nice article by Andy Mallon that does a nice job explaining what it is. Click Here
      • Storage Accounts
        • Used Capacity
        • Egress
        • Ingress
        • Availability

How do I Get to the reporting tool?

Getting to the reporting tool is just a matter of completing a few clicks. Of course, as with most products, there are several ways to get to a specific place.

On the left side of the Azure Portal, click “All Resources”.  You could also click on Resource Groups if you like.

Just a little side note on how to add a resource. As expected, there are many resources in the Azure world. Here are just a few: Azure SQL Database, Azure VM, storage accounts, resource groups, Cosmos DB as well as many other items. Not all resources are from Microsoft; a number of third-party resources are also made available in Azure. After you click the “Add” button, you will be taken to a page that looks something similar to what is below.

Above is a screenshot of some of the available resources.  The list on the left is just the list of resource categories.  On the right are some of the resources in the chosen category.  You can either pick a category or click the “See All” link towards the top.

Once you click the “All Resources” link you will be taken to something that looks like this.

The above image lists a number of important items. Obviously the resource name is included, as is the Resource Group the resource is part of. You can also see the resource type and the location. The location is really important, especially when it comes to cost. Since this blog post is about Metrics reporting, it is important to note that not all resource types have this type of reporting. Above you will see that I have three resources: a storage account, a SQL Server and an Azure SQL database. Of these three, the SQL Server is the only one that doesn’t have the metrics reporting functionality.

Once you click the resource you would like to get the reports for, you will be taken to a page similar to this.

By choosing a resource on the left, the panel on the right will update with the information for that resource. In the above situation, I chose the storage account. Notice the two green boxes; both of these will take you to a reporting tool. You want to click the one on the top. The bottom one, Metrics (Classic), is going to be retired on June 30, 2019, according to Microsoft.

Creating the Chart

Now you simply click on the Metrics link and you will be taken to a page that looks similar to what is below.

Now confirm that you are on the proper resource by checking the resource highlighted in the green box. If you are not on the proper resource, simply click the resource that is there and you will have the option to change to the proper one.

Staying in the same box as the resource, you will see a few other items.

Metric Namespace:  This is the group of metrics that you are looking for. The items in the drop down box change depending on the type of resource selected. Since I am working with a storage account, you see the account option. If you don’t see this for some reason, click the “Add Metric” link that can be found just under the chart title.

Metric: This is the counter that we are looking to review.

Aggregation:  This is what it says it is. There are usually three options: Avg, Min and Max. However, some metrics have Avg as the only choice.

Once you have chosen all the desired options, you will see this.

As you can see, this chart contains more than one metric.  This is easy to add, just click the “Add Metric” link in the upper left hand corner. When you do, you will be given the same options as before.  The additional metrics do not have to be from the same resource, as you can see.  The metric on the left is from the storage account, while the one on the right is from the Azure SQL Database.

Although this example is a line chart, you can change the chart type easily. Just click the arrow to the right of where it says “Line chart” in the upper right corner. Just like in Excel, there are a number of options for chart types, just not as many; there are only 5 options here. Just like in any other reporting tool, not all chart types work well with all data sets.

Also in the upper right corner you can change the time range of the chart.

If you click the time, the options window for the chart will open up. As you can see there are a number of options, including the time range, the time granularity (time interval) and the time zone (GMT or local time).

The options for Time granularity are in intervals that would be expected.  The options are listed below.

      • 1 minute
      • 5 minutes
      • 15 minutes
      • 30 minutes
      • 1 hour
      • 6 hours
      • 12 hours
      • 1 day
      • 1 week
      • 1 month

You also have another option, Automatic.  The interval for automatic is dependent on the Time Range chosen. Below are the time ranges with the automatic interval.

      • Last 30 min – 1 minute
      • Last hour – 1 minute
      • Last 4 hours – 1 minute
      • Last 12 hours – 5 minutes
      • Last 24 hours – 5 minutes
      • Last 48 hours – 15 minutes
      • Last 3 days – 15 minutes
      • Last 7 days – 15 minutes
      • Last 30 days – 30 minutes

Another area that could be very helpful is in the lower left corner.  There will be one number for each metric that has been added to the chart.

In the green box above there are a number of data points. It contains the resource name and the color of the line in the chart. It also contains the name of the metric and the aggregation used, as well as the value. In this case the number represents the aggregation listed for the date range of the chart. If you float your cursor over the chart, a vertical line like the one below will appear. You can move it to any time on the chart, and the numbers in the lower left corner will no longer represent an aggregate, but rather the values at that point in time on the chart.

Now I have created my chart, but the numbers on the Y axis don’t meet my needs. This can easily be changed. In the upper right hand corner you will see three dots. If you click them, a menu opens up and you can then click “Chart settings”.

Once you click the menu item, the settings window opens. Here you can change only the Y values. Remember, the X values are determined by the date range mentioned earlier.

In addition to the Y values, you can also change the chart title and chart type. Notice that the unit of measure is bytes, but many of the metrics are in MB. If you change the time granularity to something other than Automatic, the warning below goes away.

Notice the warning:

“Setting a min or max value is not recommended when the time granularity is set to Automatic”

Pin to Dashboard

Once you are in the Azure portal, you can have one or more dashboards. Once you have your chart looking the way you want it to look, you can then pin it to your dashboard. Once you do that, it will appear as below. In this case I have two charts pinned.

 

To pin your chart to the current dashboard, click the “Pin to dashboard” button in the upper right corner.

When you do, you will have an option to pin to the current dashboard. However, when it is pinned, your dashboard may not look the way you would like. If this is the case, you can click the “Edit” option on the dashboard.

Once in edit mode, you can move the charts as you see fit.

One nice feature of pinned charts is that the vertical line that appears when you float your cursor over the chart still works.  There is one additional feature that is very nice if you have more than one chart.

 

When you have more than one chart, the line appears on all charts that are pinned to your dashboard.

Hopefully this will help you with creating charts in the Azure portal.  Thanks for stopping by my blog and I hope you learned something.


Azure SQL Database and DTU

As you know, when you are using an Azure SQL Database you can use SQL Server Management Studio on your computer to execute queries.  You also have an option to use the Azure Query Editor. Here is my post on the query editor.

As I was writing the post I wondered more and more about whether or not there might be a difference in how data usage is billed. My thought was that since I am going from an Azure SQL database source to another Azure tool, Query Editor, this could potentially reduce my costs.

There are a number of Azure resources that have Egress as a metric that can be used to measure the amount of data that is being pulled from the Azure data center.  These resources include the Azure Storage Account. However, an Azure SQL Database does not need to use a storage account.

Before we move on, let’s define what egress is.

Here are a few definitions that I found online.

“Azure egress charges apply only when data crosses from one Azure region to one or more other Azure regions. Any traffic that originates and terminates within a single Azure region is not subject to egress charges. It does not matter that traffic moves from one subscription to another. If the data remains within the same region, there are no egress charges.”  – help.bittitan.com

“….data going out of Azure data centers” – Microsoft

I think the most important point the two definitions are trying to make is that egress is data going out of a data center, including if it is going from a data center in one region to a data center in another region.  According to Microsoft, if there is a data pull and the database and the querying tool are both in the same region, you will not be billed.  However, you will need to confirm this.

I am not planning on getting into the pricing models for Azure in this post, however you can get that from Microsoft.  https://azure.microsoft.com

Although some resources have Egress to measure data, an Azure SQL Database does not. It does, however, have another option, the DTU. DTU stands for Database Transaction Unit and it is a unit of measure that is used, in part, to calculate your bill and to help you choose the proper pricing tier for your database.

A DTU is a “blended measure of CPU, memory, and data I/O and transaction log I/O”, per Microsoft. The DTU is a measurement for the Azure SQL database only, not for your entire Azure environment or any other database. An important thing to keep in mind is that when your workload exceeds the DTU maximum for your pricing tier, Azure will throttle your workload. My subscription currently has a max DTU of 10; as long as my workload stays below that, I should be fine. Because of the potential of being throttled, it is important to monitor your workload. This is a nice article on the DTU by Andy Mallon.

If you are not sure what your workload might be in Azure, there are a number of DTU calculators available that can help.  Here is a link to one, DTU Calculator.

As for how many DTUs you can have, that will depend on the tier that you have chosen. Below is a sample of a chart from Microsoft that outlines the limits of two of the tiers. Notice that there are a number of options for Max DTUs. The source of the chart is here. As you can see, the Max DTUs go from 5 to 100, and even higher if you are using the Premium tier.

Per Microsoft, you have up to the Max DTU for your tier before you will be charged.

The DTU price for a single database includes a certain amount of storage at no additional cost. —Microsoft

Of course this might change at any moment, so I have to make a disclaimer.  Always consult your MS rep or Licensing expert to get the exact pricing and rules.

As I stated earlier, it is important to monitor your workload to see if you are at a point where you might be throttled. Azure has a pretty good set of monitoring reports that you can create. You can use the metrics reports to accomplish this. Here is a link to my post on how to create a metrics report.

So what should you monitor? Well, since we are talking about an Azure SQL database, you have three options for measuring DTUs.


DTU Limit – this is what you think it might be, the limit for your tier.

DTU Percentage – this is just that, the percentage of the DTU max that is being used.

DTU Used – this is the current number of DTUs being used by your workload for the database.

This post is less about DTU calculation and pricing tiers and more about whether it makes a difference in DTU usage if I use SSMS or the Azure Query Editor.

From the tests that I have completed, the answer is no. Using the Azure Query Editor showed little or no difference in the use of DTUs.

Before we get into the results of the test, let’s cover a few basics of what I did.  I used the AdventureworksLT sample database that you can get from the Azure portal when creating a new sample database.

This is the query I ran using both tools.

SELECT TOP 100000 *
FROM saleslt.salesorderdetail a
CROSS JOIN saleslt.salesorderdetail b

In the chart below the green boxes represent the test using SQL Server Management Studio, while the blue boxes are the executions in the Azure Query Editor.  Notice that there is very little difference in the DTU usage.

Although there really isn’t much difference in DTU consumption, I did notice something I really didn’t expect.

When I ran the query in SSMS it usually took less than 3 seconds to run. However, when I ran the same query using the Query Editor, it usually took 45 to 60 seconds. That is a huge difference. Also note that in the above chart, the Max DTU never reached 10, which is my limit. Because of this, I know that I was not throttled by Azure.

You might be wondering what the red box is. That is the test when I ran this query.

SELECT *
FROM saleslt.salesorderdetail a
CROSS JOIN saleslt.salesorderdetail b

The only difference is that it doesn’t have the TOP keyword. What is interesting is that when I ran this query in SSMS, it took about 45 seconds. However, when I ran it in the Query Editor, it was pushing 5 minutes before I cancelled it.

How can I get information about the max DTU for the Azure SQL Database? There are a few DMVs that you can use. The first one is sys.dm_db_resource_stats. This can be used to get the max DTU, as well as a number of other bits of useful information.

If you run this query in your Azure SQL Database you can get the information.  The column you want to look for to get the Max DTU limit is DTU_limit.  Here is a link to get more information on this DMV.

SELECT * FROM sys.dm_db_resource_stats;

However, if you try to run it in the Master database, you will get this error:

Msg 262, Level 14, State 1, Line 1
VIEW DATABASE STATE permission denied in database 'master'.
Msg 297, Level 16, State 1, Line 1
The user does not have permission to perform this action.

You can also use the sys.resource_stats DMV to get the DTU limit. This DMV has the same column name as above, dtu_limit. This DMV must be run in the master database. Here is a link to get more information on this DMV.

SELECT * FROM sys.resource_stats

If you try to run it in another database, you will get this error.

Msg 208, Level 16, State 1, Line 3
Invalid object name 'sys.resource_stats'
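If you want something more targeted than SELECT *, a sketch like the one below pulls the limit alongside the resource percentages from sys.dm_db_resource_stats. The approximate DTU percentage here is simply the highest of the three resource dimensions, which is how I read the documentation, so treat that calculation as an assumption.

SELECT end_time,
       dtu_limit,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       -- Approximate DTU % as the highest of the three resource dimensions
       (SELECT MAX(v)
        FROM (VALUES (avg_cpu_percent),
                     (avg_data_io_percent),
                     (avg_log_write_percent)) AS t(v)) AS approx_dtu_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;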

How do I get to the metrics report? This is pretty straightforward. Navigate to your Azure SQL Database in the portal.

 

As you can see, in the menu on the side you will see the Metrics item. If you click it, you will be taken to an Azure reporting tool.

You will then create your report. This is my post on the Metrics Report.

Thank you for stopping by and I hope you learned something.

Disabled Database User?

In my many years of working as a DBA, I have encountered many disabled logins. However, I have never really encountered what looks to be a disabled database user account. I didn’t even think it was possible to disable a user account in a SQL Server database. I checked the user account properties just to make sure I was correct. Sure enough, there is no option to disable a user account. This finally turned out to be a simple case of looks can be deceiving.

When we first received the call that a user was having an issue, we asked what the error was. This is what we were sent:

For the purpose of this blog, we will be using a SQL login named, testuser.

My first thought was maybe a permissions issue.  I very quickly realized that it was something else.

The first thing I did was look at the properties of the login. This clearly showed that the login had access to a few databases, but not the one related to the reported error, 1GBDB. As you can see below, it looks like the login doesn’t have access to the database.

 

However, if you look at the users in the database, you can see that the account does indeed exist in the database.  As you can see below.

Of course the red “x” on the user account was a bit suspicious; we will get into that in a bit. One of the items I checked was whether we had an orphaned account, so I checked the SIDs for the login and the user account using these queries.

SELECT sid, name
FROM sys.syslogins
WHERE name = 'testuser'

SELECT sid, name
FROM sysusers
WHERE name = 'testuser'

Here are the results.

As you can see, the SIDs for both the user account and the login are the same. So that rules out an orphaned account.

I still couldn’t figure out why the user couldn’t get into the database. That red “x” was still bothering me. After some checking, I discovered there was something different about the user account in this database, 1GBDB, when compared to the same user account in other databases.

Database Permission properties for the 1GBDB, which the user cannot connect to.

This is a screenshot of the database permissions for the AdventureWorks2014 database, which the user can get to.

There is one important difference, the Connect permissions.  If you remove the “Connect” permissions from a user account in a database that has a corresponding login, you will see this.

Once you grant the Connect permissions for the user account, the red “x” goes away and all should be good.  Unless of course there are other issues.
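For reference, a minimal sketch of granting the permission back, using the testuser account from this example, along with a quick check of the permission afterwards:

GRANT CONNECT TO testuser;

SELECT pr.name, dp.permission_name, dp.state_desc
FROM sys.database_permissions AS dp
INNER JOIN sys.database_principals AS pr
    ON pr.principal_id = dp.grantee_principal_id
WHERE pr.name = 'testuser'
  AND dp.permission_name = 'CONNECT';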

Thanks for stopping by and I hope you learned something!

Azure SQL DB Query Editor

We have all been using SQL Server Management Studio to query and manipulate data, even for an Azure SQL database. There is also an option to do this same thing built into the Azure SQL database interface in the Azure portal. Although there have been a number of posts related to this topic dating back a few years, this feature is still marked as “preview” in the Azure portal.

How to get to the Query Editor

Once you are in the SQL Database list in the portal, find the database you would like to work with and click it.

When you do, you will be taken to the administration page for the database.  On the left side you will see the “Query editor” link.  You will also notice that this is a feature that is still considered to be in preview.

When I did a Google search on the Query editor in the Azure portal, I found posts from as far back as 2017.  Despite that, I still wanted to complete a blog post on it.  Once you click it, you will be taken to a login page similar to below.

Notice on the right side there is the ability to utilize the Active Directory single sign on functionality, which I do not have.  On the left you will enter the appropriate credentials to login to the database.

Once at the Query Editor

Once you are in the Query editor, you will notice many of the same features as SSMS.

These features include the following

      • Tabs for new queries
      • Open query
      • Results pane
      • Messages pane
      • A condensed object explorer
      • as well as many other features of SSMS

Below you will find a screenshot of the Query editor interface.

Menu Bar

Starting with the menu bar across the top, you will find the expected buttons. In addition to the Open query, New query and Save query buttons, there is also an Edit Data (Preview) button. This does what you might expect: it allows you to edit the data. Just like the Query editor, this is also a preview feature. There is also a Login button if you wish to use a different user.

Object Explorer

 

The Object explorer is very similar to that in SSMS, however there are some glaring differences, mostly that there are fewer object types displayed in the Query editor. As you can see, you can browse tables, views and stored procedures. As for the tables, you can see the column names and data types as well. For stored procedures, you are able to see the parameters, just as the image below indicates.

In SSMS a user can right click on a stored procedure and execute it.  In the Azure query editor, when you right click on an object you get nothing.  The context menu does not appear, therefore not allowing you to execute the procedure.  Of course you can still use T-SQL to execute a stored procedure.

Query Window

In many aspects the query window is very similar to that of SSMS.  There are tabs across the top, the T-SQL pane, a results pane and a messages pane.

Starting with the T-SQL pane, there is color coding of the SQL, however it is slightly different than that of SSMS. The first big difference is that there is not an option to change the size or color of the font.

If you execute two SELECT statements, like the ones below, the results in the Azure Query editor are much different from the results of the same two queries in SSMS.

SELECT TOP 10 p.name
, p.productnumber
FROM saleslt.product p

SELECT TOP 10 *
FROM saleslt.address

Here are the results when you run the code in SSMS.  Notice that there are two data sets

Now look what happens when I run the same queries in the Query editor: I only get one data set vs the two when executed in SSMS.

In SSMS, if you right click on a table you will see an option to Edit X number of rows. That option is very similar in the Azure Query Editor. The way this is done in the portal is to click on a table and then click the “Edit Data (Preview)” button just to the right of the Login button.

This will, in turn, return the data in a grid that allows for editing. You can just double click a cell in the result set and the cell will change to an editable version. The edit menu has just 5 buttons on it.

They are:

        • Create New Row
        • Save
        • Refresh
        • Discard
        • Delete row

When you click the new row button, the new row will appear in a different location than if you attempted to enter a new record using the grid result set in SSMS. The big difference here is that the new row will appear at the top of the grid.

One option that is important to me is the ability to change the font size.  I have NOT found out how to do this in Azure Query editor.

When you execute a statement and an error is returned, in SSMS this error is red by default. In the Query editor it is black.

Once you have your query the way you like it, all you have to do is click the Run button.

Limitations

There are a number of limitations in the Query editor at this time.  Since it is in preview, my hope is the Microsoft will address some of these deficiencies as the Query editor moves closer to having the preview label removed.

      • Does not allow for the viewing of object definitions
      • Limited objects in object explorer
      • Tabs in the query window do not have the SPID on them
      • If more than one SELECT statement is executed in a batch, you will only see the results of the last one
      • Cannot right click on a stored procedure to execute it
      • No options such as display Actual Execution Plan
      • Cannot drag and drop object names into the query

Despite all of the limitations, I still like it. Once you get used to the limitations it really isn’t that difficult to use. Since the name of the tool is Query Editor, it does make sense that it doesn’t have all the same functionality as SQL Server Management Studio. However, it would be nice to have some of the items that are currently not included.

Thanks for stopping by and I hope you learned something small.

Anatomy of a SELECT statement part 6 – the WHERE clause

After a brief break, ok…maybe not so brief, this post continues the series on the Anatomy of a SELECT statement. This post is about the WHERE clause. The WHERE clause provides a filter to limit the rows returned to only the rows that are needed. As we should always do, it is important to only pull the data that is needed; the WHERE clause is a key part of that effort. We use the SELECT list to limit the columns, while the WHERE clause is one of the clauses that can limit the number of rows. The sample code will be at the end of this blog post.

The first thing we should look at is exactly where the WHERE goes. As indicated below, it follows immediately after the FROM. Since the FROM is the first part processed, the WHERE is the second clause processed.

SELECT …..

FROM ……

WHERE….

The Expression

When utilizing the WHERE clause, you will also need to create an expression that will be used as a comparison for limiting the rows returned. How this expression looks will depend on the data type you are comparing. Although there are many data types, you will see a few far more frequently than others. These include the string based data types like VARCHAR and CHAR, the DateTime related data types and the numeric data types.

This expression can be a number of things.  First of all it can be a simple value you are attempting to compare. In the statement below you will see after the WHERE clause, there is a column name, a comparison operator and a comparison value.

Comparing to a Value

In the statement below we are comparing the SalesOrderID column values to the number 4. The expression must evaluate to TRUE or FALSE. If it evaluates to TRUE, those records are returned. Of course this means that if the expression evaluates to FALSE, that row will not be included in the result set.

FROM …..

WHERE SalesorderID = 4

When comparing a character based data type such as VARCHAR, you of course must include the single quotes. We will talk about wild card searches later in this post.

FROM ……

WHERE LastName = 'Smith'

In this case the column name is LastName, the comparison operator is the equal sign(=) and the value to be compared is ‘Smith’.

Comparison Operator Options

We have a number of options when deciding on what comparison operator to use. This list includes, but is not limited to, the operators below.

Equal sign ( = )

Not Equal ( <> )

Greater Than ( > )

Greater Than or Equal ( >= )

Less Than ( < )

Less Than or Equal ( <= )

IN

NOT IN

More than one Comparison

Many times we will have to complete more than one comparison to capture the rows we are looking for.  When you look below you will see that this WHERE clause is looking for anyone that has the LastName of Smith and the StateOfBirth is WI.

WHERE LastName = 'Smith'

AND StateOfBirth = 'WI'

You can also use the keyword OR rather than AND. In the statement below, all the records with a last name of Smith will be returned, in addition to all records that have the StateOfBirth of WI.

SELECT ….

FROM …..

WHERE LastName = 'Smith'

OR StateOfBirth = 'WI'

For example, if my last name is Smith but I was born in New York, I would still be returned in the results because I match the LastName condition, even though my StateOfBirth is not WI.
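
One related point: when AND and OR appear in the same WHERE clause, parentheses control how the comparisons are grouped. Here is a quick sketch using the same hypothetical columns:

SELECT …..

FROM …..

WHERE LastName = 'Smith'
AND (StateOfBirth = 'WI' OR StateOfBirth = 'NY')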

Using a List

The WHERE clause also supports the use of the IN keyword.  This can be used if there are multiple values in the data set that I would like to see.

SELECT …..

FROM …..

WHERE LastName IN ('Smith', 'Jackson', 'Denault')

In this case, all records that have the last name of Smith, Jackson or Denault will be returned in the result set. As you can see, this also works with numeric data types.

SELECT …..

FROM …..

WHERE EmployeeID IN (1, 2, 3, 4)

The above statement can also be written using the Less Than or Equal To comparison operator.

SELECT …..

FROM …..

WHERE EmployeeID <= 4

Using the NOT Keyword or <>

Just as you can search for values that are equal to something, you can also search for values that are not equal to something.  How you do this depends on whether you are excluding a single value or a list of values; a NOT IN sketch for the list case follows the <> examples below.

If you are comparing a single value, you simply use the <> operator.

SELECT …..

FROM …..

WHERE EmployeeID <> 4

This works for character-based data types as well.  The statement below will return all records that have a LastName of something other than Smith.

SELECT ….

FROM …..

WHERE LastName <> 'Smith'
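
For excluding a list of values, the NOT IN keyword from the operator list above does the job. Here is a quick sketch, using the same hypothetical LastName column:

SELECT …..

FROM …..

WHERE LastName NOT IN ('Smith', 'Jackson')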

What Else Can be Used in the WHERE

As mentioned before, there are a number of ways to pull data from a table and limit the number of rows using the WHERE clause.  There is another option I would like to mention.  Although it may not always be the best option, using a sub-query is indeed an option.

A sub-query is nothing more than a query nested inside another SELECT statement.

SELECT *
FROM Sales.SalesOrderDetail
WHERE ProductID IN (SELECT ProductID
                    FROM Production.Product
                    WHERE MakeFlag = 1)

If you look at the above statement, the sub-query is the inner SELECT inside the parentheses.  This will work just fine.  If the sub-query returns more than one row, it will still work because of the use of the IN keyword.

However, if I were to change the IN keyword to an equal sign, it will fail with the error below.

Msg 512, Level 16, State 1, Line 45

Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.

The reason it fails is that the equal sign expects a single value, while the sub-query returned a number of values.
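
If the equal sign is what you really want, the sub-query has to be guaranteed to return a single value. One possible approach is a sketch like the one below, which limits the sub-query with TOP (1) and an ORDER BY so only a single ProductID comes back:

SELECT *
FROM Sales.SalesOrderDetail
WHERE ProductID = (SELECT TOP (1) ProductID  --TOP (1) guarantees a single value for the equal sign
                   FROM Production.Product
                   WHERE MakeFlag = 1
                   ORDER BY ProductID)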

Wildcard Searches

Sometimes when we are searching for something, we don’t know enough details about the value we are looking for.  This is where the wildcard searches come into play.

There are a number of options when using a wildcard search; we will just focus on the use of the % character.  This is the same as using an * when searching for a file, like *.docx.

The statement below will return any state that starts with the word New.

WHERE State LIKE 'New%'

This statement will return anything that starts with New and ends with the letter K.

WHERE State LIKE 'New%k'

The code below will return any state that has the word New in it.  Notice that the percent sign is also at the front of the value.  While not all wildcard searches perform badly, placing the % at the beginning of the value forces a scan rather than a seek.  This is a potential performance problem and should be used with caution.

WHERE State LIKE '%New%'

Notice that these statements use the LIKE keyword.  If you use a wildcard search with the %, you must use the keyword LIKE.  If you do not, SQL Server will process the string ‘%new%’ as a literal and only look for values that actually contain the word new with a % at both the beginning and the end.  It will not be processed as a wildcard.
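
To see the difference, compare these two fragments, which use the same hypothetical State column as the examples above:

--Wildcard search: returns any State value that contains the word New
WHERE State LIKE '%New%'

--Literal comparison: only matches a State value that is exactly the string %New%
WHERE State = '%New%'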

Examples Below

These examples will work with the AdventureWorks2014 sample database.  Although they could work on a newer version of the database, I just haven’t tested them.

SELECT *
FROM Production.Product
WHERE ProductID = 4

SELECT *
FROM Production.Product
WHERE Name = 'Blade'

SELECT *
FROM Production.Product
WHERE MakeFlag = 1
AND FinishedGoodsFlag = 1

SELECT *
FROM Production.Product
WHERE MakeFlag = 1
OR FinishedGoodsFlag = 0

SELECT *
FROM Production.Product
WHERE Name IN ('HL Road Frame - Black, 58', 'Headset Ball Bearings')

SELECT *
FROM Production.Product
WHERE ProductID IN (1, 2, 3, 4)

SELECT *
FROM Production.Product
WHERE ProductID <= 4

SELECT *
FROM Production.Product
WHERE ProductID <> 4

SELECT *
FROM Sales.SalesOrderDetail
WHERE ProductID IN (SELECT ProductID
                    FROM Production.Product
                    WHERE MakeFlag = 1)

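--This statement is expected to fail with the Msg 512 error described above, because the sub-query returns more than one value.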
SELECT *
FROM Sales.SalesOrderDetail
WHERE ProductID = (SELECT ProductID
                   FROM Production.Product
                   WHERE MakeFlag = 1)

--This will return all records that have a Name starting with Blade.

SELECT *
FROM Production.Product
WHERE Name LIKE 'Blade%'

--This will return all records that have the word Blade somewhere in the product name.

SELECT *
FROM Production.Product
WHERE Name LIKE '%Blade%'

--This will return all records that have a Name that starts with the letter A and ends with the letter r.

SELECT *
FROM Production.Product
WHERE Name LIKE 'A%r'

This may conclude my series on the Anatomy of a SELECT statement.  During this series of posts, I covered the processing order, SELECT, FROM, WHERE, GROUP BY/HAVING and ORDER BY.

Thanks for visiting my blog..I really hope you learned something as you read it!

Where are the events for Extended Events stored

Today I received an email from someone who attended my Extended Events session in Boston last fall.  He was asking what table stores information about which Extended Events exist on the server.  The answer is the sys.dm_xe_objects dynamic management view.

In this DMV there are, of course, a number of columns.  This post will focus mostly on the object_type column.  This column has a number of potential values, including Event, Target and Action.  When setting up a SQL Server Extended Events session, these are a few of the key items that need to be filled out.
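
A quick way to see how the objects break down across these types is to group on the object_type column; here is a small sketch:

SELECT object_type
, COUNT(*) AS ObjectCount
FROM sys.dm_xe_objects
GROUP BY object_type
ORDER BY object_type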

Let’s just take a few moments to identify what these are.

Target – This is where the data captured by the Extended Event session will be stored.  Many times you will find the target to be the event_file target, although from time to time you will find the data going to other targets.  The data will be stored in an XML format. See the image below.

 

Actions – These are also referred to as Global Fields.  These are the fields that many of the events have in common.  You can decide what data you would like to collect.  These differ from the Event Fields in the sense that they are shared across all the events.  Event Fields are data points that are specific to that event.  Sometimes you will find a particular data point in both the Actions and the Event Fields.

Events – This is the focus of this post.  Events are things that happen on the server for which you would like to capture the what, the when and other information.  With each release of SQL Server, more and more events are included. The image below is an example of some of the events you might want to use.  In this case they are related to deadlocks.
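
To find events like these by name, the same DMV can be filtered on the name column; here is a small sketch that lists the deadlock-related events:

SELECT Name
, Description
FROM sys.dm_xe_objects
WHERE object_type = 'event'
AND Name LIKE '%deadlock%'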

Now back to the purpose of this post: where is this information found in the system tables?  As stated above, the information can be found in the sys.dm_xe_objects DMV.

If you run the query below you will see all the events for that instance.  When I run this on my SQL Server 2017 server I get 1,505 events, although your results might be slightly different depending on the version.

SELECT Name
, Description
FROM sys.dm_xe_objects
WHERE object_type = 'event'

Here is what the data set looks like.

There are really two columns that are important for this post, the Name and Description columns.

Thank you for stopping by and a big thank you to the attendee that emailed me the question.