Create a New Dashboard in Azure Portal

Dashboards appear in many different products and can be helpful for many reasons. Azure provides a default dashboard, but it also allows you to create your own dashboards so the information that matters most to you is easy to reach.

If you look at the screenshot of an Azure portal dashboard, you can see I get quite a bit of information quickly.  In this case I can see three reports on activity for my subscription, a list of my resources and the current cost.

To get to the dashboard you need to click the “Dashboard” menu item on the portal menu on the left side of the screen.  Now that we are at the “Main” dashboard we can either leave it or edit it.

When you look at the dashboard the first time, you will see the default dashboard. An example of this is in the image below.

Create A New Dashboard

If the default dashboard does not meet your needs, Azure provides an option to create your own. To create a new dashboard, simply click “New Dashboard” on the menu that appears at the top of the screen.

Once you click on “New Dashboard” you will see an empty dashboard as below.

The first item you should consider changing is the title.  Of course this should reflect what the dashboard is to display.

Add items to a Dashboard

Now we are at the point where we need to add the tiles that will display the information we are looking for. On the left side you will see the “Tile Gallery”. These are some of the items you can add to your dashboard.

To add an item, you can either drag and drop it onto the dashboard or click the “Add” link to the right of the tile title.

When you click “Add”, Azure places the tile in the next available location, to the right of the last tile in the top row. In the image below, I added four items and Azure kept adding them to the right, with the exception of “User Sign-ins”. Because the clock and the “User Sign-ins” tiles are not as tall as the others, Azure placed “User Sign-ins” under the clock. Most tiles, however, will be placed to the right of the last tile in the top row.

Another method of adding tiles is to drag and drop the tiles from the Tile Gallery.  When you use this method, Azure will place a shadow outline on the screen that will show you where the tile will be placed, similar to the image below.

Once I have added an item, I can change the size if the default is not what I am looking for. When you hover your cursor over the upper right corner of the tile, you will see a garbage can and a box with three dots. To change the size, click the box with three dots and a size context menu will appear. You can then pick the size you would like for the tile.

Now that we have added a few tiles and have everything we need, we can click “Done Customizing” at the top of the screen. When you do, Azure will take you to your dashboard.

Change Dashboards

Now that we have added a dashboard, we should have two: the one we created and the default dashboard. To change which dashboard you are viewing, click the drop-down and pick the dashboard you would like to view from the menu.


Edit an Existing Dashboard

If you have an existing dashboard you would like to edit, simply navigate to the dashboard and click “Edit” menu item on the menu across the top.

Now you can add tiles, remove tiles, resize tiles and change the order and placement of the tiles.

Add Metrics Reports

In a previous blog post, I talked about how to create a metrics report. In that same post I also mention how to add a metrics report to a dashboard. Here is the link to that post.


Thanks for visiting my blog!!!



Azure Storage Explorer Lock

In the Azure Storage Explorer, you are able to view files that you have stored in your Azure storage account. As you can see by the image below, it looks very much like the Windows file explorer.

Of course you have the file name, last modified date, type and size, as well as a few other bits of information. You can also see the little lock to the left of the second file. This indicates that there is a “lease” on the file.

According to Microsoft this is the definition of a lease:

“manages a lock on a blob for write and delete operations. “

This is pretty much the same as a lock on a file in Windows. While the lease is held, there are limitations on what you can do with the file, such as deleting it. If you try to delete a leased file in Azure Storage Explorer, you will see this error and you will not be able to delete the file until the lease is removed.

In this case I am using an Extended Event target as the file.

Here is another nice post on leases of blob files. It is an older post on Red Gate's Simple Talk site; however, I think much of it is still relevant today.

If you are using Azure Storage accounts and do not have the Azure Storage Explorer, I would recommend that you download it and take a look at it.  It could make life a bit easier.  Here is the link to download it.

Thanks for stopping by my blog!

Track Failed Logins with Extended Events

Many things in life have opposites. There are stop and go, beginning and end, and of course in and out. SQL Server is no different: you have BEGIN TRAN…COMMIT TRAN, successful logins and failed logins, as well as a few others. In Extended Events, though, there is just a login event, not separate successful and failed login events. If you look at the description of the login event, this is what you will see:

Occurs when a successful connection is made to the Server. This event is fired for new connection or when connections are reused from a connection pool

Notice it states successful connections. What if I want to use Extended Events to track failed logins? Logic would say that if there is an event for successful logins, there would be an event for failed logins. In this case, logic would be wrong.

If you do a search for the word “fail” in the events, a number of events will be returned. None of them are related to failed logins; most are related to column store.

When I think of Extended Events, I usually think about how I can use them to capture performance related information. However, there are some things, like failed logins, that can be helpful to capture as well. So how can I capture failed logins if there is not an event to do so?

To do this, you have to use the error_reported event and use predicates (filters) to return just failed logins.

As you can see in the above image, there are two filters we need to configure.

      • Severity is set to equal 14; this one is optional but might help eliminate unnecessary rows
      • error_number is set to equal 18456
      • state is set to greater than 8; this can be set to 1 as well
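Put together in a session definition, the filters above might look like the sketch below. The session name and file path are placeholders, not from the original post, so adjust them for your environment.

```sql
-- Hypothetical session using the predicates described above.
CREATE EVENT SESSION [FailedLogins] ON SERVER
ADD EVENT sqlserver.error_reported
(
    ACTION (sqlserver.client_hostname, sqlserver.username)
    WHERE severity = 14
      AND error_number = 18456
      AND state > 8
)
ADD TARGET package0.event_file
    (SET filename = N'C:\eefiles\FailedLogins.xel');

ALTER EVENT SESSION [FailedLogins] ON SERVER STATE = START;
```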

These combined allow you to capture failed logins. If you do a Google search on the error number, you will find that the Microsoft documentation states this.

“When a connection attempt is rejected because of an authentication failure that involves a bad password or user name, a message similar to the following is returned to the client: “Login failed for user ‘<user_name>’. (Microsoft SQL Server, Error: 18456)”.”

This might have you wondering why the other predicates are needed. Well, it is to help eliminate similar rows. If you look at the image below, the two highlighted rows are the same failed login event. One has a state of 1 and the other has a state of 8. The other difference is the error message; as you can see, there is just a bit more information with state 8 than with state 1.

State 1 message: Login failed for user ‘testgrant’.

State 8 message: “Login failed for user ‘testgrant’. Reason: Password did not match that for the login provided. [CLIENT: <local machine>]”

Just a little side note: to read the event file for Extended Events, simply right click on the target file in SQL Server Management Studio and click “View Target Data”.


This can be very useful information and, in my opinion, might be a bit easier to query than reading the log file. It is always nice to have options. This works for an on-prem instance of SQL Server; I have not been able to get it to work on an Azure SQL Database yet. This could be because, in an Azure SQL Database, Extended Events are set up at the database level. I am still working on that, so look for a future blog post.

Thanks for visiting my blog.


When two processes are caught in a deadlock, one will be chosen as the victim and terminated, while the other process will be allowed to complete. By default, SQL Server picks the process that has the lowest cost to roll back. This can present a problem if the terminated process is a business critical process. DEADLOCK_PRIORITY allows us to have more control over whether or not a process is chosen as the victim.

Before getting into how to set the DEADLOCK_PRIORITY, let's quickly go over the options. There are two ways to set the priority of a process. The first is to use one of the keywords LOW, NORMAL or HIGH. The second is to use a numeric value between -10 and 10. The process with the lowest value will be chosen as the victim. For example, LOW will be the victim if the other process is HIGH, and -1 will be the victim if the other process has a priority greater than -1.

If you look at the chart below you will see the winners and the losers in the deadlock; it identifies which process will be allowed to complete and which will be the victim. As you can see, when one process is LOW and the other is NORMAL, the LOW process will be the victim. If there is a tie, SQL Server falls back to the cost of rolling back each transaction.

The chart below is somewhat incomplete. The allowable range for the priority as a numeric value is -10 to 10. As you can see below, if one process is -9 and the other is -7, the -9 will be the victim. Rather than take the chart all the way out to 10, I think this gives you the idea of how to work it out beyond the numbers in the chart.

So now that we know how SQL Server determines the victim, let’s take a look at how we can determine what the priority was for the processes in a deadlock.

One way is to use the deadlock graph.  The two highlighted items are the deadlock priority for each of the processes.  As you can see, the priority 5 survived while the priority -5 was chosen as the victim.

You can also look at the XML to determine the priority of each of the processes. In the XML you will look for “taskpriority” in the Process-List node. As you can see below there are two of them, one for the process that succeeded and one for the victim. The “victimProcess id” identifies the process that was chosen as the victim.

To set the priority for a process, add the code below at the beginning of the batch. The setting applies to the session.


This is what the code would look like if using a numeric priority. Remember, the value must be between -10 and 10, inclusive.
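A minimal sketch of the numeric form (the value 5 here is just an example):

```sql
-- Any integer from -10 to 10 is valid; the higher value survives a deadlock.
SET DEADLOCK_PRIORITY 5;
```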

This is what it would look like if you use the LOW, NORMAL, HIGH option.
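A minimal sketch of the keyword form:

```sql
-- LOW, NORMAL or HIGH; LOW is the most likely to be chosen as the victim.
SET DEADLOCK_PRIORITY LOW;
```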

Thanks for stopping by my blog!!!


Extended Events and Execution Plans

The ability to view execution plans is a critical part of any successful troubleshooting effort. It is in the execution plans that you can find a great deal of information.

Here is just a small sample of what can be found in execution plans:

      • Data access type – scan or seek
      • What indexes were used
      • What physical join type was used – hash match, merge or nested loops
      • If key lookups are being used

Of course this is a very small sample.  Even in this short list, you can see the value in execution plans.  Given the value of execution plans, it is important to understand how these can be captured.  There are a number of methods that can be used.

SQL Server Management Studio is an effective method for capturing a single plan. When you have a query you would like to see the plan for, you have two options: estimated and actual. The difference is pretty simple: the estimated plan is what SQL Server thinks it will do, while the actual execution plan is what it actually did to get the data you requested.

To get the estimated execution plan, click “Display Estimated Execution Plan” under the Query menu. At this point SQL Server will gather all the information it needs to create the plan, but it will not actually execute the query. Statistics are a key part of this and should be kept up to date.

When getting the estimated plan, SQL Server does not actually execute the query. However, when the actual plan is requested, SQL Server does execute the query. To get the actual execution plan you can do one of two things: click the toolbar icon highlighted below and execute the query, or click the menu item under the Query menu and then execute the query.

As mentioned, this method works great if you have the query in hand and will capture the plan for a single query. However, what if you want to capture execution plans over a period of time?

Within SQL Server you have two methods that can be used to accomplish this.  One is Query Store and the other is Extended Events.  Since this is a post about Extended Events, here is some good documentation on Query Store, click here.

When using Extended Events to capture the plan you have two options.

query_pre_execution_showplan – this is the estimated plan

query_post_execution_showplan – this is the actual plan

Either one of these should work. Once you have chosen the event, the XML for the plan can be found on the “Event Fields” tab in the showplan_XML column.
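A session using the actual-plan event might be sketched as follows; the session name and file path are placeholders, not from the original post.

```sql
-- Hypothetical session capturing actual execution plans.
-- Note the documented performance overhead of this event; use briefly.
CREATE EVENT SESSION [CapturePlans] ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan
(
    ACTION (sqlserver.sql_text, sqlserver.database_name)
)
ADD TARGET package0.event_file
    (SET filename = N'C:\eefiles\CapturePlans.xel');
```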

The easiest way to view this is to use SSMS. Navigate to the target and right click on it; in this case, right click on the event file and then click “View Target Data”. When you do, you will see this. If you look toward the bottom you will see the Query Plan tab. Once you click it, you will see the plan.

What is missing? If you look at the two plans below you will see a noticeable difference: the SELECT operator is missing from the top one. The top plan is from Extended Events and the bottom one is from SSMS.

Missing the SELECT operator is a big deal.  There is a great deal of information that can be found here. This alone would be a good reason to not use Extended Events to capture execution plans.

If that alone isn't enough to make you avoid using Extended Events to capture execution plans, take a look at this warning found in the description of the two events mentioned above:

“Using this event can have a significant performance overhead so it should only be used when troubleshooting or monitoring specific problems for brief periods of time.”

So if I shouldn’t use Extended Events to capture the execution plans, what should I use?  The answer is Query Store.

Thanks for stopping by my blog and I hope you learned something. More importantly, you learned what not to use!

All my Extended Event Options are Greyed out

Have you ever tried to change the options for an Extended Events session, only to see this.

There are a number of options that are greyed out and can't be changed. This is because the session is running. Some options can still be changed, just none on the Advanced tab. This is an easy fix: if you need to change one of these options, simply stop the session and make your change. Don't forget to restart the session.

Something to keep in mind: if you are using the file target, stopping and restarting the session could delete your oldest file and will create a new session file. This could potentially result in the loss of key data.

Thanks for stopping by my blog and I hope you learned something.

Extended Events Max Dispatch Latency

When you create an Extended Events session, there are a number of properties you can set. These properties are spread throughout the tabs in the Session Properties window.

Below you will see the four tabs: General, Events, Data Storage and Advanced.

Just as a brief summary let’s go over a few of the key properties in each tab.

In the General tab, the obvious property is the “Session Name”. But there is also another key property, “Start the event session at server start up”. This is an important property that can ensure the session starts in the event of a server restart.

The Events tab is exactly what you think it might be: it is where you decide which events you want to capture and which data columns you would like to collect.

There are a number of options for storage, which you set on the Data Storage tab. Where the collected data is stored is referred to as a target. Two of the commonly used targets are the ring_buffer, which is memory, and the file target.

Which brings us to the last tab, Advanced. This is where the max_dispatch_latency property can be found. It is the maximum duration that an event, once captured, will reside in the buffer before being written to the target. This doesn't mean it will always take this long, just the maximum amount of time it can take. The default for this property is 30 seconds. For the most part this is a good number, and you should avoid changing it.

You would logically think that the minimum setting would be zero seconds. If you think that way, you are correct; however, 0 does not mean 0 seconds. When this is set to 0, the event will stay in the buffer until the buffer becomes full. This is the same as the “Unlimited” option you see below. Given this, the true minimum is one second.
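In T-SQL, the property is set in the WITH clause of CREATE EVENT SESSION. A sketch, with placeholder session name and file path:

```sql
-- Hypothetical session demonstrating MAX_DISPATCH_LATENCY.
CREATE EVENT SESSION [DispatchDemo] ON SERVER
ADD EVENT sqlserver.database_created
ADD TARGET package0.event_file
    (SET filename = N'C:\eefiles\DispatchDemo.xel')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);  -- 0 SECONDS means "unlimited"
```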

When I present my Extended Events session I usually use the Create Database event in my demo. Once I create the database, I go immediately to the target to see if the event has been written to it, which usually takes just a few seconds. I only see the event written to the target about 50% of the time. This is because of the max_dispatch_latency.

Here is a really nice article by Jonathan Kehayias on the properties of an extended event session, click here.

Thanks for visiting my blog and I hope you learned something!

Capture Deadlock Graph with Extended Events

Over the past few years, I have presented a number of sessions on Extended Events.  As I prepare to present the topic a few more times in the coming weeks, I thought I would do a blog post on one particular event that has been very helpful to me when working with issues in a production environment.  That event is related to the Deadlock graph.

There are a number of ways you can capture this. Profiler has been a very effective way of capturing the Deadlock graph; however, starting with SQL Server 2008 there is another option, Extended Events. RedGate also puts out a great tool named SQL Monitor that can be used.

I am not going to get into the ins and outs of Extended Events in this post; I am simply going to cover how to use Extended Events to capture the deadlock graph. There are a total of five events that have the word “deadlock” in the event name.

Let’s take a moment and quickly go over each of these five events.

database_xml_deadlock_report – This is the same as the xml_deadlock_report, with one noticeable exception: the xml_deadlock_report event has an additional tab, where we will find the graph, while the database_xml_deadlock_report does not.

xml_deadlock_report – This is very similar to the database_xml_deadlock_report with the addition of the deadlock graph.  To see the deadlock graph, simply click on the Deadlock tab.

lock_deadlock – “Occurs when an attempt to acquire a lock is canceled for the victim of a deadlock.”  This definition is from SQL Server. Of the events we have discussed so far, this event has the most data points.


lock_deadlock_chain – “Occurs when an attempt to acquire a lock generates a deadlock. This event is raised for each participant in the deadlock.” This definition is from SQL Server.

scheduler_monitor_deadlock_ring_buffer_recorded – “The session tracks other useful events which makes it easy to trace back the series of events which led to the deadlocked schedulers condition!” – From

Since this post is about the deadlock graph, we will focus on the xml_deadlock_report event. As mentioned before, this is the event you will need to capture the deadlock graph. There are two points to look at: one is the “XML_report” column and the other is the graph itself. In the image below you will see that there are two places to look at the actual XML. If you do not see this column in the grid, you can right click on it in the details at the bottom and then click “Show column in table”. The XML might be very challenging to read due to formatting; if you are having a difficult time, an online formatting tool can be very helpful.
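For reference, a session capturing this event might look like the sketch below; the session name and file path are placeholders, not from the original post.

```sql
-- Hypothetical session capturing the deadlock graph event.
CREATE EVENT SESSION [CaptureDeadlocks] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file
    (SET filename = N'C:\eefiles\CaptureDeadlocks.xel')
WITH (STARTUP_STATE = ON);  -- keep collecting after a server restart
```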

This is the actual XML that is used to build the graph. There is quite a bit of useful information in the XML; you can see the victim query as well as many other good data points. The victim process can be found in the “victim-list” node. In the image below you will see two processes; the first one listed matches the victim process id, and the UPDATE just below it is the T-SQL that was chosen as the victim.


The deadlock graph, while it initially looks very useful, does have a significant limitation. The query with the blue X on it is the victim. If you hover your cursor over it, the query will appear in a context pop up.

If you left click and hold one of the shapes, you can move it around. This can be useful when there are more shapes than you see above. Now on to the limitation: you can't zoom in. Without zoom, it can be difficult to see some of the details.

Here are just a few interesting deadlock graphs I have been able to capture.

While these are kind of interesting to look at, the scary part is what caused them: these were all caused by a single stored procedure. Now you have the graph; the fun is just beginning, trying to figure out what is causing this and finding a solution.

Thank you for stopping by my blog.

Events…No Events

Extended Events is a great way to capture information about what is happening on your server.  You can capture information about deadlocks, failed logins, wait stats and many more important performance data points.

One of the nice features of the Extended Event interface in SQL Server Management Studio is the searching feature that can be used when setting up a session.  In SQL Server 2017 there are over 1,500 events that can be utilized.  Searching through all those events can make it a challenge to find the event you are looking for.

Luckily Microsoft has given us some help.  If you look at the image below, you can see under the “Event Library” box the two letters “de”.  As expected, this returns any event name that has “de” in the name somewhere.  It could be at the beginning, middle or the end of the event name.

The events returned are also based on two additional settings: category and channel. Both of these options help limit the number of events you will see in your results.

Category is simply a grouping of events. For example, in the database category you will find many events related to databases, including the following:

        • Database Starting
        • Database attached
        • Database created
        • Many..many more

The Channel is also a grouping of events, but this grouping is more on who and or why the events might be used. Below is a sampling of what you will find in the Analytic channel.

I have never really changed either one of these and I have been able to find every event I needed.

Now, what does it mean when I enter a value, for example “deadlock” into the Event Library search box and I get this:

Notice that no events are showing up. There are a few reasons why this happens. The first thing to check is the Category and Channel, to see if they are limiting the results.

Another reason is that all the events that contain that string are already in use. If you look at the above screenshot, you will see that there are five events related to deadlocks and they are all in the “Selected Events” pane.

Once those events are removed from the “Selected Events” pane, all five show up again to select from. This is a very short blog post, but hopefully you learned something.

Thanks for stopping by my blog!!!

Reading Extended Event Files for an Azure SQL Database in a Storage Account

Extended Events are a great way to find information about what is happening on your on-prem SQL Server.  It is also a great way to find out what is happening in your Azure SQL Database. Setting up Extended Events in an Azure SQL DB is very similar to doing so in an on-prem SQL Server.

Extended Events can be used to capture quite a bit of information about your SQL Server. With SQL Server 2017 there are over 1,500 events that can be captured, anything from an alter column event to query execution plans. When setting up an Extended Events session, you will have a choice of where to store the collected data. One of the common destinations for the event data is a file. This file is XML based and can be read a number of ways.

On an on-prem instance, Extended Events can be found under Management and is a server level item.

For an Azure SQL Database it has been moved; it is now a database level item.

File Target Basics

When you use Extended Events you will need to provide a target type. A target is nothing more than a location that will store the event information. With both an on-prem instance of SQL Server and an Azure SQL Database you have several options to choose from. Below you will find the target options for an Azure SQL Database, on the left, and an on-prem instance of SQL Server, on the right. As you can see, you have more options with the on-prem instance. While the focus of this post is how to read an Extended Events file, it is good to know the different options that are available.

When storing the data in a file target, the file will have an .xel extension. This file type is XML based, and normally you can open an XML document in Notepad and it will still be in a readable format. That is not the case here. As you can see by the image below, there really isn't much that is readable in an XEL file.

Setting up an Azure Storage Account

Now that we have reviewed a bit about what Extended Events are and what the file target is, we need to go over how to create the actual target files. The first thing we need is a destination location. For an on-prem SQL Server this can be a local drive, a network share or a mapped SAN connection. These are not really options for an Azure SQL Database; this is where an Azure Storage account is going to be our friend.

Once you have located it, click “Storage Accounts”. This will bring you to the screen below.

When you click the “Add” button you will be taken to a page that will allow you to enter all the appropriate information to create the storage account.

As you can see by the above screenshot, you will be asked for much of the expected information, including the account name, subscription, resource group and location. It is important to choose the location carefully; picking a different location than your database could lead to additional, unexpected charges. Please double check with your Microsoft representative to make sure this does not happen.

While this post really isn't about how to create an Azure storage account, I did want to at least cover some basics. One thing I do want to point out is the Access tier you see at the bottom of the image above.

As you can see there are two options, Cool and Hot. Below you will find the definitions as stated in Microsoft’s documentation here.

    • Hot – Optimized for storing data that is accessed frequently.
    • Cool – Optimized for storing data that is infrequently accessed and stored for at least 30 days.

This MS site also gives a good, more detailed explanation of the differences between the two.

Setting up the File Target to an Azure Storage Account

Now that we have our storage account in Azure, we need to set everything up so it can be used as a target for the Extended Events session. With both the on-prem and Azure instances of SQL Server, there are multiple options when configuring an Extended Events target.

How To Read the Data

As you can see above, the file target really isn't readable with Notepad or any other text editor. With that, we need another tool to read the data we have collected. For an on-prem SQL Server you have two options: the GUI in SQL Server Management Studio, or T-SQL.

When you use the GUI in SSMS, you navigate to the Extended Events session and simply right click on the file target. When you do, you will see an option to view the target data.

When you click it, you will see something similar to the image above. In this GUI you can sort, group and even export the data; I really like it for analyzing event data. However, this is not an option for an Azure SQL Database. When you right click on the target in an Azure SQL Database, there is no option to view target data. Below you can see the results of right clicking on the target; notice that the Azure SQL Database does not have a “View Target Data” option.

The other option is to use T-SQL, which works for both an Azure SQL Database and an on-prem instance of SQL Server. Luckily the code requires very little modification to move from on-prem to an Azure SQL Database. Because T-SQL works in both, we will focus this blog post on using T-SQL to extract and review your event data.

When reading from an XEL file in an Azure Storage account, there are a few prerequisite items that we need to briefly cover as well.


The XEL file is XML based, so when reading the data you will need to parse the XML. To do this, we can use the sys.fn_xe_file_target_read_file function, which has a total of four arguments. Here is the link to Microsoft's documentation on this function.

      • path – the path to the XEL file(s)
      • mdpath – the metadata file path; starting with SQL Server 2012 this is no longer needed and NULL can be used
      • initial_file_name – specifies the first file to read from; if NULL, all files will be read
      • initial_offset – per the Microsoft documentation, “Used to specify last offset read previously and skips all events up to the offset (inclusive)”

Below is an example of how the initial step in reading the XEL file might look.  Notice the four arguments and that the only one without NULL as a value is the first one.

SELECT event_data = CONVERT(xml, event_data)
INTO #eeTable
FROM sys.fn_xe_file_target_read_file(
    'C:\eefiles\efiles_0_132002087635590000.xel', null, null, null);

Hopefully you also noticed that the SELECT converts the event_data column to XML.  This is just the first step, later we will discuss how to parse the resulting XML into a more readable format.
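As a rough sketch of that later step, the converted XML can be parsed with the xml type's value() method; the fields pulled out here are just examples, and the node names you query will depend on the events the session captured.

```sql
-- Pull the event name and timestamp out of each event's XML.
SELECT
    event_data.value('(event/@name)[1]',      'varchar(100)') AS event_name,
    event_data.value('(event/@timestamp)[1]', 'datetime2')    AS event_time
FROM #eeTable;
```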

Azure Storage Explorer

When reading from an Azure Storage account, instead of using the usual path to a drive or a UNC path, you will use a URL.

SELECT event_data = CONVERT(xml, event_data)
FROM sys.fn_xe_file_target_read_file(
    '', null, null, null);

When first looking at this, you would expect that, given the proper credentials, you would not have any issues reading the file. Well, you don't get an error, but you also do not get any data, even though the file you are trying to read does indeed exist.

The reason for this is that you need to create a credential in the Azure SQL Database that the Extended Events session runs in. This is where Azure Storage Explorer comes in handy.

When you create the credential, you will need the “Secret”.  This is a Shared Access Signature that looks something like this.


You can use Azure Storage Explorer to get this.  Azure Storage Explorer can be downloaded here.  It also has a great price, FREE!!  Per Microsoft, Azure Storage Explorer offers a number of useful features.

Once you have installed and opened Azure Storage Explorer, you will then need to connect to your Azure Storage Account.  You can do this by expanding the tree and right clicking on “Storage Accounts” and then clicking “Connect to Azure Storage…”

You will then see the form below.  As you can see, there are a number of methods you can use to connect to your Storage Account.

For this post, I used the “Use a connection string” option.  In order to get the connection string you will need to navigate to the Azure Portal.  Once there, proceed to your storage account and click “Access Keys”.

Once you can see the connection string, simply click the “Copy” button to the right of the string. As expected, this will place the connection string on your clipboard.

When you paste the connection string into the above form, Azure will automatically populate the Display Name box.  Then you click “Next” where you will then see a summary form.  At this point, you will review and click “Connect”.

You will need to create a Blob Container. This is pretty simple, just right-click on “Blob Containers” and go to Create.

Now we have everything we need to get the last piece of information required to utilize T-SQL to query the Extended Events .xel file.  This last bit of information is the Shared Access Signature.  You will need this to create the required Azure SQL Database credential.

Once you are connected to your storage account, navigate to the blob container and click on it.  When you do, you will see “Actions” and “Properties” toward the bottom; click “Actions” and then click “Get Shared Access Signature…”.

As you can see, the form that pops up has a number of options.  Given that this post is about how to read from the files, we will not change anything here.  At this point, click “Create”.  The following form will then appear.  Notice there are three data points.

      • Container
      • URL – you will need this, well at least part of it
      • Query String – you will also need this to read the files

Now that we have completed the following prerequisites to utilize Azure Storage account to store the Extended Events files, we are ready to begin working on the T-SQL code.

These prerequisites included the following.

      • Created an Azure Storage account
      • Created a blob container
      • Downloaded and installed Azure Storage Explorer
      • Connected the Storage Explorer to the Azure Storage Account
      • Created the Shared Access Signature

Create Credential

The first step when using T-SQL to read Extended Event files that are stored in an Azure Storage Account is to create a database credential.  Of course the credential will provide essential security information to connect to the Azure Storage Account.  The first data point you will need is the URL to the blob storage container in your storage account.  If you look below, you can see where you would place your storage account name and the blob storage container name.

, SECRET = 'st=2019-04-19T22%3A15%3A13Z&se=2020-04-20T22%3A15%3rtieo34=rwdl&sv=2018-03-28&sr=c&sig=ubWCJk3RvUrkeitpe,wdig3h56F5CM%3D'

Because this is a database level credential, it is important to execute the code in the context of the database from which you will read the files.

The “WITH IDENTITY” will always be “Shared Access Signature” with the type of connection we are going to use.  Finally we get to the SECRET; this is the Shared Access Signature we created earlier.
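Putting those pieces together, a complete statement might look like the sketch below.  The storage account and container names are placeholders for your own, and the SECRET value is truncated; note that the credential name is the URL of the container itself.

```sql
-- Run this in the Azure SQL Database that will read the files.
-- <StorageAccountName> and <ContainerName> are placeholders.
CREATE DATABASE SCOPED CREDENTIAL
    [https://<StorageAccountName>.blob.core.windows.net/<ContainerName>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     -- The SAS query string, with the leading question mark removed (truncated here).
     SECRET = 'st=2019-04-19T22%3A15%3A13Z&...';
```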

When you first create the Shared Access Signature it will look something like this.  It will be much longer than what is here; I removed many of the characters for security reasons.


Prior to using this string, you will need to make one minor adjustment: remove the question mark from the beginning.  If you keep it, the credential will still be created without error.  However, it will not work.  When you attempt to read the file, no error will be generated, you just won’t see any results.

If you find that you need to delete the credential, you can use the code below with a few adjustments.  Those adjustments are in italics.
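As a sketch with placeholder names, the delete would look like this; run it in the same database where the credential was created.

```sql
-- <StorageAccountName> and <ContainerName> are placeholders for your own values.
DROP DATABASE SCOPED CREDENTIAL
    [https://<StorageAccountName>.blob.core.windows.net/<ContainerName>];
```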


If you attempt to read the data from the Extended Events files and the credential does not exist, you will see the error below.  Given this, it is vital that the credential be created in the proper database.

Msg 262, Level 14, State 1, Line 1
CONTROL permission denied in database 'master'.
Msg 297, Level 16, State 1, Line 1
The user does not have permission to perform this action.

Now that the database credential exists, you can now run the code below to query the data.

SELECT event_data = CONVERT(xml, event_data)
INTO #eeTable
FROM sys.fn_xe_file_target_read_file(
-- TODO: Fill in Storage Account name, and the associated Container name.
'https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/<FileName>.xel',
null, null, null);

SELECT
ts = event_data.value(N'(event/@timestamp)[1]', N'datetime'),
[sql] = event_data.value(N'(event/action[@name="sql_text"]/value)[1]', N'nvarchar(max)'),
duration = event_data.value(N'(event/data[@name="duration"]/value)[1]', N'nvarchar(max)'),
spid = event_data.value(N'(event/action[@name="session_id"]/value)[1]', N'int'),
user_nm = event_data.value(N'(event/action[@name="username"]/value)[1]', N'nvarchar(max)'),
dbname = event_data.value(N'(event/action[@name="database_name"]/value)[1]', N'nvarchar(max)'),
explan = event_data.value(N'(event/data[@name="showplan_xml"]/value)[1]', N'nvarchar(max)')
FROM #eeTable


This code does a number of things.  First of all, it reads the file and converts the results to a single XML data row for each row in the file.

Then, after dumping the results into a temporary table, the code parses the XML and returns it in a more readable, relational table type format.  When you look at the image below you will see two results.  The top set, in the red box, is the converted XML.  The second set, in the blue box, is the XML parsed into the more traditional columns we are used to working with.

Now, once the data is in columns, we can do whatever T-SQL will allow us to do.  For example, insert the results into a disk-based table for analysis at a later date.
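A minimal sketch of that idea might look like this (the table name eeResults is hypothetical):

```sql
-- Persist a couple of the parsed columns to a disk-based table for later analysis.
-- eeResults is a hypothetical table name.
SELECT
    ts    = event_data.value(N'(event/@timestamp)[1]', N'datetime'),
    [sql] = event_data.value(N'(event/action[@name="sql_text"]/value)[1]', N'nvarchar(max)')
INTO eeResults
FROM #eeTable;
```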

One thing I would like to point out is the two bits of code below.

When the column name is taken from the Global Fields tab in the Extended Event session you will use “event/action” followed by the column name.


When the column name is taken from the Event Fields tab in the Extended Events sessions, you will use “event/data” followed by the column name.
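Side by side, the two patterns look like this (here sql_text comes from the Global Fields tab and duration from the Event Fields tab):

```sql
SELECT
    -- Global Fields (Actions) use "event/action" followed by the column name.
    [sql]    = event_data.value(N'(event/action[@name="sql_text"]/value)[1]', N'nvarchar(max)'),
    -- Event Fields use "event/data" followed by the column name.
    duration = event_data.value(N'(event/data[@name="duration"]/value)[1]', N'nvarchar(max)')
FROM #eeTable;
```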


One thing to keep in mind: if you spell the column name incorrectly, it will not throw an error.  It will simply return NULL in every row for that column.


As I was writing this I realized that there are a lot of steps. The question that came to my mind was that there has to be a way with fewer steps.  The answer to that question is: there is.

If you right-click on the file in Azure Storage Explorer, you can then click Open.  When you do, you will see the file downloading as shown in the red box below.

Once it has completed downloading and is ready to open, you may see this warning.

If you do, proceed with caution.  I believe you see this because of the file format.  However, again, please proceed with caution.

When you click “Yes”, SQL Server Management Studio will then open and display the data as below.

Once you have the data in SSMS, all the functionality that comes with it is now available.  This includes the ability to export the data to a table, group the data, apply filters and sort the data as well.

If I were just reading the data, this is the way to go.  However, if I needed to automate the collection of the data, I would need to use the T-SQL method covered in most of this post.

Another thing to keep in mind is the amount of data you will be pulling out of your Azure Storage Account. As you can see in the image below, the Extended Events files can get quite large.

In order to avoid large bills on your Azure subscription, it is important to understand the impact of opening large files.  It can get expensive if not managed carefully.

Hopefully you are still here……it was a long post.  By far the longest I have written.  If you are still here….thank you for stopping by my blog!!!