DEADLOCK_PRIORITY

When two processes are caught up in a deadlock, one will be chosen as the victim and terminated, while the other process will be allowed to complete. By default, SQL Server picks the process that has the lowest cost to roll back. This can present an issue if the terminated process is a business critical process. DEADLOCK_PRIORITY allows us to have more control over whether or not a process is chosen as the victim.

Before getting into how to set the DEADLOCK_PRIORITY, let's quickly go over what the options are. There are two ways to set the priority of a process. The first is to use one of the keywords LOW, NORMAL or HIGH. The second is to use a numeric value between -10 and 10. The process with the lowest value will be chosen as the victim. For example, LOW will be the victim if the other process is HIGH, and -1 will be the victim if the other process is greater than -1.

If you look at the chart below you will see the winners and the losers in a deadlock. The chart identifies which process will be allowed to complete and not be the victim. As you can see, when one process is LOW and the other is NORMAL, the LOW process will be the victim. If there is a tie, SQL Server falls back to the cost to roll back the transaction.

The chart below is somewhat incomplete. The allowable range for the priority as a numeric value is -10 to 10. As you can see below, if one process is -9 and the other is -7, the -9 will be the victim. Rather than take the chart all the way out to 10, I think this gives you the idea of how to figure it out beyond the numbers that are in the chart.

So now that we know how SQL Server determines the victim, let’s take a look at how we can determine what the priority was for the processes in a deadlock.

One way is to use the deadlock graph.  The two highlighted items are the deadlock priority for each of the processes.  As you can see, the priority 5 survived while the priority -5 was chosen as the victim.

You can also look at the XML to determine the priority of each of the processes. In the XML, look for "taskpriority" in the Process-List node. As you can see below there are two of them, one for the process that succeeded and one for the victim. The "victimProcess id" identifies the process that was chosen as the victim.

To set the priority for a process, you would add the code below at the beginning of the batch. This setting applies to the session.

SET DEADLOCK_PRIORITY 10

This is what the code looks like when using a numeric priority. Remember, the value must be between -10 and 10, inclusive.

This is what it would look like if you use the LOW, NORMAL or HIGH option.
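Below is a minimal sketch of the keyword form. Per the documentation, LOW, NORMAL and HIGH map to the numeric values -5, 0 and 5 respectively.

SET DEADLOCK_PRIORITY LOW;
-- Equivalent numeric form (LOW = -5, NORMAL = 0, HIGH = 5):
-- SET DEADLOCK_PRIORITY -5;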

Thanks for stopping by my blog!!!

 

Extended Events and Execution Plans

The ability to view execution plans is a critical part of any successful troubleshooting effort. It is in the execution plans that you can find a great deal of information.

Here is just a sample of what can be found in execution plans:

      • Data access type – scan or seek
      • What indexes were used
      • What type of physical join was used – Hash Match, Merge Join or Nested Loops
      • Whether key lookups are being used

Of course this is a very small sample. Even in this short list, you can see the value of execution plans. Given their value, it is important to understand how they can be captured. There are a number of methods that can be used.

SQL Server Management Studio is an effective method for capturing a single plan. When you have a query you would like to see the plan for, you have two options, estimated and actual. The difference is pretty simple: the estimated plan is what SQL Server thinks it will do, while the actual execution plan is what it actually did to get the data you are requesting.

To get the estimated execution plan, click "Display Estimated Execution Plan" under the Query menu. At this point, SQL Server will gather all the information it needs to create the plan, however it will not actually execute the query. Statistics are a key part of this and should be kept up to date.

When getting the estimated plan, SQL Server does not actually execute the query. However, when the actual plan is requested, SQL Server does execute the query. To get the actual execution plan you can do one of two things: click the toolbar icon highlighted below and execute the query, or click the menu item under the Query menu and then execute the query.
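If you prefer T-SQL over the menu items, the same two plans can be requested with SET options. This is a minimal sketch and the query itself is just a placeholder.

-- Estimated plan: the batch is compiled but not executed.
SET SHOWPLAN_XML ON;
GO
SELECT TOP (10) name FROM sys.objects;  -- placeholder query
GO
SET SHOWPLAN_XML OFF;
GO

-- Actual plan: the query is executed and the plan is returned with the results.
SET STATISTICS XML ON;
GO
SELECT TOP (10) name FROM sys.objects;  -- placeholder query
GO
SET STATISTICS XML OFF;
GO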

As mentioned, these approaches work great if you have the query in hand, and they capture the plan for a single query. However, what if you want to capture execution plans over a period of time?

Within SQL Server you have two methods that can be used to accomplish this.  One is Query Store and the other is Extended Events.  Since this is a post about Extended Events, here is some good documentation on Query Store, click here.

When using Extended Events to capture the plan you have two options.

query_pre_execution_showplan – this is the estimated plan

query_post_execution_showplan – this is the actual plan

Either one of these should work.  Once you have chosen the event, the XML for the plan can be found in the “Event Fields” tab and the showplan_XML column.
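Here is a minimal sketch of a session definition that captures the actual plan. The session name, database filter and file path are assumptions for illustration only.

CREATE EVENT SESSION [CapturePlans] ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan
(
    ACTION (sqlserver.sql_text, sqlserver.database_name)
    WHERE sqlserver.database_name = N'AdventureWorks2017'  -- hypothetical filter
)
ADD TARGET package0.event_file (SET filename = N'C:\eefiles\CapturePlans.xel');  -- hypothetical path
GO
ALTER EVENT SESSION [CapturePlans] ON SERVER STATE = START;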

The easiest way to view this is to use SSMS. Navigate to the target and right click on it.

In this case, right click on the event file and then click "View Target Data". When you do, you will see this. If you look towards the bottom you will see the Query Plan tab. Once you click this, you will see the plan.

What is missing?  If you look at the two plans below you will see a noticeable difference.  The SELECT operator is missing on the top one.  The top plan is from Extended Events and the bottom one is from SSMS.

Missing the SELECT operator is a big deal. There is a great deal of information that can be found there. This alone would be a good reason not to use Extended Events to capture execution plans.

If that alone isn’t enough to avoid using Extended Events to capture execution plans, take a look at this warning that can be found in the description of the two events mentioned above.

“Using this event can have a significant performance overhead so it should only be used when troubleshooting or monitoring specific problems for brief periods of time.”

So if I shouldn’t use Extended Events to capture the execution plans, what should I use?  The answer is Query Store.

Thanks for stopping by my blog and I hope you learned something. More importantly, you learned what not to use!

All my Extended Event Options are Greyed out

Have you ever tried to change the options for an Extended Events session, only to see this?

There are a number of options that are greyed out and can't be changed. This is because the session is running. There are some options that can be changed, just none on the Advanced tab. This is an easy fix: if you need to change one of these options, simply stop the session and make your change. Don't forget to restart the session.
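If you prefer T-SQL, the stop and restart can also be scripted. A quick sketch, assuming a session named [MySession]:

-- Stop the session so the greyed out options become editable.
ALTER EVENT SESSION [MySession] ON SERVER STATE = STOP;

-- ...make the changes on the Advanced tab...

-- Restart the session when you are done.
ALTER EVENT SESSION [MySession] ON SERVER STATE = START;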

Something to keep in mind: if you are using the file target, stopping and restarting the session could delete your oldest file and will create a new session file. This could potentially result in the loss of key data.

Thanks for stopping by my blog and I hope you learned something.

Extended Events Max Dispatch Latency

When you create an Extended Events session, there are a number of properties that you can set. These properties are spread throughout the tabs in the Session Properties window.

Below you will see the four tabs,  General, Events, Data Storage and Advanced.

Just as a brief summary let’s go over a few of the key properties in each tab.

In the General tab, the obvious property is the "Session Name". But there is also another key property, "Start the event session at server start up". This is an important property that can ensure the session starts in the event of a server restart.

The Events tab is exactly what you think it might be: it is where you decide which events you want to capture and which data columns you would like to capture.

There are a number of options for storage, and you will set them in the Data Storage tab. Where the collected data is stored is referred to as a target. Two of the commonly used targets are the ring_buffer, which is memory, and the file target.

Which brings us to the last tab, Advanced. This is where the Max_Dispatch_Latency property can be found. The max_dispatch_latency property is the maximum duration that an event, once captured, will reside in the buffer before being written to the target. This doesn't mean it will always take this long, just the maximum amount of time it will take. The default for this property is 30 seconds. For the most part, this is a good number and you should avoid changing it.

You would logically think that the minimum setting would be zero seconds. If you think that way, you are correct. However, 0 does not mean 0 seconds. When this is set to 0, the event will stay in the buffer until the buffer becomes full. This is the same as setting the "Unlimited" option you see below. Given this, the true minimum is one second.
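In T-SQL, the property is set in the WITH clause of the session definition. A minimal sketch, assuming a session named [MySession] that uses the ring_buffer target:

CREATE EVENT SESSION [MySession] ON SERVER
ADD EVENT sqlserver.database_created
ADD TARGET package0.ring_buffer
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);  -- 0 SECONDS would mean "unlimited"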

When I present my Extended Events session I usually use the Create Database event in my demo. Once I create the database, I go immediately to the target to see if the event has been written to it. That check takes just a few seconds, and I only see the event in the target about 50% of the time. This is because of the max_dispatch_latency.

Here is a really nice article by Jonathan Kehayias on the properties of an extended event session, click here.

Thanks for visiting my blog and I hope you learned something!

Capture Deadlock Graph with Extended Events

Over the past few years, I have presented a number of sessions on Extended Events.  As I prepare to present the topic a few more times in the coming weeks, I thought I would do a blog post on one particular event that has been very helpful to me when working with issues in a production environment.  That event is related to the Deadlock graph.

There are a number of ways you can capture this.  Profiler has been a very good and very effective way of capturing the Deadlock graph. However, starting with SQL Server 2008, there is now another option, Extended Events.  RedGate also puts out a great tool named SQL Monitor that can be used.

I am not going to get into the ins and outs of Extended Events in this post; I am simply going to cover how to use Extended Events to capture the deadlock graph. There are a total of five events that have the word "deadlock" in the event name.

Let’s take a moment and quickly go over each of these five events.

database_xml_deadlock_report – This will be the same as the xml_deadlock_report, with one noticeable exception.  The xml_deadlock_report event has an additional tab, while the database_xml_deadlock_report does not have this.  This tab is where we will find the graph.

xml_deadlock_report – This is very similar to the database_xml_deadlock_report with the addition of the deadlock graph.  To see the deadlock graph, simply click on the Deadlock tab.

lock_deadlock – “Occurs when an attempt to acquire a lock is canceled for the victim of a deadlock.”  This definition is from SQL Server. Of the events we have discussed so far, this event has the most data points.

 

lock_deadlock_chain – “Occurs when an attempt to acquire a lock generates a deadlock. This event is raised for each participant in the deadlock.” This definition is from SQL Server.

scheduler_monitor_deadlock_ring_buffer_recorded – “The session tracks other useful events which makes it easy to trace back the series of events which led to the deadlocked schedulers condition!” – From www.troubleshootingsql.com

Since this post is about the deadlock graph, we will focus on the xml_deadlock_report event. As mentioned before, this is the event you will need to capture the deadlock graph. There are two places to look: one is the "XML_report" column and the other is the graph itself. In the image below you will see that there are two places to look at the actual XML. If you do not see this column in the grid, you can right click on it in the details at the bottom and then click "Show column in table". The XML might be very challenging to read due to formatting. If you are having a difficult time reading it, this online formatting tool can be very helpful to get good formatting.
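If you want to capture this event yourself, here is a minimal sketch of a session definition; the session name and file path are assumptions for illustration.

CREATE EVENT SESSION [CaptureDeadlocks] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file (SET filename = N'C:\eefiles\deadlocks.xel')  -- hypothetical path
WITH (MAX_DISPATCH_LATENCY = 30 SECONDS);
GO
ALTER EVENT SESSION [CaptureDeadlocks] ON SERVER STATE = START;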

This is the actual XML that is used to build the graph. There is quite a bit of useful information in the XML. You can see the victim query as well as many other good data points. The victim process can be found in the "victim-list" node. In the image below, you will see two processes; the first one listed matches the victim process id. The UPDATE just below it is the T-SQL that was chosen as the victim.

 

While the deadlock graph initially looks very useful, it does have a significant limitation. The query with the blue X on it is the victim. If you hover your cursor over it, the query will appear in a context pop up.

If you left click and hold one of the shapes you can move it around. This can be useful when there are more shapes than you see above. Now on to the limitation: you can't zoom in. Without this, it can be difficult to see some of the details.

Here are just a few interesting deadlock graphs I have been able to capture.


While these are kind of interesting to look at, the scary part is what caused them. These were all caused by a single stored procedure. Now you have the graph. I think the fun is just beginning: figuring out what is causing this and finding a solution.

Thank you for stopping by my blog.

Events…No Events

Extended Events is a great way to capture information about what is happening on your server.  You can capture information about deadlocks, failed logins, wait stats and many more important performance data points.

One of the nice features of the Extended Event interface in SQL Server Management Studio is the searching feature that can be used when setting up a session.  In SQL Server 2017 there are over 1,500 events that can be utilized.  Searching through all those events can make it a challenge to find the event you are looking for.

Luckily Microsoft has given us some help.  If you look at the image below, you can see under the “Event Library” box the two letters “de”.  As expected, this returns any event name that has “de” in the name somewhere.  It could be at the beginning, middle or the end of the event name.

The events returned will also be based on two additional things: category and channel. Both of these options help to limit the number of events you will see in your results.

Category is simply a grouping of events. For example, in the database category you will find many events related to databases. These include the following:

        • Database Starting
        • Database attached
        • Database created
        • Many..many more

The Channel is also a grouping of events, but this grouping is based more on who might use the events and why. Below is a sampling of what you will find in the Analytic channel.

I have never really changed either one of these and I have been able to find every event I needed.
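If you ever want to search outside the GUI, the event library can also be queried with T-SQL. A quick sketch using the sys.dm_xe_objects DMV:

-- Find every installed event whose name contains a string, here "deadlock".
SELECT o.name, o.description
FROM sys.dm_xe_objects AS o
WHERE o.object_type = 'event'
  AND o.name LIKE '%deadlock%'
ORDER BY o.name;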

Now, what does it mean when I enter a value, for example “deadlock” into the Event Library search box and I get this:

Notice that no events are showing up. There are a few reasons why this might happen. The first thing to check is the Category and Channel to see if they are limiting the results.

Another reason is that all the events that contain that string are already in use. If you look at the above screenshot, you will see that there are five events related to deadlocks and they are all in the "Selected Events" pane.

Once they are no longer in use, you can see that all five events show up to select from again. This is a very short blog post, but hopefully you learned something.

Thanks for stopping by my blog!!!

Reading Extended Event Files for an Azure SQL Database in a Storage Account

Extended Events are a great way to find information about what is happening on your on-prem SQL Server.  It is also a great way to find out what is happening in your Azure SQL Database. Setting up Extended Events in an Azure SQL DB is very similar to doing so in an on-prem SQL Server.

Extended Events can be used to capture quite a bit of information about your SQL Server. With SQL Server 2017 there are over 1,500 events that can be captured. These events include anything from an alter column event to query execution plans. When setting up an Extended Events session you will have an option of where you would like to store the collected data. One of the common destinations for storing the event data is a file. This file is an XML based file and can be read in a number of ways.

Extended Events can be found under Management and is a server level item.

For an Azure SQL Database it has been moved.  It is now a database level item.

File Target Basics

When you use Extended Events you will need to provide a target type. A target is nothing more than a location that will store the event information. With both an on-prem instance of SQL Server and an Azure SQL Database you have several options to choose from. Below you will find the target options for both an Azure SQL Database, on the left, and an on-prem instance of SQL Server, on the right. As you can see, you have more options with the on-prem instance. While the focus of this post is how to read an Extended Events file, it is good to know the different options that are available.

When storing the data in a file target, the file will have an .xel extension. This file type is an XML based document type. Normally you can open an XML document using Notepad and it will still be in a readable format. That is not the case here. As you can see by the image below, there really isn't much that is readable in an XEL file.

Setting up an Azure Storage Account

Now that we have reviewed a bit about what Extended Events are and what the file target is, we need to go over how to create the actual target files. The first thing we need is a destination location. For an on-prem SQL Server this can be a local drive, a network share or a mapped SAN connection. These are not really the best options for an Azure SQL Database. This is where an Azure Storage Account is going to be our friend.

In the Azure portal, locate and click "Storage Accounts". This will bring you to the screen below.

When you click the “Add” button you will be taken to a page that will allow you to enter all the appropriate information to create the storage account.

As you can see by the above screenshot, you will be asked for much of the expected information. This includes the account name, subscription, resource group as well as location. When deciding on a location, it is important to choose carefully. Remember, picking a location that is different from where it will be used could lead to additional unexpected charges. Please double check with your Microsoft representative to make sure this does not happen.

While this post really isn’t about how to create an Azure storage account, I did want to at least provide some basics.  One thing I do want to point out, the Access tier you see at the bottom of the image above.

As you can see there are two options, Cool and Hot. Below you will find the definitions as stated in Microsoft’s documentation here.

    • Hot – Optimized for storing data that is accessed frequently.
    • Cool – Optimized for storing data that is infrequently accessed and stored for at least 30 days.

This MS site also gives a good, more detailed explanation on the differences between the two.

Setting up the File Target to an Azure Storage Account

Now that we have our storage account in Azure, we need to set everything up so it can be used as a target for the Extended Events session. With both the on-prem and Azure instances of SQL Server, there are multiple options when configuring an Extended Events target.

How To Read the Data

As you can see above, the file target really isn’t readable with Notepad or any other text editor.  With that, we need another tool to read the data we have collected. For an on-prem SQL Server, you have several options.  You can use the GUI in SQL Server Management Studio or you can use T-SQL.

When you use the GUI in SSMS, you navigate to the Extended Events session and simply right click on the file target. When you do, you will see an option to view target data.

When you click it, you will see something similar to what is in the image above. In this GUI, you can sort, group and even export the data. I really like this for analyzing the event data. However, this is not an option for an Azure SQL Database. When you right click on the target in an Azure SQL Database there is no option to view target data. Below you can see the results of right clicking on the target. Notice that the Azure SQL Database does not have an option to "View Target Data".

The other option is to use T-SQL. This works for both an Azure SQL Database and an on-prem instance of SQL Server. Luckily the code requires very little modification to move from using it on-prem to using it on an Azure SQL Database. Because T-SQL works in both, this post will focus on how to use T-SQL to extract and review your event data.

When reading from the XEL file in an Azure Storage account, there are a few prerequisite items that we need to briefly cover as well.

fn_xe_file_target_read_file

The XEL file is an XML based file so when reading the data you will need to parse through the XML. In order to do this, we can use the sys.fn_xe_file_target_read_file function. This function has a total of four arguments. Here is the link to Microsoft’s documentation on this function.

      • Path – this is the path to the XEL file(s)
      • mdpath – this is the metadata file path. Starting with SQL Server 2012, this is no longer needed and NULL can be used
      • initial_file_name – this specifies the first file to read from, if NULL all files will be read
      • initial_offset – per Microsoft documentation this is “Used to specify last offset read previously and skips all events up to the offset (inclusive)”

Below is an example of how the initial step in reading the XEL file might look.  Notice the four arguments and that the only one without NULL as a value is the first one.

SELECT event_data = CONVERT(xml, event_data)
INTO #eeTable
FROM sys.fn_xe_file_target_read_file(
    'C:\eefiles\efiles_0_132002087635590000.xel', null, null, null);

Hopefully you also noticed that the SELECT converts the event_data column to XML.  This is just the first step, later we will discuss how to parse the resulting XML into a more readable format.

Azure Storage Explorer

When reading from an Azure Storage account, instead of using the usual path to a drive or a UNC, you will use a URL instead.

SELECT event_data = CONVERT(xml, event_data)
FROM sys.fn_xe_file_target_read_file(
    'https://StorageAccountName.blob.core.windows.net/eefiles/efiles_0_132002087635590000.xel', null, null, null);

When first looking at this, you would expect that, given the proper credentials, you would not have any issues reading from the file. Well, you don't get an error, but you also do not get any data, even though the file you are trying to read from does indeed exist.

The reason for this is that you need to create a credential in the Azure SQL Database that the Extended Events session belongs to. This is where Azure Storage Explorer will come in handy.

When you create the credential, you will need the "Secret". This is a Shared Access Signature that will look something like this.

?st=2019-05-18T12%3A52%3A56Z&se=2019-05-19T12%3A52%3A56Z&sp=rl&sv=2018-03-28&sr=c&sig=rUriw3oYKU%2BtqK9Ji5oq6Ih34i0uHe34ldorjfnQgUM%3D

You can use Azure Storage Explorer to get this. Azure Storage Explorer can be downloaded here. It also has a great price, FREE!! Per Microsoft, Azure Storage Explorer offers the following features.

Once you have installed and opened Azure Storage Explorer, you will then need to connect to your Azure Storage Account.  You can do this by expanding the tree and right clicking on “Storage Accounts” and then clicking “Connect to Azure Storage…”

When you do, you will see the form below. As you can see, there are a number of methods you can use to connect to your Storage Account.

For this post, I used the "Use a connection string" option. In order to get the connection string you will need to navigate to the Azure Portal. Once there, proceed to your storage account and click "Access Keys".

Once you can see the connection string, simply click the "Copy" button to the right of the string. As expected, this will place the connection string on your clipboard.

When you paste the connection string into the above form, Azure will automatically populate the Display Name box.  Then you click “Next” where you will then see a summary form.  At this point, you will review and click “Connect”.

You will need to create a Blob Container. This is pretty simple, just right click on "Blob Containers" and go to Create.

Now we have everything we need to get the last piece of information required to utilize T-SQL to query the Extended Events .xel file. This last bit of information is the Shared Access Signature. You will need it to create the required Azure SQL Database credential.

Once you are connected to your storage account, navigate to the blob container and click on it. When you do, you will see "Actions" and "Properties" toward the bottom. Click "Actions" and then click "Get Shared Access Signature…".

As you can see, the form that pops up has a number of options. Given that this post is about how to read from the files, we will not change anything here. At this point, click "Create". The following form will then appear. Notice there are three data points.

      • Container
      • URL – You will need this, well at least part of it
      • Query String: You will also need this to read the files

Now that we have completed the prerequisites to utilize an Azure Storage account to store the Extended Events files, we are ready to begin working on the T-SQL code.

These prerequisites included the following.

      • Created an Azure Storage account
      • Created a blob container
      • Downloaded and installed Azure Storage Explorer
      • Connected the Storage Explorer to the Azure Storage Account
      • Created the Shared Access Signature

Create Credential

The first step when using T-SQL to read Extended Events files that are stored in an Azure Storage Account is to create a database credential. The credential provides the essential security information needed to connect to the Azure Storage Account. The first data point you will need is the URL to a blob storage container in your storage account. If you look below, you can see where you would place your storage account name and the blob storage container name.

CREATE DATABASE SCOPED CREDENTIAL [https://StorageAccount.blob.core.windows.net/blobstorage]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE'
, SECRET = 'st=2019-04-19T22%3A15%3A13Z&se=2020-04-20T22%3A15%3rtieo34=rwdl&sv=2018-03-28&sr=c&sig=ubWCJk3RvUrkeitpe,wdig3h56F5CM%3D'
GO

Because this is a database level credential, it is important to execute the code in the context of the database from which you will be reading the files.

The "WITH IDENTITY" will always be "SHARED ACCESS SIGNATURE" for the type of connection we are going to use. Finally we get to the SECRET; this is the Shared Access Signature string we created earlier.

When you first create the string it will look something like this. It will be much longer than what is here. I removed many of the characters for security reasons.

?st=2019-04-19T22%3A15%3A13Z&se=2020-………..

Prior to using this string, you will need to make one minor adjustment: you need to remove the question mark from the beginning. You can keep it when you create the credential, and in fact the credential will be created. However, it will not work. When you attempt to read the file, no error will be generated, you just won't see any results.

If you find that you need to delete the credential, you can use the code below with a few adjustments. Those adjustments are the storage account name and the blob container name in the URL.

DROP DATABASE SCOPED CREDENTIAL [https://storageAccountName.blob.core.windows.net/BlobContainerName]

If you attempt to read the data from the Extended Events files and the credential does not exist, you will see the error below. Given this, it is vital that the credential gets created in the proper database.

Msg 262, Level 14, State 1, Line 1
CONTROL permission denied in database 'master'.
Msg 297, Level 16, State 1, Line 1
The user does not have permission to perform this action.

Now that the database credential exists, you can now run the code below to query the data.

SELECT event_data = CONVERT(xml, event_data)
INTO #eeTable
FROM sys.fn_xe_file_target_read_file(
    -- TODO: Fill in the Storage Account name and the associated Container name.
    'https://StorageAccountname.blob.core.windows.net/BlobContainerName/FileName.xel',
    null, null, null
);

SELECT
    ts       = event_data.value(N'(event/@timestamp)[1]', N'datetime'),
    [sql]    = event_data.value(N'(event/action[@name="sql_text"]/value)[1]', N'nvarchar(max)'),
    duration = event_data.value(N'(event/data[@name="duration"]/value)[1]', N'nvarchar(max)'),
    spid     = event_data.value(N'(event/action[@name="session_id"]/value)[1]', N'int'),
    user_nm  = event_data.value(N'(event/action[@name="username"]/value)[1]', N'nvarchar(max)'),
    dbname   = event_data.value(N'(event/action[@name="database_name"]/value)[1]', N'nvarchar(max)'),
    explan   = event_data.value(N'(event/data[@name="showplan_xml"]/value)[1]', N'nvarchar(max)')
FROM #eeTable
ORDER BY ts;

DROP TABLE #eeTable;

This code does a number of things.  First of all it reads the file and converts the results to a single XML data row for each row in the file.

Then after dumping the results into a temporary table, the code then parses the XML and returns it in a more readable relational table type format.  When you look at the image below you will see two results.  The top set, in the red box, is the converted XML.  The second set, in the blue box is the XML parsed to the more traditional columns we are used to working with.

Now, once the data is in columns, we can do whatever T-SQL will allow us to do. For example, insert the results into a disk based table for analysis at a later date.

One thing I would like to point out is the two bits of code below.

When the column name is taken from the Global Fields tab in the Extended Event session you will use “event/action” followed by the column name.

event/action[@name="sql_text"]

When the column name is taken from the Event Fields tab in the Extended Events sessions, you will use “event/data” followed by the column name.

event/data[@name="duration"]

One thing to keep in mind: if you spell the column name incorrectly, it will not throw an error. It will simply return NULL in every row for that column.
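For example, if "duration" were misspelled, the query would run without complaint and the column would simply come back as NULL. A hypothetical sketch, assuming the #eeTable temporary table from the script above has not yet been dropped:

SELECT
    duration = event_data.value(N'(event/data[@name="durations"]/value)[1]', N'nvarchar(max)')  -- misspelled on purpose
FROM #eeTable;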

Question…

As I was writing this I realized that there are a lot of steps. The question that came to my mind was whether there is a way with fewer steps. The answer to that question is, there is.

If you right click on the file in Azure Storage Explorer, you can then click Open. When you do, you will see the file downloading as shown in the red box below.

Once it has completed downloading and is ready to open, you may see this warning.

If you do, proceed with caution. I believe you see this because of the file format. However, again, please proceed with caution.

When you click “Yes”, SQL Server Management Studio will then open and display the data as below.

Once you have the data in SSMS, all the functionality that comes with it is now available.  This includes the ability to export the data to a table, group the data, apply filters and sort the data as well.

If I were just reading the data, this is the way to go. However, if I needed to automate the collection of the data, I would need to use the T-SQL method mentioned in most of this post.

Another thing to keep in mind is the amount of data you will be pulling out of your Azure Storage Account. As you can see in the image below, the Extended Events files can get quite large.

In order to avoid large bills on your Azure subscription, it is important to understand the impact of opening large files.  It can get expensive if not managed carefully.

Hopefully you are still here……it was a long post.  By far the longest I have written.  If you are still here….thank you for stopping by my blog!!!

 

Azure SQL DB and @@version

Many of us are used to using @@version to get the version of SQL Server we are using.  This also works when you are utilizing an Azure SQL database, although the results are slightly different. However, before getting to @@version, I would like to go over a few other methods to obtain version information.

How to Get Version

There are many reasons why it is important to know the version of a SQL Server.  The reasons include everything from licensing to upgrades.  In this section, I will cover a few of the common methods used to get the version of SQL Server.

With an on-prem instance of SQL Server one method that can be used to get the version is to right click on the server in SQL Server Management Studio.  Once you do, you can click on the properties menu item and the window below will appear.  As you can see, in the boxes you are able to see both the version and edition of the SQL Server.

However, when you attempt to do this in an instance of Azure SQL database, the properties menu item is not available.  Below is what you will see when you right click on the instance of an Azure SQL database in SQL Server Management Studio.

Another method to get the version of SQL Server is to utilize the SERVERPROPERTY function.  The SERVERPROPERTY function can be used to view server level properties, in this case the product version.

SELECT SERVERPROPERTY('ProductVersion');

When you execute the above statement you will then see the version. It will look like the number below.  The number represents the version, in this case, SQL Server 2017.

14.0.1000.169

As mentioned above, the ServerProperty function can be used to return a number of bits of information about the server.  Here is a link for more detailed information, click here.
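A few of the commonly documented properties can be combined in one query. A short sketch:

SELECT
    SERVERPROPERTY('ProductVersion') AS ProductVersion,
    SERVERPROPERTY('ProductLevel')   AS ProductLevel,   -- RTM, SP1, and so on
    SERVERPROPERTY('Edition')        AS Edition,
    SERVERPROPERTY('EngineEdition')  AS EngineEdition;  -- 5 indicates Azure SQL Database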

The final method I would like to talk about is using @@version.  This is a global variable in SQL Server that will return not only the version of SQL Server, but also the OS version.

When reading from this global variable, you include it in a SELECT statement as illustrated below.

SELECT @@Version

The results will vary from an on-prem instance of SQL Server to an Azure SQL Database.

This is what you will see for an on-prem instance.

Here is the complete text of the results.

“Microsoft SQL Server 2017 (RTM) – 14.0.1000.169 (X64) Aug 22 2017 17:04:49 Copyright (C) 2017 Microsoft Corporation Developer Edition (64-bit) on Windows 10 Home 10.0 <X64> (Build 17134: ) “

As you can see, it returns some good information, including the version of SQL Server, the service pack status, 32-bit or 64-bit and the OS version.

However, the results change quite a bit when this is run against an Azure SQL database.

The complete text is just as you see it above.  Notice the lack of OS version.

What is next?

Now that we have covered a number of ways to get the version information, let’s take a look at some of the differences from an on-prem instance to an Azure SQL database.

When looking at the results, it is easy to identify an instance as an Azure instance; in just the first few words, it clearly states that it is an Azure instance.

If you look at the results below you will see that I used both @@version and ServerProperty to get the information I am looking for.  You will also notice that I included a simple query using the sys.databases object. This will return the compatibility level for each database, which as you will see is going to be important.

Looking above you will see that it is an Azure instance, has a product version of 12.0.2000.8 and both databases have a compatibility level of 140. The compatibility level of 140 is SQL Server 2017.

What….12.0? Isn't that SQL Server 2014? If you said yes, you are correct. Is it really possible that Azure SQL Database is SQL Server 2014? The answer is no. That number is different than for an on-prem instance of SQL Server. According to what I have read, version 12.0 is the most current version for Azure SQL Database. Given that both the Azure instance and SQL Server 2014 have a product version of 12.0, it comes down to the compatibility level of the databases in Azure.

As you can see, both databases have a compatibility level of 140, which is SQL Server 2017, not 2014. Since I didn't change this when I created the databases, this tells me that the Azure SQL Database instance was on SQL Server 2017 when I created them. As with an on-prem instance of SQL Server, the compatibility level for an Azure SQL database can be changed. When I created a new Azure SQL Database, the compatibility level was 140, which is SQL Server 2017.

To change the compatibility level of a database you have to run an Alter Database command.

ALTER DATABASE Advent
SET COMPATIBILITY_LEVEL = 160

When I ran the above code, I got an error.

Msg 15048, Level 16, State 1, Line 1
Valid values of the database compatibility level are 100, 110, 120, 130, 140 or 150.

Notice it states that 150 is an option, while 160 is not. Compatibility level 150 is SQL Server 2019, the latest version at the time of writing. The available options go back quite a bit, to version 100, which is SQL Server 2008.
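Re-running the statement with one of the valid values works. A sketch, assuming the same Advent database:

ALTER DATABASE Advent
SET COMPATIBILITY_LEVEL = 150;  -- SQL Server 2019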

Since for an Azure SQL Database the compatibility level of the database is key when investigating versions, below you will find the code to retrieve the compatibility levels of all databases.

SELECT name, compatibility_level FROM sys.databases;

All the above code works on both an Azure SQL database and an on-prem SQL Server.

Notice that the Advent database has a compatibility level of 150, SQL Server 2019.  When investigating version information, it important to also review the compatibility level of the database you are looking at as well as the server version.

Thanks for stopping by, I hope you learned something.