Table Variable and Rollback

I am currently at PASS Summit in Seattle and learning a ton of new stuff.  There are a lot of things that I have learned so far, but one thing really jumped out at me.  I attended Brent Ozar's session on Cardinality Estimates and he showed that when you use table variables, the changes do not get rolled back with the transaction. That is what is great about SQL Server, there is always something to learn!!

If you run this code, you will see that when the SELECT runs, it actually does not return any rows.

CREATE TABLE #test
(Name VARCHAR(100))

BEGIN TRAN
    INSERT #test
    (Name)
    VALUES
    ('Dave'),
    ('Raf')

ROLLBACK

SELECT * FROM #test

DROP TABLE #test

However, when you run this code, you will see that the rows are still in the table variable even though the ROLLBACK command has completed.

DECLARE @TestV TABLE
(Name VARCHAR(100))

BEGIN TRAN
    INSERT @TestV
    (Name)
    VALUES
    ('Dave'),
    ('Raf')

ROLLBACK

SELECT * FROM @TestV

This is another reason why table variables should be used with caution. They also do not get statistics on their columns, which of course can be a performance killer!!

SSMS Options – Part 2

In SSMS Options – Part 1, I went over several options, including the color theme, track changes, line numbers and how to get to the Options screen.  This post is a continuation of that one.  We will cover a few more options that could be helpful to you when using SQL Server Management Studio.

This post will cover these options:

  • Select\Edit
  • Surround object names with brackets
  • Automatically populate Find What with text from the Editor
  • Show Errors

SELECT\EDIT

If you right click on a table in the Object Explorer, you will see a number of options, as seen below.  I would like to show how to use Options to change two of these: Select Top 1000 Rows and Edit Top 200 Rows.

These two options do exactly as they say: select rows, or return rows that you can edit.  Although these options are here, I still prefer and recommend that you write the needed DML statements yourself instead.

When you use the "Select" option you will see something similar to what is below.  SSMS writes the DML statement for you and includes a comment at the top.  The results will be returned to you in a grid format.  However, it will return only the number of rows indicated, in this case 1000 rows.  You cannot edit these rows in SSMS without writing the proper DML statement.

However, if you click the "Edit…" menu item, you can edit the data in the grid by clicking in the cell you would like to edit.  When you do this, you will see the grid below and can begin editing data.  Although, as I stated before, I recommend that you use the proper TSQL DML statement to complete data modifications.

When you do start editing you will see a small pencil just to the left of the first column.  Once you move to another record or close the table, the record will be saved.

Again, I would recommend that you use the proper DML statements to modify your data.  This looks like it might be a nice shortcut…but I wouldn't do it, because the modification can fail.

The question you may have is, how do I change these values?  You will find the answer below.  Change these to what you feel is appropriate.  Notice that both settings use the TOP keyword, yet we cannot specify a column to order by.


Surround object names with brackets

As you may know, when you drag an object name or a column name from the Object Explorer to the query window, SSMS will drop the name in for you.  This setting determines whether SSMS places brackets around the name or not.


This option can be found just above the two settings talked about earlier.  The value can be either True or False.

As expected, the results are below.

Automatically populate Find What with text from the Editor

This setting actually can make our lives just a little bit easier.  When this is turned on, SSMS will populate the Find What box with what is selected in the text editor.  You can find this setting in the location below.

If you click "Find and Replace" under the Edit menu, the box highlighted below will appear.  In the box will be whatever you have selected in the text editor, as seen below.

Show Errors

This option can make it easier to locate the errors in your code.  First of all, it can be found under "Text Editor".

When this is enabled, you will see a red indicator near the right scroll bar when there is an error.  Notice that it is actually on the line below the one that contains the error.

That is if there is one error.  Well, what happens if there are two or more errors?  The statement below contains two errors; if you notice, the error indicator is on the line below the first error.

When the Show Errors option is turned off, you will not see it, just like in the image below.

Thanks for stopping by my blog!

Optimize for Ad Hoc Workloads

Optimize for Ad Hoc Workloads is one of those server level settings that is not changed very often, but is still good to know about. Before we get into the details, let's talk about what it is. SQL Server reserves a portion of memory for the plan cache. The Optimize for Ad Hoc Workloads setting controls what SQL Server places into this plan cache for single use queries. When it is turned off, every single use query will have its entire plan cached, consuming more space. By turning it on, you are asking SQL Server to not store the entire plan when the query is executed the first time; SQL Server will store only a plan stub instead, which consumes less memory than the full plan would. Something to keep in mind: the next time that the query is executed, SQL Server will flush the stub from the cache and replace it with the full plan.

There are two methods you can use to determine if this setting is on or off. The first is SQL Server Management Studio (SSMS). In the Object Explorer, right click on the server and then click on Properties.  The default is False, meaning that the entire plan will be placed in cache when a query is compiled and executed.

As with many things in SQL Server, you can also use TSQL to get this information. You will need to query sys.configurations to get this setting via TSQL.

SELECT name
, value
, description
FROM sys.configurations
WHERE name = 'optimize for ad hoc workloads'

Here is the result of the above query.
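If you decide to change this setting without using the server properties window, you can also do it with TSQL via sp_configure. Here is a minimal sketch; note that it is an advanced option, so "show advanced options" has to be enabled first.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Turn Optimize for Ad Hoc Workloads on (use 0 to turn it back off)
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;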

How to determine if there are a lot of Single use Queries in Cache

I think there are two measures we need to understand when it comes to finding out what is consuming the cache. One is the amount of space being consumed by Ad Hoc queries and the other is the number of queries that are considered to be Ad Hoc queries.

Below is the Microsoft description of the dm_exec_cached_plans DMV.  This DMV can be used to capture the amount of space consumed by Ad Hoc queries.

“Returns a row for each query plan that is cached by SQL Server for faster query execution. You can use this dynamic management view to find cached query plans, cached query text, the amount of memory taken by cached plans, and the reuse count of the cached plans.”  – Microsoft

The code below uses the dm_exec_cached_plans to find out how much space in cache is being consumed by Ad Hoc queries.

From Pinal Dave:

SELECT AdHoc_Plan_MB
, Total_Cache_MB
, AdHoc_Plan_MB * 100.0 / Total_Cache_MB AS 'AdHoc %'
FROM (
    SELECT SUM(CASE
                   WHEN objtype = 'adhoc'
                   THEN CONVERT(BIGINT, size_in_bytes)
                   ELSE 0
               END) / 1048576.0 AS AdHoc_Plan_MB
    , SUM(CONVERT(BIGINT, size_in_bytes)) / 1048576.0 AS Total_Cache_MB
    FROM sys.dm_exec_cached_plans) T

Below you will see the results of the above query.  Note that only 2.8% of the cache is being consumed by Ad Hoc queries.

So now let's see how many Ad Hoc query plans are currently in cache.  The query below uses a few additional functions, sys.dm_exec_sql_text and sys.dm_exec_query_plan, both of which accept the plan handle as an argument.

SELECT SUM(c.usecounts)
, c.objtype
FROM sys.dm_exec_cached_plans AS c
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS t
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS q
GROUP BY c.objtype

The results are below.

By looking at these two result sets you can see that although there are 2,542 Ad Hoc query plans in cache, they are consuming a small percentage of the total cache, 2.8%.

So what does this mean?  Well, although there are a lot of Ad Hoc queries, if the server is suffering from pressure in memory or in the plan cache, it is probably not due to the Ad Hoc queries.

How do I get the Query Plan

There is a post at www.sqlshack.com that gives a really good explanation of how to get the query plan.

This code is from the above link.  It not only tells us the query text and the plan, but also gives us the number of times that plan was used.

SELECT cplan.usecounts
, cplan.objtype
, qtext.text
, qplan.query_plan
FROM sys.dm_exec_cached_plans AS cplan
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS qtext
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS qplan
ORDER BY cplan.usecounts DESC
Below is the result set from the above query.  With this, I can review the query and the execution plan for the Ad Hoc query.

Clear Cache

Of course, restarting the SQL Server service will clear the cache; however, there is also a method we can use to clear the cache without restarting.  We can use the code below.

DBCC FREEPROCCACHE

When the cache is cleared, whether by a restart or by using the above command, you should let the server run for a bit to let the cache get reloaded with plans.
Does Case Matter?

When talking about the plan cache, case does matter; however, two queries that differ only in case will both result in the same execution plan.

Using the AdventureWorks2014 database, when you run the following query you will see the plan in cache.

SELECT *
FROM [AdventureWorks2014].[Person].[Person]
Where BusinessEntityID = 3

The plan is below in the green box.  If you look at the other queries, you will also see the same query with one small difference: the "w" in Where is lower case.  Even so, both queries will use the same execution plan.  The reason this happens is because SQL Server creates a hash of the query text, and if the hash value exists in the cache, it will use that plan.  However, because the "w" is a different case, the hash will also be different.
Let's See it at Work

This is just a simple demo to show that when Optimize for Ad Hoc Workloads is turned on, the plan does not appear in the plan cache with the first execution, only the second.

The first thing is to go to the properties of the server and make sure that Optimize for Ad Hoc Workloads is set to False to start.  Once that is done, flush the cache either by restarting the service or by using DBCC FREEPROCCACHE.

Once that is done, run this code to confirm that there are no plans in cache for the query we are about to run.

The query we are going to use is in the AdventureWorks2014 database:

SELECT * FROM Production.Product

Here is the code to check if there is already a plan in cache for this query.  Make sure to change the database ID; it is 8 on the system I am using.

SELECT c.usecounts
, c.objtype
, t.text
, q.query_plan
FROM sys.dm_exec_cached_plans AS c
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS t
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS q
WHERE t.text LIKE '%select% *%' AND q.dbid = 8
ORDER BY c.usecounts DESC

Once you run the above code, you will see something similar to the image below.  Notice that there is no entry for our query of the Product table.

Now run the Product table query.  Once you do, you will see something like below.  As you can see, not only is there a record for our query, but a plan as well.  This is because Optimize for Ad Hoc Workloads is turned off.

Now, set Optimize for Ad Hoc Workloads to True, flush the cache and let's see what happens.  Sure enough, no plan, just as before.

Run the following query:

SELECT * FROM Production.Product

then the query to get the plans.  When you do, you will see the record for the execution, but no plan.

Run the query for the Product table a second time.  Now you will see the plan is also in cache.  This is because Optimize for Ad Hoc Workloads, when set to True, only stores a stub in the cache on the first execution of a query, then the entire plan on the second execution.

As mentioned earlier in this post, spaces and case do make a difference when SQL Server queries the plan cache.  Notice below that there are two rows for what appears to be the same query.  However, if you look closer, you will see that the second query actually has an additional space between FROM and the table name.  This results in a different hash, which in turn results in a different entry in the plan cache.
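For example, something as small as one extra space is enough to produce two separate entries; a quick sketch you can try yourself:

-- These two statements are logically identical, but the text differs,
-- so each gets its own entry in the plan cache.
SELECT * FROM Production.Product

SELECT * FROM  Production.Product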
Given all the information, should I turn this setting on or off?

Well, that is a good question.  In my opinion, unless you have evidence that there is pressure on the cache, I would leave it at the default setting.  I have never actually had a need to change this setting, but am open to it if there is evidence it would benefit the overall performance of the server.  So let's take a look at what others say.

"I can't really give you a situation that I would expect to see where this shouldn't be enabled honestly. There are theoretical examples out there – maybe a series of queries – lots of them – with reusable plans executed only twice. Or something like that. I answer a lot of questions on DBA.StackExchange – it's a great site in a great network and the consensus there seems to generally be that it almost always makes sense in this answer. When I do a SQL Server Health assessment? I want to see that enabled."

Kendra Little says this on her blog:

“Arguing about this setting is a lot like arguing about whether or not it’s better to put cream in your coffee: if the coffee is decent, it probably tastes fine either way.

My general preference is to not turn on a setting unless I have a good reason to believe that it will make a positive difference, so I don’t turn this on by default. Instead, I’d rather monitor the number of single use plans in cache, and investigate and act accordingly if that number starts spiking upwards.

But admittedly, that’s being fussy: if I had a flock of 1,000 SQL Servers to manage myself and I knew they were all patched to recent supported versions, I’d probably enable it on them all and I wouldn’t feel a bit bad about it, because I wouldn’t have the bandwidth to do it the very best, artisanal, hand-crafted way.”

Pinal Dave states this on his blog:

“Based on the result you can make your own conclusion and change your settings. I, personally, prefer to turn Optimize for Ad Hoc Workloads settings on when I see AdHoc Percentages between 20-30%. Please note that this value gets reset when you restart SQL Server services. Hence, before you change the settings, make sure that your server is up for a quite a few days.

If you are not sure, what to do, just reach out to me with the result and I will be happy to help you.

One final note, This is just one settings, which contributes to your SQL Server Performance”

I tend to be more in line with what Pinal Dave is saying, with the understanding that this is only one setting; however, all three make very valid points.  You need to look at the big picture and decide for yourself what the best setting is for your systems.  There are a number of other SQL Server settings that could impact performance, and you should review them all before making a change.

Thanks for stopping by my blog….

SSMS Options – Part 1

We all use SQL Server Management Studio on a daily basis, although many of the options you will more than likely never change at all.  There are a few that would be good to change to help enhance our experience in SSMS.  This post, as well as a few future posts, will go over a number of these options.  The thing to remember when changing options in SSMS is that some changes will require a restart of SSMS.  I will point these out as I cover the various options.

Obviously, the logical starting point is to go over how we get to the Options form.  In SSMS you go to the "Tools" menu, under which you will see the "Options…" menu.

Once you click Options, you will see this.

There are many options available that can be changed.  These options will not only change the way SSMS behaves, they can also change how it looks.

When you first look, on the left you will see a tree structure.  If you see an arrow to the left of a menu item, there are more options underneath that you can see when you click the menu item.  You can see this in the image below.

As you can see, there are a ton of options.  If you are not sure of the exact name, you can search the options.  In the upper left hand corner of the Options window, you can enter the words you are looking for and SSMS will return all the options that have that word.


Color Theme

This can be used to change the overall appearance of SSMS.  With this option you have two choices, Blue and Light.  Blue is the default.  This does not require a restart of SSMS.

Blue

This gives a darker look to SSMS. The borders are blue and the headers are blue as well.

Light

As the name would imply, this turns SSMS into a light colored theme.


Track Changes

This option is used to track any changes you have made to the TSQL you are working with.  You can find this option under “Text Editor”.  Changing this does not require a restart of SSMS and is on by default.

When this option is turned on, you will see a yellow bar on the left side for every line you modify.  Once you save the file, it will turn to green.  Like the two images below.

Line Number

This option is off by default.  Some folks like to turn this on, some prefer it off.  Personally I usually have it off.

When this option is turned on, you will see the line numbers on the left side.  When you copy the code, it does NOT copy the line numbers as well.



In order to keep the post kind of short, I will do another post on additional options that you might want to learn about.

Thanks for stopping by my blog and hope you learned something.

And we wait!!

Since I am at the airport waiting for a very delayed flight to take off, I thought it would be appropriate to do a post on…WAITFOR!!

As expected, this statement can be used to pause the execution of a batch.  It can also be used to have a batch wait until a specific time to continue the execution.

There are really two options, DELAY and TIME.

DELAY

The DELAY argument is used to pause the execution of the batch.  You can force a batch to wait for up to 24 hours.  This argument uses the "time to pass" value to determine how long to wait; this really means how much time has to pass before execution of the batch can continue. You cannot pass a date into this argument; it must be a time in the following format: hh:mm:ss.mss.  The time passed must not exceed 24 hours.  If the time does exceed 24 hours, you will get the following error.

If you put WAITFOR DELAY '23:59:59', this will run fine; however, if you put WAITFOR DELAY '24:00:00', it will fail with the same error as above.
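If you want to see DELAY in action, here is a minimal sketch that pauses a batch for ten seconds between two statements:

-- The second SELECT runs ten seconds after the first
SELECT GETDATE() AS StartTime

WAITFOR DELAY '00:00:10'

SELECT GETDATE() AS EndTime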

TIME

The TIME argument is used to have a batch pause and continue executing at a specific time.  You will hear this referred to as "time_to_execute". This argument has the same 24 hour limitation as the DELAY argument.  If you attempt to run WAITFOR TIME '25:48:00', you will receive the same error as above.

In the code below, the batch will execute the first SELECT and then wait until 1:48 AM to execute the second SELECT.

SELECT *
FROM [Person].[Person]

WAITFOR TIME '1:48:00'

SELECT *
FROM [Person].[Person]

GO

When specifying a time, you can use 24 hour time, WAITFOR TIME '13:48:00', or you can use AM/PM, WAITFOR TIME '1:48:00 PM'.

A point of caution if using WAITFOR in production code: make sure that by using it you are not increasing the timeouts in the application.  You may need to adjust the timeout setting of the application.  Also use a great deal of caution if using this in a transaction… it can make for a bad day!!

Keep in mind, the time specified starts when the thread is scheduled.  If there is a delay in scheduling, it may actually wait longer than expected. This can happen on really busy servers.

The statement below is also important to remember if you decide to use WAITFOR a lot.

“SQL Server monitors the number of threads associated with WAITFOR statements, and randomly selects some of these threads to exit if the server starts to experience thread starvation.” – Microsoft

Per Microsoft, you need to be careful using this because you can potentially increase the number of deadlocks.

Here is a link to Microsoft’s page on this topic.

Here is a very good video by Bert Wagner. I like that he has a stored procedure named BuyDonuts!!!

Thanks for visiting my blog.

SSMS Options – Change Font Size

This is the first of a series of posts going over the properties of SSMS that can be changed.  I will post these from time to time, often right after I have needed to use one of them.

To access the Options, you have to go to the Tools menu in SSMS and then click on Options.

When you do this, you will see the screen below. As you can see, there are a ton of options that you can change.  Some are for the Database Engine while some are for SQL Server Analysis Services.  Some are even for Azure or AlwaysOn. For this post we will talk about how to change the font size.

Let's start with changing the font used when you write queries.  Under the Environment group, go to "Fonts and Colors" and then simply change the font size.  When you make this change, it takes effect immediately; there is no need to restart SSMS. As you can see, you can also change the font color and the background color.

Although the focus of this post is the Text Editor, you can also use this to change the font in many different aspects of SSMS.

When changing settings, you must be aware that some settings will require that you restart SSMS. For example, changing the Grid Results will require a restart.  You will see the message below.

As I said before, you can change the font size of many aspects of SSMS, including the result set.  Below is how you can change the size of the Grid font. Simply change the “Show Settings for” to “Grid Results”.  Then change the fonts as you see fit.

This is what the results grid looks like with the default font.

Now let’s change the size of the font to 16. As you can see below, the font is larger and easier to read.

Here is an example of what you will see if you change the font for Extended Events.  This is so much easier for me to read.  This is just an example of the far reach of the Options menu in SSMS.

Font Colors

When writing TSQL, there are many font colors you will see.

Keywords are blue, strings are red, object names are black and comments are green.  You can change these if you like.  Below is an example of the default font colors.

If you decide you would like to change the colors of fonts, you can do so by selecting the proper word type, inside the green box below, and then modifying the settings.  This screen also gives you an example of what the text will look like.

I change the font size when I am presenting so attendees in the back of the room can see the code better.   I like this option because you can adjust your SSMS environment so it works for you!

Thanks for stopping by my blog!!


Anatomy of a Select Statement Part 3

Welcome to part 3 of this series.  The first part covered the order of execution of the commands, while the second part went in depth about the SELECT command. This post will cover the FROM clause.  Most people would think that you list tables in the FROM clause.  Yes, that is correct, however you can do so much more.

What can be listed in the FROM

The first step is to list what you can use in the FROM clause. The list includes the items below.

  • Table
  • View
  • Temporary Table
  • Table Variable
  • Derived table
  • Table valued functions

We will cover each of these.

Table and View

Tables and views are probably the most common objects used in the FROM clause, although I really don't have any exact statistics on that. Below is an example of a simple query.

SELECT TOP (1000) [ProductID]
,[Name]
,[ProductNumber]
,[MakeFlag]
,[FinishedGoodsFlag]
,[Color]
,[SafetyStockLevel]
,[ReorderPoint]
,[StandardCost]
,[ListPrice]
FROM [AdventureWorks2014].[Production].[Product]

In the FROM clause you will see the table or view.  Notice that not only is the table name listed, but the database and schema are also specified.  Those two are not required, but it might not be a bad idea to start listing the schema as well as the table name.

By looking at this statement, we can only speculate that the object listed is a table.  If you use a view instead, it will look the same.  Really the only way to tell if the object is a table or a view is to simply know what type of object it is.  You can look in the Object Explorer in SSMS and look for the object.  Sometimes the naming convention can also help you make this determination.

Temporary Table

Temporary tables are created in the TempDB system database.  A temporary table can be identified by the start of the table name.  The name will start with either # or ##.

Phil Factor on Red-Gate’s web site gives a great description of when we should use temporary tables. Here is the link: RedGate

"They are used most often to provide workspace for the intermediate results when processing data within a batch or procedure." – Phil Factor

When a temporary table name starts with #, it is a "local" table.  This means that the only process that can see and use the temporary table is the one that created it. Global temporary tables start with ##; they are not as secure because they can be used by other processes.

Since a local temporary table is used only by the process that created it, it will automatically be dropped when the connection that created it ends.  Global temporary tables can persist a bit longer because they remain until all processes that are using them are terminated.  This post isn't an in depth discussion of temporary tables; you can go here to get more information: TempTables.

Prior to using a temporary table, it must be created.  You can use the CREATE TABLE statement or use SELECT…INTO.

-- Create the table
CREATE TABLE #test
(ProductName VARCHAR(50)
, Price MONEY)

-- Populate the table
INSERT #test
SELECT Name
, ListPrice
FROM Production.Product

-- Use the table
SELECT *
FROM #test

In the code above you will see three distinct sections.  The first is to create the temporary table, the second is to populate the table, and the third and last step is to use the table.

If you attempt to query the table and it is out of scope you will get the following error.

You receive this error because a local temporary table is only available to the session that created it.  That is not the case with a global temporary table, one that starts with ## rather than #.

However, if you change from a local to a global temporary table, you will not receive the above error.  You will actually get the results you are hoping for. If you look at the image below, you will see that the table was created with SPID 51 and used with SPID 60.  Also note that even though it is a valid table, SSMS is still indicating that it is an invalid object in SPID 60.  Of course that is not the case; it works just fine.
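If you want to try this yourself, here is a minimal sketch; create and populate the global temporary table in one query window, then select from it in a second window:

-- Session 1: create and populate a global temporary table
CREATE TABLE ##test
(ProductName VARCHAR(50)
, Price MONEY)

INSERT ##test
SELECT Name
, ListPrice
FROM Production.Product

-- Session 2 (a different query window): this works because the table is global
SELECT *
FROM ##test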

As you can see, temporary tables are used in pretty much the same way as user tables in a database. You just need to be aware of the scope of the table, is it local or global.

Table Variable

A table variable follows the same steps for creation and use.  Instead of using the CREATE TABLE statement, when using a table variable, you use the DECLARE statement to declare the variable as a TABLE datatype.

The scope of a table variable is the batch.  If you create a temporary table, you can use it for the duration of the session, while a table variable lasts only for the duration of the batch itself.
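Here is the table variable equivalent of the temporary table example above; note that all three steps must run as one batch:

-- Declare the table variable
DECLARE @test TABLE
(ProductName VARCHAR(50)
, Price MONEY)

-- Populate the table variable
INSERT @test
SELECT Name
, ListPrice
FROM Production.Product

-- Use the table variable (same batch as the DECLARE)
SELECT *
FROM @test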

There are some differences on how SQL Server processes temporary tables and table variables.  That is not the purpose of this post.  You can go here to find more information regarding the differences.

Derived table

When I teach the Microsoft SQL Server certification course on TSQL, derived tables are in the same chapter as subqueries. A derived table is really not much more than a SELECT statement in the FROM clause instead of a table name.  As you can see from the code below, the derived table is highlighted.  The derived table has an alias of "v" and is enclosed in parentheses.
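Since that code is in an image, here is a sketch of what such a query might look like; the inner SELECT, aliased as "v", is the derived table:

SELECT v.Name
, v.ListPrice
FROM (SELECT Name
      , ListPrice
      FROM Production.Product
      WHERE ListPrice > 0) AS v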

Table Valued Function

Like temporary tables and table variables, before using a table valued function (TVF), the function must be created.

The code below shows how to create the function.  This function returns a table. This is from Microsoft’s web site. This will work on the AdventureWorks2014 database.

CREATE FUNCTION ProductsCostingMoreThan(@cost money)
RETURNS TABLE 
AS 
RETURN 
SELECT ProductID, ListPrice
FROM Production.Product
WHERE ListPrice > @cost

Once the function is created, you can see it in SSMS under Programmability.

One of the advantages of a TVF is that it can accept parameters.  In the code above, the parameter is for the list price. In the code below you will see that the TVF is used just as a user table might be.  Also notice the parameter being passed into the function.

In this case, we are looking for any products that cost more than $80.50 and then joining it on the Product table.
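That code is also in an image, so here is a sketch of what the call might look like; the join on ProductID is my assumption, since the function returns that column:

SELECT p.Name
, f.ListPrice
FROM ProductsCostingMoreThan(80.50) AS f
INNER JOIN Production.Product AS p
ON f.ProductID = p.ProductID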

Each of these objects has performance advantages and disadvantages.  That is not the purpose of this post, however it is something that must be considered when determining the appropriate solution.

In Part 4 of this series, I will cover JOINs.

Thanks for visiting my blog.  I hope you learned something.


XEvent Profiler

XEvent Profiler is not a SQL Server feature; it is a feature of SQL Server Management Studio (SSMS). It was first introduced in SSMS v17.3, and XEvent Profiler sits on top of the Extended Events technology.  Per Microsoft, XEvent Profiler "enables quick access to a live streaming view of diagnostics events on the SQL server".

You can find XEvent Profiler at the bottom of the server tree in SSMS, just under SQL Server Agent. There are two sessions that come with XEvent Profiler, Standard and TSQL.


Starting and Stopping the Sessions

When you first launch the session and the Extended Events session does not exist, SQL Server will create the session; however, it will not be started.  To start it, you need to right click on the session in XEvent Profiler and click "Launch Session".  If the Extended Events session already exists, SQL Server will start the session.

When you are done doing what you need to do, simply follow the same steps and click "Stop Session".
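If you want to confirm from TSQL whether either session is currently running, you can query sys.dm_xe_sessions, which lists the running Extended Events sessions.  A minimal sketch, assuming the default QuickSession names:

-- Returns a row for each QuickSession* session that is currently running
SELECT name
, create_time
FROM sys.dm_xe_sessions
WHERE name LIKE 'QuickSession%'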

Standard Session

When you enable this session it creates an Extended Events session named QuickSessionStandard.  This session does not have a target defined and has a Maximum Dispatch Latency of 5 seconds.  If you create your own Extended Events session, the default Maximum Dispatch Latency is 30 seconds, significantly higher than the Standard session. This session collects a lot of performance information; however, it also collects login and logout information. Below you will find the list of events that are being collected.

One thing you might notice about the above image is that there are a number of predicates defined.  These are all the same; they exclude system processes.

When you launch the session, SSMS will allow you to watch live events.  The screenshot below is what you will see when viewing live events.  Among the information collected, you will find Database Name, Batch SQL, Client Application, User account and the query_hash.  This view can be modified to meet your needs. The changes to the view will remain when you close the session.

TSQL Session

The second session available is the TSQL session.  The Extended Events session is named QuickSessionTSQL.  As the name implies, this session collects information about the execution of TSQL statements. The events are below.  You will notice a few things about these events; like the QuickSessionStandard session, the TSQL session does not have a designated target.

  1. Many are also in the QuickSessionStandard session, including:
    • existing_connection
    • login
    • logout
    • sql_batch_starting
  2. There are fewer events than in the QuickSessionStandard session
  3. The QuickSessionStandard session has the sql_batch_completed event while this session does not

Like the QuickSessionStandard session, the events in this session also have a few predicates applied, all of which exclude system sessions from the live data.

Viewing Session Data

When you launch either of these sessions, SSMS will also launch the screen that will allow you to view live data, see below.  Something to keep in mind, when you close this window, it will stop the session.

Modifying the Sessions

Although you can modify the view, it may not be the best idea to modify the definition of these sessions. If you find a need to modify these sessions, it might be better to either modify another user defined Extended Event session or create a new session.

Both of these sessions are intended to provide a quick way to collect and view some basic performance information about your SQL Server.  For those of you that like to use Profiler, this could be a good alternative, even though it is not as powerful or as flexible.  Since neither of these sessions has a defined target, the data is not stored; once you stop the session, the data is gone.  If you really need the data for review at a later date, you can export it.  The export menu can be found under the Extended Events menu item.

Thanks for stopping by!!!

Additional Information

Microsoft

MSSQLTIPS

SQL Server Statistics Information – How do I Get It


When doing maintenance on a SQL Server there are a number of things that must be completed.  These include backups, index optimization, DBCC CHECKDB and updating statistics.  This post really isn't about how to update statistics; it is more about how to gather information about statistics.  This information is important when troubleshooting a performance issue, as well as a way to check whether your statistics update strategy is updating the statistics properly.  Remember, if statistics are not updated properly, or not created at all, this could lead to performance issues.

There are a few ways to gather statistics information, but before getting into those, let's go over what information about statistics is important.  These are not all the items, but they are the key ones.

Important Statistics Information

  1. What statistics exist
  2. Date they were last updated
  3. Is there a filter?
  4. Number of rows and the number of rows used for the sample
  5. How many modifications since the last statistics update

These are all important in one way or another.  Here is why each of these is important to look at.

What Statistics Exist – This is important to know because sometimes what statistics we think exist may not actually exist.  If the “Auto Create Statistics” database setting is turned off, then of course statistics would not be created other than manually or by creating an index.

Date they were last updated – This will help us understand if the statistics are being updated appropriately and as expected.

Is there a filter? – This will help us understand if the statistics are for all the rows of the table or just a subset.

Number of Rows and the number of Rows used for the Sample – We need to review this to see if SQL Server is using an appropriate number of rows to update the statistics.

Number of Modifications since the last Statistics update – Understanding how frequently the column is modified will help us determine if the statistics update is happening appropriately.  It is important to keep this in perspective.  You can’t just look at this number, it is also important to look at the date last updated and the number of rows.  If there are a high number of modifications but the statistics were last updated six months ago, there may not be an issue.  Although I might want to look to see why the statistics haven’t been updated for 6 months.

How do I get this information

There are a couple of different ways to get this information.

SQL Server Management Studio

You can simply right click on the statistics object you want to review and click Properties. This will open up the properties window.

On the first screen you will see this.  You will be able to identify the date of the last update. It is highlighted below.

The number of rows and rows used in the sample can be found in the “Details” tab, while the filter information can be found by clicking the “Filter” tab on the left.

What cannot be found by using the Properties window is the number of modifications since the last statistics update.  This is kind of limiting.

There are two ways to get all the information above.  One is to use DBCC SHOW_STATISTICS; the other is to query the sys.stats catalog view.

DBCC SHOW_STATISTICS

This statement will return all the above information in three data sets: the header, the density vector and the histogram.  The syntax is below.  We are not going to get into all the syntax, just the basics.  If you want more information you can go here.

This is the basic syntax.  The statement expects a table or view name and a statistics object name.  The output is below.

DBCC SHOW_STATISTICS('person.person', [IX_Person_LastName_FirstName_MiddleName])

The three sections, Header, Density Vector and Histogram, are identified by the arrows, while the green box labels the date last updated and red is the number of rows and rows used as a sample. The final box is the blue box on the left; this identifies whether there is a filter or not.  Notice that to the left of that there is a column named "Filter Expression".  This will identify the expression used for the filter if one exists.

You can return only one of the sections by running one of the code snippets below.

DBCC SHOW_STATISTICS('production.product', ReorderPoint) WITH STAT_HEADER

DBCC SHOW_STATISTICS('production.product', ReorderPoint) WITH DENSITY_VECTOR

DBCC SHOW_STATISTICS('person.person', [IX_Person_LastName_FirstName_MiddleName]) WITH HISTOGRAM

SYS.Stats

This is from Microsoft: "Contains a row for each statistics object that exists for the tables, indexes, and indexed views in the database". In reality it really needs to be used with a few other statistics related objects.  These related objects are sys.stats_columns and sys.dm_db_stats_properties. Below is the code that can be used to gather all the information and then some.  This code can be found here.

I just took the code that was provided by Microsoft and added the sysobjects table to allow for the data set to include the column name.

Notice that dm_db_stats_properties is a function that has two parameters, object_id and stats_id.
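Since the Microsoft code is shown in an image, here is a rough sketch along the same lines, joining sys.stats, sys.stats_columns and sys.dm_db_stats_properties; the exact columns returned here are my own choice:

SELECT obj.name AS TableName
, col.name AS ColumnName
, stat.name AS StatsName
, sp.last_updated
, sp.rows
, sp.rows_sampled
, sp.modification_counter
, stat.has_filter
FROM sys.stats AS stat
CROSS APPLY sys.dm_db_stats_properties(stat.object_id, stat.stats_id) AS sp
JOIN sys.objects AS obj
    ON stat.object_id = obj.object_id
JOIN sys.stats_columns AS sc
    ON stat.object_id = sc.object_id AND stat.stats_id = sc.stats_id
JOIN sys.columns AS col
    ON sc.object_id = col.object_id AND sc.column_id = col.column_id
WHERE obj.is_ms_shipped = 0  -- skip internal system objects
ORDER BY sp.last_updated DESC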

Below are the results of the original statement.  Notice that it includes all the items mentioned at the beginning of this post.


All the above methods return much of the information needed to review statistics.  What I like about the last option is that I can see all the statistics in one data set.

Here are a couple of good links to learn more about SQL Server Statistics.

Microsoft

Kendra Little

dm_db_stats_properties

Thanks for visiting and I hope you learned something!

Is it Live or Is It Memorex?

If you are my age, that is a phrase you will remember.  If not, it is a slogan for Memorex from quite a few years ago for cassette tapes… here is a link to a YouTube video.

This post really isn't about that… it is about live query execution plans, which were introduced with SSMS 2016.  Basically, live query execution plans allow you to view the plan live as it is executing.

Before getting into the details, let's take a look at how to turn it on.  Before starting your query, Live Query Statistics must be turned on.  There are two ways this can be done: the first is under the Query menu and the second is on the toolbar.  The button is the one the arrow is pointing to.

Once this is turned on, you can now run your query.  When you do, a new tab will appear in the results pane, which will be labeled "Live Query Statistics".

This is actually pretty cool to see.  The dotted lines represent the parts of the plan that are either currently executing or haven’t executed yet.  In addition to seeing the lines move, you will also see the number of rows increment and the number of seconds go up as well.  The time is circled below.

Regarding the number of rows, you will actually see two numbers, actual number of rows and the estimated number of rows.  You will see something like this:

As indicated, the green arrow points to the actual number of rows and the brown arrow to the estimated number of rows.  If you move away from the operator and the pop up disappears, when you return you will see that the actual number of rows has incremented as more rows are returned.

If you look at the properties of the operator, you will see the Actual Number of Rows and the Elapsed Time properties increment as the query executes.

You can also see Live Execution Plans by using Activity Monitor.  Once you have opened Activity Monitor, navigate to the Active Expensive Queries section and right click on the query.  You will see a pop menu that will have the “Show Live Execution Plan”.

At the bottom left of the new tab, you will see the progress.  In the case below, the progress is at 18% complete.

Brent Ozar has a few really good posts that go more in depth about Live Query Execution Plans.  I figured why reinvent the wheel.  Here are the links to his posts.

BrentO Post 1

BrentO Post 2

I would recommend that you try this if you haven’t yet.

Here is some code that you can use.

USE [AdventureWorks2014]

SELECT *
FROM Production.TransactionHistory th
INNER JOIN Production.TransactionHistoryArchive tha
ON th.Quantity = tha.Quantity

Thanks for visiting my blog…hope you enjoyed the post!