
Thursday, October 25, 2012

Strategy to design indexes for a new database/table

It is recommended to take a strategic approach when you start designing indexes for a new database or new table(s), both for optimal performance and for efficient use of resources. Consider using the strategy below to design indexes for a newly created database or table(s):

PHASE 1: INITIAL DESIGN

This phase deals with designing indexes for a newly created database or table(s) before moving it to production. Follow the steps below:

1. Work through the tables one by one. For each table, identify all queries that will run against it and identify the best clustered index candidate.

2. Select the primary key and unique keys. Do not blindly make your primary key the clustered index; it is fine if your clustered index is not the primary key.

3. Have foreign keys in place wherever needed. Foreign keys are not bad for performance as long as they are indexed, so make sure you create a non-clustered index on the columns that are part of each foreign key (see the sketch after this list).
4. Create non-clustered indexes for highly selective queries. Highly selective queries are queries that will be used frequently and whose performance is critical to the business. Prefer a composite (wide) index over multiple narrower indexes, but do not go for covering indexes at this stage.

5. STOP INDEX DESIGN AND IMPLEMENTATION AT THIS STAGE AND MOVE AHEAD WITH YOUR FINAL LOAD TEST AND/OR USER ACCEPTANCE TEST, WHERE YOU CAN COLLECT A TRUE WORKLOAD FROM A PRODUCTION LOAD SCENARIO.

6. Gather the true workload from the load test and/or user acceptance test, analyze it, and then design the remaining non-clustered indexes. Follow the rules below while you create such indexes:

a. Be sure that the workload you are working with truly represents the production scenario. A workload that captures only a partial scenario can lead to decisions that degrade the application’s performance.

b. Use available tools to analyze the workload and the situation. The most common tools are DTA (Database Engine Tuning Advisor) and the missing index hints from query plans (see the DMV sketch after this list). However, both tools have their limitations, so do not blindly follow their recommendations.

7. MOVE THE DATABASE/TABLES TO PRODUCTION AT THIS STAGE. HOWEVER, PLEASE REFER TO PHASE 2 FOR MAINTENANCE OF INDEXES FROM A DESIGN PERSPECTIVE WHILE THE DATABASE/TABLES ARE IN PRODUCTION.
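As a minimal sketch of steps 3 and 4 above, the script below creates a non-clustered index on a foreign key column and a composite index for a frequent, selective query. The table and column names (dbo.Orders, CustomerID, Status, OrderDate) are hypothetical and only illustrate the pattern:

-- Hypothetical example for step 3: index the foreign key column so joins
-- and referential checks against the parent table do not scan dbo.Orders.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID);
GO

-- Hypothetical example for step 4: a composite (wide) index supporting a
-- frequent, highly selective query that filters on Status and OrderDate.
CREATE NONCLUSTERED INDEX IX_Orders_Status_OrderDate
ON dbo.Orders (Status, OrderDate);
GO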
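The missing index hints mentioned in step 6b can also be pulled in bulk from the missing index DMVs. The query below is only a sketch; these DMVs (sys.dm_db_missing_index_details, sys.dm_db_missing_index_groups, sys.dm_db_missing_index_group_stats) exist in SQL Server 2005 and later, and their suggestions deserve the same caution as DTA output:

-- Missing index suggestions recorded by the optimizer since the last
-- restart, roughly ordered by estimated usefulness.
SELECT mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_details mid
JOIN sys.dm_db_missing_index_groups mig ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;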

PHASE 2: DYNAMIC MAINTENANCE

Applications change with time. As database objects and server configuration change, some indexes can become irrelevant or even harmful for some queries, or new indexes may be required. So once the database/tables are in production, you need to keep evaluating the index design. You can execute the strategies below to accomplish this task:

Proactive: Ensure that you capture a good workload and execute Step 6 of PHASE 1 of this post at a realistic time interval. This interval should be based on the database/application’s performance history and other relevant parameters. However, be very careful to ensure that you do not adversely affect existing systems while working on this proactive maintenance.

Reactive: When you move new code (a change) to production, make sure that its indexing is designed properly.

* Note: In this post I outlined a general strategy to design and implement indexes for a new application/database/table(s), which should work for almost any situation. However, good index design comes from extensive experience and deep knowledge. Also consider using other options (like indexed views, code review, etc.) instead of relying completely on indexes to achieve the desired performance.

Wide tables in SQL Server 2008

A SQL Server table supports up to 1024 columns, which is good enough for most cases. However, if you need to expand a table beyond 1024 columns, you need to use a special table type called a wide table. Wide tables are tables that can have more than 1024 columns, up to 30,000 columns. For in-depth details on wide tables, you may want to refer to http://msdn.microsoft.com/en-us/library/ms186986.aspx
 
A wide table uses sparse columns to increase the total number of columns. This technology brings issues such as performance problems, complications with index management, limits on the execution of DML statements, reduced performance for switch-partition operations, and the inability to use compression. Hence, unless it is absolutely necessary, it is not advisable to use wide tables. Consider recommending other approaches, such as joins or XML, instead of using wide tables (a join-based sketch follows below).

For more details on possible performance consideration for wide tables please refer to
http://msdn.microsoft.com/en-us/library/cc645884.aspx
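If a join-based design is an option, a common alternative keeps rarely populated attributes in a child table instead of adding thousands of sparse columns. The sketch below is hypothetical (dbo.Product and dbo.ProductAttribute are not from this post) and only illustrates the idea:

-- Hypothetical alternative to a wide table: keep optional, rarely populated
-- attributes as rows in a child table and join when needed.
CREATE TABLE dbo.Product
(
    ProductID   int          NOT NULL PRIMARY KEY,
    ProductName varchar(100) NOT NULL
);
CREATE TABLE dbo.ProductAttribute
(
    ProductID      int          NOT NULL REFERENCES dbo.Product (ProductID),
    AttributeName  varchar(100) NOT NULL,
    AttributeValue varchar(400) NULL,
    CONSTRAINT PK_ProductAttribute PRIMARY KEY (ProductID, AttributeName)
);
GO

-- Read a product together with whatever attributes it actually has.
SELECT p.ProductName, a.AttributeName, a.AttributeValue
FROM dbo.Product p
LEFT OUTER JOIN dbo.ProductAttribute a ON a.ProductID = p.ProductID;
GO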

How to create a wide table? (How to create a table which will have more than 1024 columns?)

Step 1:
Create a column set on the table using the code below:
[Note: This must be the first step, before adding any sparse column to the table. If the table already has a sparse column, you cannot add a column set. And if you cannot add a column set, your table cannot have more than 1024 columns even if you attempt to add the new columns as sparse columns.]

ALTER TABLE [table name]
ADD CS XML COLUMN_SET FOR ALL_SPARSE_COLUMNS ;
GO
Example:
ALTER TABLE dbo.table1
ADD CS XML COLUMN_SET FOR ALL_SPARSE_COLUMNS ;
GO
Step 2:
Now you can add columns and widen the table as you need.
[Note: New columns must be sparse columns, assuming you already have 1024 traditional (non-sparse) columns. You cannot have more than 1024 non-sparse columns even if the table has a column set.]

ALTER TABLE [table name]
ADD [Column Name] [Datatype] SPARSE NULL ;
GO

Example:

ALTER TABLE dbo.table1
ADD Widecolumn1 varchar(10) SPARSE NULL ;
GO

What do I need to know while using sparse columns?
Refer to http://msdn.microsoft.com/en-us/library/cc280604.aspx for details. However, the tips below are useful for immediate reference:

1. The geography, geometry, text, ntext, image, timestamp, and user-defined data types cannot be used as sparse column data types.
2. The SELECT…INTO statement does not copy the sparse column property into the new table.
3. A sparse column must be nullable and cannot have the ROWGUIDCOL or IDENTITY properties.
4. A sparse column cannot have a default value and cannot be bound to a rule.
5. A sparse column cannot be part of a clustered index or a unique primary key index.
6. Although a computed column can contain a sparse column, a computed column cannot be marked as SPARSE.

How to use data (read/write/alter) from a wide table?
You can use traditional statements to read, write, and alter data in a wide table. However, for modifying or retrieving all sparse columns in a wide table, prefer using the column set instead of referencing the sparse columns individually. Refer to http://msdn.microsoft.com/en-us/library/cc280521.aspx for details on using column sets.
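As a minimal sketch using the earlier example objects (dbo.table1, its column set CS, and the sparse column Widecolumn1, and assuming the table's other columns accept NULLs or defaults), sparse values can be written and read through the column set as a single XML fragment:

-- Insert a row by supplying sparse values through the column set.
INSERT INTO dbo.table1 (CS)
VALUES ('<Widecolumn1>some value</Widecolumn1>');
GO

-- The column set returns all non-NULL sparse values of a row as one XML column.
SELECT CS FROM dbo.table1;
GO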

SQL 2012 on Windows Core Edition

One good reason for moving to SQL 2012 is its capability to run on Server Core editions.

As core servers need less maintenance and offer a reduced attack surface, these are direct benefits for SQL Server 2012 as well when the instance is hosted on a core server. Basically, all the advantages of running Server Core are passed on to SQL Server too.

For more details on advantages of running a core server please refer to
http://msdn.microsoft.com/en-us/library/ee391628(v=vs.85).aspx

For details on installing SQL 2012 on a server core please refer to http://msdn.microsoft.com/en-us/library/hh231669(v=sql.110).aspx

Last Backup Time of All Databases

SELECT db.name AS DatabaseName,
       MAX(bus.backup_finish_date) AS LastBackUpTime
FROM sys.sysdatabases db
LEFT OUTER JOIN msdb.dbo.backupset bus
       ON bus.database_name = db.name
GROUP BY db.name
ORDER BY LastBackUpTime ASC
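A variation I often find useful (my own addition, not part of the original script) is to look only at full database backups; msdb.dbo.backupset exposes the backup type in its type column, where 'D' means a full backup:

-- Last FULL backup per database; type = 'D' excludes log and differential backups.
SELECT db.name AS DatabaseName,
       MAX(bus.backup_finish_date) AS LastFullBackUpTime
FROM sys.sysdatabases db
LEFT OUTER JOIN msdb.dbo.backupset bus
       ON bus.database_name = db.name
      AND bus.type = 'D'
GROUP BY db.name
ORDER BY LastFullBackUpTime ASC;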

New Licensing Model of SQL Server 2012 (Denali)

Denali is now formally announced as SQL Server 2012 and will be released in 2012.

SQL 2012 gives companies many reasons to upgrade, but it comes with a major catch: a cost increase.

(Although this is the first major price hike for SQL Server since SQL 2005, so I guess the hike is not unfair. There was a minor hike for 2008 R2 as well; please refer to http://tonymackelworth.wordpress.com/2010/04/24/pricing-increase-sql-server/ for details.)

Here are some major changes in SQL Server 2012’s licensing model that will have an impact on budgets:

1. SQL Server licensing is now “per core” based. For example, if you have a server running with 2 quad-core processors, you will need to license 8 cores (4*2=8) instead of licensing 2 physical processors.

2. Enterprise Edition will no longer support CAL-based licensing. That means you will need to migrate the licensing of all Enterprise Edition instances to the per-core licensing model even if your present instance is licensed with CALs.

3. Standard Edition will support both CAL-based and core-based licensing.

4. Business Intelligence Edition is a new edition in SQL 2012. It will support only CAL-based licensing.

Impact on budget:

SQL 2012 is going to be more expensive compared to previous versions.

Example of price change for Level A licensing agreements:

Business Intelligence Edition -> $7026/Server (same as Enterprise Edition)
Standard Edition -> $734/Server (no cost increase)
CALs -> $207/CAL (this is a $45 increase per CAL)

Let us use the possible scenarios below to analyze the situation:
Possibility 1: You have a Standard Edition server with 100 CALs. Compared to today’s cost, you will spend $4,500 ($45 * 100) more on CAL licensing.

Possibility 2: You have an Enterprise Edition server with 100 CALs. To move to SQL 2012, you will need to move to per-core licensing.

How your SA (Software Assurance) program can save your budget:

Microsoft did not forget existing customers. If you have valid Software Assurance (SA) for your SQL Servers, you can minimize the impact on your budget. Contact your Microsoft reseller or rep as soon as possible to discuss your options. Generally, if you complete the upgrade to SQL 2012 while you have a valid SA and within a prescribed time (which can be decided between you and your MS rep/reseller), you can get up to 4 free core licenses per processor.

For example, if you have SQL Enterprise Edition running on 2 physical quad-core processors, you do not pay anything extra as long as your existing edition is licensed for 2 processors. Depending on your relationship with Microsoft, you may actually get all of the required core licenses free even if you are running more powerful processors.

That is, if you have 8 cores per processor in the above example, Microsoft may actually license your 16 cores for free (8 cores * 2 processors = 16 core licenses required). This can save a good amount.

You should start discussions with your MS rep/reseller about how to move from CAL-based to per-core licensing (if applicable) and how to minimize the price impact of the CAL increase. You may have to pay the price difference per CAL, depending on your relationship with Microsoft.

Recommended Plan of action to minimize impact on budget:

If you are the DBA or the person involved with licensing, you should act now to minimize the impact of this change. I recommend taking the steps below as soon as you can:

Do an inventory (or revalidate your existing one) of your SQL Servers and licenses, and engage with your MS rep to figure out the best possible option for you.

Although SQL 2012 comes with a higher price tag, that does not necessarily mean your budget will be seriously impacted; it depends on your relationship with Microsoft. So consider prioritizing your SQL 2012 migrations to take full benefit of SA. (To the best of my knowledge, we will get around 36 months after the release of 2012 to maximize the opportunities to avoid the cost increase.)

SQL 2012 Enterprise Edition comes with true unlimited virtualization. Consider exploring this option for future expansion if you are running Datacenter editions of Windows.

Find port number of a SQL instance using TSQL

-- Read the TCP port of the current instance from the registry.
DECLARE @portNumber varchar(20), @key varchar(100)

-- Named instances (server\instance) keep their network settings under an
-- instance-specific registry key; default instances use a fixed key.
IF CHARINDEX('\', @@SERVERNAME, 0) <> 0
BEGIN
    SET @key = 'SOFTWARE\MICROSOFT\Microsoft SQL Server\' + @@SERVICENAME + '\MSSQLServer\Supersocketnetlib\TCP'
END
ELSE
BEGIN
    SET @key = 'SOFTWARE\MICROSOFT\MSSQLServer\MSSQLServer\Supersocketnetlib\TCP'
END

EXEC master..xp_regread @rootkey = 'HKEY_LOCAL_MACHINE', @key = @key,
     @value_name = 'Tcpport', @value = @portNumber OUTPUT

SELECT 'Server Name: ' + @@SERVERNAME + ' Port Number: ' + CONVERT(varchar(10), @portNumber)
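On SQL Server 2005 and later, an alternative that avoids reading the registry is to ask the DMVs about the current connection; sys.dm_exec_connections exposes a local_tcp_port column. This only returns a port when the session is connected over TCP, so treat it as a complementary check rather than a replacement for the script above:

-- Port of the TCP endpoint this session is connected to
-- (NULL for shared memory or named pipes connections).
SELECT @@SERVERNAME AS ServerName,
       local_tcp_port AS PortNumber
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;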

Wednesday, October 24, 2012

How to kill a negative SPID (like SPID -2) in SQL Server?


What to do next when you find SPID -2 in SQL Server

 Error: Process ID -2 is not a valid process ID
This article applies to:
 SQL Server 2000
 SQL Server 2005
 SQL Server 2008

You’ve been investigating a performance issue in SQL Server. Maybe some database blocking issue: you’ve done the usual things and just run sp_who or sp_who2. Or perhaps you’ve been browsing the error log for potential problems and spotted an odd error message. But fundamentally, what you see is a SPID with a negative number. Specifically, SPID -2. And it’s at the top of a blocking chain with a dozen other transactions behind it trying to acquire resources. Until it’s killed, nothing will happen.

Using KILL -2 Should Do It

So you do what you always do in these situations. Wait for it to complete, or more usually, because there’s an issue, you make a call on it and kill the SPID:
KILL -2

 And what comes back is:

Msg 6101, Level 16, State 1, Line 1
Process ID -2 is not a valid process ID. Choose a number between 1 and 2048

I’ll give you the fix first, since you’re probably desperate for it (and it doesn’t involve restarting SQL Server); then I’ll explain why:

Run the following (you’ll need to have a SQL Server login with sysadmin or processadmin privileges to do this):

SELECT req_transactionUOW
FROM master..syslockinfo
WHERE req_spid = -2

This will return a 32 digit UOW number which looks like a GUID. Something like ‘DEF12078-0199-1212-B810-CD46A73F2498’

Copy this into your query window and run:
KILL 'DEF12078-0199-1212-B810-CD46A73F2498'

Run sp_who/sp_who2 again and you will probably see that the offending SPID has disappeared. If it hasn’t, it’s probably still in rollback and you’ll have to wait, but by nature this kind of SPID is usually pretty quick to roll back and terminate.
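If you would rather not copy the UOW by hand, a small helper like the one below (my own sketch, not part of the original fix) builds a KILL statement for every unit of work held by SPID -2 so you can review and run them:

-- Generate a KILL statement for each orphaned unit of work owned by SPID -2.
SELECT DISTINCT 'KILL ''' + CONVERT(varchar(100), req_transactionUOW) + ''''
FROM master..syslockinfo
WHERE req_spid = -2;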