
Tuesday, October 25, 2011

Data Collector Architecture

The collection system is grouped into four categories:


Storage components are the databases that contain configuration information and jobs (msdb) as well as the collected performance data (MDW).

Execution components manage the data collection and storage operations.

API components enable interaction between the user interfaces and the data collector. These components include SQL Server metadata, such as the dynamic management views.

Client components are the user interfaces for managing the data collection and presenting the reports.
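To see how these pieces fit together, the configuration that the storage and API components keep in msdb can be inspected through the data collector catalog views. A minimal sketch (assuming the default system collection sets are installed):

USE msdb;
SELECT cs.name AS collection_set, cs.is_running, ci.name AS collection_item
FROM dbo.syscollector_collection_sets AS cs
JOIN dbo.syscollector_collection_items AS ci
    ON ci.collection_set_id = cs.collection_set_id;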

Friday, October 21, 2011

Different Types of Lock in SQL Server

SQL Server 2000 and 2005 use several types of lock, each applied in different situations. Here is the list of lock types and when each is used; a query to observe the locks currently held follows the list.



SHARED - This lock is applied for read operations where the data is not updated. A good example is a SELECT statement.

UPDATE – Placed on resources that can be updated. This lock prevents the common form of deadlock that occurs when multiple sessions lock the same data so that they can update it later.

EXCLUSIVE - Used for data-modification operations, such as INSERT, UPDATE, or DELETE. Ensures that multiple updates cannot be made to the same resource at the same time.

INTENT - Used to establish a lock hierarchy. The different types of intent locks are: intent shared, intent exclusive, and shared with intent exclusive.
SCHEMA - Used when an operation dependent on the schema of a table is executing. The different types of schema locks are: schema modification and schema stability.

BULK UPDATE – This lock is applied when data is bulk copied into a table and the TABLOCK hint is specified.

KEY RANGE - Protects the range of rows read by a query when using the serializable transaction isolation level. Ensures that other transactions cannot insert rows that would qualify for the queries of the serializable transaction if the queries were run again.
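To observe these lock types on a running instance, the sys.dm_tran_locks DMV (SQL Server 2005 and later) lists every lock currently granted or waiting; the query below is a minimal sketch.

-- request_mode surfaces the modes described above (S, U, X, IS, IX, SIX, Sch-S, Sch-M, BU, key-range modes)
SELECT request_session_id,
       resource_type,
       DB_NAME(resource_database_id) AS database_name,
       request_mode,
       request_status
FROM sys.dm_tran_locks
ORDER BY request_session_id;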

Diagnosing and Resolving Spinlock issue on SQL2008R2

Spinlocks are lightweight synchronization primitives which are used to protect access to data structures. Spinlocks are not unique to SQL Server. When a thread attempting to acquire a spinlock is unable to obtain access it executes in a loop periodically checking to determine if the resource is available instead of immediately yielding. After some period of time a thread waiting on a spinlock will yield before it is able to acquire the resource in order to allow other threads running on the same CPU to execute. This is known as a Backoff.

SQL Server utilizes spinlocks to protect access to some of its internal data structures. These are used within the engine to serialize access to certain data structures in a similar fashion to latches. The main difference between a latch and a spinlock is the fact that spinlocks will spin (execute a loop) for a period of time checking for availability of a data structure while a thread attempting to acquire access to a structure protected by a latch will immediately yield if the resource is not available. Yielding requires context switching of a thread off the CPU so that another thread can execute.

Spinlock statistics are exposed by the sys.dm_os_spinlock_stats Dynamic Management View (DMV) within SQL Server.

select * from sys.dm_os_spinlock_stats

order by spins desc
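Because these counters are cumulative since the last restart, it is usually more useful to look at how they grow over an interval. The following is a minimal sketch (the 60-second wait and the temporary table name are arbitrary choices) that snapshots the DMV, waits, and then shows which spinlock types accumulated the most spins:

SELECT name, collisions, spins, sleep_time, backoffs
INTO #spinlock_snapshot
FROM sys.dm_os_spinlock_stats;

WAITFOR DELAY '00:01:00';   -- period of interest

SELECT cur.name,
       cur.spins      - snap.spins      AS spins_delta,
       cur.collisions - snap.collisions AS collisions_delta,
       cur.backoffs   - snap.backoffs   AS backoffs_delta
FROM sys.dm_os_spinlock_stats AS cur
JOIN #spinlock_snapshot AS snap ON snap.name = cur.name
ORDER BY spins_delta DESC;

DROP TABLE #spinlock_snapshot;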

The statistics exposed by this query are:

• name – the name of the spinlock type.

• collisions – the number of times a thread attempted to acquire the spinlock and was blocked because another thread held it.

• spins – the total number of times threads looped while waiting to acquire the spinlock.

• spins_per_collision – the ratio of spins to collisions.

• sleep_time – the total time, in milliseconds, that threads slept because of backoffs.

• backoffs – the number of times a spinning thread backed off to allow other threads on the same CPU to execute.

Running Transact-SQL Script Files by Using sqlcmd


You can use sqlcmd to run a Transact-SQL script file. A Transact-SQL script file is a text file that can contain a combination of Transact-SQL statements, sqlcmd commands, and scripting variables.
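The walkthrough below assumes a script file similar to the following minimal sketch against the AdventureWorks sample database (the exact table and column names may differ depending on which version of the sample you have installed):

-- Script.sql: list AdventureWorks employee names and addresses
USE AdventureWorks;
GO
SELECT c.FirstName, c.LastName, a.AddressLine1, a.City
FROM Person.Contact AS c
JOIN HumanResources.Employee AS e ON e.ContactID = c.ContactID
JOIN HumanResources.EmployeeAddress AS ea ON ea.EmployeeID = e.EmployeeID
JOIN Person.Address AS a ON a.AddressID = ea.AddressID;
GO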

1. Save the script as Script.sql on the C: drive.

To run the script file

1. Open a command prompt window.

2. In the Command Prompt window, type: sqlcmd -S Server1\instanceName -i C:\Script.sql

3. Press ENTER.

A list of Adventure Works employee names and addresses is written to the command prompt window.

To save this output to a text file

1. Open a command prompt window.

2. In the Command Prompt window, type: sqlcmd -S Server1\instanceName -i C:\Script.sql -o C:\Output.txt

3. Press ENTER.


Monday, October 17, 2011

Configure Log Shipping

Step 1 - Registering the necessary servers

Step 2 - Starting the maintenance plan wizard

Step 3 - The maintenance plan wizard welcome screen

Step 4 - Selecting the database to log ship   
 
Step 5 - Selecting backup options  
 
Step 6 - Entering the network share name 
 
Step 7 - Adding destination servers
 
Step 8 - Log shipping destination details  

Step 9 - Confirming the log shipping destination

Step 10 - Initializing the destination databases  
 
Step 11 - Log shipping schedules  
 
Step 12 - Log shipping alert thresholds  
 
Step 13 - Setting up the monitor server

Step 14 - Maintenance Plan summary  
 
Step 15 - Enterprise Manager task list

Step 16 - SUCCESS!

A Common Log Shipping Configuration in SQL 2005

How to find out the last time your database was restored

DECLARE @DatabaseName as varchar(50)


SET @DatabaseName = 'YOUR DATABASE NAME'

SELECT destination_database_name,max(restore_date) as restore_date from msdb..restorehistory

WHERE destination_database_name = @DatabaseName

GROUP BY destination_database_name

Friday, October 14, 2011

Script to check the database Backup/Restore Percentage

SET NOCOUNT ON


GO

SELECT Command

, 'PercentComplete' = percent_complete

, 'EstEndTime' = CONVERT(varchar(26),Dateadd(ms,estimated_completion_time,Getdate()),100)

, 'EstSecondsToEnd' = CONVERT(decimal(9,2),(estimated_completion_time * .001))

, 'EstMinutesToEnd' = CONVERT(decimal(9,2),(estimated_completion_time * .001 / 60))

, 'OperationStartTime' = CONVERT(varchar(26),start_time,100)

FROM sys.dm_exec_requests

WHERE command IN ('BACKUP DATABASE','RESTORE DATABASE')

Dynamic Management Views (DMVs)

Dynamic Management Views (DMVs) are system views that surface the internal counters of the database engine and present an easily comprehensible dashboard of database engine performance that can be used to monitor the health of a server instance, diagnose problems, and tune performance.
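For example, one of the most frequently used DMVs in a quick health check is sys.dm_os_wait_stats; the following minimal sketch lists the top waits accumulated since the last restart:

SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;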

List of DMV in SQL Server 2005


======================================================

1. sys.dm_broker_activated_tasks

2. sys.dm_broker_connections

3. sys.dm_broker_forwarded_messages

4. sys.dm_broker_queue_monitors

5. sys.dm_clr_appdomains

6. sys.dm_clr_loaded_assemblies

7. sys.dm_clr_properties

8. sys.dm_clr_tasks

9. sys.dm_db_file_space_usage

10. sys.dm_db_index_usage_stats

11. sys.dm_db_mirroring_connections

12. sys.dm_db_missing_index_details

13. sys.dm_db_missing_index_group_stats

14. sys.dm_db_missing_index_groups

15. sys.dm_db_partition_stats

16. sys.dm_db_session_space_usage

17. sys.dm_db_task_space_usage

18. sys.dm_exec_background_job_queue

19. sys.dm_exec_background_job_queue_stats

20. sys.dm_exec_cached_plans

21. sys.dm_exec_connections

22. sys.dm_exec_query_memory_grants

23. sys.dm_exec_query_optimizer_info

24. sys.dm_exec_query_resource_semaphores

25. sys.dm_exec_query_stats

26. sys.dm_exec_query_transformation_stats

27. sys.dm_exec_requests

28. sys.dm_exec_sessions

29. sys.dm_fts_active_catalogs

30. sys.dm_fts_index_population

31. sys.dm_fts_memory_buffers

32. sys.dm_fts_memory_pools

33. sys.dm_fts_population_ranges

34. sys.dm_io_backup_tapes

35. sys.dm_io_cluster_shared_drives

36. sys.dm_io_pending_io_requests

37. sys.dm_os_buffer_descriptors

38. sys.dm_os_child_instances

39. sys.dm_os_cluster_nodes

40. sys.dm_os_hosts

41. sys.dm_os_latch_stats

42. sys.dm_os_loaded_modules

43. sys.dm_os_memory_allocations

44. sys.dm_os_memory_cache_clock_hands

45. sys.dm_os_memory_cache_counters

46. sys.dm_os_memory_cache_entries

47. sys.dm_os_memory_cache_hash_tables

48. sys.dm_os_memory_clerks

49. sys.dm_os_memory_objects

50. sys.dm_os_memory_pools

51. sys.dm_os_performance_counters

52. sys.dm_os_ring_buffers

53. sys.dm_os_schedulers

54. sys.dm_os_stacks

55. sys.dm_os_sublatches

56. sys.dm_os_sys_info

57. sys.dm_os_tasks

58. sys.dm_os_threads

59. sys.dm_os_virtual_address_dump

60. sys.dm_os_wait_stats

61. sys.dm_os_waiting_tasks

62. sys.dm_os_worker_local_storage

63. sys.dm_os_workers

64. sys.dm_qn_subscriptions

65. sys.dm_repl_articles

66. sys.dm_repl_schemas

67. sys.dm_repl_tranhash

68. sys.dm_repl_traninfo

69. sys.dm_tran_active_snapshot_database_transactions

70. sys.dm_tran_active_transactions

71. sys.dm_tran_current_snapshot

72. sys.dm_tran_current_transaction

73. sys.dm_tran_database_transactions

74. sys.dm_tran_tran_locks

75. sys.dm_tran_session_transactions

76. sys.dm_tran_top_version_generators

77. sys.dm_tran_transactions_snapshot

78. sys.dm_tran_version_store



New DMV in SQL Server 2008

======================================================

1. sys.dm_audit_actions

2. sys.dm_audit_class_type_map

3. sys.dm_cdc_errors

4. sys.dm_cdc_log_scan_sessions

5. sys.dm_cryptographic_provider_properties

6. sys.dm_database_encryption_keys

7. sys.dm_db_mirroring_auto_page_repair

8. sys.dm_db_mirroring_past_actions

9. sys.dm_db_persisted_sku_features

10. sys.dm_db_script_level

11. sys.dm_exec_procedure_stats

12. sys.dm_exec_trigger_stats

13. sys.dm_filestream_file_io_handles

14. sys.dm_filestream_file_io_requests

15. sys.dm_fts_fdhosts

16. sys.dm_fts_outstanding_batches

17. sys.dm_os_dispatcher_pools

18. sys.dm_os_dispatchers

19. sys.dm_os_memory_brokers

20. sys.dm_os_memory_node_access_stats

21. sys.dm_os_memory_nodes

22. sys.dm_os_nodes

23. sys.dm_os_process_memory

24. sys.dm_os_spinlock_stacks

25. sys.dm_os_sys_memory

26. sys.dm_resource_governor_configuration

27. sys.dm_resource_governor_resource_pools

28. sys.dm_resource_governor_workload_groups

29. sys.dm_server_audit_status

30. sys.dm_tran_commit_table

31. sys.dm_xe_map_values

32. sys.dm_xe_object_columns

33. sys.dm_xe_objects

34. sys.dm_xe_packages

35. sys.dm_xe_session_event_actions

36. sys.dm_xe_session_events

37. sys.dm_xe_session_object_columns

38. sys.dm_xe_session_targets

39. sys.dm_xe_sessions



Last modify date of a database object

SELECT name, create_date, modify_date


FROM sys.objects

WHERE type = 'P' --------- P = stored procedure ---------

AND name = 'ProcedureName' -- filter on the object name, not the database name

Access the backup disk in cluster environment

C:\Program Files\Tivoli\TSM\baclient>dsm -optfile=E:\tsm\dsm.opt


D:\tivoli\tsm\baclient>start dsm.exe -optfile=E:\TSM\dsm.opt

Script to Check Backup History

use msdb;

SELECT backupset.database_name,backupset.backup_start_date,backupmediafamily.physical_device_name,(backupset.backup_size/1024)/1000 as size
FROM backupset,backupmediafamily where backupset.media_set_id = backupmediafamily.media_set_id
and datediff(dd,backupset.backup_start_date,getdate()) < 30 and type='l'
order by backupset.database_name,backupset.backup_start_date desc

Backup type codes: d = full, i = differential (diff), l = log
===================================================================================
use msdb;


SELECT backupset.database_name,backupset.backup_start_date,backupset.backup_finish_date,backupmediafamily.physical_device_name,(backupset.backup_size/1024)/1000 as size,
backupset.first_lsn,backupset.last_lsn
FROM backupset,backupmediafamily where backupset.media_set_id = backupmediafamily.media_set_id
and datediff(dd,backupset.backup_start_date,getdate()) < 13
and database_name ='TestDatabase'-- and type='l'
order by backupset.database_name,backupset.backup_start_date desc


Database Mail Configuration


-- Check the status of Database Mail
EXECUTE msdb.dbo.sysmail_help_status_sp;

-- Start Database Mail if it is stopped
Use MSDB
EXEC sysmail_start_sp

-- Check how many items are waiting in the external mail queue
Use MSDB
Select count(*) from ExternalMailQueue

-- Review server configuration options (the Database Mail XPs option must be enabled)
sp_configure

http://blogs.msdn.com/b/sqlserverfaq/archive/2010/03/18/database-mail-failing-with-the-service-queue-externalmailqueue-is-currently-disabled-message.aspx
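Once Database Mail is started and the external queue is enabled, a quick test message can be sent with sp_send_dbmail. The profile name and recipient below are placeholders for whatever is configured on your server:

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA Mail Profile',
    @recipients = 'dba@example.com',
    @subject = 'Database Mail test',
    @body = 'Database Mail is working.';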

Examining Query Execution Plans

Graphical Execution Plans

Graphical execution plans are accessed through the query window inside Management Studio in SQL Server 2005/2008 or through Query Analyzer in SQL Server 2000. To a large degree, the functionality of graphical plans is the same in SQL Server 2000 as it is in SQL Server 2008; however, there are some fundamental differences, so each version is discussed individually in order to be specific about those differences. Differences between SQL Server 2005 and 2008 are highlighted within the section.

In all versions there are two fundamental types of execution plan, the estimated execution plan and the actual plan. The estimated plan does not require the query to be run while the actual plan is an output from the query engine showing the plan used to execute the query. Most of the time these two plans are identical, but there are circumstances where they will be different.

All graphical plans are read from right to left and from top to bottom. That is important to know for understanding other concepts, such as how a hash join works. Each icon represents an operation. Some operations are the same between the estimated and actual plans, and some vary between the two. Each operator is connected by an arrow that represents a data feed. The data feed is the output from one operator and the input for the next. The thickness of the arrow varies with the amount of data it represents: thinner arrows represent fewer rows and thicker arrows represent more rows.

Operators represent various objects and actions within the execution plan. A full listing of operators is available in the Books Online.

SQL Server 2005/2008

Estimated Execution Plan

There are several ways to generate an estimated execution plan:

• Select the "Display Estimated Execution Plan" from the tool bar

• Right click within the query window and select "Display Estimated Execution Plan"

• Select the Query menu and then the "Display Estimated Execution Plan" menu choice

• Press CTRL+L

When any of these actions is performed, an estimated, graphical, execution plan is created for the query in the query window. The query is not executed. That is an important point. The query is merely run against the query optimizer within the SQL Server system and the output from the optimizer is displayed as a graphical execution plan. If objects that don't exist, such as temporary tables, are part of the query, the estimated plan will fail.

Actual Execution Plan

An actual execution plan requires the query to be executed. To enable the generation of the actual execution plan:

• Select the "Include Actual Execution Plan" button from the tool bar

• Right click within the query window and select "Include Actual Execution Plan"

• Select the Query menu and then the "Include Actual Execution Plan" menu choice

• Press CTRL+M

After the query executes, the actual execution plan will be available in a different tab in the results pane of the query window.
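Both plan types can also be captured from Transact-SQL without the SSMS buttons, which is handy in scripted scenarios. A minimal sketch (the sample query is arbitrary):

-- Estimated plan: the batch is compiled but not executed
SET SHOWPLAN_XML ON;
GO
SELECT name FROM sys.objects WHERE type = 'U';
GO
SET SHOWPLAN_XML OFF;
GO

-- Actual plan: the batch is executed and the plan is returned along with the results
SET STATISTICS XML ON;
GO
SELECT name FROM sys.objects WHERE type = 'U';
GO
SET STATISTICS XML OFF;
GO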



Administering Servers by Using Policy-Based Management


Policy-Based Management is a system for managing one or more instances of SQL Server 2008. When SQL Server policy administrators use Policy-Based Management, they use SQL Server Management Studio to create policies to manage entities on the server, such as the instance of SQL Server, databases, or other SQL Server objects.

Policy-Based Management has three components:

• Policy management

Policy administrators create policies.

• Explicit administration

Administrators select one or more managed targets and explicitly check that the targets comply with a specific policy, or explicitly make the targets comply with a policy.

• Evaluation modes

There are four evaluation modes, three of which can be automated:

• On demand. This mode evaluates the policy when directly specified by the user.

• On change: prevent. This automated mode uses DDL triggers to prevent policy violations.

• On change: log only. This automated mode uses event notification to evaluate a policy when a relevant change is made.

• On schedule. This automated mode uses a SQL Server Agent job to periodically evaluate a policy.

When automated policies are not enabled, Policy-Based Management will not affect system performance.

Policy-Based Management Terms and Concepts

Policy-Based Management managed target

Entities that are managed by Policy-Based Management, such as an instance of the SQL Server Database Engine, a database, a table, or an index. All targets in a server instance form a target hierarchy. A target set is the set of targets that results from applying a set of target filters to the target hierarchy, for example, all the tables in the database owned by the HumanResources schema.

Policy-Based Management facet

A set of logical properties that model the behavior or characteristics for certain types of managed targets. The number and characteristics of the properties are built into the facet and can be added or removed by only the maker of the facet. A target type can implement one or more management facets, and a management facet can be implemented by one or more target types. Some properties of a facet can only apply to a specific version. For example, the Database Mail property of the Surface Area Configuration facet only applies to SQL Server 2005 and later versions.

Policy-Based Management condition

A Boolean expression that specifies a set of allowed states of a Policy-Based Management managed target with regard to a management facet.

Policy-Based Management policy

A Policy-Based Management condition and the expected behavior, for example, evaluation mode, target filters, and schedule. A policy can contain only one condition. Policies can be enabled or disabled.

Policy-Based Management policy category

A user-defined category to help manage policies. Users can classify policies into different policy categories. A policy belongs to one and only one policy category. Policy categories apply to databases and servers. At the database level, the following conditions apply:

• Database owners can subscribe a database to a set of policy categories.

• Only policies from its subscribed categories can govern a database.

• All databases implicitly subscribe to the default policy category.

At the server level, policy categories can be applied to all databases.

Effective policy

The effective policies of a target are those policies that govern this target. A policy is effective with regard to a target only if all the following conditions are satisfied:

• The policy is enabled.

• The target belongs to the target set of the policy.

• The target or one of the target's ancestors subscribes to the policy group that contains this policy.

Allowed Evaluation Modes


The policy evaluation modes are determined by the characteristics of the Policy-Based Management facet that is used by the policy. All facets support On demand and On schedule. Facets support On change: log only if the change of the facet state can be captured by some events. Facets support On change: prevent if there is transactional support for the DDL statements that change the facet state. Policies that are automated with one of these three execution modes can be enabled and disabled.

In SQL Server Management Studio, the Evaluate Policies dialog box provides two options that you can use to run a policy:

Evaluate

This will evaluate policies against the selected targets.

Apply

This enables you to apply changes to applicable targets that violate policies. Some targets are not reconfigurable through Policy-Based Management. For example, if you are evaluating whether backup and data files exist on separate devices, Policy-Based Management can detect the violations to this condition; however, you cannot apply changes through Policy-Based Management to enforce policy compliance.

Policy Management


Policies are created and managed by using Management Studio. The process includes the following steps:

1. Select a Policy-Based Management facet that contains the properties to be configured.

2. Define a condition that specifies the state of a management facet.

3. Define a policy that contains the condition, additional conditions that filter the target sets, and the evaluation mode.

4. Check whether an instance of SQL Server is in compliance with the policy.

For failed policies, Object Explorer indicates a critical health warning as a red icon next to the target and the nodes that are higher in the Object Explorer tree.

Policy Storage

Policies are stored in the msdb database. After a policy or condition is changed, msdb should be backed up. SQL Server 2008 includes policies that can be used to monitor an instance of SQL Server. By default, these policies are not installed on the Database Engine; however, they can be imported from the default installation location of C:\Program Files\Microsoft SQL Server\100\Tools\Policies\DatabaseEngine\1033.

You can directly create policies by using the File/New menu, and then saving them to a file. This enables you to create policies when you are not connected to an instance of the Database Engine.

Policy history for policies evaluated in the current instance of the Database Engine is maintained in msdb system tables. Policy history for policies applied to other instances of the Database Engine or applied to Reporting Services or Analysis Services is not retained.

Configuring Alerts to Notify Policy Administrators of Policy Failures

When Policy-Based Management policies are executed in one of the three automated evaluation modes, a message is written to the event log if a policy violation occurs. To be notified when this message is written, you can create an alert that detects the message and performs an action, such as notifying an operator. The alert should detect the error message number that corresponds to the evaluation mode of the failed policy.
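For example, a SQL Server Agent alert can be created on the error number that an automated evaluation mode writes to the event log. The sketch below assumes the On schedule failure message (commonly documented as message_id 34052; verify the numbers in sys.messages for your build) and a pre-existing, hypothetical operator named 'DBA Team':

EXEC msdb.dbo.sp_add_alert
    @name = N'Policy failure - on schedule',
    @message_id = 34052,   -- assumed message number; check sys.messages
    @severity = 0;

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Policy failure - on schedule',
    @operator_name = N'DBA Team',   -- hypothetical operator
    @notification_method = 1;       -- 1 = e-mail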

Additional Considerations About Alerts

Be aware of the following additional considerations about alerts:

• Alerts are raised only for policies that are enabled. Because On demand policies cannot be enabled, alerts are not raised for policies that are executed on demand.

• If the action you want to take includes sending an e-mail message, you must configure a mail account. We recommend that you use Database Mail.

Alert security:

When policies are evaluated on demand, they execute in the security context of the user. To write to the error log, the user must have ALTER TRACE permissions or be a member of the sysadmin fixed server role. Policies that are evaluated by a user that has less privileges will not write to the event log, and will not fire an alert.

The automated execution modes execute as a member of the sysadmin role. This allows the policy to write to the error log and raise an alert.

Security

Administering Policy-Based Management requires membership in the PolicyAdministratorRole role in the msdb database. This role has complete control of all policies on the system. This control includes creating and editing policies and conditions and enabling and disabling policies.

The following security principles apply:

• A system administrator or database owner can subscribe a database to a policy or policy group.

• Members of the PolicyAdministratorRole role can enable or disable policies.

• Members of the PolicyAdministratorRole can create policies that they do not have permission to execute ad hoc, but which can be successful when the policies are run by other users that have sufficient permission.

• Ad hoc policy execution occurs in the security context of the user.

• Policies that have the On schedule evaluation mode use SQL Server Agent jobs that are owned by the sa login.

Considerations for Using Policy-Based Management

Be aware that policies can affect how some SQL Server features work. For example, change data capture and transactional replication both use the systranschemas table, which does not have an index. If you enable a policy that all tables must have an index, enforcing compliance of the policy will cause these features to fail.





Register SPN for report server and set Authentication to NTLM

ERROR:
======
The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server it-segbg-bcdev. The target name used was HTTP/server1.AP.net. This indicates that the target server failed to decrypt the ticket provided by the client. This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using. Please ensure that the target SPN is registered on, and only registered on, the account used by the server.

To register an SPN for a Report Server service running as a domain user

1.Install Reporting Services and configure the Report Server service to run as a domain user account.
Note that users will not be able to connect to the report server until you complete the following steps.

2.Download and install Windows Server 2003 Service Pack 1 Support Tools from the Download Center.
3.Log on to the domain controller as domain administrator.

4.Open a Command Prompt window.

5.Copy the following command, replacing placeholder values with actual values that are valid for your network:
Setspn -a http/computername.domainname:port domainuseraccount
6.Run the command.

7.Open the RsReportServer.config file and locate the AuthenticationTypes section. Add RSWindowsNegotiate as the first entry in this section to enable NTLM.

http://msdn.microsoft.com/en-us/library/cc281382.aspx 

NUMA Node & Max DOP Settings

MAXDOP should be set to the number of cores / number of NUMA nodes.

To find out how many NUMA nodes there are, execute DBCC MEMORYSTATUS.

You will have a result looking similar to the following:


Here we get 5 result sets with a column named Memory node Id = x (and some other result sets that are not relevant).

"Memory node ID = 64" should not be included, only count the ones that comes in order , in this example we have nodes 0, 1, 2 and 3 which gives us that we have 4 NUMA nodes.

This server  has a total of 16 cores, so MAXDOP should be set to 16 / 4 = 4.

If we only see one result set with Memory node Id = 0, then the server does not have NUMA nodes. In that case set MAXDOP = number of cores.
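An alternative to reading the DBCC MEMORYSTATUS output is to count the NUMA nodes from a DMV and then apply the formula with sp_configure. The sketch below assumes SQL Server 2008 or later; the value 4 is just the 16-core / 4-node example above:

-- Count NUMA nodes, excluding the DAC node (reported as ONLINE DAC)
SELECT COUNT(*) AS numa_nodes
FROM sys.dm_os_nodes
WHERE node_state_desc <> 'ONLINE DAC';

-- Apply the recommendation: MAXDOP = total cores / NUMA nodes
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;   -- 16 cores / 4 nodes in the example above
RECONFIGURE;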

MS Recommendation:


MaxDOP

The option max degree of parallelism controls the number of processors that can be used to run a single SQL Server statement using a parallel execution plan. The default value for this configuration is 0, indicating that all available processors can be used. Parallelism is often beneficial for longer running queries or for queries that have complicated execution plans. However, OLTP-centric application performance can suffer when parallel plans use more threads than the current number of physical processors can handle. The recommendations below take the NUMA architecture into account.

Use the following guidelines when you configure the MAXDOP value:

• For servers that use more than eight processors, use this configuration: MAXDOP=8.

• For servers that have eight or fewer processors, use this configuration where N equals the number of processors: MAXDOP=0 to N.

• For servers that have NUMA configured, MAXDOP should not exceed the number of CPUs (cores) that are assigned to each NUMA node.

• For servers that have hyper-threading enabled, the MAXDOP value should not exceed the number of physical processors.


Expand A Volume with Diskpart command

C:\Documents and Settings\Dev>diskpart

Microsoft DiskPart version 5.2.3790.3959
Copyright (C) 1999-2001 Microsoft Corporation
On computer: SERVER1
DISKPART> list volumes
Microsoft DiskPart version 5.2.3790.3959
DISK - Prints out a list of disks.
PARTITION - Prints out a list of partitions on the current disk.
VOLUME - Prints out a list of volumes.

DISKPART>list volume

Volume ###  Ltr  Label        Fs    Type       Size   Status   Info
----------  ---  -----------  ----  ---------  -----  -------  ------
Volume 0     D   Data         NTFS  Partition  40 GB  Healthy
Volume 1     C   System       NTFS  Partition  10 GB  Healthy  System
Volume 2     E                      DVD-ROM     0 B   Healthy

DISKPART> select volume 0
Volume 0 is the selected volume.

DISKPART>extend
DiskPart successfully extended the volume.

DISKPART>

SQL Server Architecture

Useful SQL Server DBCC Commands

DBCC CHECKFILEGROUP - Checks all tables in a filegroup for any damage.


DBCC CHECKALLOC - To check that all pages in a db are correctly allocated.

DBCC SQLPERF - Reports current transaction log usage as a percentage.

DBCC SQLPERF (LOGSPACE)

This option (documented) returns data about the transaction log for all of the databases on the SQL Server, including Database Name, Log Size (MB), Log Space Used (%), and Status.

DBCC SQLPERF (UMSSTATS)

This option (undocumented) returns data about SQL Server thread management.

DBCC SQLPERF (WAITSTATS)

This option (undocumented) returns data about wait types for SQL Server resources.

DBCC SQLPERF (IOSTATS)

This option (undocumented) returns data about outstanding SQL Server reads and writes.

DBCC SQLPERF (RASTATS)

This option (undocumented) returns data about SQL Server read-ahead activity.

DBCC SQLPERF (THREADS)

This option (undocumented) returns data about I/O, CPU, and memory usage per SQL Server thread. [7.0, 2000]



DBCC CACHESTATS displays information about the objects currently in the buffer cache, such as hit rates, compiled objects and plans, etc.

DBCC DROPCLEANBUFFERS: Use this command to remove all the data from SQL Server's data cache (buffer) between performance tests to ensure fair testing. You may want to run the CHECKPOINT command first; running CHECKPOINT will write all dirty buffers to disk.

DBCC FLUSHPROCINDB: Used to clear out the stored procedure cache for a specific database on a SQL Server, not the entire SQL Server.

DBCC FLUSHPROCINDB (@intDBID)

DBCC INDEXDEFRAG: In SQL Server 2000, Microsoft introduced DBCC INDEXDEFRAG to help reduce logical disk fragmentation.

DBCC INDEXDEFRAG (Database_Name, Table_Name, Index_Name)

DBCC MEMORYSTATUS: Lists a breakdown of how the SQL Server buffer cache is divided up, including buffer activity.

DBCC MEMORYSTATUS

DBCC OPENTRAN: This command is used to identify the oldest open transaction in a specific database.

DBCC OPENTRAN('database_name')

DBCC PAGE: Use this command to look at contents of a data page stored in SQL Server.

DBCC PAGE ({dbid | dbname}, pagenum [,print option] [,cache] [,logical])

DBCC PINTABLE & DBCC UNPINTABLE: By default, SQL Server automatically brings into its data cache the pages it needs to work with. These data pages will stay in the data cache until there is no room for them, and assuming they are not needed, these pages will be flushed out of the data cache onto disk. At some point in the future when SQL Server needs these data pages again, it will have to go to disk in order to read them again into the data cache for use. If SQL Server somehow had the ability to keep the data pages in the data cache all the time, then SQL Server’s performance would be increased because I/O could be reduced on the server.

DECLARE @db_id int, @tbl_id int

USE Northwind

SET @db_id = DB_ID('Northwind')

SET @tbl_id = OBJECT_ID('Northwind..categories')

DBCC PINTABLE (@db_id, @tbl_id)

DECLARE @db_id int, @tbl_id int

USE Northwind

SET @db_id = DB_ID('Northwind')

SET @tbl_id = OBJECT_ID('Northwind..categories')

DBCC UNPINTABLE (@db_id, @tbl_id)

DBCC SHOWCONTIG: Used to show how fragmented data and indexes are in a specified table

DBCC SHOWCONTIG (Table_id, IndexID)

DBCC PROCCACHE: Displays information about how the stored procedure cache is being used.

DBCC PROCCACHE

DBCC DBREINDEX: Periodically (weekly or monthly) perform a database reorganization on all the indexes on all the tables in your database.

DBCC DBREINDEX('table_name', fillfactor)

DBCC SHOW_STATISTICS: Used to find out the selectivity of an index. Generally speaking, the higher the selectivity of an index, the greater the likelihood it will be used by the query optimizer. You have to specify both the table name and the index name you want to find the statistics on.

DBCC SHOW_STATISTICS (table_name, index_name)

DBCC SQLMGRSTATS: Used to produce three different values that can sometimes be useful when you want to find out how well caching is being performed on ad-hoc and prepared Transact-SQL statements.

DBCC SQLMGRSTATS

Item Status

-------------------------  -----------

Memory Used (8k Pages) 5446

Number CSql Objects 29098

Number False Hits 425490

Here’s what the above means:

• Memory Used (8k Pages): If the amount of memory pages is very large, this may be an indication that some user connection is preparing many Transact-SQL statements but not un-preparing them.

• Number CSql Objects: Measures the total number of cached Transact-SQL statements.

• Number False Hits: Sometimes, false hits occur when SQL Server goes to match pre-existing cached Transact-SQL statements. Ideally, this figure should be as low as possible.

DBCC SQLPERF (UMSSTATS): When you run this command, you get output like this. (Note, this example was run on a 4 CPU server. There is 1 Scheduler ID per available CPU.)

Statistic Value

--------------------------------  ------------------------

Scheduler ID 0.0

num users 18.0

num runnable 0.0

num workers 13.0

idle workers 11.0

work queued 0.0

cntxt switches 2.2994396E+7

cntxt switches(idle) 1.7793976E+7

Scheduler ID 1.0

num users 15.0

num runnable 0.0

num workers 13.0

idle workers 10.0

work queued 0.0

cntxt switches 2.4836728E+7

cntxt switches(idle) 1.6275707E+7

Scheduler ID 2.0

num users 17.0

num runnable 0.0

num workers 12.0

idle workers 11.0

work queued 0.0

cntxt switches 1.1331447E+7

cntxt switches(idle) 1.6273097E+7

Scheduler ID 3.0

num users 16.0

num runnable 0.0

num workers 12.0

idle workers 11.0

work queued 0.0

cntxt switches 1.1110251E+7

cntxt switches(idle) 1.624729E+7

Scheduler Switches 0.0

Total Work 3.1632352E+7

Below is an explanation of some of the key statistics above:

• num users: This is the number of SQL Server threads currently in the scheduler.

• num runnable: This is the number of actual SQL Server threads that are runnable.

• num workers: This is the actual number of workers available to process threads. This is the size of the thread pool.

• idle workers: The number of workers that are currently idle.

• cntxt switches: The number of context switches between runnable threads.

• cntxt switches (idle): The number of context switches to the idle thread.

• DBCC UPDATEUSAGE: The official use for this command is to report and correct inaccuracies in the sysindexes table, which may result in incorrect space usage reports. Apparently, it can also fix the problem of unreclaimed data pages in SQL Server. You may want to consider running this command periodically to clean up potential problems. This command can take some time to run, and you want to run it during off times because it will negatively affect SQL Server’s performance when running. When you run this command, you must specify the name of the database that you want affected.

• DBCC UPDATEUSAGE ('databasename')


Wednesday, October 12, 2011

Server Level Security



sysadmin

Members of the sysadmin fixed server role can perform any activity in the server.

serveradmin

Members of the serveradmin fixed server role can change server-wide configuration options and shut down the server.

securityadmin

Members of the securityadmin fixed server role manage logins and their properties. They can GRANT, DENY, and REVOKE server-level permissions. They can also GRANT, DENY, and REVOKE database-level permissions if they have access to a database. Additionally, they can reset passwords for SQL Server logins.

Security note: The ability to grant access to the Database Engine and to configure user permissions allows the security admin to assign most server permissions. The securityadmin role should be treated as equivalent to the sysadmin role.

processadmin

Members of the processadmin fixed server role can end processes that are running in an instance of SQL Server.

setupadmin

Members of the setupadmin fixed server role can add and remove linked servers.

bulkadmin

Members of the bulkadmin fixed server role can run the BULK INSERT statement.

diskadmin

The diskadmin fixed server role is used for managing disk files.

dbcreator

Members of the dbcreator fixed server role can create, alter, drop, and restore any database.

public

Every SQL Server login belongs to the public server role. When a server principal has not been granted or denied specific permissions on a securable object, the user inherits the permissions granted to public on that object. Only assign public permissions on any object when you want the object to be available to all users.
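Membership in these roles is granted with sp_addsrvrolemember (SQL Server 2008 and earlier); the login name below is a placeholder:

-- Add a (hypothetical) login to the dbcreator fixed server role
EXEC sp_addsrvrolemember @loginame = 'CONTOSO\SqlDeployers', @rolename = 'dbcreator';

-- Review current fixed server role membership
EXEC sp_helpsrvrolemember;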


Tuesday, October 11, 2011

SQL Server - Database Level Security

Fixed Database Roles:


SQL Server includes fixed database roles. They are mainly intended to organize administrative tasks, and a user can have more than one role. The fixed database roles include the following:

• db_accessadmin: Can add or remove database access for users, but cannot manage database-level security.

• db_backupoperator: Can perform backups, checkpoints, and DBCC commands, but cannot manage database-level security.

• db_datareader: Can read all the data in the database. It is roughly equivalent to a grant on all objects, and it can be overridden by a deny permission.

• db_datawriter: Can write all the data in the database. It is roughly equivalent to a grant on all objects, and it can be overridden by a deny permission.

• db_ddladmin: Can issue DDL commands such as CREATE, ALTER, and DROP.

• db_denydatareader: Blocks reading from any table in the database. This deny overrides any grant.

• db_denydatawriter: Blocks modifying data in any table in the database. This deny overrides any object-level grant.

• db_owner: Has all permissions in the database. This is not equivalent to the sysadmin server role, and an object-level deny will override membership in this role.

• db_securityadmin: Can manage database-level security roles and permissions.
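Membership in these roles is granted with sp_addrolemember in SQL Server 2005/2008; the database and user names below are placeholders:

USE YourDatabase;   -- hypothetical database
GO
-- Give a (hypothetical) user read-only access
EXEC sp_addrolemember @rolename = 'db_datareader', @membername = 'ReportReader';

-- List members of each fixed database role
EXEC sp_helprolemember;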

SQL Server error messages – severity levels – sys.messages


There are about 98318 error messages as of SQL Server 2008, including repetitions across all the supported languages.

There are 8938 distinct error messages in SQL Server 2008 for the English language, and 11 languages are supported, so 11 * 8938 = 98318.


sys.messages catalog view represents one row per error message.

You can simply run select * from sys.messages to see all the error messages. You can see that every message has a unique message_id and the description in text column.

Another interesting column is severity. When an error is raised by the SQL Server Database Engine, the severity of the error indicates the type of problem encountered by SQL Server.

Following are the severity levels:

0-10

These are only Informational messages.

11-16

Errors that can be corrected by the user.

Example: ORDER BY items must appear in the select list if the statement contains a UNION, INTERSECT or EXCEPT operator.

11

A given object does not exist.

Example: Table '%.*ls' does not exist.

12

Indicates that queries are not requesting locks since they are using query hints.

Example: Could not continue scan with NOLOCK due to data movement.

13

Transaction deadlock errors.

Example: Transaction (Process ID %d) was deadlocked on %.*ls resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

14

Security-related errors.

Example: The %ls permission was denied on the object '%.*ls', database '%.*ls', schema '%.*ls'.

15

Syntax errors in T-SQL

Example: A TOP N value may not be negative.

16

Errors that can be corrected by the user.

17-19

Indicate software errors that cannot be corrected by the user.

17

Indicates that the statement caused SQL Server to run out of resources (such as memory, locks, or disk space for the database) or to exceed some limit set by the system administrator.

Example: Time-out occurred while waiting for buffer latch type %d for page %S_PGID, database ID %d.

18

Indicates an issue in the Database Engine

Example: Replication-%s: agent %s failed. %s

19

Indicates that a non-configurable Database Engine limit has been exceeded. The current batch gets terminated. Error messages with a severity level from 19 through 25 are written to the error log.

Example: There is insufficient system memory in resource pool '%ls' to run this query.

20-25

These errors indicate system problems and are fatal errors. Error messages with a severity level from 19 through 25 are written to the error log.

Example: Unable to deallocate a kept page.

20

Indicates that a statement has encountered a problem. Because the problem has affected only the current task, it is unlikely that the database itself has been damaged.

Example: Warning: Fatal error %d occurred at %S_DATE. Note the error and time, and contact your system administrator.

21

Unlike 20, this one indicates that a problem has been encountered that affects all tasks in the current database.

Example: An error occurred while writing an audit trace. SQL Server is shutting down. Check and correct error conditions such as insufficient disk space, and then restart SQL Server. If the problem persists, disable auditing by starting the server at the command prompt with the "-f" switch, and using SP_CONFIGURE.

22

Indicates that the table or index specified in the message has been damaged by a software or hardware problem.

Example: The row object is inconsistent. Please rerun the query.

23

Integrity of the entire database is in question because of a hardware or software problem.

Example: Possible schema corruption. Run DBCC CHECKCATALOG.

24

Indicates a media failure.

Example: The log record at LSN %S_LSN is corrupted.
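To see how the messages are distributed across these severity levels on your own instance, a small query over sys.messages (filtering to English, language_id 1033) is enough:

SELECT severity, COUNT(*) AS message_count
FROM sys.messages
WHERE language_id = 1033   -- English
GROUP BY severity
ORDER BY severity;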

Hope you like it.

sys.dm_os_memory_clerks shows duplicate entries for memory clerks



If you run the following code in SQL Server 2008 RTM:

select * from sys.dm_os_memory_clerks

where type = 'MEMORYCLERK_SQLBUFFERPOOL'

You will get the following output:

You can observe that there are two entries for the BUFFER POOL memory clerk. Initially I thought they were for different memory nodes and wondered whether it had something to do with NUMA, but the memory_node_id column shows 0 for both entries, which left me completely baffled, especially since the memory_clerk_address is different for the two records.

The other entry (what we see as a duplicate) is actually for the DAC (dedicated administrator connection). SQL Server 2008 RTM shows the physical memory node (0) for the DAC entry, which is incorrect. This issue was fixed in R2. If you run the same DMV in SQL Server 2008 R2, you will get the correct output as follows:

Memory Node ID 64 is logical and belongs to DAC.

Moving System Databases



Moving the master Database


To move the master database, follow these steps.

1. From the Start menu, point to All Programs, point to Microsoft SQL Server, point to Configuration Tools, and then click SQL Server Configuration Manager.

2. In the SQL Server Services node, right-click the instance of SQL Server (for example, SQL Server (MSSQLSERVER)) and choose Properties.

3. In the SQL Server (instance_name) Properties dialog box, click the Advanced tab.

4. Edit the Startup Parameters values to point to the planned location for the master database data and log files, and click OK. Moving the error log file is optional.

The parameter value for the data file must follow the -d parameter and the value for the log file must follow the -l parameter. The following example shows the parameter values for the default location of the master data and log files.


-dC:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\

master.mdf;-eC:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\

LOG\ERRORLOG;-lC:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\

DATA\mastlog.ldf

If the planned relocation for the master data and log files is E:\SQLData, the parameter values would be changed as follows:


-dE:\SQLData\master.mdf;-eC:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\LOG\ERRORLOG;-lE:\SQLData\mastlog.ldf

5. Stop the instance of SQL Server by right-clicking the instance name and choosing Stop.

6. Move the master.mdf and mastlog.ldf files to the new location.

7. Restart the instance of SQL Server.



8. Verify the file change for the master database by running the following query.



SELECT name, physical_name AS CurrentLocation, state_desc

FROM sys.master_files

WHERE database_id = DB_ID('master');

GO

Moving the tempdb database



Note

Because tempdb is re-created each time the instance of SQL Server is started, you do not have to physically move the data and log files. The files are created in the new location when the service is restarted in step 3. Until the service is restarted, tempdb continues to use the data and log files in the existing location.

1. Determine the logical file names of the tempdb database and their current location on the disk.


SELECT name, physical_name AS CurrentLocation

FROM sys.master_files

WHERE database_id = DB_ID(N'tempdb');

GO

2. Change the location of each file by using ALTER DATABASE.


USE master;

GO

ALTER DATABASE tempdb

MODIFY FILE (NAME = tempdev, FILENAME = 'E:\SQLData\tempdb.mdf');

GO

ALTER DATABASE tempdb

MODIFY FILE (NAME = templog, FILENAME = 'F:\SQLLog\templog.ldf');

GO

3. Stop and restart the instance of SQL Server.

4. Verify the file change.


SELECT name, physical_name AS CurrentLocation, state_desc

FROM sys.master_files

WHERE database_id = DB_ID(N'tempdb');

5. Delete the tempdb.mdf and templog.ldf files from the original location.



This procedure applies to moving all system databases except the master and Resource databases.

1. Stop the instance of SQL Server if it is started.

2. Start the instance of SQL Server in master-only recovery mode by entering one of the following commands at the command prompt. The parameters specified in these commands are case sensitive. The commands fail when the parameters are not specified as shown.

• For the default (MSSQLSERVER) instance, run the following command:


NET START MSSQLSERVER /f /T3608

• For a named instance, run the following command:


NET START MSSQL$instancename /f /T3608

3. For more information, see How to: Start an Instance of SQL Server (net Commands).

4. For each file to be moved, use sqlcmd commands or SQL Server Management Studio to run the following statement.


ALTER DATABASE database_name MODIFY FILE( NAME = logical_name , FILENAME = 'new_path\os_file_name' )

For more information about using the sqlcmd utility, see Using the sqlcmd Utility.

5. Exit the sqlcmd utility or SQL Server Management Studio.

6. Stop the instance of SQL Server. For example, run NET STOP MSSQLSERVER.

7. Move the file or files to the new location.

8. Restart the instance of SQL Server. For example, run NET START MSSQLSERVER.

9. Verify the file change by running the following query.


SELECT name, physical_name AS CurrentLocation, state_desc

FROM sys.master_files

WHERE database_id = DB_ID(N'');







Wednesday, October 5, 2011

SQL CMD

SQLCMD -A -S SEGOTN2258

sqlcmd -S tcp:192.138.114.33,2021

How to find the SSIS package executed by which sql job

use msdb
go
select [name],Step_name,command,[server],database_name from sysjobsteps inner join sysjobs
on sysjobsteps.job_id = sysjobs.job_id
where [command] like '%master%'

Convert Litespeed backup to Native backup

Extractor.exe -F "" -E ""
===============================================================
Convert LiteSpeed Backups to SQL Server Backups
The extraction utility (extractor.exe) allows you to create MTF compliant SQL Server backup files from LiteSpeed backup files through the command-line. The devices created by the extractor utility can be restored on any SQL Server using the native RESTORE DATABASE or RESTORE LOG commands. The utility must be run on the server where the backup files are located.

This topic covers:

Run the Extraction Utility

Syntax

Arguments

Example

Returns

Run the Extraction Utility
To run the extraction utility

Display a command prompt for your operating system.
If the LiteSpeed installation directory is not placed in the environment PATH, change the directory until you are in the directory containing extractor.exe. On a default installation this is: %PROGRAM_FILES%\Quest Software\LiteSpeed\SQL Server

Run extractor.exe with the appropriate arguments – see below for the command syntax.

Syntax
extractor.exe Extract
[-?]
[
[-F sqllitespeed_backup_file]
[-E base_file_name]
[-N file_number]
[-K encryption_key]
[-L 0|1|2 ]
[-I]
]



Note: Arguments are case-sensitive and can be preceded with "-" or "/".

Arguments

-?
Displays the syntax summary of LiteSpeed extractor utility switches.

-E base_file_name
Specify the location and name of the Microsoft Tape Format (MTF) base file.

This is the base file created by the extractor.exe, containing the native SQL Server backup.

The extractor utility will create one backup device file for each thread used in a LiteSpeed backup. If a LiteSpeed backup was performed using multiple backup files (e.g., a "striped" backup), one MTF device will be created for each backup file specified in the LiteSpeed backup. If a full path is not specified, the extracted files will be created in the current directory.

Any additional files created will be named using the following naming convention:

MTF.BAKx

Where:

‘MTF.BAK’ is the specified Microsoft Tape Format base file.

x is a number or letter that represents the sequence of the files. x can take any value from [0..Y].

-F
The name of the LiteSpeed backup device file to be extracted. Multiple instances of this parameter can be supplied for multiple backup devices (e.g. striped backups).

-I
Used to re-initialize the extractor files.

0 - Do not reinitialize the extractor files (default value).

1 - Reinitialize backup service.

-L
Logging options 0..2.

-K encryption_key
String used to encrypt database backups and used for restore authorization. Default is No Encryption.

-N file_number
Number of the file to extract from backup.


Example
Extract the LiteSpeed backup Northwind.bak to native SQL Server backup NorthwindNative.bak:

extractor.exe -Fc:\temp\Northwind.bak -Ec:\temp\NorthwindNative.bak -N1

This generates native SQL Server backup files. You can restore from them using native SQL Server RESTORE commands.

Returns
0 (success) or non-zero (failure). Return codes represent the native error number returned from SQL Server for any errors encountered during the operation.

Access sql server using DAC

sqlcmd -Sadmin:.

sqlcmd -A -S instance_name

Change the recovery model of databases

--declare variables
declare @vchrTable varchar(200),
@vchrMsg varchar(200)

--declare the cursor
DECLARE curTables CURSOR FOR
SELECT name from master..sysdatabases where not name in ('tempdb')

--open the cursor
OPEN curTables

--loop through the cursor lines
FETCH NEXT FROM curTables INTO @vchrTable
WHILE @@FETCH_STATUS = 0
BEGIN
set @vchrMsg = 'ALTER DATABASE [' + @vchrTable + '] SET RECOVERY SIMPLE;'
print @vchrMsg
exec (@vchrMsg)

FETCH NEXT FROM curTables INTO @vchrTable
END

--clean up
CLOSE curTables
DEALLOCATE curTables

====================================================
EXEC sp_MSForEachDB 'PRINT ''ALTER DATABASE [?] SET RECOVERY FULL''; PRINT ''GO'''

Moving tempDB Database

Moving tempDB Database
------------------------
ALTER DATABASE tempdb MODIFY FILE (NAME = 'tempdev', FILENAME = 'J:\SQL2005.1\Data\tempdb.mdf')
ALTER DATABASE tempdb MODIFY FILE (NAME = 'templog', FILENAME = 'J:\SQL2005.1\Data\tempdb.ldf')

Add file to TempDB filegroup (with Autogrow OFF)
------------------------------------------------

USE [master]
GO
ALTER DATABASE [tempdb] ADD FILE ( NAME = N'tempdev2', FILENAME = N'D:\Data\tempdev2.ndf' , SIZE = 10240KB , FILEGROWTH = 0)
GO
--------------------------
USE [master]
GO
ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'tempdev', SIZE = 1024000KB )

USE [master]
GO

ALTER DATABASE [tempdb] MODIFY FILE ( NAME = N'templog', SIZE = 512000KB )
GO

Fragmentation in Tables

SELECT object_name (object_id),* FROM sys.dm_db_index_physical_stats
(DB_ID(N'NIIP'), NULL, NULL, NULL , 'DETAILED')

dbcc showcontig

http://msdn.microsoft.com/en-us/library/ms175008.aspx

List size of all tables in SQL 2008

List size of all tables in SQL 2008

with table_space_usage ( schema_name, table_name, used, reserved, ind_rows, tbl_rows )
AS (SELECT
s.Name
, o.Name
, p.used_page_count * 8
, p.reserved_page_count * 8
, p.row_count
, case when i.index_id in ( 0, 1 ) then p.row_count else 0 end
FROM sys.dm_db_partition_stats p
INNER JOIN sys.objects as o ON o.object_id = p.object_id
INNER JOIN sys.schemas as s ON s.schema_id = o.schema_id
LEFT OUTER JOIN sys.indexes as i on i.object_id = p.object_id and i.index_id = p.index_id
WHERE o.type_desc = 'USER_TABLE' and o.is_ms_shipped = 0)

SELECT t.schema_name
, t.table_name
, sum(t.used) as used_in_kb
, sum(t.reserved) as reserved_in_kb
,sum(t.tbl_rows) as rows

FROM table_space_usage as t

GROUP BY t.schema_name , t.table_name

ORDER BY used_in_kb desc

How to check Restore history of a database

Restore history

select * from msdb..restorehistory
where destination_database_name like '%cmdb%'

RESTORE HEADERONLY FROM DISK = 'C:\AdventureWorks.BAK'
GO

List user-defined stored procedures, views, tables, or triggers:

o Stored Procedures:

Select Routine_name
From Information_schema.Routines
Where Routine_type = 'PROCEDURE' and Objectproperty
(Object_id(Routine_name), 'IsMsShipped') = 0
Order by Routine_name

o Views:


Select Table_name as "View name"
From Information_schema.Tables
Where Table_type = 'VIEW' and Objectproperty
(Object_id(Table_name), 'IsMsShipped') = 0

o Tables:

Select Table_name as "Table name"
From Information_schema.Tables
Where Table_type = 'BASE TABLE' and Objectproperty
(Object_id(Table_name), 'IsMsShipped') = 0

o Triggers:

Select Object_name(so.parent_obj) as "Table name", so.[name] as "Trigger name"
From sysobjects so
Where Objectproperty(so.[id], 'IsTrigger')=1 and
Objectproperty(so.[id], 'IsMSShipped')=0
Order by 1,2

How to find Last Execution Date of Stored Procedure

To get last execution date of the stored procedure if SQL Server has the Query Plan for it.

SELECT


qs.sql_handle,qs.statement_start_offset,qs.statement_end_offset,
qs.creation_time,qs.last_execution_time,qp.dbid,qp.objectid,st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_text_query_plan(qs.plan_handle, DEFAULT, DEFAULT) AS qp
WHERE st.text like '%USP_CDS_GetUserDetails%'

Start Powershell Application with “RunAs” Admin

Check out how to start Windows Powershell Application with “RUNAS” administrator

PS C:\Users\sufian> start-process powershell_ise -verb runAs

Check Enabled Traces

How to check how many Traces are enabled on the server and how traces are set (globally or session)

In SQL Server, there are two types of trace flags: session and global. Session trace flags are active for a connection and are visible only for that connection. Global trace flags are set at the server level and are visible to every connection on the server.

The following example displays the status of all trace flags that are currently enabled globally.

DBCC TRACESTATUS(-1);
GO

The following example displays the status of trace flags 2528 and 3205.

DBCC TRACESTATUS (2528, 3205);
GO

The following example displays whether trace flag 3205 is enabled globally.

DBCC TRACESTATUS (3205, -1);
GO

The following example lists all the trace flags that are enabled for the current session.

DBCC TRACESTATUS();
GO

How to truncate Mirrored Database Log File

Mirroring is a feature introduced in SQL Server 2005.
Mirroring provides high availability of databases without any data loss.
Mirroring comes in three flavours:
(1) High performance (asynchronous).
(2) High safety without automatic failover (synchronous).
(3) High safety with automatic failover (synchronous); requires a witness server.

(Read more in BOL Look For : mirroring databases [SQL Server])

This document will focus on how to shrink log file while databases are participating in mirroring.

The log file of a database participating in mirroring cannot be shrunk by using the TRUNCATE_ONLY option with the BACKUP LOG command.

To shrink the log file of a database participating in mirroring, back up the log to any location (a local drive or a network location):

BACKUP LOG [dbname] TO DISK = 'D:\ff\dbname.trn'

Let's check whether all transactions have been written to disk.

DBCC LOGINFO('database name')

Entries whose transactions have been written to disk will have Status = 0.

If the last entry shows Status = 2, back up the database log once again.

Now shrink the file

DBCC SHRINKFILE(Logfileid)

or

DBCC SHRINKFILE(Filename,minsize)

The file will shrink to its maximum extent.
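Putting the steps together, here is a minimal sketch for a hypothetical mirrored database named MirroredDB with a logical log file name of MirroredDB_log:

-- 1. Back up the log (TRUNCATE_ONLY is not allowed for mirrored databases)
BACKUP LOG MirroredDB TO DISK = 'D:\ff\MirroredDB.trn';

-- 2. Check the virtual log files; entries already written to disk show Status = 0.
--    If the last entry still shows Status = 2, back the log up again.
DBCC LOGINFO('MirroredDB');

-- 3. Shrink the log file using its logical file name (or file id)
USE MirroredDB;
DBCC SHRINKFILE('MirroredDB_log');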