
Wednesday, September 9, 2015

Changes to SQL Server 2014 AlwaysOn Availability Groups

AlwaysOn Availability Groups provide high availability for multiple databases, and they can make use of multiple secondary replicas. Each secondary SQL Server replica has its own copy of the protected databases. AlwaysOn Availability Groups continuously synchronize transactions from the primary replica to each of the secondary replicas. This replication can be configured as synchronous or asynchronous to support local high availability or remote disaster recovery.
In addition, here are some major enhancements to AlwaysOn Availability Groups in SQL Server 2014 that you might want to be aware of:

Enhanced Availability for Read-Only Replicas

One of the advantages of AlwaysOn Availability Groups is that the secondary replicas can be used for reporting and backup. With SQL Server 2014, the availability of secondary replicas has been enhanced to allow read workloads to continue to run even in the case of lengthy network failures or the loss of quorum for the Windows Server Failover Cluster.

Increased Number of Replicas

Another important enhancement in SQL Server 2014 is the increased maximum number of secondaries. SQL Server 2012 supported a maximum of four secondary replicas. With SQL Server 2014, AlwaysOn Availability Groups now supports up to eight secondary replicas. The additional secondary replicas can be used to distribute read workloads and provide enhanced recoverability.
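As a rough, hedged sketch (the availability group, server, and endpoint names below are made up for illustration), an additional asynchronous, readable secondary could be added to an existing group from the primary replica like this:

-- Hypothetical example: add an asynchronous, read-only secondary replica.
-- AG, server, and endpoint names are placeholders.
ALTER AVAILABILITY GROUP [AG_Sales]
ADD REPLICA ON N'SQLNODE5'
WITH (
    ENDPOINT_URL      = N'TCP://SQLNODE5.contoso.local:5022',
    AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
    FAILOVER_MODE     = MANUAL,
    SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY)
);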

Integration with Windows Azure

SQL Server 2014 AlwaysOn Availability Groups use Windows Azure in a couple of different ways. On-premises SQL Server instances can use the new Windows Azure configuration options in the AlwaysOn Availability Group wizard to create one or more asynchronous secondary replicas on Windows Azure Infrastructure as a Service (IaaS). This enables you to manually fail over to a SQL Server instance running on Windows Azure. As with a typical secondary replica, Windows Azure replicas can be used to offload reporting workloads and to perform backups.
In addition, you can use SQL Server 2014 AlwaysOn Availability Groups to provide high availability for SQL Server databases hosted in Windows Azure. SQL Server 2014 instances running on Windows Azure IaaS can be configured with a synchronous secondary replica in Azure for automatic failover in case of server or VM failure.

Enhanced Diagnostics

In SQL Server 2014, the diagnostic and troubleshooting messages for AlwaysOn Availability Groups have been improved to offer more specific information, and additional columns have been made more discoverable in the AlwaysOn Dashboard.

http://www.microsoft.com/en-us/server-cloud/products/sql-server/

Thursday, December 26, 2013

SQL Server – Performance Top Queries by Total CPU Time

-- Top 10 statements by total CPU time.
-- Note: total_worker_time is reported in microseconds; the values below are converted to milliseconds.
SELECT TOP 10
       RANK() OVER (ORDER BY total_worker_time DESC, sql_handle, statement_start_offset) AS row_no,
       (RANK() OVER (ORDER BY total_worker_time DESC, sql_handle, statement_start_offset)) % 2 AS l1,
       creation_time,
       last_execution_time,
       (total_worker_time + 0.0) / 1000 AS total_worker_time_ms,
       (total_worker_time + 0.0) / (execution_count * 1000) AS [AvgCPUTime_ms],
       total_logical_reads AS [LogicalReads],
       total_logical_writes AS [LogicalWrites],
       execution_count,
       total_logical_reads + total_logical_writes AS [AggIO],
       (total_logical_reads + total_logical_writes) / (execution_count + 0.0) AS [AvgIO],
       CASE WHEN qs.sql_handle IS NULL
            THEN ''
            ELSE SUBSTRING(st.text,
                           (qs.statement_start_offset + 2) / 2,
                           (CASE WHEN qs.statement_end_offset = -1
                                 THEN LEN(CONVERT(nvarchar(MAX), st.text)) * 2
                                 ELSE qs.statement_end_offset
                            END - qs.statement_start_offset) / 2)
       END AS query_text,
       DB_NAME(st.dbid) AS database_name,
       st.objectid AS object_id
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE total_worker_time > 0
ORDER BY total_worker_time DESC

List current SQL Server tempdb statements


-- Current sessions and requests allocating space in tempdb (database_id = 2),
-- with internal-object page counts aggregated at the session and task level.
SELECT ssu.session_id,
       (ssu.internal_objects_alloc_page_count + tsk.sess_alloc) AS allocated,
       (ssu.internal_objects_dealloc_page_count + tsk.sess_dealloc) AS deallocated,
       stm.text
FROM sys.dm_db_session_space_usage AS ssu
JOIN sys.dm_exec_requests AS req
    ON ssu.session_id = req.session_id
CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) AS stm
JOIN (SELECT session_id,
             SUM(internal_objects_alloc_page_count) AS sess_alloc,
             SUM(internal_objects_dealloc_page_count) AS sess_dealloc
      FROM sys.dm_db_task_space_usage
      GROUP BY session_id) AS tsk
    ON ssu.session_id = tsk.session_id
WHERE ssu.session_id > 50      -- exclude system sessions
  AND ssu.database_id = 2      -- tempdb
ORDER BY allocated DESC

How to check for SQL ghost records

Use the sys.dm_db_index_physical_stats DMV and check the ghost_record_count column.
A ghost record is a row at the leaf level of an index that has been marked for deletion but has not yet been physically removed by the database engine.

ghost_record_count is the number of ghost records awaiting removal by the ghost cleanup task in the allocation unit.

version_ghost_record_count is the number of ghost records retained by an outstanding snapshot isolation transaction in the allocation unit.

-- Replace 'db_name' and 'table_name' with the database and table you want to inspect.
SELECT db_name(database_id),
       object_name(object_id),
       ghost_record_count,
       version_ghost_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(N'db_name'), OBJECT_ID(N'table_name'), NULL, NULL, 'DETAILED');

Calculate the Size of Index

-- Size of each index in KB (used pages x 8 KB per page).
-- Grouping by table and index name avoids merging same-named indexes on different tables.
SELECT OBJECT_NAME(ix.[object_id]) AS TableName,
       ix.[name] AS IndexName,
       SUM(ps.[used_page_count]) * 8 AS IndexSize_KB
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS ix
    ON ps.[object_id] = ix.[object_id]
   AND ps.[index_id] = ix.[index_id]
GROUP BY OBJECT_NAME(ix.[object_id]), ix.[name]

Friday, September 20, 2013

SQLOS's memory manager and SQL Server's Buffer Pool

SQLOS's memory manager consists of several components such as memory nodes, memory clerks, memory caches, and memory objects. Fig. 1 depicts the memory manager components and their relationships:

            -----------------
           |  Memory Object  |
            -----------------
                    |
                    v
            -----------------
           |  Page Allocator |
            -----------------
              /           \
             v             v
   ----------------    ----------
  |  Memory Clerk  |  |  Caches  |
   ----------------    ----------
              \           /
               v         v
            -----------------
           |   Memory Node   |
            -----------------

Fig. 1

 Memory Node

A memory node is not exposed to memory manager clients; it is an internal SQLOS object. The major goal of a memory node is to provide locality of allocation. It consists of several memory allocators. There are three major types of allocators. The first type is a set of page allocators. The second type is a virtual allocator, which leverages the Windows VirtualAlloc APIs. The third type is a shared memory allocator, which is fully based on Windows' file mapping APIs.

The page allocators are the most commonly used allocators in the SQLOS memory manager. They are called page allocators because they allocate memory in multiples of a SQLOS page. The size of a page is 8KB, the same as the size of a database page in SQL Server. As you will learn further on, this is not a coincidence.

There are four different types of page allocators: the single page allocator, the multi page allocator, the large page allocator, and the reserved page allocator. The single page allocator can only provide one page at a time. The multi page allocator, as you might guess, provides a set of pages at a time. The large page allocator is used to allocate large pages. SQLOS and SQL Server use large pages to minimize TLB misses when accessing hot data structures. Currently, large pages are supported only on IA64 or x64 hardware with at least 8GB of RAM; the size of a large page on IA64 is 16MB. The last type, the reserved page allocator, is used to allocate a special set of pages reserved for emergencies, i.e. when SQLOS is low on memory. Fig. 2 shows the memory node and its allocators.

 ------------------------        ---------------        ---------------------------
| Large Page Allocator   | <--- |  Memory Node  | ---> | Reserved Page Allocator   |
 ------------------------        ---------------        ---------------------------
                                 /      |      \
                                v       v       v
          -------------------   -----------------------   ----------------------
         | VM & SM Allocator | | Single Page Allocator | | Multi Page Allocator |
          -------------------   -----------------------   ----------------------

Fig. 2

 
At this point SQL Server doesn't have a DMV (dynamic management view) that would dump the set of all memory nodes and information about their allocators. DBCC MEMORYSTATUS, discussed further on, comes pretty close, but it dumps information about CPU nodes, not about memory nodes. You might remember that CPU nodes are a proper subset of memory nodes, which means that the information presented by DBCC MEMORYSTATUS is sufficient to understand memory distribution on the system.

 Memory Clerks

Memory nodes are hidden from memory manager users. If a client of the memory manager needs to allocate memory, it first creates a memory clerk. There are four types of memory clerks: generic, cache store, user store and object store. The latter three are a bit more involved: along with memory clerk functionality, they provide data caching.

One can think of a memory clerk as a bag of statistics. It supports the same types of allocators as memory nodes, and it enables large memory consumers to hook into the memory brokerage infrastructure (I will describe that infrastructure in one of the next posts). There are several global memory clerks provided by SQLOS. SQLOS's medium and large memory consumers are encouraged to use their own clerk so that memory consumption can be understood per component. The memory clerk infrastructure enables us to track and control the amount of memory consumed by a component. Each CPU node has a list of memory clerks that we can safely walk at runtime. SQL Server implements the sys.dm_os_memory_clerks DMV to dump clerk information. In addition, combined clerk information can be derived from DBCC MEMORYSTATUS.
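As an illustration (not part of the original post), here is a sketch of a query against sys.dm_os_memory_clerks showing which clerks have the most pages allocated. On SQL Server 2012 and later the column is pages_kb; earlier builds expose single_pages_kb and multi_pages_kb instead.

-- Top memory clerks by pages allocated through the page allocators (SQL Server 2012+ column names).
SELECT TOP (10)
       type,
       name,
       SUM(pages_kb) AS pages_kb
FROM sys.dm_os_memory_clerks
GROUP BY type, name
ORDER BY SUM(pages_kb) DESC;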

Memory objects

A SQLOS memory object is a heap. A memory object requires a memory clerk to allocate its memory. We support three types of memory objects. A variable memory object is a regular heap. An incremental memory object is a mark/shrink heap. This allocation policy is very handy during compilation and execution, both of which usually happen in two phases: first grow memory usage, then shrink it. If the process is isolated, we don't have to call any destructors when freeing memory, which significantly improves performance. The last type of memory object is fixed size; as you can guess, components use this policy when they need to allocate objects of a given size.

The payload for a given memory object is 8KB, exactly the same as the SQLOS page size. It also means that a memory object can be created from a memory clerk leveraging the single page allocator. (This is yet another very important point! Keep it in mind until I cover SQL Server's Buffer Pool.) SQL Server exposes a DMV to dump all memory objects in its process: sys.dm_os_memory_objects.

You may notice that both the memory clerk and memory object DMVs expose a page allocator column; I also depicted the page allocator in Fig. 1. Under the hood, a memory object uses its memory clerk's page allocator interface to allocate pages. This is useful to know when you want to join the memory clerk and memory object DMVs.
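For example, assuming SQL Server 2008 or later (where sys.dm_os_memory_objects exposes pages_in_bytes), a rough sketch of such a join on the page_allocator_address column might look like this:

-- Relate memory objects back to the clerk whose page allocator they use.
SELECT mc.type  AS clerk_type,
       mc.name  AS clerk_name,
       mo.type  AS memory_object_type,
       mo.pages_in_bytes
FROM sys.dm_os_memory_objects AS mo
JOIN sys.dm_os_memory_clerks  AS mc
    ON mo.page_allocator_address = mc.page_allocator_address
ORDER BY mo.pages_in_bytes DESC;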

So far I have described how SQLOS's memory manager is structured inside. Now it is time to start talking about how all of this fits into SQL Server.

 
Buffer Pool

Now we get to the point where life becomes very interesting. In this part, all the pieces that I have covered so far, including memory management, should start to fall into place.

Remember that SQL Server has two memory settings that you can control using sp_configure: max server memory and min server memory. These two settings really control the size of the buffer pool. They do not control the overall amount of physical memory consumed by SQL Server. In reality we can't control the amount of memory consumed by SQL Server, because there could be external components loaded into the server's process.
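For reference, a minimal example of setting those two options with sp_configure (the 2048/8192 values are placeholders, not recommendations):

-- Example only: keep at least 2 GB committed and cap the buffer pool at 8 GB.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 2048;
EXEC sp_configure 'max server memory (MB)', 8192;
RECONFIGURE;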

When SQL Server starts, during initialization, the Buffer Pool first decides how much VAS it needs to reserve for its usage. It bases its decision on the amount of physical memory (RAM) present on the box. If the amount of physical memory is equal to or larger than the amount of VAS it can use (remember that VAS is a limited resource, especially on x86), it will leave 256MB of VAS for external components, plus the number of threads SQL Server is configured to use multiplied by 512KB. You might remember that 512KB is SQL Server's thread stack size. In the default configuration, with physical memory larger than 2GB, the Buffer Pool will leave 256MB + 256*512KB = 384MB of VAS. Some people call this region MemToLeave, but that name is really incorrect: SQL Server might end up using this part of VAS itself, and I will show you how that can happen later on. You might also remember the -g startup parameter that some people recommend using when SQL Server starts outputting "Can't Reserve Virtual Address Space" errors. The first 256MB is exactly what the -g parameter controls. If you specify -g 512, the amount of VAS that the BP won't use is 512MB + 256*512KB = 640MB. There is no point in specifying -g 256, since that is the same as the default value.


Once the BP decides on the amount of VAS it will use, it reserves all of it right away. To observe this behavior, you can monitor SQL Server's virtual bytes from perfmon, or you can use the vasummary view I talked about in my previous posts. In the normal case the Buffer Pool can't get this much memory in one chunk, so if you take a closer look at SQL Server's VAS you will see several large regions reserved. This behavior is very different from many other servers you might have seen, and some people report it as a VAS leak in SQL Server. In reality this behavior is by design.

The Buffer Pool commits pages on demand. Depending on internal memory requirements and external memory state, it calculates its target: the amount of memory it thinks it should commit before it can get into memory pressure. To keep the system out of paging, the target is constantly recalculated. The target memory can't exceed max memory, which represents the max server memory setting. Even if you set min server memory equal to max server memory, the Buffer Pool will only commit its memory on demand. You can observe this behavior by monitoring the corresponding profiler event.

 
The size of a SQL Server database page is 8KB. The Buffer Pool is a cache of data pages, so it operates on pages of 8KB in size. It commits and decommits memory blocks at 8KB granularity only. If external components decide to borrow memory out of the Buffer Pool, they can only get blocks of 8KB in size. These blocks are not contiguous in memory. Interesting, right? It means that the Buffer Pool can be used as an underlying memory manager for SQL Server components as long as they allocate buffers of 8KB. (Sometimes pages allocated from the BP are referred to as stolen.)
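As a side note (not from the original post), you can get a rough picture of how those 8KB buffers are currently distributed across databases with sys.dm_os_buffer_descriptors:

-- Buffer pool pages currently cached, per database.
SELECT DB_NAME(database_id)  AS database_name,
       COUNT(*)              AS buffered_pages,
       COUNT(*) * 8 / 1024   AS buffered_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffered_pages DESC;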

 
Here is where SQLOS and the Buffer Pool meet. See Fig. 3.

 
-----------------
| Memory Node |
-----------------
|
|
V
------------------------
| Single Page Allocator |
------------------------
|
|
V
-----------------
| Buffer Pool |
-----------------

Fig. 3

 
The SQLOS memory manager can be dynamically configured to use a specific single page allocator. This is exactly what SQL Server does during startup: it configures the Buffer Pool to be SQLOS's single page allocator. From that point on, all dynamic single page allocations are provided by the Buffer Pool. For example, remember that a memory object's payload is 8KB: when a component creates a memory object, the allocation is served by SQLOS's single page allocator, which is the BP.

When describing the memory manager I mentioned that every large component has its own memory clerk. That means the Buffer Pool has its own memory clerk as well. How is this possible, if the BP leverages a SQLOS memory clerk while SQLOS's memory manager relies on the BP? This is the common chicken-and-egg problem that you often observe in operating systems. The key here is that the Buffer Pool never uses any type of page allocator from SQLOS. It only leverages SQLOS's Virtual and AWE interfaces.

-----------------
| Buffer Pool |
-----------------
|
|
V
--------------------------
| Memory Clerk (VM/AWE) |
--------------------------
|
|
V
-----------------
| Memory Node |
-----------------

Fig. 4


All SQL Server components are optimized for 8KB allocations so that they can allocate memory through SQLOS's single page allocator, and consequently through the Buffer Pool. However, there are cases when a component requires large buffers. When that happens, the allocation is satisfied either by the memory node's multi page allocator or by the virtual allocator. As you might guess, that memory is allocated outside of the Buffer Pool. This is exactly why I don't like the term MemToLeave: SQL Server does allocate memory out of that area!

 
Buffer Pool and AWE mechanism

When describing the SQLOS memory manager and the Buffer Pool, the discussion would be incomplete without a description of how AWE fits into all of this. It is really important to understand how the Buffer Pool allocates its memory when SQL Server is configured to use the AWE mechanism. First, please remember, the BP leverages SQLOS's memory clerk interfaces to allocate both VAS and physical pages through AWE. Second, there are several differences that you need to keep in mind. First, the BP reserves VAS in 4MB chunks instead of one "single" large region. This enables SQL Server to release VAS when the process is under VAS pressure. (We didn't have all the bits and pieces to do this when the server is not configured to use the AWE mechanism.) Then it allocates all of its memory using the AWE mechanism on demand. This is a very big difference between SQL 2000 and Yukon: in SQL Server 2000 the BP would allocate all of its memory right away when using the AWE mechanism.

 
The Buffer Pool is the preferred memory allocator for the whole server. In AWE mode it allocates its memory leveraging the AWE mechanism, which means that all allocations served through SQLOS's single page allocator come from pages allocated through AWE. This is what many people really miss. Let me make the point again: when the server is configured for AWE mode, most of its allocations are made through the AWE mechanism. This is exactly the reason why you won't see private bytes and memory usage growing for SQL Server in this mode.

Since data pages use relative addressing, i.e. they are self contained, the Buffer Pool can map and unmap them into and out of the process's VAS. Other components could have done the same if they were not relying on the actual allocation address. Unfortunately, there are no components other than the BP right now that can take advantage of the AWE mechanism.
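For completeness, here is a sketch of how a 32-bit SQL Server 2005/2008 instance is put into AWE mode. 'awe enabled' is an advanced sp_configure option and was removed in SQL Server 2012; the service account also needs the 'Lock Pages in Memory' privilege, and the instance must be restarted for the change to take effect.

-- 32-bit SQL Server 2005/2008 only; option removed in SQL Server 2012.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;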
 

Optimising System Memory for SQL Server

Virtual Memory and Memory Management
On the x86 family of processors, all processes are provided with 4GB of virtual memory. By default, the first 2GB is allocated to the operating system kernel and the latter 2GB to the user process. This virtual memory isn't real 'physical' memory: as processes make memory allocations, physical storage is provided, mixed between physical memory and the system paging file*. Windows transparently handles copying data to and from the paging file, so an application can allocate more memory than physically exists in the machine and multiple applications can have equal access to the machine's physical RAM.
* (this is how multiple applications can run on a system with 512Mb of RAM, each with a virtual address space of 4GB – it’s not real memory, but it seems like it to the application)

Tuning Virtual Memory
Windows NT 4.0 introduced the concept of the /3Gb switch (added to the [operating systems] section of the boot.ini file), which allows system administrators to modify the allocation of virtual memory between the OS and user processes. By adding the /3GB switch (and restarting), Windows will allocate just 1Gb to the kernel mode address space, allowing a process’s user mode address space to increase to 3Gb.
In addition to the /3GB switch, Windows XP and Windows Server 2003 & 2008 include the /USERVA switch. This allows finer granularity over the amount of virtual memory provided to the kernel mode and user mode address spaces. To use the /USERVA switch, simply indicate how much memory should be provided for the user mode address space. For example, /USERVA=2560 configures 2.5GB for user mode space and leaves the remaining 1.5GB for the kernel. When using the /USERVA switch, the /3GB switch must also be present.
To add either the /3GB or /USERVA switches, go to System Properties -> Startup and Recovery and click Edit under System Startup. Once reconfigured, your [operating systems] section should look something like this:
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /noexecute=optout /fastdetect /3GB

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /noexecute=optout /fastdetect /3GB /USERVA=2560
More information on the /3GB and /USERVA switches can be found in the Microsoft KB articles 316739 and 810371.
Utilising all Physical Memory: Physical Address Extension (PAE)
PAE support was added by Intel starting with the Pentium Pro family* and provides a memory-mapping model which allows Windows to access up to 64Gb of physical memory, rather than the standard 4Gb. In PAE mode, the memory management unit implements page directory entries (PDEs) and page table entries (PTEs) that are 64 bits wide (rather than the standard 32 bits), allowing 36-bit physical addresses, and adds a page directory pointer table to manage these high-capacity tables and the indexes into them, allowing the operating system to recognise up to 64Gb.
In practice, this means that although Windows processes are still given a 4Gb allocation of virtual memory (virtual memory is still allocated using 32-bit pointers, limiting their maximum size to 4Gb), multiple processes can immediately benefit from the increased RAM as they are less likely to encounter physical memory restrictions and begin paging.
A specific version of the Windows kernel is required to use PAE: either Ntkrnlpa.exe for uniprocessor systems or Ntkrpamp.exe for multiprocessor systems; both are located in the \i386\Driver.cab file. No additional work needs to be undertaken by the system administrator, apart from adding the /PAE switch in a similar fashion to the /3GB or /USERVA switches. If, however, you are running hardware that supports hot-adding memory, the /PAE switch will be added by default (hot-add memory is only supported by Windows Server 2003 Enterprise and Datacenter editions). Note: 64-bit versions of Windows do not support PAE. The /PAE switch can be added with or without the /3GB switch, as detailed later.
To manually add the PAE switch add the following to your boot.ini file:
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /noexecute=optout /fastdetect /PAE
The following table details the maximum physical memory that a Windows version can recognise, with the PAE switch enabled:
  • Windows 2000 Server – 4Gb Maximum
  • Windows 2000 Advanced Server – 8Gb RAM Maximum
  • Windows 2000 Datacenter Server – 32Gb RAM Maximum
  • Windows Server 2003 Web Edition – 2Gb RAM Maximum
  • Windows Server 2003 Standard Edition – 4Gb RAM Maximum
  • Windows Server 2003 Enterprise Edition – 32Gb RAM Maximum
  • Windows Server 2003 Datacenter Edition – 64Gb RAM Maximum
  • Windows Server 2008 Web Edition – 4Gb RAM Maximum
  • Windows Server 2008 Standard Edition – 4Gb RAM Maximum
  • Windows Server 2008 Enterprise Edition – 64Gb RAM Maximum
  • Windows Server 2008 Datacenter Edition – 64Gb RAM Maximum
  • Windows Server 2008 Datacenter Edition (Server Core) – 64Gb RAM Maximum
Unless you have a system with more than 4Gb of physical memory, there is little point in enabling PAE; however, PAE can be enabled on Windows XP SP2, Windows Server 2003 and later 32-bit versions of Windows, to support hardware enforced Data Execution Prevention (DEP).
I’ve provided only a brief overview of the Physical Address Extensions here; for more background reading please see the following: Microsoft KB articles 283037 and 268363, Windows Hardware Developer Central article Physical Address Extension – PAE Memory and Windows.
* the PAE extension is supported on AMD chipsets, although I can’t find any hard evidence on the AMD website.

Address Windowing Extensions (AWE)
Unlike the /PAE switch, the AWE facility in Windows exists to allow applications, such as SQL Server, to access more than 4GB of physical memory. AWE removes the 4GB physical memory limit of the 32-bit software architecture by enabling code to allocate large chunks of physical memory and then map access to that physical memory into a window of virtual memory that is 32-bit addressable. Because AWE allows the OS to allocate memory above the 4GB boundary, there is little point enabling it on a system with 4GB or less of physical RAM.
One thing to note about AWE memory is that it is never swapped to the system paging file (i.e. disk). If you review the AWE API, you'll see that the methods refer to physical memory allocation: AWE memory is physical memory that is never swapped to or from the system paging file. This explains why, in order to use the 'Use AWE to Allocate Memory' flag in SQL Server, the 'Lock Pages in Memory' Local Security Policy setting is required (see Using Address Windowing Extensions (AWE) with SQL Server): pages can only be locked in memory if this local security policy is set. This also explains why (or how) applications such as SQL Server, Exchange etc., when using AWE to allocate memory, can consume such great amounts of physical RAM.
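On SQL Server 2008 and later you can check whether locked (AWE-style) page allocations are actually in use via sys.dm_os_process_memory; a quick sketch:

-- locked_page_allocations_kb > 0 means the 'Lock Pages in Memory' privilege is in effect.
SELECT physical_memory_in_use_kb,
       locked_page_allocations_kb,
       large_page_allocations_kb
FROM sys.dm_os_process_memory;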

Best Practice Configurations
Based on the information provided above, Microsoft recommend the following physical memory / operating system memory switch combinations:
  • 4Gb Physical RAM – /3GB switch (or /USERVA switch)
  • 4Gb to 16Gb Physical RAM – /3GB and /PAE switches
  • > 16Gb Physical RAM – /PAE switch only
A Final Note about the /3GB Switch

You will notice that the table above recommends that a server with more than 16Gb of physical RAM should not be configured with the /3GB switch. When you apply the /3GB switch, you limit the size of the virtual memory address space available to the kernel to 1Gb (from the usual 2Gb), which is too small for the virtual memory manager to store the memory mapping tables needed to access more than 16Gb of RAM. As a result, the memory manager imposes a physical memory limit of 16Gb on a system with both /3GB and /PAE enabled. Even if a system has 32Gb or more of physical memory, only 16Gb will be recognised if both options are enabled.
However, although 16Gb is a hard upper limit imposed by the kernel, most workloads will actually show decreased throughput with the /3GB switch on systems with 12Gb of memory, and many on systems/workloads with as little as 8Gb of memory. Therefore, ensure you thoroughly test the use of the /3GB switch within your UAT/reference environment before applying it to live systems.
I'm going to close Part I here because there is plenty for the reader to take on board before we start to look at how these considerations affect SQL Server. Part II will look at how these settings apply to SQL Server itself.

SQL SERVER Architecture – Terminology

The major components of SQL Server are:
  1. Relational Engine
  2. Storage Engine
  3. SQL OS

Now we will discuss and understand each one of them.
 
1) Relational Engine:
 
Also called the query processor, the Relational Engine includes the components of SQL Server that determine exactly what your query needs to do and the best way to do it. It manages the execution of queries as it requests data from the storage engine and processes the results returned.
Different Tasks of Relational Engine:
  1. Query Processing
  2. Memory Management
  3. Thread and Task Management
  4. Buffer Management
  5. Distributed Query Processing
2) Storage Engine:
 
The Storage Engine is responsible for the storage and retrieval of data on the storage system (disk, SAN, etc.). To understand it better, let's focus on the concepts.
When we talk about any database in SQL Server, there are two types of files that are created at the disk level: the data file and the log file. The data file physically stores the data in data pages. The log file, which uses write-ahead logging, stores the transactions performed on the database.
Let’s understand data file and log file in more details:
Data File: The data file stores data in the form of data pages (8KB each), and these data pages are logically organized into extents.
Extents: Extents are logical units in the database; each extent is a combination of 8 data pages, i.e. 64KB. Extents can be of two types, mixed and uniform. Mixed extents hold different types of pages like index, system, data etc. (multiple objects). On the other hand, uniform extents are dedicated to only one type (object).
Pages: SQL Server stores several types of pages; some of them are listed below:
  • Data Page: It holds the data entered by the user but not the data which is of type text, ntext, nvarchar(max), varchar(max), varbinary(max), image and xml data.
  • Index: It stores the index entries.
  • Text/Image: It stores LOB ( Large Object data) like text, ntext, varchar(max), nvarchar(max), varbinary(max), image and xml data.
  • GAM & SGAM (Global Allocation Map & Shared Global Allocation Map): They are used for saving information related to the allocation of extents.
  • PFS (Page Free Space): Information related to page allocation and unused space available on pages.
  • IAM (Index Allocation Map): Information pertaining to extents that are used by a table or index per allocation unit.
  • BCM (Bulk Changed Map): Keeps information about the extents changed in a Bulk Operation.
  • DCM (Differential Change Map): Information about extents that have been modified since the last BACKUP DATABASE statement, per allocation unit.
Log File: The log file, written using write-ahead logging, stores modifications made to the database (DML and DDL). A quick way to check each database's recovery model and current log usage is sketched after the list below.
  • Sufficient information is logged to be able to:
    • Roll back transactions if requested
    • Recover the database in case of failure
    • Write Ahead Logging is used to create log entries
      • Transaction logs are written in chronological order in a circular way
      • Truncation policy for logs is based on the recovery model
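A minimal sketch of the check referred to above (recovery model per database plus current log usage):

-- Recovery model of each database, then log size and percent used per database.
SELECT name, recovery_model_desc FROM sys.databases;
DBCC SQLPERF(LOGSPACE);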

SQL OS:
 
This layer lies between the host machine (Windows OS) and SQL Server. All the activities performed in the database engine are taken care of by SQL OS. It is a highly configurable operating system layer with a powerful API (application programming interface), enabling automatic locality and advanced parallelism. SQL OS provides various operating system services, such as memory management (dealing with the buffer pool and log buffer) and deadlock detection using the blocking and locking structures. Other services include exception handling and hosting for external components such as the Common Language Runtime (CLR).
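As a rough illustration (not part of the original text), the SQL OS layer can be observed through the sys.dm_os_* dynamic management views; for example, the sketch below lists the visible schedulers that run user requests:

-- SQLOS schedulers that are visible to user workloads.
SELECT scheduler_id, cpu_id, status,
       current_tasks_count, runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';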

SQL Server 2014: A Closer Look

Microsoft SQL Server 2014 was announced by Quentin Clark during the Microsoft TechEd 2013 keynote. Designed and developed with our cloud-first principles in mind, SQL Server 2014 builds on the momentum of SQL Server 2012, released just 14 months ago. We are excited to share a closer look at some of the exciting capabilities included in SQL Server 2014 that will help you unlock real-time insights with mission critical and cloud performance.
SQL Server 2014 helps organizations by delivering:
  • Mission Critical Performance across all database workloads with In-Memory for online transaction processing (OLTP), data warehousing and business intelligence built-in as well as greater scale and availability
  • Platform for Hybrid Cloud enabling organizations to more easily build, deploy and manage database solutions that span on-premises and cloud
  • Faster Insights from Any Data with a complete BI solution using familiar tools like Excel

Mission Critical Performance with SQL Server 2014
SQL Server 2014 delivers new in-memory capabilities built into the core database for OLTP and data warehousing, which complement existing in-memory data warehousing and business intelligence capabilities for a comprehensive in-memory database solution. In addition to in-memory, there are new capabilities to improve the performance and scalability for your mission critical applications.
In Memory Built-In
  • New In-Memory OLTP – built in to core SQL Server database and uniquely flexible to work with traditional SQL Server tables allowing you to improve performance of your database applications without having to refresh your existing hardware. We are seeing customers such as EdgeNet and bwin achieve significant performance gains to scale and accelerate their business.
  • Enhanced In-Memory ColumnStore for Data Warehousing – now updatable with even faster query speeds and with greater data compression for more real-time analytics support.
  • New buffer pool extension support to non-volatile memory such as solid state drives (SSDs) – Increase performance by extending the SQL Server in-memory buffer pool to SSDs for faster paging (a configuration sketch follows this list).
  • New Enhanced Query Processing – speeds all SQL Server queries regardless of workload.
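As an illustration of the buffer pool extension bullet above, a hedged configuration sketch for SQL Server 2014 (the file path and size are examples only):

-- Extend the buffer pool onto an SSD volume; path and size are placeholders.
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = N'F:\SSDCACHE\ExampleCache.BPE', SIZE = 64 GB);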
Enhanced Availability, Security and Scalability
  • Enhanced AlwaysOn – Built upon the significant capabilities introduced with SQL Server 2012, delivers mission critical availability with up to 8 readable secondaries and no downtime during online indexing operations.
  • Greater scalability of compute, networking and storage with Windows Server 2012 R2 –
- Increased scale – Continue to benefit from scale for up to 640 logical processors and 4TB of memory in a physical environment and up to 64 virtual processors and 1TB of memory per VM.
- Network Virtualization – Abstracts the networking layer so that you can easily migrate SQL Server from one datacenter to another.
- Storage Virtualization with Storage Spaces – Create pools of storage and storage tiers allowing your hot data to access the premium storage and cold data to access standard storage improving resilience, performance and predictability.
  • Enhanced Resource Governance – With Resource Governor, SQL Server today helps you with scalability and predictable performance, and in SQL Server 2014, new capabilities allow you to manage IO, in addition to compute and memory to provide more predictable performance.
  • Enhanced Separation of Duties – Achieve greater compliance with new capabilities for creating role and sub-roles. For example, a database administrator can now manage the data without seeing sensitive data or personally identifiable information.

Platform for Hybrid Cloud
SQL Server 2014 creates a strong platform for hybrid cloud where cloud scale can be leveraged to extend the scalability and availability of on-premises database applications as well as reduce costs.
Simplified Cloud Backup and Disaster Recovery
  • Backup to Azure Storage – reduce costs and achieve greater data protection by backing up your on-premises database to Azure Storage at an instance level. Optimize backup policy with intelligence built into SQL Server that monitors and tracks backup usage patterns to provide optimal cloud backup. Backups can be automatic or manual, and in case of an on-premises failure, a backup can be restored to a Windows Azure Virtual Machine. (A backup-to-URL sketch follows this list.)
  • AlwaysOn integration with Windows Azure Infrastructure Services – Benefit from Microsoft’s global data centers by deploying a Windows Azure Virtual Machine as an AlwaysOn secondary for cost-effective global data protection. Increase performance and scale reporting for your global business units by running reporting off the readable secondaries in Windows Azure. Run backups on the secondaries in Windows Azure to increase data protection and performance.
  • SSMS Wizard for deploying AlwaysOn secondaries in Windows Azure – Easily deploy AlwaysOn secondaries to a Windows Azure Virtual Machine with a point-and-click experience within SQL Server Management Studio (SSMS).
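As an illustration of the backup-to-Azure bullet above, a hedged SQL Server 2014 sketch; the storage account, container, credential, and database names are examples only:

-- Create a credential holding the storage account name and access key (placeholders).
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',
     SECRET   = '<storage account access key>';

-- Back up directly to Windows Azure blob storage.
BACKUP DATABASE AdventureWorks2012
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/AdventureWorks2012.bak'
WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION;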
Easy Migration of On-Premises SQL Servers to Windows Azure Virtual Machines
  • SSMS Migration Wizard for Windows Azure Infrastructure Services – Easily migrate an on-premises SQL Server database to a Windows Azure Virtual machine with a point and click experience in SSMS. The newly deployed database application can be managed through SSMS or System Center 2012 R2.
Faster Insights on Any Data
SQL Server 2014 is at the heart of our modern data platform which delivers a comprehensive BI solution that simplifies access to all data types big and small with additional solutions like HDInsight, Microsoft’s 100% Apache compatible Hadoop distribution and project code name “Data Explorer”, which simplifies access to internal or external data. New data platform capabilities like Polybase included in Microsoft Parallel Data Warehouse allows you to integrate queries across relational and non-relational data using your existing SQL Server skills.
With SQL Server 2014, you can accelerate insights with our new in-memory capabilities with faster performance across workloads. You can continue to refine and manage data using Data Quality Services and Analysis Services in SQL Server and finally analyze the data and unlock insights with powerful BI tools built into Excel and SharePoint.
 

What is the difference between Checkpoint and Lazy Writer


  1. Checkpoint: Used by the SQL engine to keep database recovery time in check.
     Lazy writer: Used by the SQL engine to make sure there is enough free memory in the buffer pool to accommodate new pages.
  2. Checkpoint: Always marks an entry in the transaction log before it executes, whether initiated by the SQL engine or run manually.
     Lazy writer: Does not mark any entry in the transaction log.
  3. Checkpoint: To check for checkpoint occurrences, you can use the query below:
     select * from ::fn_dblog(null, null) where [Operation] like '%CKPT'
     Lazy writer: To check lazy writer activity, use the performance counter SQL Server: Buffer Manager – Lazy writes/sec.
  4. Checkpoint: Only checks whether a page is dirty or not.
     Lazy writer: Removes a page from memory only when all three conditions are met: (1) memory is required by another object and available memory is full, (2) the cost factor of the page is zero, (3) the page is not currently referenced by any connection.
  5. Checkpoint: Affected by two parameters: checkpoint duration (how long the checkpoint can run for) and recovery interval (how often it runs).
     Lazy writer: Affected by memory pressure and the reference counter of the page in memory.
  6. Checkpoint: Checkpoints should not occur too infrequently, as that increases database recovery time.
     Lazy writer: The number of lazy writes per second should always be low; a high value indicates memory pressure.
  7. Checkpoint: Runs at its defined frequency.
     Lazy writer: Does not run when there is no memory pressure.
  8. Checkpoint: Tries to write as many pages as fast as possible.
     Lazy writer: Tries to write as few pages as necessary.
  9. Checkpoint: Does not put the buffer page back on the free list.
     Lazy writer: Scans the buffer cache, reclaims unused pages, and puts them on the free list.
  10. Checkpoint: The last checkpoint entry is recorded in the database boot page.
      Lazy writer: Does not update the boot page.
  11. Checkpoint: Can be executed manually by a user or automatically by the SQL engine.
      Lazy writer: Cannot be controlled by the user.
  12. Checkpoint: Keeps the number of dirty pages in memory to a minimum.
      Lazy writer: Helps to reduce paging.
  13. Checkpoint: Automatic frequency can be controlled using the recovery interval setting in sp_configure.
      Lazy writer: Works only under memory pressure; it uses a clock algorithm for cleaning the buffer cache.
  14. Checkpoint: Executed automatically before any operation that requires a consistent view of the database (ALTER, BACKUP, CHECKDB, snapshot creation, etc.).
      Lazy writer: Kicks pages out of memory when the reference counter of a page reaches zero.
  15. Checkpoint: Command: CHECKPOINT.
      Lazy writer: No command available.
  16. Checkpoint: Comes into the picture to find the minimum LSN whenever the transaction log is truncated.
      Lazy writer: Makes no entry in the transaction log.
  17. Checkpoint: Affected by the database recovery model.
      Lazy writer: Not affected by the recovery model of the database.
  18. Checkpoint: To get checkpoint entries in the error log, enable trace flag 3502: DBCC TRACEON(3502, -1).
      Lazy writer: Not applicable.
  19. Checkpoint: Members of the sysadmin server role and the db_owner and db_backupoperator database roles can execute a checkpoint manually.
      Lazy writer: Not applicable.