Friday 13 April 2012

Jobs and Maintenance Plans

Jobs are good. Especially mundane, repetitive, unnoticed-when-they're-done-well jobs that you don't have time or the will to do yourself. There are certain tasks that need to be done on a regular basis, tasks which you can automate to be executed by SQL Server Agent, consistently and reliably.

A Job can have many steps, running tasks sequentially.
A Maintenance Plan is much the same.

Both Jobs and Maintenance Plans can send you emails on success, failure or completion. You'll probably find you favour one, but use both. Personally, I favour Maintenance Plans when I want to do a bunch of things, and Jobs when I want to do just one.

To create a Maintenance Plan
  • Expand Management, right-click Maintenance Plans and select New Maintenance Plan, giving it a name
  • From the toolbox on the left, drag whatever tasks you want to include on to the blank designer surface on the right
  • Right-click and Edit to customise each Task.
  • You can attach as many instances of the Notify Operator Task as you like
  • You'll notice each task you have selected has an arrow hanging off it. Drag these to other tasks to create a flow
  • If you have 2 tasks connected, you can right-click an arrow and select Success, Failure or Completion
  • Above the designer surface you can define a Schedule, and add Subplans
  • Once completed, it is good practice to right-click the Maintenance Plan and Execute to test the Plan

To create a Job
  • Expand SQL Server Agent, right-click Jobs and select New Job
  • Name and Categorise (optional) your Job
  • Click Steps and add the Steps you want to execute, defining what happens On Success or On Failure
  • Click Schedules and define when and how often your Job should run
  • Alerts can be defined
  • Notifications can be set up, assuming you have set up Database Mail
  • Once completed, it is good practice to right-click the Job and Start Job at Step... to test the job functions as expected
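The steps above can also be scripted with the msdb job procedures. A minimal sketch (the job name, step command and schedule values are illustrative):

```sql
USE msdb;
GO
-- Create the job (name is illustrative)
EXEC dbo.sp_add_job @job_name = N'Nightly Index Maintenance';
-- Add a single T-SQL step
EXEC dbo.sp_add_jobstep
    @job_name = N'Nightly Index Maintenance',
    @step_name = N'Rebuild indexes',
    @subsystem = N'TSQL',
    @command   = N'EXEC AdventureWorks.dbo.IndexMaintenance;';
-- Schedule it to run daily at 01:00
EXEC dbo.sp_add_jobschedule
    @job_name = N'Nightly Index Maintenance',
    @name = N'Nightly',
    @freq_type = 4,            -- daily
    @freq_interval = 1,
    @active_start_time = 010000;
-- Target the local server so the job will actually run
EXEC dbo.sp_add_jobserver @job_name = N'Nightly Index Maintenance';
GO
```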

Alerts and Errors

It's always better to know if something goes wrong when it goes wrong, not when someone complains to you about it. SQL Server can be configured to send alerts and notifications by email, once you've set up Database Mail.

Alerts can be used to notify an operator or execute a job. SQL Server generates events which enter the Windows Application Log. On startup, SQL Server Agent registers itself as a callback service with the Windows Application Event log. When SQL Server Agent receives a logged event, it compares it to the alerts defined. When a match is found, it fires an alert.

It is important to understand:
  • The nature of errors
  • The locations errors can occur when T-SQL code is executed
  • The data returned by errors
  • The severities errors can exhibit
  • How to configure the SQL Server error log

Errors raised by the database have the following attributes:
  • Error number
  • Error message
  • Severity - (0-10=information, 11-16=user can correct, 17-19=software errors, 20-24=system errors, 25=SQL Server terminating)
  • State – value used to determine the location in code at which the error occurred
  • Procedure name – name of stored procedure or trigger in which the error occurred (if applicable)
  • Line number – Which line of a batch, stored procedure, trigger or function the error fired

You can create alerts to respond to individual error numbers or to respond to all errors of a specific severity level. It is good practice to fire alerts when errors have a severity greater than 19.
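For example, a severity-based alert can be created like this (typically one alert per level from 20 through 25; the alert name is illustrative):

```sql
EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 20 - Fatal Error',
    @severity = 20,
    @delay_between_responses = 60;
GO
```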

The list of system supplied error messages can be seen:

SELECT * FROM sys.messages
WHERE language_id=1033
ORDER BY message_id


The SQL Server Error Log
A new SQL Server Error Log is created with each restart. Six log files are kept by default. The sys.messages catalog view shows which system error messages are logged by default (the is_event_logged column). You can control the logging behaviour of individual errors by using the sp_altermessage system stored procedure.
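Alerts only fire on errors that are written to the log, so sp_altermessage can be used to force logging for a specific error (version permitting):

```sql
-- Make sure error 9002 (transaction log full) is always
-- written to the Windows Application Log
EXEC sp_altermessage 9002, 'WITH_LOG', 'true';
GO
```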

Create an Alert
Alerts are created using the GUI in SSMS or by calling sp_add_alert. When defining an alert, you can specify a SQL Server Agent job that should be started when the alert occurs. Here is an Alert example:

EXEC msdb.dbo.sp_add_alert
@name='AdventureWorks Transaction Log Full',
@message_id=9002,
@delay_between_responses=0,
@database_name='AdventureWorks',
@job_id='255060-5606-6540-0bbf6fgb50'; -- (use the GUID of your own job)
GO

Customised error messages can be created using sp_addmessage, viewed in sys.messages, and raised using RAISERROR.
When an alert fires, two things can be configured to happen:
1) A job can start (only one, though this can start other jobs)
2) An Operator can be notified, using the sp_add_notification procedure:

EXEC msdb.dbo.sp_add_notification
@alert_name='AdventureWorks Transaction Log Full',
@operator_name='SQL Admins',
@notification_method=1;
GO

Wednesday 11 April 2012

How to restore system databases

System databases are less likely to be affected by system failures; however, you may need to restore them at some point, so you must back them up. Not all system databases can be recovered in the same way.

master
The master database holds all system level configurations. SQL Server requires the master database before it can even run. If the master database becomes corrupt, you must first create a temporary master database, then replace it with a backup.

To obtain a temporary master database you can:
a) Use the SQL Server setup program to rebuild the system databases (after which you will need to restore them all, as all will be rebuilt)
b) Use a file-level backup of the master database files to restore the master database. This file-level backup must have been taken when the master database was not in use (ie when SQL Server was not running)

If the GUI isn't working, you must restore the master database through sqlcmd using:

RESTORE DATABASE master
FROM DISK = 'D:\AWsys.bak'
WITH REPLACE

This should be done in single-user mode (start SQL Server with the -m command-line flag)
Check Books Online for more info on this subject

model
The model database is the template for all databases created on the SQL Server instance. When this is corrupt, the instance of SQL Server cannot start.

In the case of a corrupt model database, the instance must be started with the –T3608 trace flag as a command-line parameter. This only starts the master database. After this, the model database can be restored using RESTORE DATABASE.
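A sketch of the whole sequence from the command line (the instance name and backup path are illustrative):

```
REM Start the instance with trace flag 3608 (master only)
NET START MSSQLSERVER /T3608

REM Restore model through sqlcmd
sqlcmd -E -Q "RESTORE DATABASE model FROM DISK = 'D:\model.bak' WITH REPLACE;"

REM Restart the instance normally
NET STOP MSSQLSERVER
NET START MSSQLSERVER
```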

msdb
The msdb database is used to schedule jobs and alerts, and to record details of operators. It records history, including backup and restore operations. If msdb becomes corrupt, SQL Server Agent will not start. msdb can be restored like a user database, after which SQL Server Agent can be restarted.

resource
This is a read-only, hidden database. It can however, become corrupt by failures in areas such as I/O subsystems or memory. If so, it can be restored by a file-level restore in Windows or by running the setup program for SQL Server.

tempdb
This is a temporary database. It cannot be backed up, nor does it ever need to be as it is recreated each time SQL Server is started.

How to restore a user database

  1. Right click on the Server, select Tasks and Restore
  2. In General pane, select From device and locate the .bak file to restore from.
  3. Select all but the final element from within the .bak file to restore
  4. Select database name from To database
  5. In Options select RESTORE WITH NORECOVERY
  6. Run through options 1-4 again, selecting the transaction log to restore.
  7. In Options select RESTORE WITH RECOVERY
  8. The database has now been restored and recovered, and is ready for use
In the options above, it is possible to select a restore to a point in time.
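The equivalent T-SQL for a point-in-time restore uses STOPAT on the log restore (the file names and timestamp are illustrative):

```sql
RESTORE DATABASE AdventureWorks
FROM DISK = 'D:\AW_Full.bak'
WITH NORECOVERY, REPLACE;

RESTORE LOG AdventureWorks
FROM DISK = 'D:\AW_Log.trn'
WITH RECOVERY, STOPAT = '2012-04-11 14:30:00';
```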

If a database is restored onto another server, the backup information is not restored with the database, as it is held in the msdb database of the original system.

There may be a time when a damaged page messes up your database. First of all, you need to determine which page is damaged. This can be done by looking in System databases > msdb > Tables > dbo.suspect_pages. Then restore the page like so:

RESTORE DATABASE AdventureWorks
PAGE = '1:160' -- '1'=File_ID, '160'=Page_ID
FROM AWBackup -- this is your backup device
WITH NORECOVERY
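After the page restore, the tail of the log should be backed up and applied so the restored page is brought up to date (file names are illustrative):

```sql
-- Back up the tail of the log...
BACKUP LOG AdventureWorks TO DISK = 'D:\AW_Tail.trn';

-- ...then apply it and recover the database
RESTORE LOG AdventureWorks
FROM DISK = 'D:\AW_Tail.trn'
WITH RECOVERY;
```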

Monday 9 April 2012

SQL Server Architecture

SQL Server is constructed from a series of many small components that work together, which fall into 3 categories
  1. Query Execution Layer
    This has three sub categories:
    • The Parser – checks you’ve followed the T-SQL rules
    • The Algebrizer – which lists what you want to achieve and converts it to a series of logical operations
    • The Query Optimizer – which considers the different ways your query can be executed. It finds an acceptable plan, which is not necessarily the most efficient.
  2. Storage Engine Layer
    This manages how data is stored on disk and in memory. The main sub-categories:
    • The Access Methods component – works with how you access data
    • The Page Cache – stores cached copies of data pages to minimise data retrieval time.
    • The Locking and Transaction Management components – maintain consistency and integrity of data, with the help of the database log file.
  3. SQL OS Layer
    This provides operating system functionality (ie interfaces) to SQL Server components.
    The most important functions provided by this layer are memory management and scheduling.
The most important resources SQL Server utilizes from the server platform are CPU, memory and I/O.

CPU and I/O
When a SQL Server component needs to execute code, it creates a task which represents the unit of work to be done. SQL Server tasks spend most of their time waiting for something external to happen, usually I/O. When a task needs to wait for a resource, it is placed in a waiting list until the resource is available. When it is told the resource is available, it then waits for a share of CPU time, allocated by SQL OS.
SQL Server keeps records of how long tasks spend waiting and the types of resources they wait for. Query the following:
sys.dm_os_waiting_tasks
sys.dm_os_wait_stats
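For example, a quick look at the heaviest waits since the last restart:

```sql
SELECT TOP (10)
    wait_type,
    wait_time_ms,
    waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```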


Memory
The Buffer Pool is the main memory object of the SQL Server, and is divided into 8KB pages – the same size as database data pages. It has three sections:
  1. Free Pages – not yet used
  2. Stolen Pages – pages taken from the pool for other uses, such as sorting and cached query plans
  3. Data Cache – used for caching database data pages. Data modification happens here, then the pages are considered dirty and so the changes are written to the database by a process called CheckPoint.
    The Data Cache uses a least recently used (LRU) algorithm to determine which pages to drop from memory when cache space is needed.
SQL Server memory and Windows memory
The memory manager within SQL OS manages the SQL Server memory. It does this by checking the Windows memory, and calculating a value it thinks appropriate to use. Once this has been exceeded, it answers requests for memory from components by freeing up memory used by other components, which can impact the performance of those components.

Physical and Logical I/O
  1. Logical I/O – when a requested page is available in the Buffer Cache
  2. Physical I/O – when a requested page is not available in the Buffer Cache and must be read from disk into the Buffer Cache
You can minimise the number of Physical I/O operations by:
  • Providing enough memory for the data cache
  • Optimising the physical/logical layout (including indexes)
  • Optimise queries to request as few I/O operations as possible
You can monitor logical and physical I/O operation counts by:
SET STATISTICS IO ON;
You can monitor the overall physical I/O operations by querying the sys.dm_io_virtual_file_stats system function. The values are cumulative from the last system restart.
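A sketch of such a query (values are cumulative per database file since the last restart):

```sql
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    vfs.file_id,
    vfs.num_of_reads,
    vfs.num_of_writes,
    vfs.io_stall_read_ms,
    vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.num_of_reads DESC;
```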

Parallelism
The Query Optimiser decides whether a query should run in parallel, which it only considers for expensive queries where the benefit outweighs the cost of synchronising and monitoring the extra tasks. Basically, parallelism involves spreading a query's tasks across CPUs, rather than running them sequentially.
Further reading : SQL Server threads, Affinity Mask, MAXDOP, Lazy Writer,

Planning Server Resource Requirements

There is no formula for this difficult task. You should consider the following:
  • If there are any case studies, perhaps you have similar systems in your own organisation
  • Consider things at a business level, rather than a server level (ie ‘How long does an order take to save’ rather than ‘How long does an I/O operation take?’)
  • Simulate a database of the same size as the final, operational one
  • Monitor things after the system goes live.
Planning CPU Requirements
  • You should have as many CPUs as possible.
  • Sustained CPU usage above 30% will lead to slow systems; short peaks above this level are only a problem if they are sustained.
  • Any server running production based SQL Server should not have any other major services running
Planning Memory Requirements
SQL Server uses memory while queries are being executed. Each user executing a query needs a separate memory space known as an execution context. A lack of memory can impact a system in such a way that it appears to be a lack of CPU or I/O.
Planning Network Requirements
• Consider the use of multiple network adapters
• Most low-level network protocols (e.g. Ethernet) become saturated quite quickly under heavy load
• Remember things like backups cause high volumes of network I/O, which can slow down connections
• Database mirroring and replication might need dedicated connections to perform correctly
Planning storage and I/O Requirements
In larger organisations, the following will be determined by a team of people, each with their own skillset.
  • Determine requirements of the application
  • Determine the style of storage used (SAN vs DAS).
  • DAS (direct attached storage) generates better, more predictable results. For DAS storage, RAID levels 1,5 and 10 are the most common in use on SQL Server systems.
  • For most systems, the number of drives/spindles will matter more than the size of the disk. Often, a single large disk will not provide sufficient I/O operations per second or enough data throughput (MB/sec) to be workable.
  • Write caches can substantially improve SQL Server I/O performance, but make sure hardware caches guarantee a write, even after a system failure. Many drive write caches cannot survive failures and can lead to database corruptions.
Stress testing (SQLIOSIM)
A common problem in database deployments is that the server platform is delivered, installed and immediately commissioned once SQL Server is installed on it. There are many subtle I/O problems that can occur, and they can be hard to evaluate.
Pre-installation stress-testing can help using tools such as SQLIOSIM


Further reading: NUMA, SAN vs DAS storage, SSD based systems, SQLIOSIM

How to setup Notifications

A Notification, Alert or Error is sent to an Operator, which must be set up first.

Configure SQL Server Agent Operators
  1. Under SQL Server Agent, right-click ‘Operators’ and add a New operator
  2. Right-click ‘SQL Server Agent’, select ‘Properties’, ‘Alert System’ then Enable mail profile and select a Mail Profile.
  3. Now right-click a job from your list, select ‘Properties’ and select ‘Notifications’. Now check ‘Email’, selecting the Operator and when to send the email (on job completion, success or failure).
  4. You will need to restart the SQL Server Agent for the effects to take place.
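Operators can also be created in T-SQL with sp_add_operator (the name and address are illustrative):

```sql
EXEC msdb.dbo.sp_add_operator
    @name = N'SQL Admins',
    @enabled = 1,
    @email_address = N'sqladmins@example.com';
GO
```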
To add a Notification within a Maintenance Plan
  1. Right-click and ‘Modify’ maintenance plan
  2. Drag an instance of Notify Operator Task from the ‘Toolbox’ on the left
  3. Right click and ‘Edit’ the task. You will need to set up a connection
  4. Select an Operator and define your own email content
  5. Now highlight the task you want to be notified about. Drag the arrow that hangs from it onto the Notify Operator Task. To define if the notification should be sent on success, failure or completion, right-click the arrow.

How to setup Database Mail

Expand Management and you'll find Database Mail. While the configurations of Database Mail are stored within msdb, SQL Server Agent caches profile info in memory, so it can still send emails if SQL Server is down.

Configure Database Mail
  1. Right-click ‘Database Mail’ (under ‘Management’) and select Configure Database Mail
  2. Follow instructions and setup a New Profile
  3. Continue, selecting your new profile as the Default
  4. Send test email:

USE msdb;
GO
EXEC sp_send_dbmail @profile_name='Profile_Name',
@recipients='youremail@Example.com',
@subject='Test message',
@body='This is the body of the test message.'

NB. When you want SQL Server Agent to send you Notifications (e.g. when a job fails) you need to add yourself as an Operator and restart the agent

You can track the delivery status of an individual message using the following views:

  • dbo.sysmail_allitems
  • dbo.sysmail_sentitems
  • dbo.sysmail_unsentitems
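For example, to check the delivery status of recent messages:

```sql
SELECT TOP (10)
    mailitem_id,
    recipients,
    subject,
    sent_status,   -- sent / unsent / retrying / failed
    sent_date
FROM msdb.dbo.sysmail_allitems
ORDER BY mailitem_id DESC;
```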

SMTP Relay
Most SMTP servers are configured to deny all email relay, meaning they will not forward emails from another server; servers that allow unrestricted relay are open to abuse by spammers. Database Mail requires the SMTP server to permit relay for the account it sends from.

Transaction Log Shipping

Transaction Log Shipping involves copying the logs from a Primary Server to any number of Secondary servers. A Monitor server can also be involved, and alert you of any problems.

TLS is usually made up of 3 parts:
  1. Backup job - Performed on primary server. Logs history to local server and monitor server and deletes old backup files and history info.
  2. Copy job - Performed on secondary server. Copies backup files to secondary server, logs history on secondary server and monitor server.
  3. Restore job - Performed on secondary server. Restores the backup files to the database, logs history on secondary server and monitor server.
How to setup Transaction Log Shipping
  • Open database properties and select Transaction Log Shipping
  • Select checkbox to ‘Enable this as a primary database in a log shipping configuration’ (if the database is in SIMPLE recovery mode, you’ll get an error)
  • Click Backup Settings…
  • Add network and local path to a shared folder where backups can be made and click Schedule to define how often backups are made. Click Ok.
  • Click Add… to add a secondary server then Connect and provide the login details
  • Check ‘Yes, generate a full backup…’ (unless your database is already initialised on secondary server)
  • Click the Copy Files tab and enter the Destination folder for copied files (on secondary server) and again click the button to define a Schedule
  • Click Restore Transaction Log tab and select Standby mode and Disconnect users in the database when restoring backups, again defining a Schedule.
  • Click OK
  • Click OK again

NB: In order to bring the database on the secondary server back online:

RESTORE DATABASE <db_name> WITH RECOVERY

SQL Server Agent

SQL Server Agent
  • Runs as a Windows service
  • Must be running to execute jobs, fire alerts, contact operators
  • Start mode should be set to Automatic
  • You should automate its restart on the server:
    Start > Admin Tools > Computer Management > Services and Applications > Services > SQL Server Agent (dbl-click) > Recovery > change failure actions to Restart the Service
  • You can also restart it in the SQL Server Agent Properties.
SQL Server Agent Security
Jobs need to be able to access many types of objects in SQL Server, as well as operating system files and folders. The account used to start the service defines its security level. The Local Service and Network Service accounts can be used, though neither is recommended. A Windows domain account should be used, with only the required permissions.

By default only members of the sysadmin fixed server role can administer SQL Server Agent. There are fixed server roles for working specifically with SQL Server Agent:
  • SQLAgentUserRole - Permissions only on local jobs and job schedules they own
  • SQLAgentReaderRole - Can view list of multi-server jobs, properties, histories, schedules.
  • SQLAgentOperatorRole - Can execute, stop, start all local jobs. Can delete job history for local jobs.
When users who are not members of one of these roles connect to SQL Server, the SQL Server Agent node in Object Explorer is not visible.



Friday 6 April 2012

Integrity check

A corrupt database is very rare, though it can happen due to issues with memory or I/O subsystems, so it’s important to look out for it. You can use:
DBCC CHECKDB (AdventureWorks);
GO

There are various parameters you can include in the code above (see Books Online), as well as similar checks you can do, though these are all included within DBCC CHECKDB.

It is good practice to run DBCC CHECKDB on a database prior to backup, and fairly regularly. It does consume a fair amount of memory and I/O. Also, it takes up a lot of space on disk, as it takes a snapshot of the database to check, as well as taking up space in the tempdb.

There are 2 repair options. For both of these, the database needs to be in SINGLE_USER mode:
REPAIR_REBUILD – rebuilds the corrupt pages based on non-corrupt pages. There is no data loss, but this only works with certain forms of corruption
REPAIR_ALLOW_DATA_LOSS – almost always produces data loss. It’d be preferable to restore the database.
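A sketch of running a repair (repairs are a last resort; prefer restoring from backup):

```sql
ALTER DATABASE AdventureWorks SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
DBCC CHECKDB (AdventureWorks, REPAIR_REBUILD);
GO
ALTER DATABASE AdventureWorks SET MULTI_USER;
GO
```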

SQL Profiler

SQL Server Profiler can be used to monitor activity from client applications to SQL Server. When working with SQL Server Profiler, it is helpful to understand SQL Trace. For more information, see SQL Trace. Neither should run without a watchful eye, as they can impact on performance.
  • Open up Profiler
  • Connect to the server
  • In Trace Properties window click Run
  • In Events Selection select the events to Trace. Double-click DatabaseName and select LIKE to define which db to trace
  • Switch back to SSMS and run a query
  • Switch back to Profiler and click Stop Trace
  • In the Results grid, click individual statements to see the details shown in the lower pane

Central Management Servers

Within SQL Server Management Studio you can define Registered Servers under Central Management Server and run a single query across multiple servers. This is very useful for monitoring.
CMS uses Windows Authentication only. Before registering servers, ensure each server you register has the same Windows user (login and password). You must create this user as a login within SSMS.
Permissions within SQL Server
Let's use the username 'ServerMaster'. It needs no special permissions on the Windows Server; basic User group membership is fine. Within SSMS, ServerMaster needs VIEW SERVER STATE and VIEW DATABASE STATE granted. These are very basic rights: if you log in to SSMS using the ServerMaster login, you won't be able to create or destroy anything.
Within SSMS on the server you setup your CMS, Servermaster is a db_owner of msdb, simply to allow the configuration of CMS. There is no need to add this permission on any other server.
a) Create logins:
  1. Create Windows user for ‘ServerMaster'
  2. Login to SSMS and execute (replacing ‘DOMAIN’):
USE [master]
GO

CREATE LOGIN [DOMAIN\ServerMaster]
FROM WINDOWS
GO

GRANT VIEW SERVER STATE TO [DOMAIN\ServerMaster];
GO
GRANT VIEW DATABASE STATE TO [DOMAIN\ServerMaster];
GO
b) Setup a Central Management Server
  1. Open SSMS without logging in to any database. Select View then Registered Servers
  2. Expand Database Engine, right-click Central Management Servers and select Register Central Management Server. Select your base server (eg local). It can be a SQL Server instance created just for CMS.
  3. Right-click your Central Management Server and select New Server Group (eg MyServers). You can have as many groups as you like.
  4. Right-click the Server Group and select New Server Registration. Do this to register each server using Windows Authentication
  5. To query all servers, right-click the Server Group and select New Query.

SQL Trace

Trace flags are valuable DBA tools that allow you to enable or disable a database function temporarily. Once you turn on a trace flag, it remains on until you either manually turn it off or restart SQL Server. Some trace flags, like the deadlock detection flags, should be enabled at startup.

When deadlocks occur, trace flag 1204 and trace flag 1222 return information that is captured in the SQL Server 2005 error log. Trace flag 1204 reports deadlock information formatted by each node involved in the deadlock. Trace flag 1222 formats deadlock information, first by processes and then by resources. It is possible to enable both trace flags to obtain two representations of the same deadlock event.

You must enable the trace as 'global' using '-1', eg:

DBCC TRACEON (1204, -1)
DBCC TRACEOFF (1204, -1)

Without the '-1' the trace is a session trace
Use the DBCC TRACESTATUS command to determine which trace flags are currently active.
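For example:

```sql
-- All trace flags enabled globally
DBCC TRACESTATUS (-1);

-- Status of specific flags
DBCC TRACESTATUS (1204, 1222);
```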

Use the -T command-line startup option of Sqlservr.exe. This is a recommended best practice because it ensures that all statements run with the trace flag enabled. The flags are appended to the startup parameters, eg:

-dC:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\master.mdf;-eC:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\Log\ERRORLOG;-lC:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\mastlog.ldf;-T1204;-T3226

NB SQL Server must be restarted for the changes to take effect

Optimizing SQL Trace in the Books Online lists some SQL Trace performance guidelines:
  • Run Profiler remotely instead of directly on server
  • Avoid including events that occur frequently (e.g. Lock:Acquired) unless absolutely needed
  • Include only event classes needed
  • Specify limiting filters to reduce the number of events
  • Avoid redundant data (e.g. SQL:BatchStarting and SQL:BatchCompleted)
  • Avoid running large traces with Profiler; consider a server-side SQL Trace instead
  • Limit server-side trace file size and manage space usage
  • Run traces for shorter periods of time. You can control the length of time that a trace runs by enabling a stop time. This is especially important if you cannot limit the event classes or filter an event. Enabling a stop time ensures that the performance incurred does not last indefinitely.

The following may be useful:
  • sp_trace_create
  • sp_trace_setevent
  • sp_trace_setfilter
  • sp_trace_setstatus
  • fn_trace_geteventinfo
  • fn_trace_getfilterinfo
  • fn_trace_getinfo
  • fn_trace_gettable

Database structure

Data is stored by SQL Server databases in one of three sets of files:
  1. Primary Data File – starting point of the database which points to other files within the database. Each database has one primary data file, defaulting to the file extension .mdf

  2. Secondary Data Files – Optional, user defined data files that can be used to spread the data through more files for performance and/or maintenance reasons. They can be used to spread data across multiple disks by putting each file on a different disk. Also, if a database exceeds the maximum size for a single Windows file, you can use secondary data files so the database can continue to grow. The recommended extension for secondary data files is .ndf

    Pages live in files, and each file in a database is given a unique file ID number. To identify a page in a database, both the page number and the file ID are required. Each page is 8KB in size. After allowing header information that is needed on each page, there is a region of around 8060 bytes for data. Data pages hold data from a single database object.

    Groups of 8 contiguous pages are referred to as an extent. SQL Server uses extents to simplify page management. Uniform extents have pages containing data from only one object while Mixed extents have pages containing data from different objects.

    The first 8 pages from database objects are always stored on mixed extents. After that, data pages for the objects are stored on uniform extents. In both primary and secondary data files, a small number of pages is allocated to the tracking of extent usage within the file.

  3. Log files – hold information used to recover the database, when necessary. Each database must have at least one log file (extension .ldf)

    All transactions are written to the log file using the write-ahead logging (WAL) mechanism to ensure the integrity of the database, and to support rollbacks of transactions.

    When data pages need to be changed, they are read into memory and changed there. The pages are then considered dirty, and log records describing the changes are written to the transaction log. Later, a background process known as checkpoint writes the dirty pages to the database files. For this reason, the records in the log are critical to the ability of SQL Server to recover the database.

    SQL Server only writes to a single log file at any point in time. Additional log files are only used when space is not available in the active log file.

Creating Indexes

Imagine a catalogue without an index. It'd take ages to find the exact pair of chrome and ivory toenail curlers you were looking for. A table or view without any index is much the same, with all the data just plonked into a heap

Index types
Unique Indexes - A unique index is one in which no two rows are permitted to have the same index key value. A clustered index on a view must be unique.

Clustered Indexes - With few exceptions, every table should have one clustered index (and only one). Besides improving query performance, a clustered index can be rebuilt or reorganized on demand to control table fragmentation. A clustered index can also be created on a view.

Nonclustered Indexes - You can create multiple nonclustered indexes on a table or indexed view. Generally, nonclustered indexes are created to improve the performance of frequently used queries not covered by the clustered index.

How to create an index
Here we create a clustered index on the ID column of the Employees table

CREATE CLUSTERED INDEX Index_Name_Clstd ON Employees(ID);


Here we create a nonclustered index on the Salary column of the Employees table

CREATE NONCLUSTERED INDEX Index_Name_NonClstd ON Employees(Salary);


Here we create a unique index on the StartDate column of the Employees table

CREATE UNIQUE INDEX Index_Name_Unique ON Employees(StartDate);


Best Practice
  • Table should have a primary key
  • Table should have a clustered index (and only one)
  • Table should have an appropriate number of nonclustered indexes
  • Clustered index does not have to be on the Primary Key (but it is by default)
  • Clustered index should be on one column to be used most frequently in queries
  • Consider fill factor when creating indexes
  • Consider the Index Tuning Wizard for recommendations, and the Database Engine Tuning Advisor (DTA) for more recommendations based on usage
  • The following priority order should be followed when creating any index: a) WHERE clause, b) JOIN clause, c) ORDER BY clause, d) SELECT clause
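Fill factor is set when the index is created or rebuilt. A sketch (the table, column and value are illustrative):

```sql
-- Leave 10% free space on each leaf page to reduce page splits
CREATE NONCLUSTERED INDEX IX_Employees_LastName
ON Employees (LastName)
WITH (FILLFACTOR = 90);
```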

User Access to Objects

Permissions for principals to access objects are assigned using GRANT, DENY and REVOKE.

This article discusses permissions on:
  1. Table and View
  2. Stored procedures
  3. Schema

Table and View permissions
Permissions are set for the principal to SELECT, INSERT, UPDATE and DELETE.

Grant – Allows the permissions
GRANT SELECT ON OBJECT::Marketing.Salesperson
TO Peter;
GO
Deny – Denies the permissions
DENY SELECT ON OBJECT::Marketing.Salesperson
(Firstname, Address, Phone) -- (this is optional, and shows how column-level security can be applied)
TO Peter;
GO
Revoke – Removes the last permission set (removes either Grant or Deny)
REVOKE SELECT ON OBJECT::Marketing.Salesperson
TO Peter;
GO

Stored procedures permissions
Permissions are set for the principal to EXECUTE, ALTER, VIEW DEFINITION
Grant – Allows the permissions
GRANT EXECUTE ON Reports.GetProductColours --(here we have not used the OBJECT:: ref. It’s optional)
TO Peter;
GO
Deny – Denies the permissions
DENY ALTER ON OBJECT::Reports.GetProductColours -- (no column list here; column-level security only applies to tables and views)
TO Peter;
GO
Revoke – Removes the last permission set (removes either Grant or Deny)
REVOKE VIEW DEFINITION ON OBJECT::Reports.GetProductColours
TO Peter;
GO

Schema permissions
A schema is really just a container for objects. In the above examples we see a Marketing schema and a Reports schema. They help to organise a database. They are created with the CREATE SCHEMA statement, and objects are formally referred to by a four-part name of the form Server.Database.Schema.Object.
'dbo' is a schema. 'sys' is another.

Each user (apart from Windows group logins) is assigned a default schema, used when a user refers to an object without specifying the schema name. Consider this:
SELECT ProductID, Name, Price FROM Products
There could be a Products table in more than one schema. What will happen here is SQL Server will look first in the user’s default schema for a Products table, after that it will look in the dbo schema.
It would be better to write the two-part name:
SELECT ProductID, Name, Price FROM Production.Products
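The default schema itself can be inspected or changed with T-SQL. A sketch, assuming the Peter user and the Production schema from these examples:

```sql
-- Make unqualified names resolve to Production first for this user
ALTER USER Peter WITH DEFAULT_SCHEMA = Production;

-- Check a user's current default schema
SELECT default_schema_name
FROM sys.database_principals
WHERE name = 'Peter';
```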
GRANT, DENY, REVOKE are all still used for schemas, but as a schema can contain tables, views, stored procedures, functions etc, you can set the permissions mentioned above for all these:
GRANT SELECT ON SCHEMA::Production
TO Peter;
GO
DENY EXECUTE ON SCHEMA::Production
TO Peter;
GO
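Creating a schema and moving an existing object into it looks like this. A sketch, assuming a dbo.Products table already exists:

```sql
-- Create the schema, owned by dbo
CREATE SCHEMA Production AUTHORIZATION dbo;
GO
-- Transfer an existing table from dbo into the new schema
ALTER SCHEMA Production TRANSFER dbo.Products;
GO
```

Note that transferring an object drops any permissions explicitly granted on it, so they need re-granting afterwards.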

Server and Database Roles

Useful Terminology
  • Authentication – Who you are
  • Authorisation – What you can do
  • Principal – Someone requesting access to a securable. A group of these is a role
  • Securable – A resource that can be secured and you can control access to (table, view etc)
  • Role – A group for principals
There are 2 types of roles:
  1. Server Roles
  2. Database Roles
1) Server roles
Here is a list of server roles, a general description of what they’re used for, and a list of the major permissions associated with each:

  • sysadmin – Perform any action. Permission: CONTROL SERVER
  • dbcreator – Create and alter databases. Permission: ALTER ANY DATABASE
  • diskadmin – Manage disk files. Permission: ALTER RESOURCES
  • serveradmin (similar to sysadmin) – Configure server-wide settings. Permissions: ALTER ANY ENDPOINT, ALTER RESOURCES, ALTER SERVER STATE, ALTER SETTINGS, SHUTDOWN, VIEW SERVER STATE
  • securityadmin – Manage and audit server logins. Permission: ALTER ANY LOGIN
  • processadmin – Manage SQL Server processes. Permissions: ALTER ANY CONNECTION, ALTER SERVER STATE
  • bulkadmin – Run the BULK INSERT statement. Permission: ADMINISTER BULK OPERATIONS
  • setupadmin – Configure replication and linked servers. Permission: ALTER ANY LINKED SERVER
  • public – Default role assigned to logins (its permissions can be altered). Permissions: VIEW ANY DATABASE, CONNECT on default endpoints

  • Query sys.server_principals for a list of server-level principals, including logins and server roles
  • Query sys.sysusers for the users and roles in the current database (deprecated; prefer sys.database_principals)
  • Query sys.server_role_members for current server role membership
Roles and permissions are not the same. If you have a role, you have certain permissions. If you have permissions, you do not necessarily have a role.
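To illustrate the distinction, a single permission can be granted directly, with no role membership involved. A sketch (TestLogin is an illustrative login name):

```sql
-- Grant one capability directly: TestLogin can create and alter databases,
-- but is not a member of any server role
GRANT ALTER ANY DATABASE TO TestLogin;

-- The roughly equivalent capability via role membership:
-- EXEC sp_addsrvrolemember 'TestLogin', 'dbcreator';
```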

To assign a role
First, create a login:
CREATE LOGIN TestLogin WITH PASSWORD = 'Pa$$w0rd',
CHECK_POLICY = OFF;
GO

Then add someone to a server role:
EXEC sp_addsrvrolemember TestLogin, serveradmin;
GO
To drop someone from a server role:
EXEC sp_dropsrvrolemember TestLogin, serveradmin;
GO
The following lets you view the current logins and their server roles:
SELECT r.name AS RoleName,
p.name AS PrincipalName
FROM sys.server_role_members AS srm
INNER JOIN sys.server_principals AS r
ON srm.role_principal_id = r.principal_id
INNER JOIN sys.server_principals AS p
ON srm.member_principal_id = p.principal_id;
GO

2) Database roles
Unlike server roles, it is possible to create user-defined roles with much more specific permissions, as well as assigning a user to different roles on different databases. In fact it is better to create user-defined roles, as fixed roles often grant more extensive permissions than the user needs.
Here is a list of fixed database roles:

  • db_owner (dbo) – Perform any configuration and maintenance activity on the DB, including DROP (sa and all members of sysadmin are auto-assigned)
  • db_securityadmin – Modify role membership and manage permissions
  • db_accessadmin – Add/remove access to the DB for logins
  • db_backupoperator – Back up the DB
  • db_ddladmin – Run any DDL command in the DB
  • db_datawriter – Add, delete, or change data in all user tables
  • db_datareader – Read all data from all user tables
  • db_denydatawriter – Cannot add, delete, or change data in user tables
  • db_denydatareader – Cannot read any data in user tables

To assign a login to a database role
First, create a user in the database for the login (the login itself was created at server level, in master):
CREATE USER TestLogin
FOR LOGIN TestLogin;

Then assign them a role

EXEC sp_addrolemember db_backupoperator, TestLogin;
GO
To drop a login from a database role:
EXEC sp_droprolemember db_backupoperator, TestLogin;
GO
The following lets you view the current logins and their database roles:
SELECT r.name AS RoleName,
p.name AS PrincipalName
FROM sys.database_role_members AS drm
INNER JOIN sys.database_principals AS r
ON drm.role_principal_id = r.principal_id
INNER JOIN sys.database_principals AS p
ON drm.member_principal_id = p.principal_id;
GO



The msdb database

msdb file size
The msdb database holds data about jobs, maintenance, alerts, history and backups. It needs to be backed up, and can be restored just like any user database. Its size can become very large if left unattended.

The following query shows the top 10 msdb table sizes:

USE msdb;
GO
SELECT TOP 10
    SERVERNAME = @@SERVERNAME,
    DB_NAME = DB_NAME(),
    TABLE_NAME = OBJECT_NAME(I.ID),
    INDEX_NAME = I.NAME,
    INDID,
    USED,
    ROWS,
    SIZE_N_MB = ROUND((USED * 8.0 / 1024.0), 2),
    ROWMODCTR,
    STATISTICDT = STATS_DATE(I.ID, INDID)
FROM SYSINDEXES I
INNER JOIN SYSOBJECTS O ON I.ID = O.ID
WHERE INDID IN (0, 1)
AND XTYPE = 'U'
ORDER BY USED DESC;

To clean up the msdb tables prefixed with DTA_, simply open the Database Engine Tuning Advisor and delete the old sessions.

msdb backup/restore history tables
The increase in size is often due to the backup/restore history getting out of hand. Run the following to see how far back the backup/restore history goes:

SELECT TOP 1 backup_start_date
FROM msdb.dbo.backupset WITH (NOLOCK)
ORDER BY backup_set_id ASC
GO

SELECT COUNT(*) AS 'TotalRecords',
MIN(restore_date) AS 'MinDate',
MAX(restore_date) AS 'MaxDate'
FROM msdb.dbo.restorehistory
GO

Before trying to clear anything, you should create some additional indexes, which drastically reduce the time the history-clearing procedures take to execute:

use msdb;
go

create nonclustered index IX_backupset_media_set_id on dbo.backupset (media_set_id);
create nonclustered index IX_restorefile_restore_history_id on dbo.restorefile (restore_history_id);
create nonclustered index IX_restorefilegroup_restore_history_id on dbo.restorefilegroup (restore_history_id);

The following will let you see stats for the clean-up operation and delete entries older than 90 days:

SET STATISTICS IO ON;
GO
SET STATISTICS TIME ON;
GO

USE msdb;
GO
DECLARE @backup_date DATETIME;
SET @backup_date = DATEADD(dd, -90, GETDATE());
EXEC sp_delete_backuphistory @backup_date;

The log file for msdb may grow hugely after deleting unwanted data. The following will show the space available:

SELECT name ,size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS AvailableSpaceInMB
FROM sys.database_files;

DBCC SQLPERF(LOGSPACE);
GO

And the following will try to reduce the log size to 80MB, if space allows:

DBCC SHRINKFILE (msdbLog, 80);
GO

More useful tips
A job can be added to a maintenance plan in order to clean up history prior to backup. The physical backup files are preserved, even if all the history is deleted.
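Such a cleanup step can be plain T-SQL. A sketch that trims history older than 90 days (the cutoff is an assumption; pick whatever suits your retention policy):

```sql
USE msdb;
GO
DECLARE @cutoff DATETIME;
SET @cutoff = DATEADD(dd, -90, GETDATE());

-- Backup and restore history
EXEC sp_delete_backuphistory @cutoff;
-- SQL Server Agent job history
EXEC sp_purge_jobhistory @oldest_date = @cutoff;
-- Maintenance plan log entries (NULL, NULL = all plans, all subplans)
EXEC sp_maintplan_delete_log NULL, NULL, @cutoff;
```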

Thursday 5 April 2012

Activity Monitor

Activity Monitor is a tool inside SSMS showing information about processes, waits, I/O resource performance and recent expensive queries. To start Activity Monitor right-click the server name and click Activity Monitor.

Activity Monitor can be used to kill processes that are not completing:
  1. View processes
  2. Filter the Task State to SUSPENDED
  3. Note the value in the Blocked By column - the SPID
  4. Filter the Task State to ALL
  5. Filter the Session ID to the value of the SPID
  6. Note the value in the Head Blocker column. A 1 indicates this is the cause of blockage
  7. Right-click the process and view the details. You’ll see the code causing the blockage
  8. Right-click the session and Kill process
  9. If the Task State column has no option to filter by SUSPENDED then all is well

N.B. There may be times when the Head Blocker passes from one process to the next (for instance if the problematic query was executed multiple times)
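If you prefer T-SQL to the GUI, the same blocking information is available from the dynamic management views. A sketch (the KILL line is commented out; substitute whatever session ID you actually find):

```sql
-- List sessions that are currently blocked, and who is blocking them
SELECT session_id, blocking_session_id, wait_type, wait_time, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;

-- Then terminate the head blocker, e.g. session 53
-- KILL 53;
```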

Further reading:  Activity Monitor explained

How to create logins

Windows login
A login can be created by right-clicking on Security and selecting New Login or using:

CREATE LOGIN [AdventureWorks\Salespeople]
FROM WINDOWS
WITH DEFAULT_DATABASE=[tempdb];  -- optional, master is the default
GO

You can remove a login using DROP LOGIN, unless they’re logged in. If they are logged in, kill their session by finding their session ID in the SSMS Activity Monitor.
SQL Server login
SQL Server logins are created for individual identities, in a similar way to Windows logins:

CREATE LOGIN SalesManager
WITH PASSWORD = 'Pa$$w0rd',
CHECK_POLICY = OFF; -- when ON (the default), Windows password policies are enforced
GO

Password changing and Login expiry
Passwords can be reset using the GUI interface or via the ALTER LOGIN statement.
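A sketch of a password reset with ALTER LOGIN, using the SalesManager login from above (the new password is obviously illustrative):

```sql
-- Simple reset
ALTER LOGIN SalesManager WITH PASSWORD = 'N3w_Pa$$w0rd';

-- Reset and force a change at next login
-- (MUST_CHANGE requires CHECK_POLICY and CHECK_EXPIRATION to be ON)
ALTER LOGIN SalesManager WITH PASSWORD = 'N3w_Pa$$w0rd' MUST_CHANGE,
CHECK_POLICY = ON, CHECK_EXPIRATION = ON;
```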
Database Users
A database user is a principal within a database, mapped to a server login.

CREATE USER Salespeople -- this doesn’t have to be the same as the server login
FOR LOGIN [AdventureWorks\Salespeople]; -- windows login
GO

or

CREATE USER SalesManager -- this doesn’t have to be the same as the server login
FOR LOGIN SalesManager;
GO

A database user cannot be disabled like a server login can. They can be altered (and retain the same SID) though, like so:

ALTER USER Peter WITH NAME = James;
GO

Backing up databases

How to perform a full backup of a database

BACKUP DATABASE AdventureWorks
TO DISK = 'L:\SQLBackups\AW.bak'
WITH INIT;

INIT tells SQL Server to create the file if it doesn’t exist, and to overwrite the existing backup sets if it does.
NOINIT tells SQL Server to create the file if it doesn’t exist, and to append to the existing file if it does.
NOINIT is the default.

or
  1. Right-click database, select Tasks and Backup
  2. In General pane ensure Recovery model is set to FULL and check Destination
  3. Ensure Backup type is set to Full
  4. In Options ensure Continue on error is checked
  5. Click OK
How to perform a differential backup of a database

BACKUP DATABASE AdventureWorks
TO DISK = 'L:\SQLBackups\AW.bak'
WITH DIFFERENTIAL, INIT;

or

  1. Right-click database, select Tasks and Backup
  2. In General pane ensure Recovery model is set to FULL and check Destination
  3. Ensure Backup type is set to Differential
  4. In Options ensure Continue on error is checked
  5. Click OK
How to perform a transaction log backup

BACKUP LOG AdventureWorks
TO DISK = 'L:\SQLBackups\AW.bak'
WITH NOINIT;

or

  1. Right-click database, select Tasks and Backup
  2. In General pane ensure Recovery model is set to FULL and check Destination
  3. Ensure Backup type is set to Transaction Log
  4. In Options ensure Continue on error is checked
  5. In Options under Transaction log select either Truncate the transaction log or Back up the tail of the log, and leave the database in the restoring state
  6. Click OK
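The tail-of-the-log option in step 5 corresponds roughly to this T-SQL (the file path is illustrative):

```sql
-- Back up the tail of the log even if the database is damaged,
-- and leave the database in the RESTORING state
BACKUP LOG AdventureWorks
TO DISK = 'L:\SQLBackups\AW_tail.bak'
WITH NO_TRUNCATE, NORECOVERY;
```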


A quick overview of database backups

Backups are one of the most important things to get right.  You need backups in case your server blows up or someone drops a database while drunk at a Christmas party, but first you need a backup strategy or schedule. To formulate this, consider the following:
  • Database size - how long will backups take
  • Office hours  - what times the database is in use
  • Average rate of change to database (per hour) during office hours
  • Recovery Time Objectives (RTO) – e.g. the database must not be out of operation for more than 2 hours
  • Recovery Point Objectives (RPO) – e.g. when the database is recovered, no more than 3 minutes of transactions may be lost
You can create backups on the fly, but you'll want to automate their creation using Jobs.

There are 3 types of backup to consider. All 3 should be done in a combination best suited to the business. You can back them all up to a backup device. Click the links below to see instructions about backing up to the backup device 'AW.bak' :
  1. Full backup – the data files and the active part of the transaction log
  2. Differential backup – the data changed since your last full backup
  3. Transaction log backup – records all transactions since the last log backup
It is worth remembering that a backup represents the state the database was in when the backup finished, not when it began, which is good.

System databases must be backed up also, and you should seriously consider offsite backups.



Further reading: Jobs, System database backups, Tail log backups, Media sets, File/filegroup backups, Compression

New DBA checklist

If you're a new DBA, you're probably a little overwhelmed by all the stuff you need to learn. The following is a fairly comprehensive list of topics to research. This list assumes you are coming into the job with a bunch of production servers already set up, and is vaguely ordered by importance. I will link to subsequent posts containing guidance when I write them.

When I began as a DBA my company had just a few servers with about 15 databases. Some points mentioned below had been considered, others hadn't.

Tuesday 3 April 2012

Which version of SQL Server do I have?

From within SQL Server Management Studio, simply click the New Query button and type:

SELECT @@VERSION


Hit return and there you have it.
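For more detail than the single string @@VERSION returns, SERVERPROPERTY can be queried:

```sql
SELECT SERVERPROPERTY('ProductVersion') AS Version,
       SERVERPROPERTY('ProductLevel')  AS ServicePack,
       SERVERPROPERTY('Edition')       AS Edition;
```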

In the beginning

Interesting. I seem to have started a blog. Ok. I didn't think I'd ever do that. I think I'll make it about my job. Maybe I can use it to keep track of things I'm doing as I get to grips with my newly appointed position as a DBA. Maybe I'll treat it as a diary, which I can look back on in years to come and blush thinking 'did I really think that and write that!?' Maybe someone else will stumble upon it and read snippets. I hope they realise this is not an actual diary, so they shouldn't feel bad about reading it. I suppose these things are all possible.

I wonder if you can tell I'm not exactly sure how to start. Maybe I'll go and make some coffee. Good advice for any DBA.

Now that the coffee is brewing, I should explain a little bit about myself. I became a DBA about 6 months ago. There was a job in my office, so I applied. I was warned by my boss that DBAs live underground, rarely see the light and don't know how to interact with real people. I replied that this is how I was anyway.

After completing the Microsoft course Maintaining a 2008 R2 SQL Server I began experimenting on a test server in my office, and was then slowly weaned on to the production servers. I've decided to start this blog to keep track of everything I'm learning. I now have notes all over the place about all manner of topics, and wanted a place to store finalised notes. I wanted to draw a line under a thought I'd had. I wanted to be able to say 'Amen to that topic, what's next?'

I'll do my best to keep this in mind while posting.