MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432 © The Norns Laboratories, 2009
Introductory class 30 minutes
Agenda Introduction of the instructor Introduction of the participants Review of course schedule and exam requirements Q & A
About the instructor Vitaliy Fursov, MSc, PMP, CSPO. An experienced software developer, architect, and project manager with a record of major accomplishments directing the delivery of software development projects of various sizes and complexity. Extensive experience in designing, implementing, and supporting data management solutions for major players in the financial, transportation, retail, telecom, and government industries. Recognized and recruited to guide development projects by companies such as IBM Global Services, designing Loan Origination Systems and Online Banking Systems for the largest US banks in 2002-2005. Designed a highly complex retail management solution currently used by major US universities and colleges to operate campus retail units such as bookstores, computer stores, campus transportation systems, and food courts. Recently consulted for a US government agency on a video portal designed to serve up to 10 million users. Extensive experience leading project teams, introducing PMOs to R&D organizations, and reducing development costs. Long-term practitioner of Agile methods, Certified Scrum Product Owner, and PMP designation holder. Experienced mentor, coach, and trainer in several technical disciplines, and a professional public speaker. Volunteer at a number of international organizations, including PMI, Agile Alliance, Scrum Alliance, and Toastmasters. After business hours, father of 3 kids, farmer, writer, poet, and jazz composer.
Introduction of participants Your name What do I know about SQL Server What do I want to know about SQL Server What do I enjoy doing at work What do I enjoy doing outside of my work
Course Schedule
Exam stats Time: 180 minutes, 61 questions spread over 6 testlets (cases), passing score 700 points, multiple-choice questions only, no simulations. About 3 minutes per question, grouped per testlet; a 9-question testlet should be completed in around 27 minutes. Time left on one testlet is not added to the next. The 180 minutes should be regarded as the maximum exam length.
Q&A
Installing and Configuring SQL Server 2008 2.5 hours
Agenda Determining Hardware and Software Requirements Selecting SQL Server Editions Installing and Configuring SQL Server Instances Configuring Database Mail (self-study) Practicing Exam Questions
Hardware and Software requirements
SQL Server Editions Enterprise Standard Workgroup Express Compact Developer Evaluation
Installing SQL Server Understanding Collation Modes Understanding Authentication Models Understanding SQL Server Instance concept Multiple Instances, Default Instance, Named Instances SQL Server Configuration Manager Installing Sample Database
SQL Server Configuration Manager Starting, stopping, pausing, and restarting a service Changing service accounts and service account passwords Managing the start-up mode of a service Configuring service start-up parameters After you have completed the initial installation and configuration of your SQL Server services, the primary action that you will perform within SQL Server Configuration Manager is to change service account passwords periodically. When changing service account passwords, you no longer have to restart the SQL Server instance for the new credential settings to take effect.
Database Mail Database Mail provides a notification capability to SQL Server instances. Database Mail uses the Simple Mail Transfer Protocol (SMTP) relay service that is available on all Windows machines to transmit mail messages. When a mail send is initiated, the message along with all of the message properties is logged into a table in the MSDB database. On a periodic basis, a background task that is managed by SQL Server Agent executes. When the mail send process executes, all messages within the send queue that have not yet been forwarded are picked up and sent using the appropriate mail profile. If SQL Server Agent is not running, messages will accumulate in a queue within the MSDB database.
Configuring Database Mail
1. Enable the Database Mail feature:
EXEC sp_configure 'Database Mail XPs', 1
GO
RECONFIGURE WITH OVERRIDE
GO
2. Configure Database Mail under the Management node of the SQL Server instance.
3. Click Next on the Welcome screen.
4. Select Set Up Database Mail By Performing The Following Tasks and click Next.
5. Specify a name for your profile and click the Add button to specify settings for a mail account.
6. Fill in the Account Name, E-mail Address, Display Name, Reply E-mail, and Server Name fields.
7. Select the appropriate SMTP Authentication mode for your organization and, if using Basic authentication, specify the username and password.
Database Mail Profiles Public profile – can be accessed by any user with the ability to send mail. Private profile – can be accessed only by those users who have been granted access to the mail profile explicitly. Any mail profile can be designated as the default. When sending mail, if a mail profile is not specified, SQL Server uses the mail profile designated as the default to send the message.
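As a sketch of sending through a profile, a message can be sent with msdb's sp_send_dbmail stored procedure; the profile name and recipient address below are hypothetical placeholders, and omitting @profile_name falls back to the default profile:

```sql
-- Send a test message through a named Database Mail profile.
-- 'OpsProfile' and the recipient address are placeholder values.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'OpsProfile',
    @recipients   = 'dba@example.com',
    @subject      = 'Nightly job status',
    @body         = 'The nightly maintenance job completed.';
```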
Practice Test, Review and Questions 10 questions, lesson 1, time 20 minutes
Test settings
Database Configuration and Maintenance 2 hours
Agenda Files and Filegroups Manipulating objects between filegroups Transaction Logs FILESTREAM Data tempdb Database Creating Database Database Recovery Models Database Auto Options Change Tracking Access Parameterization Collation Sequences Database Integrity Checks
Files and Filegroups .mdf, .ndf, .ldf are the default file extensions. Filegroup schemes: Option 1: Data filegroup, Index filegroup. Option 2: Read-only tables filegroup, Read-write tables filegroup, Index filegroup. Option 3: Read-only tables filegroup, Read-write tables filegroup, Index filegroup, Key table 1 filegroup, Key table 2 filegroup, Key table 3 filegroup. Based on your application, filegroups can be created to resolve I/O performance problems by spreading the database over additional spindles, alleviating disk queuing.
How to create a new filegroup?
USE CustomerDB_OLD;
GO
ALTER DATABASE CustomerDB_OLD
ADD FILEGROUP FG_ReadOnly
GO
How to add files to a filegroup?
ALTER DATABASE CustomerDB_OLD
ADD FILE
(
  NAME = FG_READONLY1,
  FILENAME = 'C:\CustDB_RO.ndf',
  SIZE = 5MB,
  MAXSIZE = 100MB,
  FILEGROWTH = 5MB
)
TO FILEGROUP FG_READONLY;
GO
How to create objects in the new filegroup?
-- Table
CREATE TABLE dbo.OrdersDetail
(
  OrderID int NOT NULL,
  ProductID int NOT NULL,
  CustomerID int NOT NULL,
  UnitPrice money NOT NULL,
  OrderQty smallint NOT NULL
) ON FG_READONLY
-- Index
CREATE INDEX IDX_OrderID ON dbo.OrdersDetail(OrderID) ON FG_READONLY
GO
How to move an object from the primary filegroup to another filegroup? To move an existing table with a clustered index, re-create the clustered index on the new filegroup:
-- Table - the base table is stored with the clustered index,
-- so moving the clustered index moves the base table
CREATE CLUSTERED INDEX IDX_ProductID ON dbo.OrdersDetail(ProductID)
WITH (DROP_EXISTING = ON)
ON FG_ReadOnly
GO
To move a non-clustered index, issue the following command:
-- Non-clustered index
CREATE INDEX IDX_OrderID ON dbo.OrdersDetail(OrderID)
WITH (DROP_EXISTING = ON)
ON FG_ReadOnly
GO
If the table does not have a clustered index and needs to be moved, create a clustered index on the table specifying the new filegroup. This moves the base table to the new filegroup; the clustered index can then be dropped:
-- Table without a clustered index + drop index
CREATE CLUSTERED INDEX IDX_ProductID ON dbo.OrdersDetail(ProductID)
ON FG_ReadOnly
GO
DROP INDEX IDX_ProductID ON dbo.OrdersDetail
GO
How to determine which objects exist in a particular filegroup?
SELECT o.[name], o.[type], i.[name], i.[index_id], f.[name]
FROM sys.indexes i
INNER JOIN sys.filegroups f
  ON i.data_space_id = f.data_space_id
INNER JOIN sys.all_objects o
  ON i.[object_id] = o.[object_id]
WHERE i.data_space_id = 2  -- the new filegroup
GO
Transaction Logs ACID (atomicity, consistency, isolation, durability) is a set of properties that guarantee that database transactions are processed reliably. In the context of databases, a single logical operation on the data is called a transaction. An example of a transaction is a transfer of funds from one bank account to another, even though it might consist of multiple individual operations (such as debiting one account and crediting another). A transaction log is a history of actions executed by a database management system to guarantee ACID properties over crashes or hardware failures. Physically, a log is a file of updates done to the database, stored in stable storage.
FILESTREAM data The FILESTREAM feature associates files with a database. The files are stored in a folder on the operating system, but are linked directly into a database where the files can be backed up, restored, full-text-indexed, and combined with other structured data. To store FILESTREAM data within a database, you need to specify where the data will be stored. You define the location for FILESTREAM data in a database by designating a filegroup within the database to be used for storage with the CONTAINS FILESTREAM property.
tempdb Database tempdb is used to store:
work tables to store intermediate results for spools or sorting; 
Row versions that are generated by data modification transactions in a database that uses read committed snapshot isolation or snapshot isolation;
Row versions that are generated by data modification transactions for features, such as: online index operations, Multiple Active Result Sets (MARS), and AFTER triggers.
Operations within tempdb are minimally logged.
tempdb is re-created every time SQL Server is started.
There is never anything in tempdb to be saved from one session of SQL Server to another.
Backup and restore operations are not allowed on tempdb.
What should be the size of tempdb? The best way to estimate the size of tempdb is by running your workload in a test environment, then using the ALTER DATABASE command to set its size with a safety factor that you feel is appropriate. Never rely on auto-grow for tempdb: auto-grow causes a pause during processing when you can least afford it (less of an issue with instant file initialization), and auto-grow leads to physical fragmentation. Remember that tempdb is re-created every time you restart SQL Server, and its size is set either to the default inherited from the model database or to the size you set using the ALTER DATABASE command (the recommended option).
1 file vs. multiple files for tempdb Spread tempdb across at least as many equally sized files as there are cores or CPUs. Because allocation in SQL Server is done using proportional fill, allocations will be evenly distributed, and so is the access to and manipulation of the allocation structures across all files. Note that you can always have more files than cores, but you may not see much improvement.
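A minimal sketch of the recommendation above, assuming a 4-core server; the drive letter, file names beyond tempdev (the logical name of tempdb's default primary data file), and sizes are illustrative:

```sql
-- Fix the primary file size and disable auto-grow, then add
-- equally sized files so proportional fill spreads allocations.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf', SIZE = 2048MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev3, FILENAME = 'T:\tempdb3.ndf', SIZE = 2048MB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev4, FILENAME = 'T:\tempdb4.ndf', SIZE = 2048MB, FILEGROWTH = 0);
```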
Creating Database Execute the following code to create a database:
CREATE DATABASE TK432 ON PRIMARY
( NAME = N'TK432_Data', FILENAME = N'c:\test\TK432.mdf', SIZE = 8MB, MAXSIZE = UNLIMITED, FILEGROWTH = 16MB ),
FILEGROUP FG1
( NAME = N'TK432_Data2', FILENAME = N'c:\test\TK432.ndf', SIZE = 8MB, MAXSIZE = UNLIMITED, FILEGROWTH = 16MB ),
FILEGROUP Documents CONTAINS FILESTREAM DEFAULT
( NAME = N'Documents', FILENAME = N'c:\test\TK432Documents' )
LOG ON
( NAME = N'TK432_Log', FILENAME = N'c:\test\TK432.ldf', SIZE = 8MB, MAXSIZE = 2048GB, FILEGROWTH = 16MB )
GO
Execute the following code to change the default filegroup:
ALTER DATABASE TK432 MODIFY FILEGROUP FG1 DEFAULT
GO
Database Recovery Models ALTER DATABASE database_name SET RECOVERY { FULL | BULK_LOGGED | SIMPLE } You need to know which types of backups are possible for each recovery model.
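As a sketch using the TK432 sample database created on the previous slide, a typical pattern drops to BULK_LOGGED around a large load and returns to FULL afterwards:

```sql
ALTER DATABASE TK432 SET RECOVERY FULL;

-- Before a large bulk load, switch to reduce log volume:
ALTER DATABASE TK432 SET RECOVERY BULK_LOGGED;
-- ... run BULK INSERT / SELECT INTO here ...
ALTER DATABASE TK432 SET RECOVERY FULL;
-- Take a log backup afterwards to restore point-in-time capability.
```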
Auto Options AUTO_CLOSE AUTO_SHRINK AUTO_CREATE_STATISTICS AUTO_UPDATE_STATISTICS AUTO_UPDATE_STATISTICS_ASYNC
Change Tracking New in SQL Server 2008: tracks each changed row in a table. Configured with CHANGE_RETENTION and AUTO_CLEANUP.
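A sketch of enabling change tracking at both levels, assuming the MyFirstDatabase database and the Currencies.Currencies table used in the CREATE TABLE slides later in this course (change tracking requires the table to have a primary key, which PK_Currencies provides):

```sql
-- Database level, with the retention and cleanup options named above:
ALTER DATABASE MyFirstDatabase
SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- Table level; also record which columns were updated:
ALTER TABLE Currencies.Currencies
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON);
```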
Access Database status modes: ONLINE READ_ONLY / READ_WRITE SINGLE_USER / RESTRICTED_USER / MULTI_USER OFFLINE EMERGENCY ROLLBACK IMMEDIATE ROLLBACK AFTER <number of seconds>
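These options combine in practice; a common maintenance sketch (using the TK432 sample database) takes exclusive access and rolls back open transactions immediately:

```sql
-- Take exclusive access, rolling back any open transactions at once:
ALTER DATABASE TK432 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- ... perform maintenance ...
-- Reopen the database to everyone:
ALTER DATABASE TK432 SET MULTI_USER;
```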
Parameterization Forced parameterization changes the literal constants in a query to parameters when the query is compiled. Forced parameterization should not be used for environments that rely heavily on indexed views and indexes on computed columns. Generally, the PARAMETERIZATION FORCED option should be used only by experienced database administrators after determining that doing so does not adversely affect performance. Distributed queries that reference more than one database are eligible for forced parameterization as long as the PARAMETERIZATION option is set to FORCED in the database in whose context the query is running. Setting the PARAMETERIZATION option to FORCED flushes all query plans from the plan cache of a database, except those that are currently compiling, recompiling, or running. Plans for queries that are compiling or running during the setting change are parameterized the next time the query is executed. Setting the PARAMETERIZATION option is an online operation in that it requires no database-level exclusive locks. Forced parameterization is disabled (set to SIMPLE) when the compatibility level of a SQL Server database is set to 80, or when a database from an earlier instance is attached to an instance of SQL Server 2005 or later. The current setting of the PARAMETERIZATION option is preserved when reattaching or restoring a database. When the PARAMETERIZATION option is set to FORCED, the reporting of error messages may differ from that of simple parameterization: multiple error messages may be reported in cases where fewer messages would be reported under simple parameterization, and the line numbers in which errors occur may be reported incorrectly.
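The option itself is a single per-database setting; as a sketch on the TK432 sample database:

```sql
-- Literals in queries will be replaced with parameters at compile time:
ALTER DATABASE TK432 SET PARAMETERIZATION FORCED;
-- Revert to the default behavior:
ALTER DATABASE TK432 SET PARAMETERIZATION SIMPLE;
```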
Collation Sequences Each SQL Server collation specifies three properties: The sort order to use for Unicode data types (nchar, nvarchar, and ntext). A sort order defines the sequence in which characters are sorted, and the way characters are evaluated in comparison operations. The sort order to use for non-Unicode character data types (char, varchar, and text). The code page used to store non-Unicode character data. Note: You cannot specify the equivalent of a code page for the Unicode data types (nchar, nvarchar, and ntext). The double-byte bit patterns used for Unicode characters are defined by the Unicode standard and cannot be changed.
Database Integrity Checks
USE [master]
GO
ALTER DATABASE [AdventureWorks2008] SET PAGE_VERIFY CHECKSUM
GO
When DBCC CHECKDB is executed, SQL Server performs all the following actions: Checks page allocation within the database Checks the structural integrity of all tables and indexed views Calculates a checksum for every data and index page to compare against the stored checksum Validates the contents of every indexed view Checks the database catalog Validates Service Broker data within the database To accomplish these checks, DBCC CHECKDB executes the following commands: DBCC CHECKALLOC, to check the page allocation of the database DBCC CHECKCATALOG, to check the database catalog DBCC CHECKTABLE, for each table and view in the database to check the structural integrity
Practice Test, Review and Questions 10 questions, lesson 1, time 20 minutes
Tables 3 hours
Basics of Data Modeling Subject Area Reflecting SA in the model Put stuff where it belongs. Tables follow some very basic rules: columns define a group of data that you need to store, and you add one row to the table for each unique group of information. The columns that you define represent the distinct pieces of information that you need to work with inside your database, such as a city, product name, first name, last name, or price.
Nullability You can specify whether a column allows nulls by specifying NULL or NOT NULL in the column properties. Just as with every command you execute, you should always specify explicitly each option that you want, especially when you are creating objects. If you do not specify the nullability option, SQL Server uses the default option when creating a table, which could produce unexpected results. In addition, the default option is not guaranteed to be the same for each database, because you can modify it by changing the ANSI_NULL_DEFAULT database property. A NULL value does not equal another NULL, and NULLs cannot be compared.
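A sketch with explicit nullability on every column (table and column names are illustrative), so the result never depends on the ANSI_NULL_DEFAULT setting:

```sql
CREATE TABLE dbo.Customers
(
    CustomerID  int          NOT NULL,
    CompanyName nvarchar(64) NOT NULL,
    Fax         nvarchar(24) NULL   -- explicitly nullable, not left to the default
);
```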
COLLATE Collation sequences control the way characters in various languages are handled. When you install an instance of SQL Server, you specify the default collation sequence for the instance. You can set the COLLATE property of a database to override the instance collation sequence, which SQL Server then applies as the default collation sequence for objects within the database. You can override the collation sequence for an entire table, and you can override it for an individual column. By specifying the COLLATE option for a character-based column, you can set language-specific behavior for the column.
IDENTITY Identities are used to provide a value for a column automatically when data is inserted. You cannot update a column with the identity property. Columns of any numeric data type, except float and real, can accept an identity property; you also specify a seed value and an increment to be applied for each subsequently inserted row. You can have only a single identity column in a table. Although SQL Server automatically provides the next value in the sequence, you can insert a value into an identity column explicitly by using the SET IDENTITY_INSERT <table name> ON command. You can also change the next value generated by modifying the seed using the DBCC CHECKIDENT command.
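A sketch pulling these pieces together (table and column names are illustrative):

```sql
CREATE TABLE dbo.Orders
(
    OrderID   int IDENTITY(1,1) NOT NULL,  -- seed 1, increment 1
    OrderDate datetime          NOT NULL
);

-- Explicitly insert a specific identity value:
SET IDENTITY_INSERT dbo.Orders ON;
INSERT INTO dbo.Orders (OrderID, OrderDate) VALUES (1000, GETDATE());
SET IDENTITY_INSERT dbo.Orders OFF;

-- Reseed so the next generated value is 5001:
DBCC CHECKIDENT ('dbo.Orders', RESEED, 5000);
```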
NOT FOR REPLICATION By applying the NOT FOR REPLICATION option, SQL Server does not reseed the identity column when the replication engine is applying changes.
Computed Columns When you create a computed column, only the definition of the calculation is stored. A computed column cannot be used as a DEFAULT or FOREIGN KEY constraint definition or with a NOT NULL constraint definition. However, a computed column can be used as a key column in an index or as part of any PRIMARY KEY or UNIQUE constraint, if the computed column value is defined by a deterministic expression and the data type of the result is allowed in index columns. For example, if the table has integer columns a and b, the computed column a+b may be indexed, but computed column a+DATEPART(dd, GETDATE()) cannot be indexed because the value may change in subsequent invocations. A computed column cannot be the target of an INSERT or UPDATE statement.
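A sketch of an indexable computed column on a hypothetical table (the PERSISTED keyword stores the computed value on disk rather than recomputing it on every read):

```sql
CREATE TABLE dbo.OrderTotals
(
    OrderID   int      NOT NULL,
    UnitPrice money    NOT NULL,
    OrderQty  smallint NOT NULL,
    LineTotal AS (UnitPrice * OrderQty) PERSISTED  -- deterministic, so indexable
);

-- Allowed because the expression is deterministic:
CREATE INDEX IDX_LineTotal ON dbo.OrderTotals (LineTotal);
```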
Row and Page Compression Row-level compression allows you to compress individual rows to fit more rows on a page, which in turn reduces the amount of storage space for the table because you don't need to store as many pages on disk. Because you can uncompress the data at any time and the uncompress operation must always succeed, you cannot use compression to store more than 8,060 bytes in a single row. Page compression reduces only the amount of disk storage required because the entire page is compressed. To compress any newly added, uncompressed pages, you need to execute an ALTER TABLE ... REBUILD statement with the PAGE compression option.
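A sketch of both options on a hypothetical table:

```sql
-- Row compression at creation time:
CREATE TABLE dbo.AuditTrail
(
    AuditID int          NOT NULL,
    Details varchar(500) NULL
) WITH (DATA_COMPRESSION = ROW);

-- Rebuild the table to switch to (and apply) page compression:
ALTER TABLE dbo.AuditTrail REBUILD WITH (DATA_COMPRESSION = PAGE);
```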
Modeling world's currencies
Primary Keys The primary key defines the column(s) that uniquely identify every row in the table. You must specify all columns within the primary key as NOT NULL. You can have only a single primary key constraint defined for a table. When you create a primary key, you also designate whether the primary key is clustered or nonclustered. A clustered primary key, the default SQL Server behavior, causes SQL Server to store the table in sorted order according to the primary key. When a clustered primary key is created on a table that is compressed, the compression option is applied to the primary key when the table is rebuilt.
Foreign Keys You use foreign keys to implement referential integrity between tables within your database. By creating foreign keys, you can ensure that related tables cannot contain invalid, orphaned rows. Foreign keys create what is referred to as a parent-child relationship between two tables and ensure that a value cannot be written to the child table that does not already exist in the parent table. For example, it would not make any sense to have an order for a customer who does not exist. To create a foreign key between two tables, the parent table must have a primary key, which the child table references. In addition, the data types of the parent column(s) and child column(s) must be compatible. If you have a multicolumn primary key, all the columns from the parent primary key must exist in the child table to define a foreign key.
CASCADING One of the options for a foreign key is CASCADE. You can configure a foreign key such that modifications of the parent table are cascaded to the child table. For example, when you delete a customer, SQL Server also deletes all the customer's associated orders. Cascading is an extremely bad idea. It is very common to have foreign keys defined between all the tables within a database. If you were to issue a DELETE statement without a WHERE clause against the wrong table, you could eliminate every row, in every table within your database, very quickly. By leaving the CASCADE option off for a foreign key, if you attempt to delete a parent row that is referenced, you get an error.
Default constraints Default constraints allow you to specify a value that is written to the column if the application does not supply a value. Default constraints apply only to new rows added with an INSERT, BCP, or BULK INSERT statement. You can define default constraints for either NULL or NOT NULL columns. If a column has a default constraint and an application passes in a NULL for the column, SQL Server writes a NULL to the column instead of the default value. SQL Server writes the default value to the column only if the application does not specify the column in the INSERT statement.
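A minimal sketch (names are illustrative); because the column is omitted from the INSERT, the default value is written:

```sql
CREATE TABLE dbo.Requests
(
    RequestID int      NOT NULL,
    Received  datetime NOT NULL
        CONSTRAINT DF_Requests_Received DEFAULT (GETDATE())
);

-- Received is omitted, so GETDATE() is written for it:
INSERT INTO dbo.Requests (RequestID) VALUES (1);
```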
Adding a Check Constraint Check constraints limit the range of values within a column. Check constraints can be created at the column level and are not allowed to reference any other column in the table. Table-level check constraints can reference any column within a table, but they are not allowed to reference columns in other tables.
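As a sketch on the OrdersDetail table from the filegroup examples, one column-level and one table-level check (constraint names are illustrative):

```sql
-- Column-level: may reference only its own column
ALTER TABLE dbo.OrdersDetail
ADD CONSTRAINT CK_OrdersDetail_Qty CHECK (OrderQty > 0);

-- Table-level: may reference any columns in the same table
ALTER TABLE dbo.OrdersDetail
ADD CONSTRAINT CK_OrdersDetail_Total CHECK (UnitPrice * OrderQty >= 0);
```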
CREATE TABLE script
USE [MyFirstDatabase]
GO
/****** Object: Table [Currencies].[Currencies] Script Date: 11/09/2009 22:22:23 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [Currencies].[Currencies](
  [currencyId] [int] NOT NULL,
  [countryName] [nvarchar](64) NOT NULL,
  [currencyName] [nvarchar](64) NOT NULL,
  [currencyCode] [nchar](4) NULL,
  CONSTRAINT [PK_Currencies] PRIMARY KEY CLUSTERED
  (
    [currencyId] ASC
  ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE script (cont.)
CREATE TABLE [Currencies].[CurrencyUnits](
  [currencyUnitId] [int] NOT NULL,
  [name] [nvarchar](64) NOT NULL,
  [value] [money] NOT NULL,
  [image] [image] NULL,
  [currencyId] [int] NOT NULL,
  CONSTRAINT [PK_CurrencyUnits] PRIMARY KEY CLUSTERED
  (
    [currencyUnitId] ASC
  ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
ALTER TABLE [Currencies].[CurrencyUnits] WITH CHECK ADD CONSTRAINT [FK_CurrencyUnits_Currencies] FOREIGN KEY([currencyId])
REFERENCES [Currencies].[Currencies] ([currencyId])
GO
ALTER TABLE [Currencies].[CurrencyUnits] CHECK CONSTRAINT [FK_CurrencyUnits_Currencies]
GO
ALTER TABLE [Currencies].[CurrencyUnits] WITH CHECK ADD CONSTRAINT [CK_CurrencyUnits_ValueGT0] CHECK (([value]>(0)))
GO
ALTER TABLE [Currencies].[CurrencyUnits] CHECK CONSTRAINT [CK_CurrencyUnits_ValueGT0]
GO
EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Value Greater Than 0',
  @level0type=N'SCHEMA', @level0name=N'Currencies',
  @level1type=N'TABLE', @level1name=N'CurrencyUnits',
  @level2type=N'CONSTRAINT', @level2name=N'CK_CurrencyUnits_ValueGT0'
GO
Using Schemas
CREATE SCHEMA [Currencies] AUTHORIZATION dbo
GO
It is recommended that you do not create tables and views or assign permissions within a CREATE SCHEMA statement. Any CREATE SCHEMA statement that is executed must be in a separate batch.
ALTER SCHEMA [Currencies] TRANSFER dbo.CurrencyUnits
GO
ALTER SCHEMA [Currencies] TRANSFER dbo.Currencies
GO
Indexes 2 hours
Balanced Trees (B-Trees) A B-tree is constructed of a root node that contains a single page of data, one or more optional intermediate-level pages, and one or more optional leaf-level pages. The core concept of a B-tree can be found in the first word of the name: balanced. A B-tree is always symmetrical, with the same number of pages on both the left and right halves at each level. The leaf-level pages contain entries sorted in the order that you specified. The data at the leaf level contains every combination of values within the column(s) that are being indexed. The number of index rows on a page is determined by the storage space required by the columns that are defined in the index.
Index Levels A data page = 8,192 bytes (of which 8,060 bytes can hold actual user data). If you build an index on an INT column, each row in the table requires 4 bytes of storage in the index, so a single index page can hold roughly 2,015 index rows (8,060 / 4).
Indexing limits You can define an index with a maximum of 16 columns. The maximum size of the index key is 900 bytes. A table without a clustered index is referred to as a heap. When you have a heap, page chains are not stored in sorted order.
Covering Indexes When an index is built, every value in the index key is loaded into the index. In effect, each index is a mini-table containing all the values corresponding to just the columns in the index key. It is possible for a query to be entirely satisfied by using the data in the index. An index that is constructed such that SQL Server can completely satisfy queries by reading only the index is called a covering index.
Included Columns Indexes can be created using the optional INCLUDE clause. Included columns become part of the index at the leaf level only. Values from included columns do not appear in the root or intermediate levels of an index and do not count against the 900-byte limit for an index key. In this way, by using the INCLUDE clause you can construct covering indexes whose underlying columns exceed 16 columns or 900 bytes.
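A sketch on the OrdersDetail table from earlier slides: the index is keyed on CustomerID only, while the included columns ride along at the leaf level, so the query below is covered:

```sql
CREATE INDEX IDX_OrdersDetail_CustomerID
ON dbo.OrdersDetail (CustomerID)
INCLUDE (UnitPrice, OrderQty);

-- Satisfied entirely from the index (no base-table lookup):
SELECT CustomerID, UnitPrice, OrderQty
FROM dbo.OrdersDetail
WHERE CustomerID = 42;
```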
Query optimizer Ways to create statistics in SQL Server 2008: The optimizer automatically creates single-column statistics as needed as a side effect of optimizing SELECT, INSERT, UPDATE, DELETE, and MERGE statements if AUTO_CREATE_STATISTICS is enabled, which is the default setting. Note: The optimizer only creates nonfiltered statistics in these cases. There are two basic statements in SQL Server 2008 that explicitly generate the statistical information described above: CREATE INDEX generates the declared index in the first place, and it also creates one set of statistics for the column combinations constituting the index keys (but not other included columns). CREATE STATISTICS only generates the statistics for a given column or combination of columns. Note: If the CREATE INDEX defines a predicate, the corresponding statistics are created with the same predicate. In addition, there are several other ways to create statistics or indexes; ultimately, though, each issues one of the above two commands. Use sp_createstats to create statistics for all eligible columns (all except XML columns) for all user tables in the current database. A new statistics object is not created for columns that already have one. Use DBCC DBREINDEX to rebuild one or more indexes for a table in the specified database. In SQL Server Management Studio, expand the folder under a Table object, right-click the Statistics folder, and choose New Statistics. Use the Database Engine Tuning Advisor to create indexes.
CREATE STATISTICS
CREATE STATISTICS FirstLast2 ON Person.Contact(FirstName, LastName) WITH SAMPLE 50 PERCENT
The auto update statistics feature described above may be turned off at different levels: At the database level, disable auto update statistics by using the command ALTER DATABASE dbname SET AUTO_UPDATE_STATISTICS OFF. At the table level, disable auto update statistics using the NORECOMPUTE option of the UPDATE STATISTICS or CREATE STATISTICS command. Use sp_autostats to display and change the auto update statistics setting for a table, index, or statistics object. Re-enabling the automatic updating of statistics can be done similarly using ALTER DATABASE, UPDATE STATISTICS, or sp_autostats.
FILLFACTOR The FILLFACTOR option for an index determines the percentage of free space that is reserved on each leaf-level page of the index when an index is created or rebuilt. The free space reserved leaves room on the page for additional values to be added, thereby reducing the rate at which page splits occur. The FILLFACTOR is represented as a percentage full. For example, a FILLFACTOR = 75 means that 25 percent of the space on each leaf-level page is left empty to accommodate future values.
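For example (a sketch on the OrdersDetail table from earlier slides; the index name is illustrative), reserving 25 percent free space on each leaf page:

```sql
CREATE INDEX IDX_OrdersDetail_ProductID
ON dbo.OrdersDetail (ProductID)
WITH (FILLFACTOR = 75);  -- each leaf page left 25% empty for future inserts
```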
Defragmenting an Index
ALTER INDEX { index_name | ALL } ON <object>
{ REBUILD
    [ [ WITH ( <rebuild_index_option> [ ,...n ] ) ]
    | [ PARTITION = partition_number
        [ WITH ( <single_partition_rebuild_index_option> [ ,...n ] ) ] ] ]
| DISABLE
| REORGANIZE [ PARTITION = partition_number ]
    [ WITH ( LOB_COMPACTION = { ON | OFF } ) ]
| SET ( <set_index_option> [ ,...n ] )
} [ ; ]
When you defragment an index, you can use either the REBUILD or REORGANIZE option.
Index REBUILD The REBUILD option rebuilds all levels of the index and leaves all pages filled according to the FILLFACTOR setting of an index.  The rebuild of an index effectively re-creates the entire B-tree structure, so unless you specify the ONLINE option, a shared table lock is acquired, preventing any changes until the rebuild operation completes. © The Norns Laboratories, 2009
Index REORGANIZE The REORGANIZE option removes fragmentation only at the leaf level.  Intermediate-level pages and the root page are not defragmented during a reorganize.  REORGANIZE is always an online operation that does not incur any long-term blocking. © The Norns Laboratories, 2009
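A common approach, sketched here against an illustrative Sales.Orders table (the fragmentation thresholds implied by the comments are a rule of thumb, not a fixed requirement), is to check fragmentation first and then choose between the two options:

```sql
-- Inspect leaf-level fragmentation for the table's indexes
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('Sales.Orders'), NULL, NULL, 'LIMITED');

-- Light fragmentation: REORGANIZE (always online, leaf level only)
ALTER INDEX ix_Orders_OrderDate ON Sales.Orders REORGANIZE;

-- Heavy fragmentation: REBUILD (ONLINE = ON requires Enterprise edition)
ALTER INDEX ix_Orders_OrderDate ON Sales.Orders REBUILD WITH (ONLINE = ON);
```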
Disabling an index An index can be disabled by using the ALTER INDEX statement as follows: ALTER INDEX { index_name | ALL } ON <object> DISABLE [ ; ] When an index is disabled, the definition remains in the system catalog but is no longer used. SQL Server does not maintain the index as data in the table changes, and the index cannot be used to satisfy queries. If a clustered index is disabled, the entire table becomes inaccessible. To enable an index, you must rebuild it to regenerate and populate the B-tree structure. ALTER INDEX { index_name | ALL } ON <object> REBUILD [ ; ] © The Norns Laboratories, 2009
Full Text Indexing Full text indexes can be created against CHAR/VARCHAR, XML, and VARBINARY columns. When you full text index a VARBINARY column, you must specify the filter to be used by the word breaker to interpret the document content. Thesaurus files allow you to specify a list of synonyms or word replacements for search terms. Stop lists exclude a list of words from search arguments and a full text index. © The Norns Laboratories, 2009
Full Text Catalog The first step in building a full text index is to create a storage structure. Unlike relational indexes, full text indexes have a unique internal structure that is maintained within a separate storage format called a full text catalog. Each full text catalog contains one or more full text indexes. The generic syntax for creating a full text catalog is CREATE FULLTEXT CATALOG catalog_name [ON FILEGROUP filegroup ] [IN PATH 'rootpath'] [WITH <catalog_option>] [AS DEFAULT] [AUTHORIZATION owner_name ] <catalog_option>::= ACCENT_SENSITIVITY = {ON|OFF} The FILEGROUP clause specifies the filegroup that you want to use to store any full text indexes. ACCENT_SENSITIVITY allows you to configure whether the full text engine considers accent marks when building or querying a full text index. The AS DEFAULT clause works the same as the DEFAULT option for a filegroup. The AUTHORIZATION option specifies the owner of the full text catalog. © The Norns Laboratories, 2009
Change Tracking The CHANGE_TRACKING option for a full text index determines how SQL Server maintains the index when the underlying data changes. When set to AUTO, SQL Server automatically updates the full text index as the data is modified. When set to MANUAL, you are responsible for periodically propagating the changes into the full text index. © The Norns Laboratories, 2009
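For example (a sketch assuming a full text catalog named ProductCatalog exists and that the unique key index name matches the AdventureWorks sample database), change tracking is set when the full text index is created and can be altered later:

```sql
-- Create the full text index with automatic change tracking
CREATE FULLTEXT INDEX ON Production.ProductDescription (Description)
    KEY INDEX PK_ProductDescription_ProductDescriptionID
    ON ProductCatalog
    WITH CHANGE_TRACKING AUTO;

-- Switch to manual maintenance, then propagate changes on your own schedule
ALTER FULLTEXT INDEX ON Production.ProductDescription SET CHANGE_TRACKING MANUAL;
ALTER FULLTEXT INDEX ON Production.ProductDescription START UPDATE POPULATION;
```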
Stemmers SQL Server uses stemmers to allow a full text index to search on all inflectional forms of a search term, such as drive, drove, driven, and driving. Stemming is language-specific. Although you could employ a German word breaker to tokenize English, the German stemmer cannot process English. © The Norns Laboratories, 2009
Querying Full Text Data SELECT ProductDescriptionID, Description FROM Production.ProductDescription WHERE FREETEXT(Description,N'bike') GO All search terms used with full text are Unicode strings. If you pass in a non-Unicode string, the query still works, but it is much less efficient because the optimizer cannot use parameter sniffing to evaluate distribution statistics on the full text index. Make certain that all terms you pass in for full text search are always typed as Unicode for maximum performance. © The Norns Laboratories, 2009
THESAURUS FILES A thesaurus file exists for each supported language. All thesaurus files are XML files stored in the FTDATA directory underneath your default SQL Server installation path. The thesaurus files are not populated, so to perform synonym searches, you need to populate the thesaurus files. © The Norns Laboratories, 2009
Stop Lists Stop lists are used to exclude words that you do not want included in a full text index. CREATE FULLTEXT STOPLIST ProductStopList; GO ALTER FULLTEXT STOPLIST ProductStopList ADD 'bike' LANGUAGE 1033; GO ALTER FULLTEXT INDEX ON Production.ProductDescription SET STOPLIST ProductStopList GO © The Norns Laboratories, 2009
Distributing and Partitioning Data 2.5 hours © The Norns Laboratories, 2009
Distributing and Partitioning Data Table partitioning was introduced in Microsoft SQL Server 2005 as a means to split large tables across multiple storage structures. Previously, objects were restricted to a single filegroup that could contain multiple files. However, the placement of data within a filegroup was still determined by SQL Server. Table partitioning allows tables, indexes, and indexed views to be created on multiple filegroups while also allowing the database administrator (DBA) to specify which portion of the object will be stored on a specific filegroup. © The Norns Laboratories, 2009
The process for partitioning To partition a table, index, or indexed view, do the following: Create a partition function. Create a partition scheme mapped to a partition function. Create the table, index, or indexed view on the partition scheme. © The Norns Laboratories, 2009
Creating a Partition Function A partition function defines the boundary points that will be used to split data across a partition scheme. The data type for a partition function can be any native SQL Server data type, except: text, ntext, image, varbinary(max), timestamp, xml, varchar(max), nvarchar(max) © The Norns Laboratories, 2009
Partition Function CREATE PARTITION FUNCTION mypartfunction (int) AS RANGE LEFT FOR VALUES (10,20,30,40,50,60) © The Norns Laboratories, 2009 CREATE PARTITION FUNCTION mypartfunction (int) AS RANGE RIGHT FOR VALUES (10,20,30,40,50,60)
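The difference between the two declarations is which side of each boundary value owns it; the $PARTITION function can be used to verify where a given value lands:

```sql
-- RANGE LEFT: boundary values belong to the partition on their left,
-- so 10 falls into partition 1 (values <= 10).
-- RANGE RIGHT: boundary values belong to the partition on their right,
-- so 10 would fall into partition 2 (values >= 10 and < 20).
SELECT $PARTITION.mypartfunction(10) AS partition_number;
```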
Practice Partitioning © The Norns Laboratories, 2009 Self-paced Training Kit, page 140
Creating a Partition Scheme A partition scheme defines the storage structures and collection of filegroups that you want to use with a given partition function.  CREATE PARTITION SCHEME partition_scheme_name AS PARTITION partition_function_name [ ALL ] TO ( { file_group_name | [ PRIMARY ] } [ ,...n ] ) Create partition scheme as described on p.143-144 Run the following commands to check on results: SELECT * FROM sys.partition_range_values; SELECT * FROM sys.partition_schemes; © The Norns Laboratories, 2009
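A concrete sketch for the six-boundary function above (the filegroup names are illustrative and must already exist in the database):

```sql
-- Six boundary points produce seven partitions, so seven filegroup mappings
CREATE PARTITION SCHEME mypartscheme
AS PARTITION mypartfunction
TO (FG1, FG2, FG3, FG4, FG5, FG6, FG7);

-- Alternatively, place every partition on a single filegroup:
-- CREATE PARTITION SCHEME mypartscheme
-- AS PARTITION mypartfunction ALL TO ([PRIMARY]);
```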
Creating Partitioned Tables and Indexes CREATE TABLE Employee (EmployeeID int NOT NULL, FirstName varchar(50) NOT NULL, LastName varchar(50) NOT NULL) ON mypartscheme(EmployeeID); GO CREATE NONCLUSTERED INDEX idx_employeefirstname ON dbo.Employee(FirstName) ON mypartscheme(EmployeeID); GO © The Norns Laboratories, 2009
Split and Merge Operators The SPLIT operator introduces a new boundary point into a partition function. MERGE eliminates a boundary point from a partition function. The general syntax is as follows: ALTER PARTITION FUNCTION partition_function_name() {SPLIT RANGE ( boundary_value ) | MERGE RANGE ( boundary_value ) } [ ; ] © The Norns Laboratories, 2009
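Continuing the mypartfunction example (FG8 is an illustrative filegroup name), a SPLIT must be preceded by marking the filegroup that will hold the partition it creates:

```sql
-- Designate the filegroup for the partition that the SPLIT will create
ALTER PARTITION SCHEME mypartscheme NEXT USED FG8;

-- Introduce a new boundary point at 70
ALTER PARTITION FUNCTION mypartfunction() SPLIT RANGE (70);

-- Remove the boundary point at 10, merging the two adjacent partitions
ALTER PARTITION FUNCTION mypartfunction() MERGE RANGE (10);
```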
Altering a Partition Scheme You can add filegroups to an existing partition scheme to create more storage space for a partitioned table. The general syntax is as follows: ALTER PARTITION SCHEME partition_scheme_name NEXT USED [ filegroup_name ] [ ; ] The NEXT USED clause has two purposes: It adds a new filegroup to the partition scheme, if the specified filegroup is not already part of the partition scheme. It marks the NEXT USED property for a filegroup. The filegroup that is marked with the NEXT USED flag is the filegroup that contains the next partition that is created when a SPLIT operation is executed. © The Norns Laboratories, 2009
Switch Operator SQL Server stores data on pages in a doubly linked list. To locate and access data, SQL Server performs the following basic process: 1. Resolve the table name to an object ID. 2. Locate the entry for the object ID in sys.indexes to extract the first page for the object. 3. Read the first page of the object. 4. Using the Next Page and Previous Page entries on each data page, walk the page chain to locate the data required. © The Norns Laboratories, 2009 The SWITCH operator allows you to exchange partitions between tables in a perfectly scalable manner with no locking, blocking, or deadlocking.
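A sketch of a partition switch (the staging table name is illustrative; it must match the partitioned table's structure and reside on the same filegroup as the partition being switched):

```sql
-- Move the rows of partition 1 into an empty staging table;
-- only metadata changes, so the operation is nearly instantaneous
ALTER TABLE dbo.Employee
    SWITCH PARTITION 1 TO dbo.EmployeeStaging;

-- Switching a loaded staging table back in works the same way:
-- ALTER TABLE dbo.EmployeeStaging SWITCH TO dbo.Employee PARTITION 1;
```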
Practice Review the script provided in the file Practice Distributing and Partitioning.sql Run each step individually. Observe the results, and describe each step's purpose in comments. © The Norns Laboratories, 2009
Importing and Exporting Data 1.5 hours © The Norns Laboratories, 2009
Bulk Copy Program (BCP) BCP is a program that allows you to: import data from a file into a table; export data from a table to a file. bcp {[[database_name.][owner].]{table_name | view_name} | "query"} {in | out | queryout | format} data_file [-mmax_errors] [-fformat_file] [-x] [-eerr_file] [-Ffirst_row] [-Llast_row] [-bbatch_size] [-n] [-c] [-w] [-N] [-V (60 | 65 | 70 | 80)] [-6] [-q] [-C { ACP | OEM | RAW | code_page } ] [-tfield_term] [-rrow_term] [-iinput_file] [-ooutput_file] [-apacket_size] [-Sserver_name[\instance_name]] [-Ulogin_id] [-Ppassword] [-T] [-v] [-R] [-k] [-E] [-h"hint [,...n]"] C:\>bcp master..sysobjects out c:\test\sysobjects.txt -c -t, -T -S <servername> C:\>bcp AdventureWorks.Sales.SalesOrderDetail out c:\test\AdventureWorks.Sales.SalesOrderDetail.txt -c -t, -T -S <servername> © The Norns Laboratories, 2009
BCP (continues)… The switches used are: -c Output in ASCII with the default field terminator (tab) and row terminator (crlf) -t override the field terminator with "," -T use a trusted connection. Note that -U and -P may be used for username/password -S connect to this server to execute the command Note that, like DTS/SSIS, BCP is a client utility, hence you need to supply the connection information. For transfer of data between SQL servers, in place of -c, use -n or -N for native data format (-N = Unicode). This is much faster and avoids data conversion problems. © The Norns Laboratories, 2009
BULK INSERT BULK INSERT is a T-SQL command that allows you to import data from a file into a table. BULK INSERT cannot export data. BULK INSERT [ database_name . [ schema_name ] . | schema_name . ] [ table_name | view_name ] FROM 'data_file' [ WITH ( [ [ , ] BATCHSIZE = batch_size ] [ [ , ] CHECK_CONSTRAINTS ] [ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ] [ [ , ] DATAFILETYPE = { 'char' | 'native'| 'widechar' | 'widenative' } ] [ [ , ] FIELDTERMINATOR = 'field_terminator' ] [ [ , ] FIRSTROW = first_row ] [ [ , ] FIRE_TRIGGERS ] [ [ , ] FORMATFILE = 'format_file_path' ] [ [ , ] KEEPIDENTITY ] [ [ , ] KEEPNULLS ] [ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ] [ [ , ] LASTROW = last_row ] [ [ , ] MAXERRORS = max_errors ] [ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ] [ [ , ] ROWS_PER_BATCH = rows_per_batch ] [ [ , ] ROWTERMINATOR = 'row_terminator' ] [ [ , ] TABLOCK ] [ [ , ] ERRORFILE = 'file_name' ] )] © The Norns Laboratories, 2009
BULK INSERT (continues…) DECLARE @bulk_cmd varchar(1000) SET @bulk_cmd = 'BULK INSERT MyFirstDatabase..SalesOrderDetail FROM ''C:\test\AdventureWorks.Sales.SalesOrderDetail.txt'' WITH (DATAFILETYPE = ''char'', FIELDTERMINATOR = '','')' EXEC(@bulk_cmd) GO SELECT * FROM SalesOrderDetail GO © The Norns Laboratories, 2009
SSIS – Import/Export Wizard  The Import and Export Wizard uses a subset of the SSIS feature set to move data between a source and destination. Self-paced Training Kit, p.167-171, Practice 2. © The Norns Laboratories, 2009
Designing Policy Based Management 1 hr. © The Norns Laboratories, 2009
Designing Policies SQL Server 2008 has a new feature called Policy Based Management, also known as the Declarative Management Framework (DMF), to tackle the problem of standardizing your SQL Server instances.  Policy Based Management introduces the following new objects that are used to design and check for compliance: Facets Conditions Policies Policy targets Policy categories © The Norns Laboratories, 2009
Facets and Conditions Policies are created from a predefined set of facets. Facets define the type of object or option to be checked, such as database, Surface Area, or login. SQL Server ships with 74 facets, implemented as .NET assemblies, each with a unique set of properties. Each facet contains a subgroup of SQL Server 2008 configuration settings and other events that you can control. You pair these facets with conditions in order to create a policy. Conditions are the values that are allowed for the properties of a facet, the configuration settings, or other events contained within that facet. © The Norns Laboratories, 2009
Policies Policies are created for a single condition and set to either enforce or check compliance. The execution mode can be set as follows: On demand – Evaluates the policy when directly executed by a user On change, prevent – Creates data definition language (DDL) triggers to prevent a change that violates the policy On change, log only – Checks the policy automatically when a change is made using the event notification infrastructure On schedule – Creates a SQL Server Agent job to check the policy on a defined schedule If a policy contains a condition that was defined using the advanced editor, the only available execution mode is On Demand. © The Norns Laboratories, 2009
Policy Categories Policy categories can be used to group one or more policies into a single compliance unit. If not specified, all policies belong to the DEFAULT category.  To check or enforce policies, you create a subscription to one or more policies. Subscription occurs at two levels—instance and database.  A member of the sysadmin role can subscribe an instance to a policy category.  Once subscribed, the owner of each database within the instance can subscribe their database to a policy category. Each policy category has a Mandate property that applies to databases.  When a policy category is set to Mandate and a sysadmin subscribes the instance to a policy category, all databases that meet the target set are controlled by the policies within the policy category.  A policy subscription to a policy category set to Mandate cannot be overridden by a database owner. © The Norns Laboratories, 2009
Creating New Condition © The Norns Laboratories, 2009
Practice PBM Self-paced Training Kit, p.184-191, Practices 1-5 © The Norns Laboratories, 2009
Backing up and Restoring Database 3 hrs. © The Norns Laboratories, 2009
Backups Backups are taken to reduce the risk of data loss. Because it is more common to back up a database than to restore one, the backup engine is optimized for the backup process. The only two parameters required for a backup are the name of the database and the backup device. Up to 64 devices can be used for a backup. Because the backup process is not concerned with the ordering of pages, multiple threads can be used to write pages to the backup device. When a backup is initiated, the backup engine grabs pages from the data files as quickly as possible, without regard to the order of pages. © The Norns Laboratories, 2009
Backup Types Full Captures all pages within a database that contain data. Pages that do not contain data are not included in the backup. The database is fully operational during a full backup. The only operations that are not allowed during a full backup are: Adding or removing a database file Shrinking a database Partial Captures only the filegroups that can change. Read-only filegroups are not included to minimize the size of the backup. Differential Captures all extents that have changed since the last full backup. The primary purpose of a differential backup is to reduce the number of transaction log backups that need to be restored. A differential backup has to be applied to a full backup and can’t exist until a full backup has been created. Transaction log Every change made to a database has an entry made to the transaction log. Filegroup An individual file or filegroup backup. © The Norns Laboratories, 2009
BACKUP DATABASE BACKUP DATABASE { database_name | @database_name_var } TO <backup_device> [ ,...n ] [ <MIRROR TO clause> ] [ next-mirror-to ] [ WITH { DIFFERENTIAL | <general_WITH_options> [ ,...n ] } ] <backup_device>::= { { logical_device_name | @logical_device_name_var } | { DISK | TAPE } = { 'physical_device_name' | @physical_device_name_var } } <MIRROR TO clause>::= MIRROR TO <backup_device> [ ,...n ] <general_WITH_options> [ ,...n ]::= --Backup Set Options COPY_ONLY | { COMPRESSION | NO_COMPRESSION } | DESCRIPTION = { 'text' | @text_variable } | NAME = { backup_set_name | @backup_set_name_var } | PASSWORD = { password | @password_variable } | { EXPIREDATE = { 'date' | @date_var } | RETAINDAYS = { days | @days_var } } --Media Set Options { NOINIT | INIT } | { NOSKIP | SKIP } | { NOFORMAT | FORMAT } | MEDIADESCRIPTION = { 'text' | @text_variable } | MEDIANAME = { media_name | @media_name_variable } | MEDIAPASSWORD = { mediapassword | @mediapassword_variable } | BLOCKSIZE = { blocksize | @blocksize_variable } --Error Management Options { NO_CHECKSUM | CHECKSUM } | { STOP_ON_ERROR | CONTINUE_AFTER_ERROR } © The Norns Laboratories, 2009
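Cutting the syntax down to the common case (paths and database name are illustrative):

```sql
-- Full backup with checksum verification, overwriting the backup set
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\backup\AdventureWorks_full.bak'
WITH INIT, CHECKSUM;

-- Differential backup: captures extents changed since the last full backup
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\backup\AdventureWorks_diff.bak'
WITH DIFFERENTIAL;
```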
Configuring Backup Devices USE [master] GO EXEC master.dbo.sp_addumpdevice @devtype = N'disk', @logicalname = N'New Backup Device', @physicalname = N'C:\test\New Backup Device.bak' GO © The Norns Laboratories, 2009
Backups Mirroring One of the maxims of disaster recovery is that you can’t have enough copies of your backups.  The MIRROR TO clause provides a built-in capability to create up to four copies of a backup in a single operation.  When you include the MIRROR TO clause, SQL Server retrieves the page once from the database and writes a copy of the page to each backup mirror. If you back up to tape, you must mirror to tape. If you back up to disk, you must mirror to disk. During a restore operation, you can use any of the mirrors.  © The Norns Laboratories, 2009
Backup best practices Design and implement a well-thought-out backup strategy to suit the needs of your organization Perform backups frequently Decrease backup times by using compression Use various media for backups Increase the number of backup copies Keep backup copies in different places Allocate only a single backup per file Use meaningful names for the backup files © The Norns Laboratories, 2009
Database backup strategy © The Norns Laboratories, 2009
Transaction Log Backups Every change made to a database has an entry made to the transaction log. Each row is assigned a unique number internally called the Log Sequence Number (LSN). The contents of a transaction log are broken down into two basic parts: Inactive - contains all the changes that have been committed to the database. Active - contains all the changes that have not yet been committed. Based on the LSN, it is possible to restore one transaction log backup after another to recover a database to any point in time by simply following the chain of transactions. Before you can issue a transaction log backup, you must execute a full backup. © The Norns Laboratories, 2009
BACKUP LOG Command BACKUP LOG { database_name | @database_name_var } TO <backup_device> [ ,...n ] [ <MIRROR TO clause> ] [ next-mirror-to ] [ WITH { <general_WITH_options> | <log-specific_optionspec> } [ ,...n ] ][;] © The Norns Laboratories, 2009
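In its simplest form (path illustrative; requires a prior full backup and a database that is not in the Simple recovery model):

```sql
BACKUP LOG AdventureWorks
TO DISK = 'C:\backup\AdventureWorks_log.trn';
```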
Differential Backups A differential backup contains all pages changed since the last full backup. For example, if a full backup was taken at midnight and a differential backup occurred every four hours, both the 4 A.M. backup and the 8 A.M. backup would contain all the changes made to the database since midnight. Each data file has a special page in its header called the Differential Change Map (DCM). The DCM tracks which extents have changed since the last full backup. A full backup zeroes out the contents of the DCM. © The Norns Laboratories, 2009
COPY_ONLY Option The COPY_ONLY option allows you to create a backup that can be used to create a development or test environment, because it does not affect the database state or the set of backups in production. A full backup with the COPY_ONLY option does not reset the differential change map page and therefore has no impact on differential backups. A transaction log backup with the COPY_ONLY option does not remove transactions from the transaction log. © The Norns Laboratories, 2009
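For example (path illustrative), an out-of-band backup taken to seed a test environment:

```sql
-- Does not reset the differential change map or affect the log chain
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\backup\AdventureWorks_copy.bak'
WITH COPY_ONLY;
```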
Filegroup Backups File or filegroup backups are used to reduce the footprint of a backup, as they target only a portion of a database. Because successful recovery requires all the files within a filegroup to be restored to exactly the same state, it is a good idea to back up entire filegroups rather than individual files. Filegroup backups can be used in conjunction with differential and transaction log backups to recover a portion of the database in the event of a failure. The database can remain online and accessible to applications during the restore operation. Only the portion of the database being restored is off-line. © The Norns Laboratories, 2009
Partial Backups BACKUP DATABASE database_name READ_WRITE_FILEGROUPS [ , <file_filegroup_list> ] TO <backup_device> When executed, SQL Server backs up the primary filegroup, all read/write filegroups, and any explicitly specified read-only filegroups. Partial backups are used to save backup space by excluding read-only filegroups from the backup. © The Norns Laboratories, 2009
Identifying bad pages With the following command enabled, SQL Server detects and quarantines corrupted pages: ALTER DATABASE <dbname> SET PAGE_VERIFY CHECKSUM If the database is participating in a Database Mirroring session, a copy of the corrupt page is retrieved from the mirror. If the page on the mirror is intact, the corrupt page is repaired automatically with the page retrieved from the mirror. To protect databases from massive corruption, SQL Server 2008 limits the allowed number of corrupted pages to a total of 1,000 per database. If the corrupt page limit is reached, SQL Server takes the database off-line and places it in a suspect state to protect it from further damage. © The Norns Laboratories, 2009
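Pages that fail verification are recorded in msdb, which is one way to monitor how close a database is getting to that limit; a sketch:

```sql
-- suspect_pages holds at most 1,000 rows; event_type describes the error
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;
```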
Maintenance Plans (SSIS) Maintenance plans provide a mechanism to graphically create job workflows that support common administrative functions such as backup, re-indexing, and space management. Tasks that are supported by maintenance plans are: Backing up of databases and transaction logs Shrinking databases Re-indexing Updating of statistics Performing consistency checks The most common tasks performed by maintenance plans are database backups. © The Norns Laboratories, 2009
Certificates and Master Keys © The Norns Laboratories, 2009
SQL Server offers two levels of encryption: database-level and cell-level. Both use the key management hierarchy.
When TDE is enabled on a database, all backups are encrypted. http://technet.microsoft.com/en-us/library/cc278098.aspx
Enabling TDE To enable TDE, you must have the normal permissions associated with creating a database master key and certificates in the master database. You must also have CONTROL permissions on the user database. To enable TDE perform the following steps in the master database: If it does not already exist, create a database master key (DMK) for the master database. Ensure that the database master key is encrypted by the service master key (SMK). CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'some password'; Either create or designate an existing certificate for use as the database encryption key (DEK) protector. For the best security, it is recommended that you create a new certificate whose only function is to protect the DEK. Ensure that this certificate is protected by the DMK. CREATE CERTIFICATE tdeCert WITH SUBJECT = 'TDE Certificate'; Create a backup of the certificate with the private key and store it in a secure location. (Note that the private key is stored in a separate file; be sure to keep both files.) Be sure to maintain backups of the certificate as data loss may occur otherwise. BACKUP CERTIFICATE tdeCert TO FILE = 'path_to_file' WITH PRIVATE KEY (FILE = 'path_to_private_key_file', ENCRYPTION BY PASSWORD = 'cert password'); Optionally, enable SSL on the server to protect data in transit. Perform the following steps in the user database. These require CONTROL permissions on the database. Create the database encryption key (DEK) encrypted with the certificate designated from step 2 above. This certificate is referenced as a server certificate to distinguish it from other certificates that may be stored in the user database. CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 ENCRYPTION BY SERVER CERTIFICATE tdeCert Enable TDE. This command starts a background thread (referred to as the encryption scan), which runs asynchronously. ALTER DATABASE myDatabase SET ENCRYPTION ON © The Norns Laboratories, 2009
Service Master Key (SMK) The Service Master Key is the root of the SQL Server encryption hierarchy. It is generated automatically the first time it is needed to encrypt another key. By default, the Service Master Key is encrypted using the Windows data protection API and using the local machine key. Each time that you change the SQL Server service account or service account password, the service master key is regenerated.  The first action that you should take after an instance is started is to back up the service master key.  You should also back up the service master key immediately following a change to the service account or service account password. BACKUP SERVICE MASTER KEY TO FILE = 'path_to_file' ENCRYPTION BY PASSWORD = 'password' © The Norns Laboratories, 2009
Database Master Key (DMK) The database master key (DMK) is the root of the encryption hierarchy in a database. To ensure that you can access certificates, asymmetric keys, and symmetric keys within a database, you need to have a backup of the DMK. BACKUP MASTER KEY TO FILE = 'path_to_file' ENCRYPTION BY PASSWORD = 'password' Before you can back up a DMK, it must be open. By default, a DMK is encrypted with the service master key. If the DMK is encrypted only with a password, you must first open the DMK by using the following command: USE <database name>; OPEN MASTER KEY DECRYPTION BY PASSWORD = '<SpecifyStrongPasswordHere>'; © The Norns Laboratories, 2009
Certificates Certificates are used to encrypt data as well as digitally sign code modules. Although you could create a new certificate to replace the digital signature in the event of the loss of a certificate, you must have the original certificate to access any data that was encrypted with the certificate. Certificates have both a public and a private key. You should back up a certificate immediately after creation by using the following command: BACKUP CERTIFICATE certname TO FILE = 'path_to_file' [ WITH PRIVATE KEY ( FILE = 'path_to_private_key_file' , ENCRYPTION BY PASSWORD = 'encryption_password' [ , DECRYPTION BY PASSWORD = 'decryption_password' ] ) ] You can back up just the public key by using the following command: BACKUP CERTIFICATE certname TO FILE = 'path_to_file' However, if you restore a backup of a certificate containing only the public key, SQL Server generates a new private key. © The Norns Laboratories, 2009
Validating a Backup To validate a backup, execute the following command: RESTORE VERIFYONLY FROM <backup device> When a backup is validated, SQL Server performs the following checks: Calculates a checksum for the backup and compares it to the checksum stored in the backup file Verifies that the header of the backup is correctly written and valid Traverses the page chain to ensure that all pages are contained in the backup and can be located © The Norns Laboratories, 2009
Database Restores All restore sequences begin with either a full backup or filegroup backup.  When restoring backups, you have the option to terminate the restore process at any point and make the database available for transactions.  After the database or filegroup being restored has been brought online, you can’t apply any additional differential or transaction log backups to the database. © The Norns Laboratories, 2009
Restoring a Full Backup RESTORE DATABASE { database_name | @database_name_var } [ FROM <backup_device> [ ,...n ] ] [ WITH {[ RECOVERY | NORECOVERY | STANDBY = {standby_file_name | @standby_file_name_var } ] | , <general_WITH_options> [ ,...n ] | , <replication_WITH_option> | , <change_data_capture_WITH_option> | , <service_broker_WITH options> | , <point_in_time_WITH_options—RESTORE_DATABASE> } [ ,...n ] ] <general_WITH_options> [ ,...n ]::= --Restore Operation Options MOVE 'logical_file_name_in_backup' TO 'operating_system_file_name' [ ,...n ] | REPLACE | RESTART | RESTRICTED_USER When a RESTORE command is issued, if the database does not already exist within the instance, SQL Server creates the database along with all files underneath the database. The REPLACE option is used to force the restore over the top of an existing database. © The Norns Laboratories, 2009
Database state after the Restore has completed If you want the database to be online and accessible for transactions after the RESTORE operation has completed, you need to specify the RECOVERY option. When a RESTORE is issued with the NORECOVERY option, the restore completes, but the database is left in a RECOVERING state such that subsequent differential and/or transaction log backups can be applied. The STANDBY option allows you to issue SELECT statements against the database while still applying additional differential and/or transaction log restores. If you restore a database with the STANDBY option, an additional file is created to make the database consistent as of the last restore that was applied. © The Norns Laboratories, 2009
Restoring a Differential Backup A differential restore uses the same command syntax as a full database restore.  When the full backup has been restored, you can then restore the most recent differential backup. © The Norns Laboratories, 2009
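Putting the pieces together (file names illustrative), a typical sequence applies the full backup, the most recent differential, and then any subsequent log backups, recovering only on the final step:

```sql
RESTORE DATABASE AdventureWorks
    FROM DISK = 'C:\backup\AdventureWorks_full.bak' WITH NORECOVERY;

RESTORE DATABASE AdventureWorks
    FROM DISK = 'C:\backup\AdventureWorks_diff.bak' WITH NORECOVERY;

RESTORE LOG AdventureWorks
    FROM DISK = 'C:\backup\AdventureWorks_log.trn' WITH RECOVERY;
```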
Restoring a Transaction Log Backup RESTORE LOG { database_name | @database_name_var } [ <file_or_filegroup_or_pages> [ ,...n ] ] [ FROM <backup_device> [ ,...n ] ] [ WITH {[ RECOVERY | NORECOVERY | STANDBY = {standby_file_name | @standby_file_name_var } ] | , <general_WITH_options> [ ,...n ] | , <replication_WITH_option> | , <point_in_time_WITH_options—RESTORE_LOG> } [ ,...n ] ] <point_in_time_WITH_options—RESTORE_LOG>::= | { STOPAT = { 'datetime' | @datetime_var } | STOPATMARK = { 'mark_name' | 'lsn:lsn_number' } [ AFTER 'datetime' ] | STOPBEFOREMARK = { 'mark_name' | 'lsn:lsn_number' } [ AFTER 'datetime' ] The STOPAT option allows you to specify a date and time to which SQL Server restores. The STOPATMARK and STOPBEFOREMARK options allow you to specify either an LSN or a transaction log MARK to use for the stopping point in the restore operation. © The Norns Laboratories, 2009
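A point-in-time sketch (file name and timestamp illustrative):

```sql
-- Stop applying log records at the specified time, then recover
RESTORE LOG AdventureWorks
FROM DISK = 'C:\backup\AdventureWorks_log.trn'
WITH STOPAT = '2009-06-15 14:30:00', RECOVERY;
```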
Restore a Corrupt Page Page corruption occurs when the contents of a page are not consistent. It usually occurs when a disk controller begins to fail. Strategy for recovery: Indexes - drop and re-create. Data files - restore. Page restore has several requirements: The database must be in either the Full or Bulk-logged recovery model. You must be able to create a transaction log backup. A page restore can apply only to a read/write filegroup. You must have a valid full, file, or filegroup backup available. The page restore cannot be executed at the same time as any other restore operation. © The Norns Laboratories, 2009
Page Restore Process Retrieve the PageID of the damaged page. Using the most recent full, file, or filegroup backup, execute the following command: RESTORE DATABASE database_name PAGE = 'file:page [ ,...n ]' [ ,...n ] FROM <backup_device> [ ,...n ] WITH NORECOVERY Restore any differential backups with the NORECOVERY option. Restore any additional transaction log backups with the NORECOVERY option. Create a transaction log backup. Restore the transaction log backup from step #5 using the WITH RECOVERY option. © The Norns Laboratories, 2009
Best Effort Restore Because pages are restored in sequential order, as soon as the first page has been restored to a database, anything that previously existed is no longer valid.  If a problem with the backup media was subsequently encountered and the restore aborted, you would be left with an invalid database that could not be used.  SQL Server has the ability to continue the restore operation even if the backup media is damaged. When it encounters an unreadable section of the backup file, SQL Server can continue past the source of damage and continue restoring as much of the database as possible.  This feature is referred to as best effort restore. To restore from backup media that has been damaged, you need to specify the CONTINUE_AFTER_ERROR option for a RESTORE DATABASE or RESTORE LOG command. © The Norns Laboratories, 2009
Database Snapshots A Database Snapshot is a point-in-time, read-only, copy of a database. Database Snapshot is available only in SQL Server 2008 Enterprise. Database Snapshot is not compatible with FILESTREAM. If you create a Database Snapshot against a database with FILESTREAM data, the FILESTREAM filegroup is disabled and not accessible. CREATE DATABASE database_snapshot_name ON (NAME = logical_file_name, FILENAME = 'os_file_name') [ ,...n ] AS SNAPSHOT OF source_database_name © The Norns Laboratories, 2009
Reverting Data Using a Database Snapshot 	RESTORE DATABASE <database_name> FROM DATABASE_SNAPSHOT = <database_snapshot_name> Only a single Database Snapshot can exist for the source database. Full-text catalogs on the source database must be dropped and then re-created after the revert completes. Because the transaction log is rebuilt, the transaction log chain is broken. Both the source database and Database Snapshot are offline during the revert process. The source database cannot be enabled for FILESTREAM. © The Norns Laboratories, 2009
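Putting the CREATE and RESTORE statements together, a minimal sketch; the database, snapshot, logical file, and sparse file names are hypothetical:

```sql
-- Create a point-in-time, read-only snapshot of the source database.
CREATE DATABASE AdventureWorks_Snap
ON (NAME = AdventureWorks_Data,
    FILENAME = 'C:\Snapshots\AdventureWorks_Snap.ss')
AS SNAPSHOT OF AdventureWorks;

-- Revert the source database to the snapshot.
-- This only succeeds if it is the single snapshot of the source.
RESTORE DATABASE AdventureWorks
    FROM DATABASE_SNAPSHOT = 'AdventureWorks_Snap';
```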
Automating SQL Server 2 hrs © The Norns Laboratories, 2009
SQL Server Agent Service SQL Server Agent Service is a scheduling engine for SQL Server. © The Norns Laboratories, 2009
Practice SQL Automation Jobs – Self-paced Training Kit, p.237-240 Alerts – Self-paced Training Kit, p.243-245 © The Norns Laboratories, 2009
Practice Test, Review and Questions 10 questions, lesson 1, time 20 minutes © The Norns Laboratories, 2009
Designing SQL ServerSecurity 5 hours © The Norns Laboratories, 2009
Exam objectives Manage logins and server roles. Manage users and database roles. Manage SQL Server instance permissions. Manage database permissions. Manage schema permissions and object permissions. Audit SQL Server instances. Manage transparent data encryption (TDE). Configure surface area. © The Norns Laboratories, 2009
Identity and Access Control (Database Engine) When configuring security for users, services and other accounts to access the system, you work with:  Principals (users and login accounts),  Roles (groups of Principals),  Securable objects (Securables) and  Permissions. © The Norns Laboratories, 2009
Principals of the Database Engine Principals are entities that can request SQL Server resources.  Like other components of the SQL Server authorization model, principals can be arranged in a hierarchy.  The scope of influence of a principal depends on the scope of its definition: Windows, server, or database. Every principal has a security identifier (SID). Windows-level principals: Windows Domain Login, Windows Local Login. SQL Server-level principal: SQL Server Login. Database-level principals: Database User, Database Role, Application Role. © The Norns Laboratories, 2009
sa Login The SQL Server sa login is a server-level principal.  It is created by default when an instance is installed.  In SQL Server 2005 and SQL Server 2008, the default database of sa is master. © The Norns Laboratories, 2009
public Database Role Every database user belongs to the public database role.  When a user has not been granted or denied specific permissions on a securable, the user inherits the permissions granted to public on that securable. © The Norns Laboratories, 2009
INFORMATION_SCHEMA and sys Every database includes two entities that appear as users in catalog views:  INFORMATION_SCHEMA  sys These entities are required by SQL Server. They are not principals, and they cannot be modified or dropped. © The Norns Laboratories, 2009
Certificate-based SQL Server Logins Server principals with names enclosed by double hash marks (##) are for internal system use only.  The following principals are created from certificates when SQL Server is installed, and should not be deleted. ##MS_SQLResourceSigningCertificate##  ##MS_SQLReplicationSigningCertificate##  ##MS_SQLAuthenticatorCertificate##  ##MS_AgentSigningCertificate##  ##MS_PolicyEventProcessingLogin##  ##MS_PolicySigningCertificate##  ##MS_PolicyTsqlExecutionLogin##  © The Norns Laboratories, 2009
Client and Database Server By definition, a client and a database server are security principals and can be secured.  These entities can be mutually authenticated before a secure network connection is established.  SQL Server supports the Kerberos authentication protocol, which defines how clients interact with a network authentication service. © The Norns Laboratories, 2009
Database Users A database user is a principal at the database level.  Every database user is a member of the public role. By default, the database includes a guest user when a database is created.  Permissions granted to the guest user are inherited by users who do not have a user account in the database. The guest user cannot be dropped, but it can be disabled by revoking its CONNECT permission.  The CONNECT permission can be revoked by executing REVOKE CONNECT FROM GUEST within any database other than master or tempdb. © The Norns Laboratories, 2009
Application Roles An application role is a database principal that enables an application to run with its own, user-like permissions.  You can use application roles to enable access to specific data to only those users who connect through a particular application.  Unlike database roles, application roles contain no members and are inactive by default.  © The Norns Laboratories, 2009
Application roles are enabled by using sp_setapprole, which requires a password.
Because application roles are a database-level principal, they can access other databases only through permissions granted in those databases to guest. Therefore, any database in which guest has been disabled will be inaccessible to application roles in other databases.
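A minimal sketch of the activation flow described above; the role name, password, and schema are hypothetical:

```sql
-- Create an inactive, member-less application role and grant it rights.
CREATE APPLICATION ROLE SalesApp
    WITH PASSWORD = 'P@ssw0rd_123';
GRANT SELECT ON SCHEMA::Sales TO SalesApp;

-- At run time, the application activates the role on its connection;
-- the session then carries the role's permissions, not the user's.
EXEC sp_setapprole @rolename = 'SalesApp', @password = 'P@ssw0rd_123';
```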
SIDs and IDs Server-Level Identification Number (SID) identifies the security context of the login and is unique within the server instance.  Database-Level Identification Number (ID) identifies the user as a securable within the database.  The maximum number of database users is determined by the size of the user ID field.  The value of a user ID must be zero or a positive integer.  In SQL Server 2000, the user ID is stored as a smallint consisting of 16 bits, one of which is the sign bit. For this reason, the maximum number of user IDs in SQL Server 2000 is 2^15 = 32,768.  In SQL Server 2005 and later versions, the user ID is stored as an int consisting of 32 bits, one of which is the sign bit. These additional bits make it possible to assign 2^31 = 2,147,483,648 ID numbers. © The Norns Laboratories, 2009
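The SIDs and IDs discussed above can be inspected directly through the catalog views available since SQL Server 2005; a minimal sketch:

```sql
-- Server-level principals and their SIDs (unique within the instance).
SELECT name, sid, type_desc
FROM sys.server_principals;

-- Database-level principals and their user IDs
-- (run in the database of interest).
SELECT name, principal_id, type_desc
FROM sys.database_principals;
```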
Kerberos Authentication and SQL Server Kerberos is a network authentication protocol that provides a highly secure method to authenticate client and server entities (security principals) on a network.  These security principals use authentication that is based on master keys and encrypted tickets. In the Kerberos protocol model, every client/server connection begins with authentication. If authentication is successful, session setup completes and a secure client/server session is established. SQL Server supports Kerberos indirectly through the Windows Security Support Provider Interface (SSPI) when SQL Server is using Windows Authentication. SQL Server 2008 supports Kerberos authentication on the following protocols: TCP/IP Named pipes Shared memory © The Norns Laboratories, 2009
Create a SQL Server Login To create a SQL Server login that uses Windows Authentication using Transact-SQL: 	CREATE LOGIN <name of Windows User> FROM WINDOWS;  	GO To create a SQL Server login that uses SQL Server Authentication (Transact-SQL) 	CREATE LOGIN <login name>  	WITH PASSWORD = '<password>' ;  	GO © The Norns Laboratories, 2009
Create a Database User USE <database name>  GO CREATE USER <new user name> FOR LOGIN <login name> ;  GO © The Norns Laboratories, 2009
Create a Database Schema USE <database name> GO CREATE SCHEMA <new schema name>  AUTHORIZATION [new schema owner] ;  GO © The Norns Laboratories, 2009
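Taken together, the three templates above can be instantiated in sequence; the login, password, database, and schema names here are hypothetical:

```sql
-- Server level: create a SQL Server Authentication login.
CREATE LOGIN MarySmith WITH PASSWORD = 'Str0ng_P@ss!';
GO
-- Database level: map the login to a database user.
USE AdventureWorks;
GO
CREATE USER MarySmith FOR LOGIN MarySmith;
GO
-- Create a schema owned by that user.
CREATE SCHEMA Reporting AUTHORIZATION MarySmith;
GO
```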
Server-Level Roles Server-Level Roles are security principals that group other principals. Roles are like groups in the Microsoft Windows operating system. Server-level roles are also named fixed server roles because you cannot create new server-level roles. Server-level roles are server-wide in their permissions scope. You can add SQL Server logins, Windows accounts, and Windows groups into server-level roles. Each member of a fixed server role can add other logins to that same role. © The Norns Laboratories, 2009
Server-level roles’ capabilities sysadmin Members can perform any activity in the server. serveradmin Members can change server-wide configuration options and shut down the server. securityadmin Members can manage logins and their properties. They can GRANT, DENY, and REVOKE server-level permissions. They can also GRANT, DENY, and REVOKE database-level permissions. Additionally, they can reset passwords for SQL Server logins. processadmin Members can end processes that are running in an instance of SQL Server. setupadmin Members can add and remove linked servers. bulkadmin Members can run the BULK INSERT statement. diskadmin The role is used for managing disk files. dbcreator Members can create, alter, drop, and restore any database. public Every SQL Server login belongs to the public server role. Only assign public permissions on any object when you want the object to be available to all users. © The Norns Laboratories, 2009
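Membership in the fixed server roles above is managed with a system procedure in SQL Server 2008; a minimal sketch, with a hypothetical login name:

```sql
-- Add a login to the dbcreator fixed server role,
-- allowing it to create, alter, drop, and restore databases.
EXEC sp_addsrvrolemember @loginame = 'MarySmith', @rolename = 'dbcreator';
```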
Database-Level Roles Database-level roles are database-wide in their permissions scope. There are two types of database-level roles in SQL Server:  fixed database roles that are predefined in the database and  flexible database roles that you can create. Members of the db_owner and db_securityadmin database roles can manage fixed database role membership.  Only members of the db_owner database role can add members to the db_owner fixed database role.  There are also some special-purpose fixed database roles in the msdb database. You can add any database account and other SQL Server roles into database-level roles. Each member of a fixed database role can add other logins to that same role. © The Norns Laboratories, 2009
Fixed database-level Roles’ Capabilities db_owner  can perform all configuration and maintenance activities on the database, and can also drop the database. db_securityadmin  can modify role membership and manage permissions. Adding principals to this role could enable unintended privilege escalation. db_accessadmin  can add or remove access to the database for Windows logins, Windows groups, and SQL Server logins. db_backupoperator  can back up the database. db_ddladmin  run any Data Definition Language (DDL) command in a database. db_datawriter  can add, delete, or change data in all user tables. db_datareader  can read all data from all user tables. db_denydatawriter  cannot add, modify, or delete any data in the user tables within a database. db_denydatareader cannot read any data in the user tables within a database. © The Norns Laboratories, 2009
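Database role membership is managed analogously at the database level; a minimal sketch, with a hypothetical user name:

```sql
-- Give the user read access to all user tables
-- by adding it to the db_datareader fixed database role.
EXEC sp_addrolemember @rolename = 'db_datareader', @membername = 'MarySmith';
```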
msdb Roles db_ssisadmin, db_ssisoperator, db_ssisltduser can administer and use SSIS.  dc_admin, dc_operator, dc_proxy can administer and use the data collector.  PolicyAdministratorRole can perform all configuration and maintenance activities on Policy-Based Management policies and conditions.  ServerGroupAdministratorRole, ServerGroupReaderRole can administer and use registered server groups. Every database user belongs to the public database role. When a user has not been granted or denied specific permissions on a securable object, the user inherits the permissions granted to public on that object. © The Norns Laboratories, 2009
Credentials A credential is a record that contains the authentication information (credentials) required to connect to a resource outside SQL Server. This information is used internally by SQL Server. Most credentials contain a Windows user name and password. The information stored in a credential enables a user who has connected to SQL Server by way of SQL Server Authentication to access resources outside the server instance.  	CREATE CREDENTIAL credential_name 	WITH IDENTITY = 'identity_name' [ , SECRET = 'secret' ]  	[ FOR CRYPTOGRAPHIC PROVIDER cryptographic_provider_name ] © The Norns Laboratories, 2009
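A sketch of the syntax above, followed by mapping the credential to a login; the credential, account, and login names are hypothetical:

```sql
-- Store a Windows identity and secret for use outside the instance.
CREATE CREDENTIAL FileShareCred
    WITH IDENTITY = 'DOMAIN\ServiceAccount',
         SECRET = 'ServicePassword';

-- Map the credential to a SQL Server Authentication login so that
-- login can access external resources under the Windows identity.
ALTER LOGIN MarySmith WITH CREDENTIAL = FileShareCred;
```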
Securables Securables are the resources to which the SQL Server Database Engine authorization system regulates access.  Some securables can be contained within others, creating nested hierarchies called "scopes" that can themselves be secured.  The securable scopes are  server database schema. © The Norns Laboratories, 2009
Permissions Every SQL Server securable has associated permissions that can be granted to a principal. Returning the complete list of grantable permissions 	SELECT * FROM fn_builtin_permissions(default); Returning the permissions on a particular class of objects 	SELECT * FROM fn_builtin_permissions('assembly') Returning the permissions granted to the executing principal on an object 	SELECT * FROM fn_my_permissions('Orders55', 'object'); © The Norns Laboratories, 2009
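Permissions on a securable are managed with GRANT, DENY, and REVOKE; a minimal sketch, with hypothetical object and principal names:

```sql
-- Allow reading and inserting rows in Sales.Orders.
GRANT SELECT, INSERT ON OBJECT::Sales.Orders TO MarySmith;

-- Explicitly block deletes, overriding any grant inherited via roles.
DENY DELETE ON OBJECT::Sales.Orders TO MarySmith;

-- Remove the earlier INSERT grant (neither granted nor denied).
REVOKE INSERT ON OBJECT::Sales.Orders FROM MarySmith;
```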
Network Protocols and TDS Endpoints When the SQL Server Database Engine communicates with an application, it formats the communication in a Microsoft communication format called a tabular data stream (TDS) packet.  The SQL Server Network Interface (SNI) protocol layer encapsulates the TDS packet inside a standard communication protocol, such as TCP/IP or named pipes.  The server creates a SQL Server object called a TDS endpoint for each network protocol. On the server, the TDS endpoints are installed by SQL Server during SQL Server installation.  Acting very similar to firewalls on the network, endpoints are a layer of security at the border between applications and a SQL Server instance.  © The Norns Laboratories, 2009
Server Network Protocols The network protocols necessary to communicate with SQL Server from another computer are often not enabled for SQL Server during installation.  © The Norns Laboratories, 2009
The shared memory protocol is enabled by default on all installations, but can only be used to connect to the Database Engine from a client application on the same computer.
SQL Server Asymmetric Keys Public key cryptography is a form of message secrecy in which a user creates a public key and a private key.  The private key is kept secret, whereas the public key can be distributed to others.  Although the keys are mathematically related, the private key cannot be easily derived by using the public key.  The public key is used to encrypt data and the private key is used to decrypt data.  A message that is encrypted by using the public key can only be decrypted by using the correct private key.  Since there are two different keys, these keys are asymmetric. © The Norns Laboratories, 2009
SQL Server Certificates A certificate is a digitally signed security object that contains a public (and optionally a private) key for SQL Server.  Certificates and asymmetric keys are both ways to use asymmetric encryption.  Certificates are often used as containers for asymmetric keys because they can contain more information such as expiry dates and issuers.  There is no difference between the two mechanisms for the cryptographic algorithm, and no difference in strength given the same key length.  Generally, you use a certificate to encrypt other types of encryption keys in a database, or to sign code modules. Certificates and asymmetric keys can decrypt data that the other encrypts.  Generally, you use asymmetric encryption to encrypt a symmetric key for storage in a database. A public key does not have a particular format like a certificate would have, and you cannot export it to a file. © The Norns Laboratories, 2009
Using a Certificate in SQL Server Creating a certificate requires CREATE CERTIFICATE permission on the database. Only Windows logins, SQL Server logins, and application roles can own certificates. Groups and roles cannot own certificates. Creating a self-signed certificate 	USE AdventureWorks;  	CREATE CERTIFICATE HiTech01  	ENCRYPTION BY PASSWORD = 'pGFD4bb925DGvbd2439587y'  	WITH SUBJECT = 'HiTech Institute',  	EXPIRY_DATE = '10/31/2010';  	GO © The Norns Laboratories, 2009
SQL Server Encryption Encryption is the process of obfuscating data by the use of a key or password.  This can make the data useless without the corresponding decryption key or password.  Encryption does not solve access control problems. However, it enhances security by limiting data loss even if access controls are bypassed.  You can use encryption in SQL Server for connections, data, and stored procedures. The following table contains more information about encryption in SQL Server. © The Norns Laboratories, 2009
Encryption Hierarchy SQL Server encrypts data with a hierarchical encryption and key management infrastructure.  Each layer encrypts the layer below it by using a combination of certificates, asymmetric keys, and symmetric keys.  Asymmetric keys and symmetric keys can be stored outside of SQL Server in an Extensible Key Management (EKM) module. © The Norns Laboratories, 2009
How is encryption applied? SQL Server provides the following mechanisms for encryption: Transact-SQL functions Asymmetric keys Symmetric keys Certificates Transparent Data Encryption © The Norns Laboratories, 2009
Simple Symmetric Encryption Creating a symmetric key 	CREATE SYMMETRIC KEY VitaliyFursov007  	WITH ALGORITHM = AES_256 ENCRYPTION BY CERTIFICATE HiTech01;  	GO © The Norns Laboratories, 2009
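The key created above (protected by the HiTech01 certificate from the earlier slide) can then be opened and used to encrypt and decrypt data; a minimal sketch in which the plaintext value is hypothetical:

```sql
-- Open the symmetric key for use in this session.
OPEN SYMMETRIC KEY VitaliyFursov007
    DECRYPTION BY CERTIFICATE HiTech01;

-- Encrypt a value with the key, then decrypt it again.
DECLARE @cipher varbinary(8000);
SET @cipher = EncryptByKey(Key_GUID('VitaliyFursov007'), N'Sensitive value');

SELECT CONVERT(nvarchar(100), DecryptByKey(@cipher)) AS plain_text;

-- Close the key when finished.
CLOSE SYMMETRIC KEY VitaliyFursov007;
```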
Practice data encryption Read, run, and understand script provided in file “Practice Data Encryption.sql” © The Norns Laboratories, 2009
Auditing SQL Server provides several features that you can use for auditing activities and changes on your SQL Server system. These features enable administrators to implement a defense-in-depth strategy that they can tailor to meet the specific security risks of their environment. © The Norns Laboratories, 2009
Understanding SQL Server Audit Auditing an instance of SQL Server or a SQL Server database involves tracking and logging events that occur on the system.  Beginning in SQL Server 2008 Enterprise, you can also set up automatic auditing by using SQL Server Audit. There are several levels of auditing for SQL Server, depending on government or standards requirements for your installation. SQL Server Audit provides the tools and processes you must have to enable, store, and view audits on various server and database objects. © The Norns Laboratories, 2009
SQL Server Audit Components An audit is the combination of several elements into a single package for a specific group of server actions or database actions.  The components of SQL Server Audit combine to produce an output that is called an audit, just as a report definition combined with graphics and data elements produces a report. SQL Server Audit uses Extended Events to help create an audit.  © The Norns Laboratories, 2009
SQL Server Audit The SQL Server Audit object collects a single instance of server or database-level actions and groups of actions to monitor. The audit is at the SQL Server instance level. You can have multiple audits per SQL Server instance. When you define an audit, you specify the location for the output of the results. This is the audit destination. The audit is created in a disabled state, and does not automatically audit any actions. After the audit is enabled, the audit destination receives data from the audit. © The Norns Laboratories, 2009
Server Audit Specification The Server Audit Specification object belongs to an audit. You can create one server audit specification per audit, because both are created at the SQL Server instance scope. The server audit specification collects many server-level action groups raised by the Extended Events feature. You can include audit action groups in a server audit specification. Audit action groups are predefined groups of actions, which are atomic events occurring in the Database Engine. These actions are sent to the audit, which records them in the target. © The Norns Laboratories, 2009
Database Audit Specification The Database Audit Specification object also belongs to a SQL Server audit.  You can create one database audit specification per SQL Server database per audit. The database audit specification collects database-level audit actions raised by the Extended Events feature.  You can add either audit action groups or audit events to a database audit specification.  Audit events are the atomic actions that can be audited by the SQL Server engine.  Audit action groups are predefined groups of actions. Both are at the SQL Server database scope. These actions are sent to the audit, which records them in the target.  © The Norns Laboratories, 2009
Audit Target The results of an audit are sent to a target, which can be: File Windows Security event log Windows Application event log.  Writing to the Security log is not available on Windows XP. Logs must be reviewed and archived periodically to make sure that the target has sufficient space to write additional records. © The Norns Laboratories, 2009
Using SQL Server Audit Create an audit and define the target. Create either a server audit specification or database audit specification that maps to the audit. Enable the audit specification. Enable the audit. Read the audit events by using the Windows Event Viewer, Log File Viewer, or the fn_get_audit_file function. © The Norns Laboratories, 2009 SELECT * FROM sys.fn_get_audit_file ('C:\test\audit.sqlaudit', default, default); GO
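The steps above can be sketched end to end with a file target; the audit name, specification name, and file path are hypothetical:

```sql
USE master;
GO
-- Step 1: create the audit and define a file target.
CREATE SERVER AUDIT HiTechAudit
    TO FILE (FILEPATH = 'C:\Audit\');
GO
-- Step 2: create a server audit specification that maps to the audit,
-- collecting failed login attempts.
CREATE SERVER AUDIT SPECIFICATION HiTechAuditSpec
    FOR SERVER AUDIT HiTechAudit
    ADD (FAILED_LOGIN_GROUP)
    WITH (STATE = ON);          -- Step 3: enable the specification.
GO
-- Step 4: enable the audit itself; events now flow to the target.
ALTER SERVER AUDIT HiTechAudit WITH (STATE = ON);
GO
```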
Monitoring MicrosoftSQL Server 4 hrs. © The Norns Laboratories, 2009
Exam objectives Collect performance data by using System Monitor. Collect trace data by using SQL Server Profiler. Identify SQL Server service problems. Identify concurrency problems. Locate error information. © The Norns Laboratories, 2009
System Monitor 	System Monitor, commonly referred to as PerfMon, is a Microsoft Windows utility that allows you to capture statistical information about the hardware environment, operating system, and any applications that expose properties and counters. It uses a polling architecture to capture and log numeric data exposed by applications. © The Norns Laboratories, 2009
How to Start To start Performance Monitor Click Start, click in the Start Search box, type perfmon, and press ENTER. In the navigation tree, expand Monitoring Tools, and then click Performance Monitor. You can also use Performance Monitor to view real-time performance data on a remote computer. Membership in the target computer's Performance Log Users group, or equivalent, is the minimum required to complete this procedure. To connect to a remote computer with Performance Monitor Start Performance Monitor. In the navigation tree, right-click Performance Monitor, and then click Connect to another computer. © The Norns Laboratories, 2009
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432
MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432

  • 1. MS SQL SERVER 2008 – Implementation and maintenance MCTS EXAM 70-432 © The Norns Laboratories, 2009
  • 2. Introductory class 30 minutes © The Norns Laboratories, 2009
  • 3. Agenda Introduction of the instructor Introduction of the participants Review course schedule, and exam requirements Q & A © The Norns Laboratories, 2009
  • 4. About the instructor © The Norns Laboratories, 2009 Vitaliy Fursov, MSc, PMP, CSPO An experienced software developer, architect, and project manager with a record of major accomplishments directing the delivery of software development projects of various sizes and complexity. Extensive experience in designing, implementing, and supporting data management solutions for major players in the financial, transportation, retail, telecom, and government sectors. Recognized and recruited by companies such as IBM Global Services to design loan origination systems and online banking systems for the largest US banks in 2002-2005. Designed a complex retail management solution currently used by major US universities and colleges to operate campus retail units such as bookstores, computer stores, campus transportation systems, and food courts. Recently consulted for a US government agency on the development of a video portal designed to serve up to 10 million users. Extensive experience leading project teams, introducing PMOs to R&D organizations, and reducing development costs. Long-term Agile methods practitioner, Certified Scrum Product Owner, and PMP designation holder. Experienced mentor, coach, and trainer in several technical disciplines; professional public speaker. Volunteer at a number of international organizations, such as PMI, Agile Alliance, Scrum Alliance, and Toastmasters. After business hours, father of 3 kids, farmer, writer, poet, and jazz music composer.
  • 5. Introduction of participants Your name What do I know about SQL Server What do I want to know about SQL Server What do I enjoy doing at work What do I enjoy doing outside of my work © The Norns Laboratories, 2009
  • 6. Course Schedule © The Norns Laboratories, 2009
  • 7. Exam stats Time: 180 minutes, 61 questions spread over 6 testlets (cases), passing score 700 points; multiple-choice questions only, no simulations. About 3 minutes per question, grouped per testlet: a 9-question testlet should be completed in around 27 minutes. Time left over on one testlet is not added to the next. The 180 minutes should be regarded as an indication of the maximum exam length. © The Norns Laboratories, 2009
  • 8. Q&A © The Norns Laboratories, 2009
  • 9. Installing and Configuring SQL Server 2008 2.5 hours © The Norns Laboratories, 2009
  • 10. Agenda Determining Hardware and Software Requirements Selecting SQL Server Editions Installing and Configuring SQL Server Instances Configuring Database Mail (self-study) Practicing Exam Questions © The Norns Laboratories, 2009
  • 11. Hardware and Software requirements © The Norns Laboratories, 2009
  • 12. SQL Server Editions Enterprise Standard Workgroup Express Compact Developer Evaluation © The Norns Laboratories, 2009
  • 13. Installing SQL Server Understanding Collation Modes Understanding Authentication Models Understanding SQL Server Instance concept Multiple Instances, Default Instance, Named Instances SQL Server Configuration Manager Installing Sample Database © The Norns Laboratories, 2009
  • 14. SQL Server Configuration Manager Starting, stopping, pausing, and restarting a service Changing service accounts and service account passwords Managing the start-up mode of a service Configuring service start-up parameters After you have completed the initial installation and configuration of your SQL Server services, the primary action that you will perform within SQL Server Configuration Manager is to change service account passwords periodically. When changing service account passwords, you no longer have to restart the SQL Server instance for the new credential settings to take effect. © The Norns Laboratories, 2009
  • 15. Database Mail Database Mail provides a notification capability to SQL Server instances. Database Mail uses the Simple Mail Transfer Protocol (SMTP) relay service that is available on all Windows machines to transmit mail messages. When a mail send is initiated, the message along with all of the message properties is logged into a table in the MSDB database. On a periodic basis, a background task that is managed by SQL Server Agent executes. When the mail send process executes, all messages within the send queue that have not yet been forwarded are picked up and sent using the appropriate mail profile. If SQL Server Agent is not running, messages will accumulate in a queue within the MSDB database. © The Norns Laboratories, 2009
  • 16. Configuring Database Mail © The Norns Laboratories, 2009 1. To enable the Database Mail feature: EXEC sp_configure 'Database Mail XPs', 1 GO RECONFIGURE WITH OVERRIDE GO 2. Configure Database Mail under the Management node of the SQL Server instance. 3. Click Next on the Welcome screen. 4. Select Set Up Database Mail By Performing The Following Tasks and click Next. 5. Specify a name for your profile and click the Add button to specify settings for a mail account. 6. Fill in the Account Name, E-mail Address, Display Name, Reply E-mail, and Server Name fields. 7. Select the appropriate SMTP Authentication mode for your organization and, if using Basic authentication, specify the username and password.
  • 17. Database Mail Profiles Public profile – can be accessed by any user with the ability to send mail. Private profile – can be accessed only by those users who have been granted access to the mail profile explicitly. Any mail profile could be designated as the default. When sending mail, if a mail profile is not specified, SQL Server uses the mail profile designated as the default to send the message. © The Norns Laboratories, 2009
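Once a profile exists, messages can be sent with msdb.dbo.sp_send_dbmail. A minimal sketch; the profile name and recipient address are assumptions, so substitute the values configured in your environment:

```sql
-- Send a test message through Database Mail.
-- 'DBA Mail Profile' and the recipient address are placeholders.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'DBA Mail Profile',
    @recipients   = N'dba@example.com',
    @subject      = N'Database Mail test',
    @body         = N'Database Mail is configured correctly.';

-- The send is queued in msdb; check its status afterwards.
SELECT mailitem_id, sent_status, sent_date
FROM msdb.dbo.sysmail_allitems;
```

If SQL Server Agent is stopped, the message remains queued with a sent_status of unsent until the Agent is restarted, as described on the previous slide.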
  • 18. Practice Test, Review and Questions 10 questions, lesson 1, time 20 minutes © The Norns Laboratories, 2009
  • 19. Test settings © The Norns Laboratories, 2009
  • 20. Database Configuration and Maintenance 2 hours © The Norns Laboratories, 2009
  • 21. Agenda Files and Filegroups Manipulating objects between filegroups Transaction Logs FILESTREAM Data tempdb Database Creating Database Database Recovery Models Database Auto Options Change Tracking Access Parameterization Collations Sequences Database Integrity Checks © The Norns Laboratories, 2009
  • 22. Files and Filegroups .mdf, .ndf, .ldf – default file extensions Filegroup schemas: Option 1 Data filegroup Index filegroup Option 2 Read-only tables filegroup Read-write tables filegroup Index filegroup Option 3 Read-only tables filegroup Read-write tables filegroup Index filegroup Key table 1 filegroup Key table 2 filegroup Key table 3 filegroup Based on your application, filegroups can be created to resolve I/O performance problems by spreading the database over additional spindles, alleviating disk queuing. © The Norns Laboratories, 2009
  • 23. How to create a new filegroup? USE CustomerDB_OLD; GO ALTER DATABASE CustomerDB_OLD ADD FILEGROUP FG_ReadOnly GO © The Norns Laboratories, 2009
  • 24. How to add files to a filegroup? ALTER DATABASE CustomerDB_OLD ADD FILE ( NAME = FG_READONLY1, FILENAME = 'C:\CustDB_RO.ndf', SIZE = 5MB, MAXSIZE = 100MB, FILEGROWTH = 5MB ) TO FILEGROUP FG_READONLY; GO © The Norns Laboratories, 2009
  • 25. How to create objects in the new filegroup? -- Table CREATE TABLE dbo.OrdersDetail ( OrderID int NOT NULL, ProductID int NOT NULL, CustomerID int NOT NULL, UnitPrice money NOT NULL, OrderQty smallint NOT NULL ) ON FG_READONLY -- Index CREATE INDEX IDX_OrderID ON dbo.OrdersDetail(OrderID) ON FG_READONLY GO © The Norns Laboratories, 2009
  • 26. How to move an object from the primary filegroup to another filegroup? To move an existing table with a clustered index, rebuild the clustered index on the new filegroup: -- Table - the base table is stored with the clustered index, so moving the clustered index moves the base table CREATE CLUSTERED INDEX IDX_ProductID ON dbo.OrdersDetail(ProductID) WITH (DROP_EXISTING = ON) ON FG_ReadOnly GO To move a non-clustered index, issue the following command: -- Non-clustered index CREATE INDEX IDX_OrderID ON dbo.OrdersDetail(OrderID) WITH (DROP_EXISTING = ON) ON FG_ReadOnly GO If the table does not have a clustered index and needs to be moved, create a clustered index on the table specifying the new filegroup. This moves the base table to the new filegroup; the clustered index can then be dropped: -- Table without a clustered index + drop index CREATE CLUSTERED INDEX IDX_ProductID ON dbo.OrdersDetail(ProductID) ON FG_ReadOnly GO DROP INDEX IDX_ProductID ON dbo.OrdersDetail GO © The Norns Laboratories, 2009
  • 27. How to determine which objects exist in a particular filegroup? SELECT o.[name], o.[type], i.[name], i.[index_id], f.[name] FROM sys.indexes i INNER JOIN sys.filegroups f ON i.data_space_id = f.data_space_id INNER JOIN sys.all_objects o ON i.[object_id] = o.[object_id] WHERE i.data_space_id = 2 -- * new filegroup * GO © The Norns Laboratories, 2009
  • 28. Transaction Logs © The Norns Laboratories, 2009 ACID (atomicity, consistency, isolation, durability) is a set of properties that guarantee that database transactions are processed reliably. In the context of databases, a single logical operation on the data is called a transaction. An example of a transaction is a transfer of funds from one bank account to another, even though it might consist of multiple individual operations (such as debiting one account and crediting another). A Transaction Log is a history of actions executed by a database management system to guarantee ACID properties over crashes or hardware failures. Physically, a log is a file of updates done to the database, stored in stable storage.
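The funds-transfer example above can be sketched as an explicit transaction. The dbo.Accounts table is a hypothetical illustration, not part of any sample database:

```sql
-- Minimal sketch of an atomic funds transfer, assuming a hypothetical
-- dbo.Accounts(AccountID, Balance) table. Both updates commit together
-- or roll back together - the property the transaction log guarantees.
BEGIN TRANSACTION;

UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1; -- debit
UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2; -- credit

IF @@ERROR <> 0
    ROLLBACK TRANSACTION;  -- undo both operations on failure
ELSE
    COMMIT TRANSACTION;    -- harden both operations in the log
```

Until COMMIT succeeds, the log records allow SQL Server to undo the partial work after a crash, which is how durability and atomicity are achieved.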
  • 29. tempdb Database Among other things, tempdb stores:
  • 30. work tables to store intermediate results for spools or sorting;
  • 31. row versions that are generated by data modification transactions in a database that uses read-committed isolation using row versioning or snapshot isolation;
  • 32. row versions that are generated by data modification transactions for features such as online index operations, Multiple Active Result Sets (MARS), and AFTER triggers.
  • 34. tempdb is re-created every time SQL Server is started.
  • 35. There is never anything in tempdb to be saved from one session of SQL Server to another.
  • 37. What should be the size of tempdb? The best way to estimate the size of tempdb is by running your workload in a test environment. Use the ALTER DATABASE command to set its size, with a safety factor that you feel is appropriate. Never rely on auto-grow for tempdb: Auto-grow causes a pause during processing when you can least afford it (less of an issue with instant file initialization) Auto-grow leads to physical fragmentation Remember that tempdb is created every time you restart SQL Server, but its size is set either to the default inherited from the model database or to the size you set using the ALTER DATABASE command (the recommended option) © The Norns Laboratories, 2009
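Pre-sizing tempdb with ALTER DATABASE might look like the following sketch; tempdev and templog are the default logical file names, while the sizes are illustrative assumptions, not recommendations for a specific workload:

```sql
-- Set explicit sizes so tempdb does not depend on auto-grow.
-- These sizes persist across restarts, unlike tempdb's contents.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 1024MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 256MB);
```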
  • 38. 1 file vs. multiple files for tempdb Spread tempdb across at least as many equally sized files as there are cores or CPUs. Since allocation in SQL Server is done using proportional fill, allocations are evenly distributed, and so is the access to and manipulation of the allocation structures across all files. Note: you can always have more files than cores, but you may not see much improvement. © The Norns Laboratories, 2009
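Additional tempdb data files are added with ALTER DATABASE ... ADD FILE; a sketch for a 4-core machine, where the file names, path, and sizes are assumptions for illustration:

```sql
-- Add equally sized data files so proportional fill distributes
-- allocations evenly; disabling growth keeps the files equal.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'C:\tempdb2.ndf',
          SIZE = 1024MB, FILEGROWTH = 0);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'C:\tempdb3.ndf',
          SIZE = 1024MB, FILEGROWTH = 0);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev4, FILENAME = 'C:\tempdb4.ndf',
          SIZE = 1024MB, FILEGROWTH = 0);
```

Keeping the files the same size matters: proportional fill favors the file with the most free space, so unequal files defeat the purpose of having several.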
  • 39. Creating Database Execute the following code to create a database: CREATE DATABASE TK432 ON PRIMARY ( NAME = N'TK432_Data', FILENAME = N'c:\test\TK432.mdf', SIZE = 8MB, MAXSIZE = UNLIMITED, FILEGROWTH = 16MB ), FILEGROUP FG1 ( NAME = N'TK432_Data2', FILENAME = N'c:\test\TK432.ndf', SIZE = 8MB, MAXSIZE = UNLIMITED, FILEGROWTH = 16MB ), FILEGROUP Documents CONTAINS FILESTREAM DEFAULT ( NAME = N'Documents', FILENAME = N'c:\test\TK432Documents' ) LOG ON ( NAME = N'TK432_Log', FILENAME = N'c:\test\TK432.ldf', SIZE = 8MB, MAXSIZE = 2048GB, FILEGROWTH = 16MB ) GO Execute the following code to change the default filegroup: ALTER DATABASE TK432 MODIFY FILEGROUP FG1 DEFAULT GO © The Norns Laboratories, 2009
  • 40. Database Recovery Models ALTER DATABASE database_name SET RECOVERY { FULL | BULK_LOGGED | SIMPLE } You need to know which types of backups are possible for each recovery model. © The Norns Laboratories, 2009
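The backup types each model permits can be demonstrated directly; a sketch assuming the TK432 database created earlier and a hypothetical backup path:

```sql
-- Under FULL recovery, full, differential, and transaction log
-- backups are all possible; under SIMPLE, log backups are not.
-- The C:\backup path is an assumption for illustration.
ALTER DATABASE TK432 SET RECOVERY FULL;

BACKUP DATABASE TK432 TO DISK = 'C:\backup\TK432.bak';       -- full
BACKUP DATABASE TK432 TO DISK = 'C:\backup\TK432_diff.bak'
    WITH DIFFERENTIAL;                                       -- differential
BACKUP LOG TK432 TO DISK = 'C:\backup\TK432.trn';            -- log

ALTER DATABASE TK432 SET RECOVERY SIMPLE;
-- A BACKUP LOG statement would now fail, because under SIMPLE
-- recovery the log is truncated automatically at checkpoints.
```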
  • 41. Auto Options AUTO_CLOSE AUTO_SHRINK AUTO_CREATE_STATISTICS AUTO_UPDATE_STATISTICS AUTO_UPDATE_STATISTICS_ASYNC © The Norns Laboratories, 2009
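These options are set per database with ALTER DATABASE; a sketch of settings commonly used in production, applied to the TK432 sample database:

```sql
-- Keep statistics maintenance on; keep AUTO_CLOSE and AUTO_SHRINK off
-- to avoid connection delays and file fragmentation.
ALTER DATABASE TK432 SET AUTO_CLOSE OFF;
ALTER DATABASE TK432 SET AUTO_SHRINK OFF;
ALTER DATABASE TK432 SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE TK432 SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE TK432 SET AUTO_UPDATE_STATISTICS_ASYNC OFF;
```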
  • 42. Change Tracking New in SQL Server 2008 – tracks changes made to each row in a table. CHANGE_RETENTION AUTO_CLEANUP © The Norns Laboratories, 2009
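Change tracking is enabled first at the database level, where CHANGE_RETENTION and AUTO_CLEANUP are configured, and then per table. A sketch against the TK432 database and the dbo.OrdersDetail table from the earlier slides:

```sql
-- Enable change tracking for the database, keeping tracking
-- information for 2 days and cleaning up expired data automatically.
ALTER DATABASE TK432
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- Then enable it for each table whose changes should be tracked.
ALTER TABLE dbo.OrdersDetail
ENABLE CHANGE_TRACKING;
```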
  • 43. Access Database status modes: ONLINE READ_ONLY / READ_WRITE SINGLE_USER / RESTRICTED_USER / MULTI_USER OFFLINE EMERGENCY ROLLBACK IMMEDIATE ROLLBACK AFTER <number of seconds> © The Norns Laboratories, 2009
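A typical use of these modes is taking exclusive access for maintenance; a sketch against the TK432 sample database:

```sql
-- Take exclusive access, rolling back any open transactions
-- immediately, then return the database to normal multi-user access.
ALTER DATABASE TK432 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
-- ... perform maintenance here ...
ALTER DATABASE TK432 SET MULTI_USER;
```

ROLLBACK AFTER 30 would instead give open transactions 30 seconds to finish before being rolled back.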
  • 44. Parameterization Forced parameterization changes the literal constants in a query to parameters when the query is compiled. Forced parameterization should not be used in environments that rely heavily on indexed views and indexes on computed columns. Generally, the PARAMETERIZATION FORCED option should be used only by experienced database administrators, after determining that it does not adversely affect performance. Distributed queries that reference more than one database are eligible for forced parameterization as long as the PARAMETERIZATION option is set to FORCED in the database in whose context the query runs. Setting the PARAMETERIZATION option to FORCED flushes all query plans from the plan cache of a database, except those that are currently compiling, recompiling, or running. Plans for queries that are compiling or running during the setting change are parameterized the next time the query is executed. Setting the PARAMETERIZATION option is an online operation that requires no database-level exclusive locks. Forced parameterization is disabled (set to SIMPLE) when the compatibility level of a SQL Server database is set to 80, or when a database from an earlier instance is attached to an instance of SQL Server 2005 or later. The current setting of the PARAMETERIZATION option is preserved when reattaching or restoring a database. When the PARAMETERIZATION option is set to FORCED, the reporting of error messages may differ from that of simple parameterization: multiple error messages may be reported in cases where fewer messages would be reported under simple parameterization, and the line numbers in which errors occur may be reported incorrectly. © The Norns Laboratories, 2009
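The option itself is a one-line database setting; a sketch against the TK432 sample database:

```sql
-- Switch the database to forced parameterization; as noted above,
-- this flushes the database's query plans from the plan cache.
ALTER DATABASE TK432 SET PARAMETERIZATION FORCED;

-- Revert to the default behavior (simple parameterization).
ALTER DATABASE TK432 SET PARAMETERIZATION SIMPLE;
```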
  • 45. Collation Sequences Each SQL Server collation specifies three properties: The sort order to use for Unicode data types (nchar, nvarchar, and ntext). A sort order defines the sequence in which characters are sorted and the way characters are evaluated in comparison operations. The sort order to use for non-Unicode character data types (char, varchar, and text). The code page used to store non-Unicode character data. Note: you cannot specify the equivalent of a code page for the Unicode data types (nchar, nvarchar, and ntext). The double-byte bit patterns used for Unicode characters are defined by the Unicode standard and cannot be changed. © The Norns Laboratories, 2009
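Collations can also be assigned per column and overridden per expression; a small sketch using the built-in Latin1_General collations, with a hypothetical demo table:

```sql
-- Column-level collation: names in this column compare case-sensitively.
CREATE TABLE dbo.CollationDemo
(
    Name varchar(50) COLLATE Latin1_General_CS_AS NOT NULL
);

-- Expression-level collation: force a case-insensitive comparison
-- regardless of the column's own collation.
SELECT Name
FROM dbo.CollationDemo
WHERE Name COLLATE Latin1_General_CI_AS = 'smith';
```

The suffix encodes the sort-order properties from the slide: CS/CI for case sensitivity, AS/AI for accent sensitivity.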
  • 46. Database Integrity Checks USE [master] GO ALTER DATABASE [AdventureWorks2008] SET PAGE_VERIFY CHECKSUM GO When DBCC CHECKDB is executed, SQL Server performs all the following actions: Checks page allocation within the database Checks the structural integrity of all tables and indexed views Calculates a checksum for every data and index page to compare against the stored checksum Validates the contents of every indexed view Checks the database catalog Validates Service Broker data within the database To accomplish these checks, DBCC CHECKDB executes the following commands: DBCC CHECKALLOC, to check the page allocation of the database DBCC CHECKCATALOG, to check the database catalog DBCC CHECKTABLE, for each table and view in the database to check the structural integrity © The Norns Laboratories, 2009
  • 47. Practice Test, Review and Questions 10 questions, lesson 1, time 20 minutes © The Norns Laboratories, 2009
  • 48. Tables 3 hours © The Norns Laboratories, 2009
  • 49. Basics of Data Modeling Subject Area Reflecting SA in the model Put stuff where it belongs. Tables follow some very basic rules—columns define a group of data that you need to store, and you add one row to the table for each unique group of information. The columns that you define represent the distinct pieces of information that you need to work with inside your database, such as a city, product name, first name, last name, or price. © The Norns Laboratories, 2009
  • 50. Nullability You can specify whether a column allows nulls by specifying NULL or NOT NULL for the column properties. Just as with every command you execute, you should always specify explicitly each option that you want, especially when you are creating objects. If you do not specify the nullability option, SQL Server uses the default option when creating a table, which could produce unexpected results. In addition, the default option is not guaranteed to be the same for each database because you can modify this by changing the ANSI_NULL_DEFAULT database property. A NULL value does not equal another NULL, and NULLs cannot be compared. © The Norns Laboratories, 2009
  • 51. COLLATE Collation sequences control the way characters in various languages are handled. When you install an instance of SQL Server, you specify the default collation sequence for the instance. You can set the COLLATE property of a database to override the instance collation sequence, which SQL Server then applies as the default collation sequence for objects within the database. You can override the collation sequence for an entire table. You can override the collation sequence for an individual column. By specifying the COLLATE option for a character-based column, you can set language-specific behavior for the column. © The Norns Laboratories, 2009
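A column-level override can be sketched as follows (the table and column names are hypothetical):

```sql
-- [comments] uses the database default collation; [frenchName] is
-- overridden at the column level with a French collation.
CREATE TABLE dbo.Products
(productId  int           NOT NULL,
 comments   nvarchar(200) NULL,
 frenchName nvarchar(100) COLLATE French_CI_AS NULL);
```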
  • 52. IDENTITY Identities are used to provide a value for a column automatically when data is inserted. You cannot update a column with the identity property. Columns with any numeric data type, except float and real, can accept an identity property because you also have to specify a seed value and an increment to be applied for each subsequently inserted row. You can have only a single identity column in a table. Although SQL Server automatically provides the next value in the sequence, you can insert a value into an identity column explicitly by using the SET IDENTITY_INSERT <table name> ON command. You can also change the next value generated by modifying the seed using the DBCC CHECKIDENT command. © The Norns Laboratories, 2009
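The identity options above can be sketched as follows (the dbo.Orders table is a hypothetical example):

```sql
-- Identity column seeded at 1, incrementing by 1 for each inserted row.
CREATE TABLE dbo.Orders
(orderId   int IDENTITY(1,1) NOT NULL,
 orderDate datetime          NOT NULL);

-- Explicitly insert a specific value into the identity column.
SET IDENTITY_INSERT dbo.Orders ON;
INSERT INTO dbo.Orders (orderId, orderDate) VALUES (1000, GETDATE());
SET IDENTITY_INSERT dbo.Orders OFF;

-- Change the seed so the next generated value is 2001.
DBCC CHECKIDENT ('dbo.Orders', RESEED, 2000);
```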
  • 53. NOT FOR REPLICATION When the NOT FOR REPLICATION option is applied to an identity column, SQL Server does not reseed the identity column when the replication engine is applying changes. © The Norns Laboratories, 2009
  • 54. Computed Columns When you create a computed column, only the definition of the calculation is stored. A computed column cannot be used as a DEFAULT or FOREIGN KEY constraint definition or with a NOT NULL constraint definition. However, a computed column can be used as a key column in an index or as part of any PRIMARY KEY or UNIQUE constraint, if the computed column value is defined by a deterministic expression and the data type of the result is allowed in index columns. For example, if the table has integer columns a and b, the computed column a+b may be indexed, but computed column a+DATEPART(dd, GETDATE()) cannot be indexed because the value may change in subsequent invocations.  A computed column cannot be the target of an INSERT or UPDATE statement.  © The Norns Laboratories, 2009
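A deterministic computed column can be persisted and indexed; a minimal sketch with a hypothetical order-detail table:

```sql
-- [lineTotal] is computed from the other columns. PERSISTED stores the
-- result and allows it to be indexed, because the expression is deterministic.
CREATE TABLE dbo.OrderDetail
(orderId   int   NOT NULL,
 quantity  int   NOT NULL,
 unitPrice money NOT NULL,
 lineTotal AS (quantity * unitPrice) PERSISTED);

CREATE NONCLUSTERED INDEX idx_OrderDetail_lineTotal
    ON dbo.OrderDetail (lineTotal);
```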
  • 55. Row and Page Compression © The Norns Laboratories, 2009 Row-level compression allows you to compress individual rows to fit more rows on a page, which in turn reduces the amount of storage space for the table because you don’t need to store as many pages on a disk. Because you can uncompress the data at any time and the uncompress operation must always succeed, you cannot use compression to store more than 8,060 bytes in a single row. Page compression reduces only the amount of disk storage required because the entire page is compressed. To compress any newly added, uncompressed pages, you need to execute an ALTER TABLE ... REBUILD statement with the PAGE compression option.
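A sketch of both options, assuming a hypothetical dbo.OrderDetail table:

```sql
-- Apply row-level compression.
ALTER TABLE dbo.OrderDetail REBUILD WITH (DATA_COMPRESSION = ROW);

-- Apply page compression (rebuilds the table and compresses all pages).
ALTER TABLE dbo.OrderDetail REBUILD WITH (DATA_COMPRESSION = PAGE);
```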
  • 56. Modeling the world’s currencies © The Norns Laboratories, 2009
  • 57. Primary Keys The primary key defines the column(s) that uniquely identify every row in the table. You must specify all columns within the primary key as NOT NULL. You can have only a single primary key constraint defined for a table. When you create a primary key, you also designate whether the primary key is clustered or nonclustered. A clustered primary key, the default SQL Server behavior, causes SQL Server to store the table in sorted order according to the primary key. When a clustered primary key is created on a table that is compressed, the compression option is applied to the primary key when the table is rebuilt. © The Norns Laboratories, 2009
  • 58. Foreign Keys You use foreign keys to implement referential integrity between tables within your database. By creating foreign keys, you can ensure that related tables cannot contain invalid, orphaned rows. Foreign keys create what is referred to as a parent-child relationship between two tables and ensure that a value cannot be written to the child table that does not already exist in the parent table. For example, it would not make any sense to have an order for a customer who does not exist. To create a foreign key between two tables, the parent table must have a primary key, which the child table references. In addition, the data types between the parent column(s) and child column(s) must be compatible. If you have a multicolumn primary key, all the columns from the parent primary key must exist in the child table to define a foreign key. © The Norns Laboratories, 2009
  • 59. CASCADING One of the options for a foreign key is CASCADE. You can configure a foreign key such that modifications of the parent table are cascaded to the child table. For example, when you delete a customer, SQL Server also deletes all the customer’s associated orders. Cascading is an extremely bad idea. It is very common to have foreign keys defined between all the tables within a database. If you were to issue a DELETE statement without a WHERE clause against the wrong table, you could eliminate every row, in every table within your database, very quickly. By leaving the CASCADE option off for a foreign key, if you attempt to delete a parent row that is referenced, you get an error. © The Norns Laboratories, 2009
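The two variants can be sketched as follows (hypothetical dbo.Customers and dbo.Orders tables):

```sql
-- Without CASCADE: attempting to delete a referenced customer raises an error.
ALTER TABLE dbo.Orders ADD CONSTRAINT FK_Orders_Customers
    FOREIGN KEY (customerId) REFERENCES dbo.Customers (customerId);

-- The cascading variant, shown for contrast: deleting a customer would
-- silently delete all of that customer's orders as well.
-- ALTER TABLE dbo.Orders ADD CONSTRAINT FK_Orders_Customers
--     FOREIGN KEY (customerId) REFERENCES dbo.Customers (customerId)
--     ON DELETE CASCADE;
```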
  • 60. Default constraints Default constraints allow you to specify a value that is written to the column if the application does not supply a value. Default constraints apply only to new rows added with an INSERT, BCP, or BULK INSERT statement. You can define default constraints for either NULL or NOT NULL columns. If a column has a default constraint and an application passes in a NULL for the column, SQL Server writes a NULL to the column instead of the default value. SQL Server writes the default value to the column only if the application does not specify the column in the INSERT statement. © The Norns Laboratories, 2009
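A minimal sketch, assuming a hypothetical dbo.Orders table with an orderDate column:

```sql
-- Rows inserted without specifying [orderDate] receive the current date/time.
ALTER TABLE dbo.Orders ADD CONSTRAINT DF_Orders_orderDate
    DEFAULT (GETDATE()) FOR orderDate;

-- orderDate is omitted from the column list, so the default is applied.
INSERT INTO dbo.Orders (customerId) VALUES (42);

-- An explicit NULL overrides the default (the column must allow NULLs):
-- INSERT INTO dbo.Orders (customerId, orderDate) VALUES (42, NULL);
```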
  • 61. Adding a Check Constraint © The Norns Laboratories, 2009 Check constraints limit the range of values within a column. Check constraints can be created at a column level and are not allowed to reference any other column in the table. Table-level check constraints can reference any column within a table, but they are not allowed to reference columns in other tables.
  • 62. CREATE TABLE script USE [MyFirstDatabase] GO /****** Object: Table [Currencies].[Currencies] Script Date: 11/09/2009 22:22:23 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [Currencies].[Currencies]( [currencyId] [int] NOT NULL, [countryName] [nvarchar](64) NOT NULL, [currencyName] [nvarchar](64) NOT NULL, [currencyCode] [nchar](4) NULL, CONSTRAINT [PK_Currencies] PRIMARY KEY CLUSTERED ( [currencyId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO © The Norns Laboratories, 2009
  • 63. CREATE TABLE script (cont.) CREATE TABLE [Currencies].[CurrencyUnits]( [currencyUnitId] [int] NOT NULL, [name] [nvarchar](64) NOT NULL, [value] [money] NOT NULL, [image] [image] NULL, [currencyId] [int] NOT NULL, CONSTRAINT [PK_CurrencyUnits] PRIMARY KEY CLUSTERED ( [currencyUnitId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY] GO ALTER TABLE [Currencies].[CurrencyUnits] WITH CHECK ADD CONSTRAINT [FK_CurrencyUnits_Currencies] FOREIGN KEY([currencyId]) REFERENCES [Currencies].[Currencies] ([currencyId]) GO ALTER TABLE [Currencies].[CurrencyUnits] CHECK CONSTRAINT [FK_CurrencyUnits_Currencies] GO ALTER TABLE [Currencies].[CurrencyUnits] WITH CHECK ADD CONSTRAINT [CK_CurrencyUnits_ValueGT0] CHECK (([value]>(0))) GO ALTER TABLE [Currencies].[CurrencyUnits] CHECK CONSTRAINT [CK_CurrencyUnits_ValueGT0] GO EXEC sys.sp_addextendedproperty @name=N'MS_Description', @value=N'Value Greater Than 0' , @level0type=N'SCHEMA',@level0name=N'Currencies', @level1type=N'TABLE',@level1name=N'CurrencyUnits', @level2type=N'CONSTRAINT',@level2name=N'CK_CurrencyUnits_ValueGT0' GO © The Norns Laboratories, 2009
  • 64. Using Schemas CREATE SCHEMA [Currencies] AUTHORIZATION dbo GO It is recommended that you do not create tables and views or assign permissions within a CREATE SCHEMA statement. Any CREATE SCHEMA statement that is executed must be in a separate batch. ALTER SCHEMA [Currencies] TRANSFER dbo.CurrencyUnits GO ALTER SCHEMA [Currencies] TRANSFER dbo.Currencies GO © The Norns Laboratories, 2009
  • 65. Indexes 2 hours © The Norns Laboratories, 2009
  • 66. Balanced Trees (B-Trees) © The Norns Laboratories, 2009 A B-tree is constructed of a root node that contains a single page of data, one or more optional intermediate level pages, and one or more optional leaf level pages. The core concept of a B-tree can be found in the first word of the name: balanced. A B-tree is always symmetrical, with the same number of pages on both the left and right halves at each level. The leaf-level pages contain entries sorted in the order that you specified. The data at the leaf level contains every combination of values within the column(s) that are being indexed. The number of index rows on a page is determined by the storage space required by the columns that are defined in the index.
  • 67. Index Levels A data page = 8,192 bytes (or 8,060 bytes of actual user data). If you build an index on an INT column, each row in the table will require 4 bytes of storage in the index, so 8,060 / 4 = 2,015 index rows fit on a single leaf-level page. © The Norns Laboratories, 2009
  • 68. Indexing limits You can define an index with a maximum of 16 columns. The maximum size of the index key is 900 bytes. A table without a clustered index is referred to as a heap. When you have a heap, page chains are not stored in sorted order. © The Norns Laboratories, 2009
  • 69. Covering Indexes When an index is built, every value in the index key is loaded into the index. In effect, each index is a mini-table containing all the values corresponding to just the columns in the index key. It is possible for a query to be entirely satisfied by using the data in the index. An index that is constructed such that SQL Server can completely satisfy queries by reading only the index is called a covering index. © The Norns Laboratories, 2009
  • 70. Included Columns Indexes can be created using the optional INCLUDE clause. Included columns become part of the index at only the leaf level. Values from included columns do not appear in the root or intermediate levels of an index and do not count against the 900-byte limit for an index. This way you can construct covering indexes that can have more than 16 columns and 900 bytes by using the INCLUDE clause. © The Norns Laboratories, 2009
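A covering index built with INCLUDE can be sketched against the Person.Contact table used elsewhere in this deck:

```sql
-- [LastName] and [FirstName] form the index key; [EmailAddress] is carried
-- only at the leaf level and does not count against the 900-byte key limit.
-- A query selecting these three columns can be satisfied from the index alone.
CREATE NONCLUSTERED INDEX idx_Contact_Name
    ON Person.Contact (LastName, FirstName)
    INCLUDE (EmailAddress);
```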
  • 71. Query optimizer Ways to create statistics in SQL Server 2008: The optimizer automatically creates single-column statistics as needed as a side effect of optimizing SELECT, INSERT, UPDATE, DELETE, and MERGE statements if AUTO_CREATE_STATISTICS is enabled, which is the default setting. Note: The optimizer only creates nonfiltered statistics in these cases. There are two basic statements in SQL Server 2008 that explicitly generate the statistical information described above: CREATE INDEX generates the declared index in the first place, and it also creates one set of statistics for the column combinations constituting the index keys (but not other included columns). CREATE STATISTICS only generates the statistics for a given column or combination of columns. Note: If the CREATE INDEX defines a predicate, the corresponding statistics are created with the same predicate. In addition, there are several other ways to create statistics or indexes. Ultimately, though, each issues one of the above two commands. Use sp_createstats to create statistics for all eligible columns (all except XML columns) for all user tables in the current database. A new statistics object will not be created for columns that already have a statistics object. Use DBCC DBREINDEX to rebuild one or more indexes for a table in the specified database. In SQL Server Management Studio, expand the folder under a Table object, right-click the Statistics folder, and choose New Statistics. Use the Database Engine Tuning Advisor to create indexes. © The Norns Laboratories, 2009
  • 72. CREATE STATISTICS CREATE STATISTICS FirstLast2 ON Person.Contact(FirstName,LastName) WITH SAMPLE 50 PERCENT The auto update statistics feature described above may be turned off at different levels: On the database level, disable auto update statistics by using command ALTER DATABASE dbname SET AUTO_UPDATE_STATISTICS OFF At the table level, disable auto update statistics using the NORECOMPUTE option of the UPDATE STATISTICS command or CREATE STATISTICS command. Use sp_autostats to display and change the auto update statistics setting for a table, index, or statistics object.  Re-enabling the automatic updating of statistics can be done similarly using ALTER DATABASE, UPDATE STATISTICS, or sp_autostats. © The Norns Laboratories, 2009
  • 73. FILLFACTOR The FILLFACTOR option for an index determines the percentage of free space that is reserved on each leaf-level page of the index when an index is created or rebuilt. The free space reserved leaves room on the page for additional values to be added, thereby reducing the rate at which page splits occur. The FILLFACTOR is represented as a percentage full. For example, a FILLFACTOR = 75 means that 25 percent of the space on each leaf-level page is left empty to accommodate future values. © The Norns Laboratories, 2009
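For example, a FILLFACTOR of 75 can be specified when creating an index (the index name is hypothetical):

```sql
-- Leave 25 percent free space on each leaf-level page to absorb future
-- inserts and reduce page splits.
CREATE NONCLUSTERED INDEX idx_Contact_LastName
    ON Person.Contact (LastName)
    WITH (FILLFACTOR = 75);
```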
  • 74. Defragmenting an Index ALTER INDEX { index_name | ALL } ON <object> { REBUILD [ [ WITH ( <rebuild_index_option> [ ,...n ] ) ] | [ PARTITION = partition_number [ WITH ( <single_partition_rebuild_index_option> [ ,...n ] )] ] ] | DISABLE | REORGANIZE [ PARTITION = partition_number ] [ WITH ( LOB_COMPACTION = { ON | OFF } ) ] | SET ( <set_index_option> [ ,...n ] ) }[ ; ] When you defragment an index, you can use either the REBUILD or REORGANIZE options. © The Norns Laboratories, 2009
  • 75. Index REBUILD The REBUILD option rebuilds all levels of the index and leaves all pages filled according to the FILLFACTOR setting of an index. The rebuild of an index effectively re-creates the entire B-tree structure, so unless you specify the ONLINE option, a shared table lock is acquired, preventing any changes until the rebuild operation completes. © The Norns Laboratories, 2009
  • 76. Index REORGANIZE The REORGANIZE option removes fragmentation only at the leaf level. Intermediate-level pages and the root page are not defragmented during a reorganize. REORGANIZE is always an online operation that does not incur any long-term blocking. © The Norns Laboratories, 2009
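A common workflow is to check fragmentation first and then choose between the two options; a sketch, assuming the AdventureWorks database and a hypothetical index name:

```sql
-- Inspect fragmentation for all indexes in the database.
SELECT object_id, index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats
    (DB_ID(N'AdventureWorks'), NULL, NULL, NULL, 'LIMITED');

-- Light fragmentation: reorganize (always an online operation).
ALTER INDEX idx_Contact_LastName ON Person.Contact REORGANIZE;

-- Heavy fragmentation: rebuild, optionally online to avoid a table lock.
ALTER INDEX idx_Contact_LastName ON Person.Contact REBUILD WITH (ONLINE = ON);
```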
  • 77. Disabling an index An index can be disabled by using the ALTER INDEX statement as follows: ALTER INDEX { index_name | ALL } ON <object> DISABLE [ ; ] When an index is disabled, the definition remains in the system catalog but is no longer used. SQL Server does not maintain the index as data in the table changes, and the index cannot be used to satisfy queries. If a clustered index is disabled, the entire table becomes inaccessible. To enable an index, it must be rebuilt to regenerate and populate the B-tree structure. ALTER INDEX { index_name | ALL } ON <object> REBUILD [ ; ] © The Norns Laboratories, 2009
  • 78. Full Text Indexing Full text indexes can be created against CHAR/VARCHAR, XML, and VARBINARY columns. When you full text index a VARBINARY column, you must specify the filter to be used by the word breaker to interpret the document content. Thesaurus files allow you to specify a list of synonyms or word replacements for search terms. Stop lists exclude a list of words from search arguments and a full text index. © The Norns Laboratories, 2009
  • 79. Full Text Catalog The first step in building a full text index is to create a storage structure. Unlike relational indexes, full text indexes have a unique internal structure that is maintained within a separate storage format called a full text catalog. Each full text catalog contains one or more full text indexes. The generic syntax for creating a full text catalog is CREATE FULLTEXT CATALOG catalog_name [ON FILEGROUP filegroup ] [IN PATH 'rootpath'] [WITH <catalog_option>] [AS DEFAULT] [AUTHORIZATION owner_name ] <catalog_option>::= ACCENT_SENSITIVITY = {ON|OFF} FILEGROUP clause specifies the filegroup that you want to use to store any full text indexes. ACCENT_SENSITIVITY allows you to configure whether the full text engine considers accent marks when building or querying a full text index. AS DEFAULT clause works the same as the DEFAULT option for a filegroup. AUTHORIZATION option specifies the owner of the full text catalog. © The Norns Laboratories, 2009
  • 80. Change Tracking The CHANGE_TRACKING option for a full text index determines how SQL Server maintains the index when the underlying data changes. When set to AUTO, SQL Server automatically updates the full text index as the data is modified. When set to MANUAL, you are responsible for periodically propagating the changes into the full text index. © The Norns Laboratories, 2009
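A sketch of creating a catalog and a full text index with automatic change tracking (assumes the AdventureWorks primary key name on Production.ProductDescription):

```sql
-- Create the storage structure, then the full text index on [Description].
CREATE FULLTEXT CATALOG ProductCatalog AS DEFAULT;
GO
CREATE FULLTEXT INDEX ON Production.ProductDescription (Description)
    KEY INDEX PK_ProductDescription_ProductDescriptionID
    ON ProductCatalog
    WITH CHANGE_TRACKING AUTO;
GO
```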
  • 81. Stemmers SQL Server uses stemmers to allow a full text index to search on all inflectional forms of a search term, such as drive, drove, driven, and driving. Stemming is language-specific. Although you could employ a German word breaker to tokenize English, the German stemmer cannot process English. © The Norns Laboratories, 2009
  • 82. Querying Full Text Data SELECT ProductDescriptionID, Description FROM Production.ProductDescription WHERE FREETEXT(Description,N'bike') GO All search terms used with full text are Unicode strings. If you pass in a non-Unicode string, the query still works, but it is much less efficient because the optimizer cannot use parameter sniffing to evaluate distribution statistics on the full text index. Make certain that all terms you pass in for full text search are always typed as Unicode for maximum performance. © The Norns Laboratories, 2009
  • 83. THESAURUS FILES A thesaurus file exists for each supported language. All thesaurus files are XML files stored in the FTDATA directory underneath your default SQL Server installation path. The thesaurus files are not populated, so to perform synonym searches, you need to populate the thesaurus files. © The Norns Laboratories, 2009
  • 84. Stop Lists Stop lists are used to exclude words that you do not want included in a full text index. CREATE FULLTEXT STOPLIST ProductStopList; GO ALTER FULLTEXT STOPLIST ProductStopList ADD 'bike' LANGUAGE 1033; GO ALTER FULLTEXT INDEX ON Production.ProductDescription SET STOPLIST ProductStopList GO © The Norns Laboratories, 2009
  • 85. Distributing and Partitioning Data 2.5 hours © The Norns Laboratories, 2009
  • 86. Distributing and Partitioning Data Table partitioning was introduced in Microsoft SQL Server 2005 as a means to split large tables across multiple storage structures. Previously, objects were restricted to a single filegroup that could contain multiple files. However, the placement of data within a filegroup was still determined by SQL Server. Table partitioning allows tables, indexes, and indexed views to be created on multiple filegroups while also allowing the database administrator (DBA) to specify which portion of the object will be stored on a specific filegroup. © The Norns Laboratories, 2009
  • 87. The process for partitioning For partitioning of a table, index, or indexed view do the following: Create a partition function. Create a partition scheme mapped to a partition function. Create the table, index, or indexed view on the partition scheme. © The Norns Laboratories, 2009
  • 88. Creating a Partition Function A partition function defines the boundary points that will be used to split data across a partition scheme. The data type for a partition function can be any native SQL Server data type, except: text, ntext, image, varbinary(max), varchar(max), nvarchar(max), timestamp, xml © The Norns Laboratories, 2009
  • 89. Partition Function CREATE PARTITION FUNCTION mypartfunction (int) AS RANGE LEFT FOR VALUES (10,20,30,40,50,60) © The Norns Laboratories, 2009 CREATE PARTITION FUNCTION mypartfunction (int) AS RANGE RIGHT FOR VALUES (10,20,30,40,50,60)
  • 90. Practice Partitioning © The Norns Laboratories, 2009 Self-paced Training Kit, page 140
  • 91. Creating a Partition Scheme A partition scheme defines the storage structures and collection of filegroups that you want to use with a given partition function. CREATE PARTITION SCHEME partition_scheme_name AS PARTITION partition_function_name [ ALL ] TO ( { file_group_name | [ PRIMARY ] } [ ,...n ] ) Create partition scheme as described on p.143-144 Run the following commands to check on results: SELECT * FROM sys.partition_range_values; SELECT * FROM sys.partition_schemes; © The Norns Laboratories, 2009
  • 92. Creating Partitioned Tables and Indexes CREATE TABLE Employee (EmployeeID int NOT NULL, FirstName varchar(50) NOT NULL, LastName varchar(50) NOT NULL) ON mypartscheme(EmployeeID); GO CREATE NONCLUSTERED INDEX idx_employeefirstname ON dbo.Employee(FirstName) ON mypartscheme(EmployeeID); GO © The Norns Laboratories, 2009
  • 93. Split and Merge Operators The SPLIT operator introduces a new boundary point into a partition function. MERGE eliminates a boundary point from a partition function. The general syntax is as follows: ALTER PARTITION FUNCTION partition_function_name() {SPLIT RANGE ( boundary_value ) | MERGE RANGE ( boundary_value ) } [ ; ] © The Norns Laboratories, 2009
  • 94. Altering a Partition Scheme You can add filegroups to an existing partition scheme to create more storage space for a partitioned table. The general syntax is as follows: ALTER PARTITION SCHEME partition_scheme_name NEXT USED [ filegroup_name ] [ ; ] The NEXT USED clause has two purposes: It adds a new filegroup to the partition scheme, if the specified filegroup is not already part of the partition scheme. It marks the NEXT USED property for a filegroup. The filegroup that is marked with the NEXT USED flag is the filegroup that contains the next partition that is created when a SPLIT operation is executed. © The Norns Laboratories, 2009
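The NEXT USED and SPLIT steps work together; a sketch using the mypartfunction/mypartscheme examples from earlier slides and a hypothetical filegroup FG7:

```sql
-- Mark FG7 as the filegroup that will hold the next partition created.
ALTER PARTITION SCHEME mypartscheme NEXT USED [FG7];
GO
-- Introduce a new boundary point; the new partition is placed on FG7.
ALTER PARTITION FUNCTION mypartfunction() SPLIT RANGE (70);
GO
-- MERGE removes a boundary point, combining two adjacent partitions.
ALTER PARTITION FUNCTION mypartfunction() MERGE RANGE (10);
GO
```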
  • 95. Switch Operator SQL Server stores data on pages in a doubly linked list. To locate and access data, SQL Server performs the following basic process: 1. Resolve the table name to an object ID. 2. Locate the entry for the object ID in sys.indexes to extract the first page for the object. 3. Read the first page of the object. 4. Using the Next Page and Previous Page entries on each data page, walk the page chain to locate the data required. © The Norns Laboratories, 2009 SWITCH operator allows to exchange partitions between tables in a perfectly scalable manner with no locking, blocking, or deadlocking.
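A sketch of a SWITCH, assuming the dbo.Employee table from the earlier slide and a hypothetical empty staging table with an identical structure on the same filegroup:

```sql
-- Swap partition 2 out of the partitioned table; only metadata changes,
-- so the operation completes almost instantly regardless of data volume.
ALTER TABLE dbo.Employee SWITCH PARTITION 2 TO dbo.EmployeeStaging;
GO
-- The rows can now be archived or bulk-processed without touching dbo.Employee.
```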
  • 96. Practice Review script provided in a file Practice Distributing and Partitioning.sql Run each step individually. Observe results, and describe each step purpose in comments. © The Norns Laboratories, 2009
  • 97. Importing and Exporting Data 1.5 hours © The Norns Laboratories, 2009
  • 98. Bulk Copy Program (BCP) BCP is a command-line program that allows you to: import data from a file into a table; export data from a table to a file. bcp {[[database_name.][owner].]{table_name | view_name} | "query"} {in | out | queryout | format} data_file [-m max_errors] [-f format_file] [-x] [-e err_file] [-F first_row] [-L last_row] [-b batch_size] [-n] [-c] [-w] [-N] [-V (60 | 65 | 70 | 80)] [-6] [-q] [-C { ACP | OEM | RAW | code_page } ] [-t field_term] [-r row_term] [-i input_file] [-o output_file] [-a packet_size] [-S server_name[\instance_name]] [-U login_id] [-P password] [-T] [-v] [-R] [-k] [-E] [-h "hint [,...n]"] C:\> bcp master..sysobjects out c:\test\sysobjects.txt -c -t, -T -S <servername> C:\> bcp AdventureWorks.Sales.SalesOrderDetail out c:\test\AdventureWorks.Sales.SalesOrderDetail.txt -c -t, -T -S <servername> © The Norns Laboratories, 2009
  • 99. BCP (continues)… The switches used are: -c Output in ASCII with the default field terminator (tab) and row terminator (crlf) -t override the field terminator with "," -T use a trusted connection. Note that -U and -P may be used for username/password -S connect to this server to execute the command Note that, like DTS/SSIS, BCP is a client utility, hence you need to supply the connection information. For transfer of data between SQL servers, in place of -c, use -n or -N for native data format (-N = Unicode). This is much faster and avoids data conversion problems. © The Norns Laboratories, 2009
  • 100. BULK INSERT BULK INSERT is a T-SQL command that allows you to import data from a file into a table. BULK INSERT cannot export data. BULK INSERT [ database_name . [ schema_name ] . | schema_name . ] [ table_name | view_name ] FROM 'data_file' [ WITH ( [ [ , ] BATCHSIZE = batch_size ] [ [ , ] CHECK_CONSTRAINTS ] [ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ] [ [ , ] DATAFILETYPE = { 'char' | 'native' | 'widechar' | 'widenative' } ] [ [ , ] FIELDTERMINATOR = 'field_terminator' ] [ [ , ] FIRSTROW = first_row ] [ [ , ] FIRE_TRIGGERS ] [ [ , ] FORMATFILE = 'format_file_path' ] [ [ , ] KEEPIDENTITY ] [ [ , ] KEEPNULLS ] [ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ] [ [ , ] LASTROW = last_row ] [ [ , ] MAXERRORS = max_errors ] [ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ] [ [ , ] ROWS_PER_BATCH = rows_per_batch ] [ [ , ] ROWTERMINATOR = 'row_terminator' ] [ [ , ] TABLOCK ] [ [ , ] ERRORFILE = 'file_name' ] )] © The Norns Laboratories, 2009
  • 101. BULK INSERT (continues…) DECLARE @bulk_cmd varchar(1000) SET @bulk_cmd = 'BULK INSERT MyFirstDatabase..SalesOrderDetail FROM ''C:\test\AdventureWorks.Sales.SalesOrderDetail.txt'' WITH (DATAFILETYPE = ''char'', FIELDTERMINATOR = '','')' EXEC(@bulk_cmd) GO SELECT * FROM SalesOrderDetail GO © The Norns Laboratories, 2009
  • 102. SSIS – Import/Export Wizard The Import and Export Wizard uses a subset of the SSIS feature set to move data between a source and destination. Self-paced Training Kit, p.167-171, Practice 2. © The Norns Laboratories, 2009
  • 103. Designing Policy Based Management 1 hr. © The Norns Laboratories, 2009
  • 104. Designing Policies SQL Server 2008 has a new feature called Policy Based Management, also known as the Declarative Management Framework (DMF), to tackle the problem of standardizing your SQL Server instances. Policy Based Management introduces the following new objects that are used to design and check for compliance: Facets Conditions Policies Policy targets Policy categories © The Norns Laboratories, 2009
  • 105. Facets and Conditions Policies are created from a predefined set of facets. Facets define the type of object or option to be checked, such as database, Surface Area, or login. SQL Server ships with 74 facets, implemented as .NET assemblies, each with a unique set of properties. Each facet contains a subgroup of SQL Server 2008 configuration settings and other events that you can control. You pair these facets with conditions in order to create a policy. Conditions are the values that are allowed for the properties of a facet, the configuration settings, or other events contained within that facet. © The Norns Laboratories, 2009
  • 106. Policies Policies are created for a single condition and set to either enforce or check compliance. The execution mode can be set as follows: On demand – Evaluates the policy when directly executed by a user On change, prevent – Creates data definition language (DDL) triggers to prevent a change that violates the policy On change, log only – Checks the policy automatically when a change is made using the event notification infrastructure On schedule – Creates a SQL Server Agent job to check the policy on a defined schedule If a policy contains a condition that was defined using the advanced editor, the only available execution mode is On Demand. © The Norns Laboratories, 2009
  • 107. Policy Categories Policy categories can be used to group one or more policies into a single compliance unit. If not specified, all policies belong to the DEFAULT category. To check or enforce policies, you create a subscription to one or more policies. Subscription occurs at two levels—instance and database. A member of the sysadmin role can subscribe an instance to a policy category. Once subscribed, the owner of each database within the instance can subscribe their database to a policy category. Each policy category has a Mandate property that applies to databases. When a policy category is set to Mandate and a sysadmin subscribes the instance to a policy category, all databases that meet the target set are controlled by the policies within the policy category. A policy subscription to a policy category set to Mandate cannot be overridden by a database owner. © The Norns Laboratories, 2009
  • 108. Creating New Condition © The Norns Laboratories, 2009
  • 109. Practice PBM Self-paced Training Kit, p.184-191, Practices 1-5 © The Norns Laboratories, 2009
  • 110. Backing up and Restoring Database 3 hrs. © The Norns Laboratories, 2009
  • 111. Backups Backups are taken to reduce the risk of data loss. Because it is more common to back up a database than to restore one, the backup engine is optimized for the backup process. The only two parameters required for a backup are the name of the database and the backup device. Up to 64 devices can be used for a backup. Because the backup process is not concerned with the ordering of pages, multiple threads can be used to write pages to the backup device. When a backup is initiated, the backup engine grabs pages from the data files as quickly as possible, without regard to the order of pages. © The Norns Laboratories, 2009
  • 112. Backup Types Full Captures all pages within a database that contain data. Pages that do not contain data are not included in the backup. The database is fully operational during a full backup. The only operations that are not allowed during a full backup are: Adding or removing a database file Shrinking a database Partial Captures only the filegroups that can change. Read-only filegroups are not included, to minimize the size of the backup. Differential Captures all extents that have changed since the last full backup. The primary purpose of a differential backup is to reduce the number of transaction log backups that need to be restored. A differential backup has to be applied to a full backup and can’t exist until a full backup has been created. Transaction log Every change made to a database has an entry made to the transaction log. Filegroup Backup of an individual file or filegroup. © The Norns Laboratories, 2009
  • 113. BACKUP DATABASE BACKUP DATABASE { database_name | @database_name_var } TO <backup_device> [ ,...n ] [ <MIRROR TO clause> ] [ next-mirror-to ] [ WITH { DIFFERENTIAL | <general_WITH_options> [ ,...n ] } ] <backup_device>::= { { logical_device_name | @logical_device_name_var } | { DISK | TAPE } = { 'physical_device_name' | @physical_device_name_var } } <MIRROR TO clause>::= MIRROR TO <backup_device> [ ,...n ] <general_WITH_options> [ ,...n ]::= --Backup Set Options COPY_ONLY | { COMPRESSION | NO_COMPRESSION } | DESCRIPTION = { 'text' | @text_variable } | NAME = { backup_set_name | @backup_set_name_var } | PASSWORD = { password | @password_variable } | { EXPIREDATE = { 'date' | @date_var } | RETAINDAYS = { days | @days_var } } --Media Set Options { NOINIT | INIT } | { NOSKIP | SKIP } | { NOFORMAT | FORMAT } | MEDIADESCRIPTION = { 'text' | @text_variable } | MEDIANAME = { media_name | @media_name_variable } | MEDIAPASSWORD = { mediapassword | @mediapassword_variable } | BLOCKSIZE = { blocksize | @blocksize_variable } --Error Management Options { NO_CHECKSUM | CHECKSUM } | { STOP_ON_ERROR | CONTINUE_AFTER_ERROR } © The Norns Laboratories, 2009
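As an illustration of the syntax above, a minimal full backup to a single disk device might look like the following sketch. The database name, file path, and backup set name are placeholders, not values from the course material:

```sql
-- Full backup of a sample database to one disk device.
-- INIT overwrites any existing backup sets on the device,
-- COMPRESSION reduces backup size, CHECKSUM enables later verification.
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\Backups\AdventureWorks_Full.bak'
WITH INIT,
     COMPRESSION,
     CHECKSUM,
     NAME = 'AdventureWorks Full Backup';
```

Note that backup compression in SQL Server 2008 is an Enterprise Edition feature; on other editions the COMPRESSION option would be omitted.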
  • 114. Configuring Backup Devices USE [master] GO EXEC master.dbo.sp_addumpdevice @devtype = N'disk', @logicalname = N'New Backup Device', @physicalname = N'C:\test\New Backup Device.bak' GO © The Norns Laboratories, 2009
  • 115. Backups Mirroring One of the maxims of disaster recovery is that you can’t have enough copies of your backups. The MIRROR TO clause provides a built-in capability to create up to four copies of a backup in a single operation. When you include the MIRROR TO clause, SQL Server retrieves the page once from the database and writes a copy of the page to each backup mirror. If you back up to tape, you must mirror to tape. If you back up to disk, you must mirror to disk. During a restore operation, you can use any of the mirrors. © The Norns Laboratories, 2009
  • 116. Backup best practices Design and implement a well-thought-out backup strategy that suits the needs of your organization. Perform backups frequently. Decrease backup times by using compression. Use various media for backups. Increase the number of backup copies. Keep backup copies in different places. Allocate only a single backup per file. Use meaningful names for the backup files. © The Norns Laboratories, 2009
  • 117. Database backup strategy © The Norns Laboratories, 2009
  • 118. Transaction Log Backups Every change made to a database has an entry made to the transaction log. Each row is assigned a unique number internally called the Log Sequence Number (LSN). The contents of a transaction log are broken down into two basic parts: Inactive - contains all the changes that have been committed to the database. Active - contains all the changes that have not yet been committed Based on the sequence number, it is possible to restore one transaction log backup after another to recover a database to any point in time by simply following the chain of transactions as identified by the LSN. Before you can issue a transaction log backup, you must execute a full backup. © The Norns Laboratories, 2009
  • 119. BACKUP LOG Command BACKUP LOG { database_name | @database_name_var } TO <backup_device> [ ,...n ] [ <MIRROR TO clause> ] [ next-mirror-to ] [ WITH { <general_WITH_options> | <log-specific_optionspec> } [ ,...n ] ][;] © The Norns Laboratories, 2009
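A concrete log backup based on the syntax above might look like this sketch (the database name and path are placeholders; the database must be in the Full or Bulk-logged recovery model, and a full backup must already exist):

```sql
-- Transaction log backup; captures committed transactions since
-- the previous log backup and truncates the inactive portion of the log.
BACKUP LOG AdventureWorks
TO DISK = 'C:\Backups\AdventureWorks_Log.trn'
WITH CHECKSUM;
```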
  • 120. Differential Backups A differential backup contains all extents changed since the last full backup. For example, if a full backup was taken at midnight and a differential backup occurred every four hours, both the 4 A.M. backup and the 8 A.M. backup would contain all the changes made to the database since midnight. Each database has a special page in its header area called the Differential Change Map (DCM). The DCM tracks the extents that have changed since the last full backup. A full backup zeroes out the contents of the DCM. © The Norns Laboratories, 2009
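A differential backup uses the same BACKUP DATABASE syntax with the DIFFERENTIAL option. A minimal sketch, with placeholder names:

```sql
-- Differential backup: captures everything changed since the last
-- full (non-COPY_ONLY) backup, as tracked by the DCM page.
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\Backups\AdventureWorks_Diff.bak'
WITH DIFFERENTIAL, CHECKSUM;
```

Following the midnight example above, a restore at 9 A.M. would need only the midnight full backup plus the 8 A.M. differential, not every intermediate log backup.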
  • 121. COPY_ONLY Option The COPY_ONLY option allows you to create a backup that can be used to build a development or test environment, because it does not affect the database state or the set of backups in production. A full backup with the COPY_ONLY option does not reset the differential change map page and therefore has no impact on differential backups. A transaction log backup with the COPY_ONLY option does not remove transactions from the transaction log. © The Norns Laboratories, 2009
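A COPY_ONLY backup is just a regular backup command with one extra option. A sketch with placeholder names:

```sql
-- Out-of-band full backup for a test environment refresh.
-- COPY_ONLY leaves the DCM page untouched, so the production
-- differential backup chain is unaffected.
BACKUP DATABASE AdventureWorks
TO DISK = 'C:\Backups\AdventureWorks_CopyOnly.bak'
WITH COPY_ONLY;
```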
  • 122. Filegroup Backups File or filegroup backups are used to reduce the footprint of a backup, because they target only a portion of a database. Because a successful recovery requires all the files within a filegroup to be in exactly the same state, it is a good idea to back up an entire filegroup rather than individual files. Filegroup backups can be used in conjunction with differential and transaction log backups to recover a portion of the database in the event of a failure. The database can remain online and accessible to applications during the restore operation; only the portion of the database being restored is off-line. © The Norns Laboratories, 2009
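A filegroup backup names the filegroup explicitly in the BACKUP DATABASE command. In this sketch the database and the FG_Archive filegroup name are placeholders:

```sql
-- Back up one entire filegroup rather than individual files,
-- keeping every file in the filegroup in a consistent state.
BACKUP DATABASE AdventureWorks
FILEGROUP = 'FG_Archive'
TO DISK = 'C:\Backups\AdventureWorks_FG_Archive.bak';
```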
  • 123. Partial Backups BACKUP DATABASE database_name READ_WRITE_FILEGROUPS [ , <file_filegroup_list> ] TO <backup_device> When executed, SQL Server backs up the primary filegroup, all read/write filegroups, and any explicitly specified read-only filegroups. Partial backups are used solely to save backup space by excluding read-only filegroups from the backup. © The Norns Laboratories, 2009
  • 124. Identifying bad pages By executing the following command, SQL Server detects and quarantines corrupted pages: ALTER DATABASE <dbname> SET PAGE_VERIFY CHECKSUM If the database is participating in a Database Mirroring session, a copy of the corrupt page is retrieved from the mirror. If the page on the mirror is intact, the corrupt page is repaired automatically with the page retrieved from the mirror. To protect databases from massive corruption, SQL Server 2008 limits the allowed number of corrupted pages to a total of 1,000 per database. If the corrupt page limit is reached, SQL Server takes the database off-line and places it in a suspect state to protect it from further damage. © The Norns Laboratories, 2009
  • 125. Maintenance Plans (SSIS) Maintenance plans provide a mechanism to graphically create job workflows that support common administrative functions such as backup, re-indexing, and space management. Tasks that are supported by maintenance plans are: Backing up of databases and transaction logs Shrinking databases Re-indexing Updating of statistics Performing consistency checks The most common tasks performed by maintenance plans are database backups. © The Norns Laboratories, 2009
  • 127. SQL Server offers two levels of encryption: database-level and cell-level. Both use the key management hierarchy.
  • 128. When TDE is enabled on a database, all backups are encrypted. http://technet.microsoft.com/en-us/library/cc278098.aspx
  • 129. Enabling TDE To enable TDE, you must have the normal permissions associated with creating a database master key and certificates in the master database. You must also have CONTROL permissions on the user database. To enable TDE, perform the following steps in the master database: If it does not already exist, create a database master key (DMK) for the master database. Ensure that the database master key is encrypted by the service master key (SMK). CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'some password'; Either create or designate an existing certificate for use as the database encryption key (DEK) protector. For the best security, it is recommended that you create a new certificate whose only function is to protect the DEK. Ensure that this certificate is protected by the DMK. CREATE CERTIFICATE tdeCert WITH SUBJECT = 'TDE Certificate'; Create a backup of the certificate with the private key and store it in a secure location. (Note that the private key is stored in a separate file; be sure to keep both files.) Be sure to maintain backups of the certificate, as data loss may occur otherwise. BACKUP CERTIFICATE tdeCert TO FILE = 'path_to_file' WITH PRIVATE KEY ( FILE = 'path_to_private_key_file', ENCRYPTION BY PASSWORD = 'cert password' ); Optionally, enable SSL on the server to protect data in transit. Perform the following steps in the user database. These require CONTROL permissions on the database. Create the database encryption key (DEK) encrypted with the certificate designated in step 2 above. This certificate is referenced as a server certificate to distinguish it from other certificates that may be stored in the user database. CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 ENCRYPTION BY SERVER CERTIFICATE tdeCert; Enable TDE. This command starts a background thread (referred to as the encryption scan), which runs asynchronously. ALTER DATABASE myDatabase SET ENCRYPTION ON; © The Norns Laboratories, 2009
  • 130. Service Master Key (SMK) The Service Master Key is the root of the SQL Server encryption hierarchy. It is generated automatically the first time it is needed to encrypt another key. By default, the Service Master Key is encrypted using the Windows data protection API and using the local machine key. Each time that you change the SQL Server service account or service account password, the service master key is regenerated. The first action that you should take after an instance is started is to back up the service master key. You should also back up the service master key immediately following a change to the service account or service account password. BACKUP SERVICE MASTER KEY TO FILE = 'path_to_file' ENCRYPTION BY PASSWORD = 'password' © The Norns Laboratories, 2009
  • 131. Database Master Key (DMK) The database master key (DMK) is the root of the encryption hierarchy in a database. To ensure that you can access certificates, asymmetric keys, and symmetric keys within a database, you need to have a backup of the DMK. BACKUP MASTER KEY TO FILE = 'path_to_file' ENCRYPTION BY PASSWORD = 'password' Before you can back up a DMK, it must be open. By default, a DMK is encrypted with the service master key. If the DMK is encrypted only with a password, you must first open the DMK by using the following command: USE <database name>; OPEN MASTER KEY DECRYPTION BY PASSWORD = '<SpecifyStrongPasswordHere>'; © The Norns Laboratories, 2009
  • 132. Certificates Certificates are used to encrypt data as well as digitally sign code modules. Although you could create a new certificate to replace the digital signature in the event of the loss of a certificate, you must have the original certificate to access any data that was encrypted with the certificate. Certificates have both a public and a private key. You should back up a certificate immediately after creation by using the following command: BACKUP CERTIFICATE certname TO FILE = 'path_to_file' [ WITH PRIVATE KEY ( FILE = 'path_to_private_key_file' , ENCRYPTION BY PASSWORD = 'encryption_password' [ , DECRYPTION BY PASSWORD = 'decryption_password' ] ) ] You can back up just the public key by using the following command: BACKUP CERTIFICATE certname TO FILE = 'path_to_file' However, if you restore a backup of a certificate containing only the public key, SQL Server generates a new private key. © The Norns Laboratories, 2009
  • 133. Validating a Backup To validate a backup, execute the following command: RESTORE VERIFYONLY FROM <backup device> When a backup is validated, SQL Server performs the following checks: Calculates a checksum for the backup and compares it to the checksum stored in the backup file Verifies that the header of the backup is correctly written and valid Traverses the page chain to ensure that all pages are contained in the backup and can be located © The Norns Laboratories, 2009
  • 134. Database Restores All restore sequences begin with either a full backup or filegroup backup. When restoring backups, you have the option to terminate the restore process at any point and make the database available for transactions. After the database or filegroup being restored has been brought online, you can’t apply any additional differential or transaction log backups to the database. © The Norns Laboratories, 2009
  • 135. Restoring a Full Backup RESTORE DATABASE { database_name | @database_name_var } [ FROM <backup_device> [ ,...n ] ] [ WITH {[ RECOVERY | NORECOVERY | STANDBY = {standby_file_name | @standby_file_name_var } ] | , <general_WITH_options> [ ,...n ] | , <replication_WITH_option> | , <change_data_capture_WITH_option> | , <service_broker_WITH options> | , <point_in_time_WITH_options—RESTORE_DATABASE> } [ ,...n ] ] <general_WITH_options> [ ,...n ]::= --Restore Operation Options MOVE 'logical_file_name_in_backup' TO 'operating_system_file_name' [ ,...n ] | REPLACE | RESTART | RESTRICTED_USER When a RESTORE command is issued, if the database does not already exist within the instance, SQL Server creates the database along with all files underneath the database. The REPLACE option is used to force the restore over the top of an existing database. © The Norns Laboratories, 2009
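A concrete restore based on the syntax above, relocating the data and log files with MOVE and overwriting an existing database with REPLACE. The logical file names and paths here are placeholders; the actual logical names can be read from the backup with RESTORE FILELISTONLY:

```sql
-- Restore a full backup, moving the files to new locations.
-- REPLACE forces the restore over an existing database of the same name.
RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\Backups\AdventureWorks_Full.bak'
WITH MOVE 'AdventureWorks_Data' TO 'D:\Data\AdventureWorks.mdf',
     MOVE 'AdventureWorks_Log'  TO 'E:\Logs\AdventureWorks.ldf',
     REPLACE;
```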
  • 136. Database state after the Restore has completed If you want the database to be online and accessible for transactions after the RESTORE operation has completed, you need to specify the RECOVERY option. When a RESTORE is issued with the NORECOVERY option, the restore completes, but the database is left in a RECOVERING state such that subsequent differential and/or transaction log backups can be applied. The STANDBY option can be used to allow you to issue SELECT statements against the database while still issuing additional differential and/or transaction log restores. If you restore a database with the STANDBY option, an additional file is created to make the database consistent as of the last restore that was applied. © The Norns Laboratories, 2009
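The RECOVERY/NORECOVERY behavior described above shapes a typical restore sequence: every step except the last uses NORECOVERY so that further backups can be applied, and the final step uses RECOVERY to bring the database online. A sketch with placeholder device names:

```sql
-- Step 1: full backup, leave the database in the RECOVERING state
RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\Backups\AdventureWorks_Full.bak'
WITH NORECOVERY;

-- Step 2: most recent differential, still not recovered
RESTORE DATABASE AdventureWorks
FROM DISK = 'C:\Backups\AdventureWorks_Diff.bak'
WITH NORECOVERY;

-- Step 3: final transaction log backup; RECOVERY brings the database online
RESTORE LOG AdventureWorks
FROM DISK = 'C:\Backups\AdventureWorks_Log.trn'
WITH RECOVERY;
```

Once RECOVERY has been specified, no further differential or log backups can be applied.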
  • 137. Restoring a Differential Backup A differential restore uses the same command syntax as a full database restore. When the full backup has been restored, you can then restore the most recent differential backup. © The Norns Laboratories, 2009
  • 138. Restoring a Transaction Log Backup RESTORE LOG { database_name | @database_name_var } [ <file_or_filegroup_or_pages> [ ,...n ] ] [ FROM <backup_device> [ ,...n ] ] [ WITH {[ RECOVERY | NORECOVERY | STANDBY = {standby_file_name | @standby_file_name_var } ] | , <general_WITH_options> [ ,...n ] | , <replication_WITH_option> | , <point_in_time_WITH_options—RESTORE_LOG> } [ ,...n ] ] <point_in_time_WITH_options—RESTORE_LOG>::= | { STOPAT = { 'datetime' | @datetime_var } | STOPATMARK = { 'mark_name' | 'lsn:lsn_number' } [ AFTER 'datetime' ] | STOPBEFOREMARK = { 'mark_name' | 'lsn:lsn_number' } [ AFTER 'datetime' ] The STOPAT option allows you to specify a date and time to which SQL Server restores. The STOPATMARK and STOPBEFOREMARK options allow you to specify either an LSN or a transaction log mark to use as the stopping point in the restore operation. © The Norns Laboratories, 2009
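For example, a point-in-time restore that stops just before an accidental data change might end with a STOPAT log restore like this sketch (database name, path, and timestamp are placeholders):

```sql
-- Apply the log backup only up to the specified moment,
-- then bring the database online.
RESTORE LOG AdventureWorks
FROM DISK = 'C:\Backups\AdventureWorks_Log.trn'
WITH STOPAT = '2009-06-15T14:30:00',
     RECOVERY;
```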
  • 139. Restore a Corrupt Page Page corruption occurs when the contents of a page are not consistent. It usually occurs when a disk controller begins to fail. Strategy for recovery: Index pages – drop and re-create the index. Data pages – restore from backup. Page restore has several requirements: The database must be in either the Full or Bulk-logged recovery model. You must be able to create a transaction log backup. A page restore can apply only to a read/write filegroup. You must have a valid full, file, or filegroup backup available. The page restore cannot be executed at the same time as any other restore operation. © The Norns Laboratories, 2009
  • 140. Page Restore Process Retrieve the PageID of the damaged page. Using the most recent full, file, or filegroup backup, execute the following command: RESTORE DATABASE database_name PAGE = 'file:page [ ,...n ]' [ ,...n ] FROM <backup_device> [ ,...n ] WITH NORECOVERY Restore any differential backups with the NORECOVERY option. Restore any additional transaction log backups with the NORECOVERY option. Create a transaction log backup. Restore the transaction log backup from step #5 using the WITH RECOVERY option. © The Norns Laboratories, 2009
  • 141. Best Effort Restore Because pages are restored in sequential order, as soon as the first page has been restored to a database, anything that previously existed is no longer valid. If a problem with the backup media was subsequently encountered and the restore aborted, you would be left with an invalid database that could not be used. SQL Server has the ability to continue the restore operation even if the backup media is damaged. When it encounters an unreadable section of the backup file, SQL Server can continue past the source of damage and continue restoring as much of the database as possible. This feature is referred to as best effort restore. To restore from backup media that has been damaged, you need to specify the CONTINUE_AFTER_ERROR option for a RESTORE DATABASE or RESTORE LOG command. © The Norns Laboratories, 2009
  • 142. Database Snapshots A Database Snapshot is a point-in-time, read-only copy of a database. Database Snapshot is available only in SQL Server 2008 Enterprise. Database Snapshot is not compatible with FILESTREAM. If you create a Database Snapshot against a database with FILESTREAM data, the FILESTREAM filegroup is disabled and not accessible. CREATE DATABASE database_snapshot_name ON (NAME = logical_file_name, FILENAME = 'os_file_name') [ ,...n ] AS SNAPSHOT OF source_database_name © The Norns Laboratories, 2009
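A concrete snapshot created from the syntax above might look like this sketch. The snapshot name, logical data file name, and sparse-file path are placeholders; NAME must match the logical name of the source database's data file:

```sql
-- Point-in-time, read-only snapshot backed by an NTFS sparse file
CREATE DATABASE AdventureWorks_Snapshot
ON ( NAME = AdventureWorks_Data,
     FILENAME = 'C:\Snapshots\AdventureWorks_Data.ss' )
AS SNAPSHOT OF AdventureWorks;
```

The snapshot can then be queried like any read-only database, e.g. SELECT ... FROM AdventureWorks_Snapshot.dbo.SomeTable.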
  • 143. Reverting Data Using a Database Snapshot RESTORE DATABASE <database_name> FROM DATABASE_SNAPSHOT = <database_snapshot_name> Only a single Database Snapshot can exist for the source database. Full-text catalogs on the source database must be dropped and then re-created after the revert completes. Because the transaction log is rebuilt, the transaction log chain is broken. Both the source database and Database Snapshot are off-line during the revert process. The source database cannot be enabled for FILESTREAM. © The Norns Laboratories, 2009
  • 144. Automating SQL Server 2 hrs © The Norns Laboratories, 2009
  • 145. SQL Server Agent Service SQL Server Agent Service is a scheduling engine for SQL Server. © The Norns Laboratories, 2009
  • 146. Practice SQL Automation Jobs – Self-paced Training Kit, p.237-240 Alerts – Self-paced Training Kit, p.243-245 © The Norns Laboratories, 2009
  • 147. Practice Test, Review and Questions 10 questions, lesson 1, time 20 minutes © The Norns Laboratories, 2009
  • 148. Designing SQL ServerSecurity 5 hours © The Norns Laboratories, 2009
  • 149. Exam objectives Manage logins and server roles. Manage users and database roles. Manage SQL Server instance permissions. Manage database permissions. Manage schema permissions and object permissions. Audit SQL Server instances. Manage transparent data encryption (TDE). Configure surface area. © The Norns Laboratories, 2009
  • 150. Identity and Access Control (Database Engine) When configuring security for users, services, and other accounts to access the system, you work with: Principals (users and login accounts), Roles (groups of Principals), Securable objects (Securables), and Permissions. © The Norns Laboratories, 2009
  • 151. Principals of the Database Engine Principals are entities that can request SQL Server resources. Like other components of the SQL Server authorization model, principals can be arranged in a hierarchy. The scope of influence of a principal depends on the scope of its definition: Windows level, server level, or database level. Every principal has a security identifier (SID). Windows-level principals Windows Domain Login Windows Local Login SQL Server-level principal SQL Server Login Database-level principals Database User Database Role Application Role © The Norns Laboratories, 2009
  • 152. sa Login The SQL Server sa login is a server-level principal. It is created by default when an instance is installed. In SQL Server 2005 and SQL Server 2008, the default database of sa is master. © The Norns Laboratories, 2009
  • 153. public Database Role Every database user belongs to the public database role. When a user has not been granted or denied specific permissions on a securable, the user inherits the permissions granted to public on that securable. © The Norns Laboratories, 2009
  • 154. INFORMATION_SCHEMA and sys Every database includes two entities that appear as users in catalog views:  INFORMATION_SCHEMA  sys These entities are required by SQL Server. They are not principals, and they cannot be modified or dropped. © The Norns Laboratories, 2009
  • 155. Certificate-based SQL Server Logins Server principals with names enclosed by double hash marks (##) are for internal system use only. The following principals are created from certificates when SQL Server is installed, and should not be deleted. ##MS_SQLResourceSigningCertificate##  ##MS_SQLReplicationSigningCertificate##  ##MS_SQLAuthenticatorCertificate##  ##MS_AgentSigningCertificate##  ##MS_PolicyEventProcessingLogin##  ##MS_PolicySigningCertificate##  ##MS_PolicyTsqlExecutionLogin##  © The Norns Laboratories, 2009
  • 156. Client and Database Server By definition, a client and a database server are security principals and can be secured. These entities can be mutually authenticated before a secure network connection is established. SQL Server supports the Kerberos authentication protocol, which defines how clients interact with a network authentication service. © The Norns Laboratories, 2009
  • 157. Database Users A database user is a principal at the database level. Every database user is a member of the public role. By default, the database includes a guest user when a database is created. Permissions granted to the guest user are inherited by users who do not have a user account in the database. The guest user cannot be dropped, but it can be disabled by revoking its CONNECT permission. The CONNECT permission can be revoked by executing REVOKE CONNECT FROM GUEST within any database other than master or tempdb. © The Norns Laboratories, 2009
  • 159. Application roles are enabled by using sp_setapprole, which requires a password.
  • 161. SIDs and IDs Server-Level Identification Number (SID) identifies the security context of the login and is unique within the server instance.  Database-Level Identification Number (ID) identifies the user as a securable within the database.  The maximum number of database users is determined by the size of the user ID field. The value of a user ID must be zero or a positive integer. In SQL Server 2000, the user ID is stored as a smallint consisting of 16 bits, one of which is the sign. For this reason, the maximum number of user IDs in SQL Server 2000 is 2^15 = 32,768. In SQL Server 2005 and later versions, the user ID is stored as an int consisting of 32 bits, one of which is the sign. These additional bits make it possible to assign 2^31 = 2,147,483,648 ID numbers. © The Norns Laboratories, 2009
  • 162. Kerberos Authentication and SQL Server Kerberos is a network authentication protocol that provides a highly secure method to authenticate client and server entities (security principals) on a network. These security principals use authentication that is based on master keys and encrypted tickets. In the Kerberos protocol model, every client/server connection begins with authentication. If authentication is successful, session setup completes and a secure client/server session is established. SQL Server supports Kerberos indirectly through the Windows Security Support Provider Interface (SSPI) when SQL Server is using Windows Authentication. SQL Server 2008 supports Kerberos authentication on the following protocols: TCP/IP Named pipes Shared memory © The Norns Laboratories, 2009
  • 163. Create a SQL Server Login To create a SQL Server login that uses Windows Authentication using Transact-SQL: CREATE LOGIN <name of Windows User> FROM WINDOWS; GO To create a SQL Server login that uses SQL Server Authentication (Transact-SQL) CREATE LOGIN <login name> WITH PASSWORD = '<password>' ; GO © The Norns Laboratories, 2009
  • 164. Create a Database User USE <database name> GO CREATE USER <new user name> FOR LOGIN <login name> ; GO © The Norns Laboratories, 2009
  • 165. Create a Database Schema USE <database name> GO CREATE SCHEMA <new schema name> AUTHORIZATION [new schema owner] ; GO © The Norns Laboratories, 2009
  • 166. Server-Level Roles Server-Level Roles are security principals that group other principals. Roles are like groups in the Microsoft Windows operating system. Server-level roles are also named fixed server roles because you cannot create new server-level roles. Server-level roles are server-wide in their permissions scope. You can add SQL Server logins, Windows accounts, and Windows groups into server-level roles. Each member of a fixed server role can add other logins to that same role. © The Norns Laboratories, 2009
  • 167. Server-level roles’ capabilities sysadmin Members can perform any activity in the server. serveradmin Members can change server-wide configuration options and shut down the server. securityadmin Members can manage logins and their properties. They can GRANT, DENY, and REVOKE server-level permissions. They can also GRANT, DENY, and REVOKE database-level permissions. Additionally, they can reset passwords for SQL Server logins. processadmin Members can end processes that are running in an instance of SQL Server. setupadmin Members can add and remove linked servers. bulkadmin Members can run the BULK INSERT statement. diskadmin The role is used for managing disk files. dbcreator Members can create, alter, drop, and restore any database. public Every SQL Server login belongs to the public server role. Only assign public permissions on any object when you want the object to be available to all users. © The Norns Laboratories, 2009
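In SQL Server 2008, membership in these fixed server roles is managed with system stored procedures. A sketch, assuming a hypothetical Windows group CONTOSO\DbaTeam that already exists as a login:

```sql
-- Add a login to a fixed server role
EXEC sp_addsrvrolemember @loginame = 'CONTOSO\DbaTeam',
                         @rolename = 'sysadmin';

-- Remove the login from the role again
EXEC sp_dropsrvrolemember @loginame = 'CONTOSO\DbaTeam',
                          @rolename = 'sysadmin';
```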
  • 168. Database-Level Roles Database-level roles are database-wide in their permissions scope. There are two types of database-level roles in SQL Server:  fixed database roles that are predefined in the database and  flexible database roles that you can create. Members of the db_owner and db_securityadmin database roles can manage fixed database role membership. Only members of the db_owner database role can add members to the db_owner fixed database role. There are also some special-purpose fixed database roles in the msdb database. You can add any database account and other SQL Server roles into database-level roles. Each member of a fixed database role can add other logins to that same role. © The Norns Laboratories, 2009
  • 169. Fixed database-level Roles’ Capabilities db_owner  can perform all configuration and maintenance activities on the database, and can also drop the database. db_securityadmin  can modify role membership and manage permissions. Adding principals to this role could enable unintended privilege escalation. db_accessadmin  can add or remove access to the database for Windows logins, Windows groups, and SQL Server logins. db_backupoperator  can back up the database. db_ddladmin  run any Data Definition Language (DDL) command in a database. db_datawriter  can add, delete, or change data in all user tables. db_datareader  can read all data from all user tables. db_denydatawriter  cannot add, modify, or delete any data in the user tables within a database. db_denydatareader cannot read any data in the user tables within a database. © The Norns Laboratories, 2009
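Database role membership in SQL Server 2008 is likewise managed with a system stored procedure. A sketch, assuming a hypothetical database user named ReportUser:

```sql
USE AdventureWorks;
GO
-- Grant read-only access by adding the user to a fixed database role
EXEC sp_addrolemember @rolename = 'db_datareader',
                      @membername = 'ReportUser';
GO
-- Create a flexible (user-defined) database role and add a member to it
CREATE ROLE SalesReaders AUTHORIZATION dbo;
EXEC sp_addrolemember 'SalesReaders', 'ReportUser';
GO
```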
  • 170. msdb Roles db_ssisadmin, db_ssisoperator, db_ssisltduser can administer and use SSIS.  dc_admin, dc_operator, dc_proxy can administer and use the data collector.  PolicyAdministratorRole can perform all configuration and maintenance activities on Policy-Based Management policies and conditions.  ServerGroupAdministratorRole, ServerGroupReaderRole can administer and use registered server groups. Every database user belongs to the public database role. When a user has not been granted or denied specific permissions on a securable object, the user inherits the permissions granted to public on that object. © The Norns Laboratories, 2009
  • 171. Credentials A credential is a record that contains the authentication information (credentials) required to connect to a resource outside SQL Server. This information is used internally by SQL Server. Most credentials contain a Windows user name and password. The information stored in a credential enables a user who has connected to SQL Server by way of SQL Server Authentication to access resources outside the server instance. CREATE CREDENTIAL credential_name WITH IDENTITY = 'identity_name' [ , SECRET = 'secret' ] [ FOR CRYPTOGRAPHIC PROVIDER cryptographic_provider_name ] © The Norns Laboratories, 2009
  • 172. Securables Securables are the resources to which the SQL Server Database Engine authorization system regulates access. Some securables can be contained within others, creating nested hierarchies called "scopes" that can themselves be secured. The securable scopes are server, database, and schema. © The Norns Laboratories, 2009
  • 173. Permissions Every SQL Server securable has associated permissions that can be granted to a principal. Returning the complete list of grantable permissions SELECT * FROM fn_builtin_permissions(default); Returning the permissions on a particular class of objects SELECT * FROM fn_builtin_permissions('assembly') Returning the permissions granted to the executing principal on an object SELECT * FROM fn_my_permissions('Orders55', 'object'); © The Norns Laboratories, 2009
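Permissions on securables are managed with GRANT, DENY, and REVOKE at each scope. A sketch, assuming a hypothetical Sales schema, a stored procedure dbo.usp_GetOrders, a user ReportUser, and a role SalesReaders:

```sql
-- Schema-scope permission: covers every object in the schema
GRANT SELECT ON SCHEMA::Sales TO SalesReaders;

-- Object-scope permission on a single stored procedure
GRANT EXECUTE ON dbo.usp_GetOrders TO ReportUser;

-- DENY takes precedence over any GRANT the principal holds
DENY DELETE ON OBJECT::Sales.Orders TO ReportUser;

-- REVOKE removes a previously granted (or denied) permission
REVOKE EXECUTE ON dbo.usp_GetOrders FROM ReportUser;
```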
  • 174. Network Protocols and TDS Endpoints When the SQL Server Database Engine communicates with an application, it formats the communication in a Microsoft communication format called a tabular data stream (TDS) packet. The SQL Server Network Interface (SNI) protocol layer encapsulates the TDS packet inside a standard communication protocol, such as TCP/IP or named pipes. The server creates a SQL Server object called a TDS endpoint for each network protocol. On the server, the TDS endpoints are installed by SQL Server during SQL Server installation. Acting very similar to firewalls on the network, endpoints are a layer of security at the border between applications and a SQL Server instance. © The Norns Laboratories, 2009
  • 177. SQL Server Asymmetric Keys Public key cryptography is a form of message secrecy in which a user creates a public key and a private key. The private key is kept secret, whereas the public key can be distributed to others. Although the keys are mathematically related, the private key cannot be easily derived by using the public key. The public key is used to encrypt data and the private key is used to decrypt data. A message that is encrypted by using the public key can only be decrypted by using the correct private key. Since there are two different keys, these keys are asymmetric. © The Norns Laboratories, 2009
  • 178. SQL Server Certificates A certificate is a digitally signed security object that contains a public (and optionally a private) key for SQL Server. Certificates and asymmetric keys are both ways to use asymmetric encryption. Certificates are often used as containers for asymmetric keys because they can contain more information such as expiry dates and issuers. There is no difference between the two mechanisms for the cryptographic algorithm, and no difference in strength given the same key length. Generally, you use a certificate to encrypt other types of encryption keys in a database, or to sign code modules. Certificates and asymmetric keys can decrypt data that the other encrypts. Generally, you use asymmetric encryption to encrypt a symmetric key for storage in a database. A public key does not have a particular format like a certificate would have, and you cannot export it to a file. © The Norns Laboratories, 2009
  • 179. Using a Certificate in SQL Server Creating a certificate requires CREATE CERTIFICATE permission on the database. Only Windows logins, SQL Server logins, and application roles can own certificates. Groups and roles cannot own certificates. Creating a self-signed certificate USE AdventureWorks; CREATE CERTIFICATE HiTech01 ENCRYPTION BY PASSWORD = 'pGFD4bb925DGvbd2439587y' WITH SUBJECT = 'HiTech Institute', EXPIRY_DATE = '10/31/2010'; GO © The Norns Laboratories, 2009
  • 180. SQL Server Encryption Encryption is the process of obfuscating data by the use of a key or password. This can make the data useless without the corresponding decryption key or password. Encryption does not solve access control problems. However, it enhances security by limiting data loss even if access controls are bypassed. You can use encryption in SQL Server for connections, data, and stored procedures. The following table contains more information about encryption in SQL Server. © The Norns Laboratories, 2009
  • 181. Encryption Hierarchy SQL Server encrypts data with a hierarchical encryption and key management infrastructure. Each layer encrypts the layer below it by using a combination of certificates, asymmetric keys, and symmetric keys. Asymmetric keys and symmetric keys can be stored outside of SQL Server in an Extensible Key Management (EKM) module. © The Norns Laboratories, 2009
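For example, the database-level layer of the hierarchy is rooted in a database master key, which must be created explicitly before certificates or keys in that database can be protected by it (the password shown is a placeholder):

```sql
-- The service master key at the top of the hierarchy is created automatically
-- at installation; the database master key one layer down is not.
USE AdventureWorks;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '23987hxJ#KL95234nl0zBe';
```

By default the database master key is also encrypted by the service master key, so SQL Server can open it automatically without the password.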
  • 182. How is encryption applied? SQL Server provides the following mechanisms for encryption: Transact-SQL functions Asymmetric keys Symmetric keys Certificates Transparent Data Encryption © The Norns Laboratories, 2009
  • 183. Simple Symmetric Encryption Creating a symmetric key CREATE SYMMETRIC KEY VitaliyFursov007 WITH ALGORITHM = AES_256 ENCRYPTION BY CERTIFICATE HiTech01; GO © The Norns Laboratories, 2009
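To use the key it must first be opened. A minimal round trip, assuming the VitaliyFursov007 key and the HiTech01 certificate (with its password-protected private key) from the earlier slides exist:

```sql
-- Open the symmetric key; the certificate's private key is password-protected,
-- so the password from the CREATE CERTIFICATE statement is required
OPEN SYMMETRIC KEY VitaliyFursov007
    DECRYPTION BY CERTIFICATE HiTech01
    WITH PASSWORD = 'pGFD4bb925DGvbd2439587y';

-- Encrypt a value with the open key
DECLARE @cipher varbinary(256);
SET @cipher = ENCRYPTBYKEY(KEY_GUID('VitaliyFursov007'), N'4111-1111-1111-1111');

-- Decrypt it again; DECRYPTBYKEY finds the matching open key automatically
SELECT CONVERT(nvarchar(100), DECRYPTBYKEY(@cipher));

-- Always close the key when finished
CLOSE SYMMETRIC KEY VitaliyFursov007;
```

DECRYPTBYKEY returns varbinary, hence the CONVERT back to the original nvarchar type.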
  • 184. Practice data encryption Read, run, and understand script provided in file “Practice Data Encryption.sql” © The Norns Laboratories, 2009
  • 185. Auditing SQL Server provides several features that you can use for auditing activities and changes on your SQL Server system. These features enable administrators to implement a defense-in-depth strategy that they can tailor to meet the specific security risks of their environment. © The Norns Laboratories, 2009
  • 186. Understanding SQL Server Audit Auditing an instance of SQL Server or a SQL Server database involves tracking and logging events that occur on the system. Beginning in SQL Server 2008 Enterprise, you can also set up automatic auditing by using SQL Server Audit. There are several levels of auditing for SQL Server, depending on government or standards requirements for your installation. SQL Server Audit provides the tools and processes needed to enable, store, and view audits on various server and database objects. © The Norns Laboratories, 2009
  • 187. SQL Server Audit Components An audit is the combination of several elements into a single package for a specific group of server actions or database actions. The components of SQL Server Audit combine to produce an output that is called an audit, just as a report definition combined with graphics and data elements produces a report. SQL Server Audit uses Extended Events to help create an audit.  © The Norns Laboratories, 2009
  • 188. SQL Server Audit The SQL Server Audit object collects a single instance of server- or database-level actions and groups of actions to monitor. The audit is at the SQL Server instance level. You can have multiple audits per SQL Server instance. When you define an audit, you specify the location for the output of the results. This is the audit destination. The audit is created in a disabled state, and does not automatically audit any actions. After the audit is enabled, the audit destination receives data from the audit. © The Norns Laboratories, 2009
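A minimal sketch of creating and enabling an audit with a file target; the audit name and path are illustrative, and the directory must already exist:

```sql
USE master;

-- Create the audit; it starts in a disabled state
CREATE SERVER AUDIT HiTechAudit
    TO FILE (FILEPATH = 'C:\AuditLogs\');

-- Enable it so the destination starts receiving data
ALTER SERVER AUDIT HiTechAudit WITH (STATE = ON);
```

On its own the audit records nothing; a server or database audit specification must be attached to it to define which actions are captured.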
  • 189. Server Audit Specification The Server Audit Specification object belongs to an audit. You can create one server audit specification per audit, because both are created at the SQL Server instance scope. The server audit specification collects many server-level action groups raised by the Extended Events feature. You can include audit action groups in a server audit specification. Audit action groups are predefined groups of actions, which are atomic events occurring in the Database Engine. These actions are sent to the audit, which records them in the target. © The Norns Laboratories, 2009
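For example, a specification that captures failed login attempts at the instance level, assuming a server audit named HiTechAudit already exists (both names are illustrative):

```sql
USE master;

-- Attach a server audit specification to the audit and enable it;
-- FAILED_LOGIN_GROUP is one of the predefined audit action groups
CREATE SERVER AUDIT SPECIFICATION HiTechServerSpec
    FOR SERVER AUDIT HiTechAudit
    ADD (FAILED_LOGIN_GROUP)
    WITH (STATE = ON);
```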
  • 190. Database Audit Specification The Database Audit Specification object also belongs to a SQL Server audit. You can create one database audit specification per SQL Server database per audit. The database audit specification collects database-level audit actions raised by the Extended Events feature. You can add either audit action groups or audit events to a database audit specification.  Audit events are the atomic actions that can be audited by the SQL Server engine.  Audit action groups are predefined groups of actions. Both are at the SQL Server database scope. These actions are sent to the audit, which records them in the target. © The Norns Laboratories, 2009
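A sketch combining an audit event and an audit action group in one specification, again assuming a server audit named HiTechAudit exists; the table name is illustrative:

```sql
USE AdventureWorks;

-- Audit events (SELECT/UPDATE on one table) plus a predefined action group
CREATE DATABASE AUDIT SPECIFICATION HiTechDbSpec
    FOR SERVER AUDIT HiTechAudit
    ADD (SELECT, UPDATE ON OBJECT::Sales.Orders BY public),  -- audit events
    ADD (DATABASE_OBJECT_CHANGE_GROUP)                       -- action group
    WITH (STATE = ON);
```

Auditing BY public captures the action regardless of which database principal performs it.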
  • 191. Audit Target The results of an audit are sent to a target, which can be a file, the Windows Security event log, or the Windows Application event log. Writing to the Security log is not available on Windows XP. Logs must be reviewed and archived periodically to make sure that the target has sufficient space to write additional records. © The Norns Laboratories, 2009
  • 192. Using SQL Server Audit Create an audit and define the target. Create either a server audit specification or database audit specification that maps to the audit. Enable the audit specification. Enable the audit. Read the audit events by using the Windows Event Viewer, Log File Viewer, or the fn_get_audit_file function. © The Norns Laboratories, 2009 SELECT * FROM sys.fn_get_audit_file ('C:\test\audit.sqlaudit',default,default); GO
  • 193. Monitoring Microsoft SQL Server 4 hrs. © The Norns Laboratories, 2009
  • 194. Exam objectives Collect performance data by using System Monitor. Collect trace data by using SQL Server Profiler. Identify SQL Server service problems. Identify concurrency problems. Locate error information. © The Norns Laboratories, 2009
  • 195. System Monitor System Monitor, commonly referred to as PerfMon, is a Microsoft Windows utility that allows you to capture statistical information about the hardware environment, operating system, and any applications that expose properties and counters. It uses a polling architecture to capture and log numeric data exposed by applications. © The Norns Laboratories, 2009
  • 196. How to Start To start Performance Monitor Click Start, click in the Start Search box, type perfmon, and press ENTER. In the navigation tree, expand Monitoring Tools, and then click Performance Monitor. You can also use Performance Monitor to view real-time performance data on a remote computer. Membership in the target computer's Performance Log Users group, or equivalent, is the minimum required to complete this procedure. To connect to a remote computer with Performance Monitor Start Performance Monitor. In the navigation tree, right-click Performance Monitor, and then click Connect to another computer.

Editor's notes

  1. Conclude the presentation. Review the goals and check whether we have met them. Review the presentation materials and highlight the most important points. Ask for feedback.