This is probably not the first time you have heard how important it is to have a restore script to help you in DR (Disaster Recovery) scenarios. In a DR scenario one of your main enemies is time.
You have to figure out what needs to be done. When you decide that a restore is the way to go, you have to prepare the appropriate backup files (paying attention to the restore sequence) and then start the restore process.
Starting this process in the GUI (Management Studio) is time-consuming even if you only need to restore one database. Using a restore script can be faster, but only if you don't have to write the whole script from scratch. In my opinion the best approach is a well-working restore script generator.
I have tried several times to find a good one. Since I did not find a script that fully met my requirements, I decided to write my own, though not from scratch: I reused the good parts and ideas from other scripts I found. My Restore Script Generator (sp_RestoreScriptGen) was inspired by Tibor Karaszi (sqlblog | blog)
sp_RestoreScriptGen generates the restore script from the backup files. It supports databases that have multiple data files. You can choose different destination folders for the data files and the log file, and you can also change the database name.
Supported WITH options: REPLACE, STOPAT, RECOVERY (1), NORECOVERY (0), CHECKSUM (if available).
Only backup files with ONE backup set are supported at the moment!
Further features: Instant File Initialization check, orphan user check, restore time measurement option
EXEC sp_RestoreScriptGen
    @help = 0,
    @backup_file = N'',
    @database_name = N'',
    @data_dest = N'',
    @log_dest = N'',
    @recovery = 'NORECOVERY',
    @stopat = '',
    @replace = 0,
    @checkIFI = 1,
    @checkorphan = 1,
    @restoretime = 1
Here is the stored procedure: sp_SQLApprenticeDOTnet_SQL2008R2_RestoreScriptGen
I really hope you find this stored procedure useful. Don't hesitate to contact me (robertATsqlapprenticeDOTnet) if you have any questions or remarks.
My real-life example of collecting data from SQL Server instances using linked servers has been published on SQLServerCentral (When the reporting team needs an assist…). It is also a good example of the practical usage of the sp_uforeach (additional link) script.
You can find my sp_uforeach script on SQLServerCentral as well: Universal foreach aka sp_uforeach
It has been a long time since I wrote a post. I had so much work to do that I had no time for posting.
Anyway, I want to change that in the future. I cannot promise that I will write every single week, but I will do my best.
So, about the bug. Last week my colleague reported that our consistency checking job had failed. More interestingly, it failed on master. Even more interestingly, he said that when he tried to run the check manually with
DBCC CHECKDB (master) WITH NO_INFOMSGS, ALL_ERRORMSGS
the check ran without any error. The error came up only when he used the DATA_PURITY option:
dbcc checkdb(master) with all_errormsgs, no_infomsgs, data_purity

Msg 2570, Level 16, State 2, Line 1
Page (1:354), slot 27 in object ID 58, index ID 1, partition ID 281474980511744, alloc unit ID 281474980511744 (type "In-row data"). Column "name" value is out of range for data type "nvarchar". Update column to a legal value.
CHECKDB found 0 allocation errors and 1 consistency errors in table 'sys.sysbinobjs' (object ID 58).
CHECKDB found 0 allocation errors and 1 consistency errors in database 'master'.
This is what caught my attention. I checked BOL, which said: “For databases created in SQL Server 2005 and later, column-value integrity checks are enabled by default and do not require the DATA_PURITY option. For databases upgraded from earlier versions of SQL Server, column-value checks are not enabled by default until DBCC CHECKDB WITH DATA_PURITY has been run error free on the database. After this, DBCC CHECKDB checks column-value integrity by default. For more information about how CHECKDB might be affected by upgrading database from earlier versions of SQL Server, see the Remarks section later in this topic.”
Our affected instance is a 'native' SQL Server 2008, and I double-checked that, as well as the result of the DBCC CHECKDB command. From that point it was obvious to me that something was wrong: either the way DBCC CHECKDB works, or the description of the DATA_PURITY option in BOL, or something else. Since we were dealing with a system table inconsistency in the master database, I contacted Paul Randal, who is the expert in this field (too). Paul confirmed that there is a bug here. See more about it on his blog…
Last month I helped one of my customers investigate a deadlock situation.
After we got the deadlock report, it turned out that we were dealing with a 'conversion deadlock' caused by inappropriate index options.
Let's reproduce and check the situation. I create a database 'deadlock' and a table 't1' and put some data into it. After that I create a user stored procedure which is used for updating the content of this table.
CREATE DATABASE deadlock
GO
USE [deadlock]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[t1](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [ForeignId] [int] NOT NULL,
    [ForeignId2] [int] NOT NULL,
    [data1] [int] NOT NULL,
    [data2] [int] NOT NULL,
    [data3] [int] NOT NULL,
    CONSTRAINT [PK_UserGames] PRIMARY KEY CLUSTERED
    (
        [id] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
    CONSTRAINT [t1_uq] UNIQUE NONCLUSTERED
    (
        [ForeignId] ASC,
        [ForeignId2] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
Notice that the ALLOW_ROW_LOCKS = OFF and ALLOW_PAGE_LOCKS = OFF options have been used! Setting both options to OFF means that SQL Server takes every lock at the table level immediately. This causes serious problems in case of high concurrency, especially for writers, as in our case. Let's go on.
USE [deadlock]
GO
INSERT INTO t1 (ForeignId, ForeignId2, data1, data2, data3) VALUES (1,1,1,1,1)
GO
INSERT INTO t1 (ForeignId, ForeignId2, data1, data2, data3) VALUES (1,2,1,1,1)
GO
INSERT INTO t1 (ForeignId, ForeignId2, data1, data2, data3) VALUES (2,1,1,1,1)
GO
INSERT INTO t1 (ForeignId, ForeignId2, data1, data2, data3) VALUES (2,2,1,1,1)
GO
USE [deadlock]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
CREATE PROCEDURE [dbo].[usp_UpdateT1]
     @ForeignId int = 0
    ,@ForeignId2 int = 0
    ,@data1 int = 0
    ,@data2 int = 0
    ,@data3 int = 0
AS
UPDATE T1
    SET data1 = @data1
       ,data2 = data2 + @data2
       ,data3 = data3 + @data3
WHERE ForeignId = @ForeignId
    AND ForeignId2 = @ForeignId2
IF (@@ERROR <> 0) GOTO ErrorHandler
RETURN 1
ErrorHandler:
RETURN -1
GO
OK, we have everything we need to reproduce the issue. Only one thing is left: configure Profiler to catch every deadlock, and turn on trace flag 1222 to get the deadlock information in the error log.
DBCC TRACEON (3605, 1222, -1)
GO
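If you want to double-check that the trace flags are really active, DBCC TRACESTATUS can be used:

```sql
-- Shows the status of the given trace flags globally (-1)
DBCC TRACESTATUS (3605, 1222, -1)
GO
```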
Done. Open two query windows and run the following queries in them. It is important that the queries run at exactly the same time!
--Query window 1
WAITFOR TIME '17:59:00.000'
EXEC usp_UpdateT1 1,2,2,2,2
GO

--Query window 2
WAITFOR TIME '17:59:00.000'
EXEC usp_UpdateT1 2,1,3,3,3
GO
If everything goes well (hm… I mean wrong, in our case) you will see the following results:
Deadlock report from error log:
2012-11-04 17:59:00.41 spid13s deadlock-list
2012-11-04 17:59:00.41 spid13s deadlock victim=process3dd708
2012-11-04 17:59:00.41 spid13s process-list
2012-11-04 17:59:00.41 spid13s process id=process3dd708 taskpriority=0 logused=0 waitresource=OBJECT: 5:2105058535:0 waittime=403 ownerId=21088 transactionname=UPDATE lasttranstarted=2012-11-04T17:59:00 XDES=0x800199c0 lockMode=X schedulerid=4 kpid=23612 status=suspended spid=57 sbid=0 ecid=0 priority=0 trancount=2 lastbatchstarted=2012-11-04T17:58:48.700 lastbatchcompleted=2012-11-04T17:58:48.700 clientapp=Microsoft SQL Server Management Studio - Query hostname=LENNY hostpid=21884 loginname=LENNY\Robi isolationlevel=read committed (2) xactid=21088 currentdb=5 lockTimeout=4294967295 clientoption1=536881248 clientoption2=390200
2012-11-04 17:59:00.41 spid13s executionStack
2012-11-04 17:59:00.41 spid13s frame procname=deadlock.dbo.usp_UpdateT1 line=8 stmtstart=418 stmtend=864 sqlhandle=0x0300050092115500865c2401ffa000000100000000000000
2012-11-04 17:59:00.41 spid13s UPDATE T1
2012-11-04 17:59:00.41 spid13s SET data1 = @data1
2012-11-04 17:59:00.41 spid13s ,data2 = data2 + @data2
2012-11-04 17:59:00.41 spid13s ,data3 = data3 + @data3
2012-11-04 17:59:00.41 spid13s WHERE ForeignId = @ForeignId
2012-11-04 17:59:00.41 spid13s AND ForeignId2 = @ForeignId2
2012-11-04 17:59:00.41 spid13s frame procname=adhoc line=3 stmtstart=62 sqlhandle=0x010005005c58ea1f201f0f80000000000000000000000000
2012-11-04 17:59:00.41 spid13s EXEC usp_UpdateT1 1,2,2,2,2
2012-11-04 17:59:00.41 spid13s inputbuf
2012-11-04 17:59:00.41 spid13s WAITFOR TIME '17:59:00.000'
2012-11-04 17:59:00.41 spid13s EXEC usp_UpdateT1 1,2,2,2,2
2012-11-04 17:59:00.41 spid13s process id=process13ee08 taskpriority=0 logused=0 waitresource=OBJECT: 5:2105058535:0 waittime=404 ownerId=21089 transactionname=UPDATE lasttranstarted=2012-11-04T17:59:00 XDES=0x85276e90 lockMode=X schedulerid=1 kpid=740 status=suspended spid=55 sbid=0 ecid=0 priority=0 trancount=2 lastbatchstarted=2012-11-04T17:58:49.947 lastbatchcompleted=2012-11-04T17:58:49.947 clientapp=Microsoft SQL Server Management Studio - Query hostname=LENNY hostpid=21884 loginname=LENNY\Robi isolationlevel=read committed (2) xactid=21089 currentdb=5 lockTimeout=4294967295 clientoption1=536881248 clientoption2=390200
2012-11-04 17:59:00.41 spid13s executionStack
2012-11-04 17:59:00.41 spid13s frame procname=deadlock.dbo.usp_UpdateT1 line=8 stmtstart=418 stmtend=864 sqlhandle=0x0300050092115500865c2401ffa000000100000000000000
2012-11-04 17:59:00.41 spid13s UPDATE T1
2012-11-04 17:59:00.41 spid13s SET data1 = @data1
2012-11-04 17:59:00.41 spid13s ,data2 = data2 + @data2
2012-11-04 17:59:00.41 spid13s ,data3 = data3 + @data3
2012-11-04 17:59:00.41 spid13s WHERE ForeignId = @ForeignId
2012-11-04 17:59:00.41 spid13s AND ForeignId2 = @ForeignId2
2012-11-04 17:59:00.41 spid13s frame procname=adhoc line=3 stmtstart=62 sqlhandle=0x01000500eee8eb14607bec80000000000000000000000000
2012-11-04 17:59:00.41 spid13s EXEC usp_UpdateT1 2,1,3,3,3
2012-11-04 17:59:00.41 spid13s inputbuf
2012-11-04 17:59:00.41 spid13s WAITFOR TIME '17:59:00.000'
2012-11-04 17:59:00.41 spid13s EXEC usp_UpdateT1 2,1,3,3,3
2012-11-04 17:59:00.41 spid13s resource-list
2012-11-04 17:59:00.41 spid13s objectlock lockPartition=0 objid=2105058535 subresource=FULL dbid=5 objectname=deadlock.dbo.t1 id=lock80be1f00 mode=IX associatedObjectId=2105058535
2012-11-04 17:59:00.41 spid13s owner-list
2012-11-04 17:59:00.41 spid13s owner id=process13ee08 mode=IX
2012-11-04 17:59:00.41 spid13s waiter-list
2012-11-04 17:59:00.41 spid13s waiter id=process3dd708 mode=X requestType=convert
2012-11-04 17:59:00.41 spid13s objectlock lockPartition=0 objid=2105058535 subresource=FULL dbid=5 objectname=deadlock.dbo.t1 id=lock80be1f00 mode=IX associatedObjectId=2105058535
2012-11-04 17:59:00.41 spid13s owner-list
2012-11-04 17:59:00.41 spid13s owner id=process3dd708 mode=IX
2012-11-04 17:59:00.41 spid13s waiter-list
2012-11-04 17:59:00.41 spid13s waiter id=process13ee08 mode=X requestType=convert
So what happened here? Both transactions acquired an intent exclusive (IX) lock on the table because of the index options. Two IX locks are compatible with each other. The real problem happens when both transactions want to convert their IX lock to X. The two transactions are trying to update totally different rows, which could even sit on totally different pages, but it doesn't matter, because the locks go straight to the table level.
If ALLOW_PAGE_LOCKS were set to ON, the situation would be a bit better: in that case a deadlock would occur only if two concurrent transactions tried to update rows on the same page at exactly the same time.
Conclusion: in case of high concurrency (especially with writers), set ALLOW_PAGE_LOCKS and ALLOW_ROW_LOCKS to ON!
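The fix itself can be sketched like this, using the table and index names from the example above (note that ALTER INDEX … SET needs a short schema modification lock):

```sql
-- Re-enable row and page locks on the unique index t1_uq,
-- so SQL Server no longer has to lock the whole table for every update.
USE [deadlock]
GO
ALTER INDEX [t1_uq] ON [dbo].[t1]
SET (ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
GO
```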
If you have any remarks or questions, feel free to contact me: leave a comment or drop me a mail /robertATsqlapprenticeDOTnet/.
In this post, I provide my solution for the Security vs. Monitoring puzzle. I will use module signing.
First, let's think a little bit proactively. I assume that there will be more special requests in the future, when we will need to create further monitoring code, so I create a separate database called 'monitoring'. The 'monitoring' user will have rights only in this database, and only as much as is really needed.
USE [master]
GO
CREATE DATABASE [monitoring]
GO
CREATE LOGIN [monitoring] WITH PASSWORD = N'M0n!t0r!ng$tr0ngP@$$w0rd',
    DEFAULT_DATABASE = [monitoring], CHECK_EXPIRATION = OFF, CHECK_POLICY = ON
GO
GRANT VIEW SERVER STATE TO [monitoring]
GO
USE [monitoring]
GO
ALTER AUTHORIZATION ON DATABASE::[Monitoring] TO [sa]
GO
CREATE USER [monitoring] FOR LOGIN [monitoring]
GO
After that I create a stored procedure called 'usp_monitor' with the EXECUTE AS clause; this user-defined stored procedure calls sp_monitor. I grant EXECUTE permission on it to the 'monitoring' user.
USE [monitoring]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[usp_monitor]
WITH EXECUTE AS 'dbo'
AS
BEGIN
    EXECUTE sp_monitor
END
GO
GRANT EXECUTE ON [dbo].[usp_monitor] TO [monitoring]
GO
Nothing special so far. As I mentioned before, I use module signing to provide all the necessary permissions. For this I create a certificate in the 'monitoring' database, then I sign usp_monitor with it. After that I back up the certificate and delete its private key part, so that if the stored procedure is modified, no one can re-sign it without the private key.
USE [monitoring]
GO
CREATE CERTIFICATE monitoring_cert
    ENCRYPTION BY PASSWORD = '$trong!P@ssWD'
    WITH SUBJECT = 'Signing cert for Monitoring'
GO
ADD SIGNATURE TO usp_monitor BY CERTIFICATE monitoring_cert
    WITH PASSWORD = '$trong!P@ssWD'
GO
BACKUP CERTIFICATE monitoring_cert TO FILE = 'monitoring.cer'
WITH PRIVATE KEY (
    DECRYPTION BY PASSWORD = '$trong!P@ssWD',
    FILE = 'monitoring_cer.pvk',
    ENCRYPTION BY PASSWORD = '$trong!P@ssWD'
);
GO
ALTER CERTIFICATE monitoring_cert REMOVE PRIVATE KEY
GO
In the end, I recreate the public part of the certificate in the master database, create a login from it, and grant the AUTHENTICATE SERVER privilege to this login.
USE [master]
GO
CREATE CERTIFICATE monitoring_cert FROM FILE = 'monitoring.cer'
GO
CREATE LOGIN monitoring_cert_login FROM CERTIFICATE monitoring_cert
GO
GRANT AUTHENTICATE SERVER TO monitoring_cert_login
GO
We are done. Only one thing is left: move the certificate backup (especially the private key part) to a secure place. If you have any questions, feel free to contact me, leave a comment, or drop me a mail /robertATsqlapprenticeDOTnet/.
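As a quick sanity check (a sketch, assuming the objects created above and sufficient rights to impersonate the login), you can verify that the low-privileged login can now run the procedure:

```sql
-- Impersonate the monitoring login and call the signed procedure;
-- sp_monitor should now succeed without sysadmin membership.
EXECUTE AS LOGIN = 'monitoring'
EXEC [monitoring].[dbo].[usp_monitor]
REVERT
GO
```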
Later on I will provide some very useful links/articles about this topic.
In this post I will talk a bit about security and monitoring and show you one particular case.
There are many monitoring tools in use, for example Tivoli or Nagios. These tools technically do the same thing: they keep checking the servers and services (like the SQL Server services) and, in case of any problem, notify the operator in some way.
To do this, these tools need access not just to the operating system but to SQL Server as well. The real concern here is what kind of permissions the monitoring tool's user needs, or rather, what kind of permissions can be granted.
Servers which contain sensitive data, like credit card data, have to meet certain auditing requirements. Here is the main rule: never give a user more permissions than are really necessary. Any deviation from the standards has to be explained, and usually some additional action has to be taken in such cases!
Minimum permissions for a monitoring user:
VIEW SERVER STATE
Membership in the public role in the databases that need to be monitored.
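Granting that minimum set could look like this (a sketch; the login name 'monitoring' and the database name are placeholders):

```sql
-- Server-level: allow querying the DMVs
USE [master]
GO
GRANT VIEW SERVER STATE TO [monitoring]
GO
-- Database-level: creating the user is enough, every user is a member of public
USE [YourMonitoredDatabase]
GO
CREATE USER [monitoring] FOR LOGIN [monitoring]
GO
```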
Most of the time these permissions are enough, but sometimes, unfortunately, they are not, and that is where our case comes into the picture. So let's see it.
Our customer implemented Nagios to monitor the SQL Server instances and the databases as well. We set the minimum permissions for the nagios user. However, there was one stored procedure, sp_monitor, which is also used by Nagios and which has a strong restriction: executing this stored procedure requires membership in the sysadmin fixed server role.
Remember the main rule! Granting the sysadmin server role to a simple monitoring user is not just 'more than necessary'; it is the highest permission we can grant at all. So obviously we did not grant it to the user mentioned.
I think the workaround is a well-known and really easy one. Unfortunately, I did find comments in certain forums which said (let me strike this through because it is evil!): 'give the sysadmin role to the user'. NEVER EVER!
At first I thought I would just post my solution, but I think it would be better if you shared your ideas with me, and then I will post every different solution.
So please send your solution to me /robertATsqlapprenticeDOTnet/ or leave a comment, and let's show how easy it is to handle a situation like this.
This is the first post of SQLApprentice.net. In this post I would like to introduce my universal foreach script.
So, here is the story:
It made me start thinking, and I realized that a lot of the time we need not only to run the same script on different databases, but also to run the same script on different objects, such as logins or jobs. So Aaron Bertrand's script inspired me to write a universal foreach stored procedure.
This stored procedure is very flexible.
Here is the syntax:
EXEC master.dbo.sp_uforeach
    @table_name = 'table name or select statement',
    @column_name = 'existing column name',
    @where_clause = 'where clause, used only when a table name is given',
    @command = 'command with ? replace character',
    @replace_character = '?', -- the character used for replacing the 'looping object' (default: '?')
    @print_command_only = 1,  -- 1: only print the command, 0: execute it
    @print_object_name = 0,   -- 1: print/select the object name the script is working on (default: 0)
    @debug = 1                -- enable debug information
There are two ways to provide the 'looping objects' on which the script in the @command parameter will run.
First, you can use an existing table name in the @table_name parameter (for example 'sys.databases'), and in this case you can also provide a where clause, without the word 'where', in the @where_clause parameter to filter the objects (for example 'state = 0').
Second, you can use a select statement with a where clause, like 'select name from sys.databases where state = 0'. When using a select statement, the @where_clause parameter is ignored, but the @column_name parameter has to be provided, because the values in this column will be used as the 'looping objects'.
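For example, the second way might look like this (a hypothetical call; the @command here only prints a CHECKDB statement for each online database):

```sql
EXEC master.dbo.sp_uforeach
    @table_name = 'select name from sys.databases where state = 0',
    @column_name = 'name',
    @command = 'DBCC CHECKDB (?) WITH NO_INFOMSGS',
    @print_command_only = 1
```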
If you only provide the @command parameter, the stored procedure works like sp_MSforeachdb:
/*DEFAULT (foreach_db without command):*/
EXEC sp_uforeach
    @table_name = 'sys.databases',
    @column_name = 'name',
    @where_clause = '',
    @command = '',
    @print_command_only = 1
Let's check some examples. You can also get these examples by using the @help=1 parameter. Pay close attention to the third example, where I use sp_uforeach in a nested way.
/*-------Configure the filtered jobs to write to the Windows Application event log*/
EXEC master.dbo.sp_uforeach
    @table_name = N'msdb.dbo.sysjobs',
    @column_name = N'job_id',
    @where_clause = N'name like ''%TSI%''',
    @command = N'USE [msdb] EXEC msdb.dbo.sp_update_job @job_id=N''?'', @notify_level_eventlog=2',
    @print_command_only = 1

/*-------Give the sysadmin role to logins whose names start with 'adm'*/
EXEC master.dbo.sp_uforeach
    @table_name = 'sys.server_principals',
    @column_name = 'name',
    @where_clause = 'type in (''S'',''U'') AND name LIKE ''%adm_%''',
    @command = 'EXEC master..sp_addsrvrolemember @loginname=''?'', @rolename=N''sysadmin''',
    @print_command_only = 1

/*-------Index maintenance in a specific database*/
EXEC master.dbo.sp_uforeach
    @table_name = 'USE AdventureWorks SELECT SCHEMA_NAME(schema_id)+''.''+name as name FROM AdventureWorks.sys.tables',
    @column_name = 'name',
    @replace_character = '!',
    @print_command_only = 0,
    @command = 'USE AdventureWorks exec master.dbo.sp_uforeach @use_quotename=1, @print_command_only=1, @table_name=''USE AdventureWorks SELECT a.index_id, name, avg_fragmentation_in_percent FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID(''''!''''),NULL, NULL, NULL) AS a JOIN sys.indexes AS b ON a.object_id = b.object_id AND a.index_id=b.index_id AND a.index_id>0'', @column_name=''name'', @command=''USE [AdventureWorks] ALTER INDEX ? ON ! REORGANIZE;'''
I really hope that you find this stored procedure useful. If you do use sp_uforeach, please drop me a mail (robertATsqlapprenticeDOTnet) and I will post your real-life example(s). Don't hesitate to contact me if you have any questions or remarks.