Retrieve the SQL Service accounts using T-SQL

A useful script for finding the SQL Server service account and the SQL Agent service account.

Works for SQL 2005 & SQL 2008
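On later builds (SQL Server 2008 R2 SP1 onwards) the same information is exposed through a DMV, which avoids reading the registry directly. Note this DMV does not exist on 2005 or 2008 RTM, so treat it as a version-dependent alternative:

```sql
-- Service accounts without touching the registry (SQL 2008 R2 SP1+)
SELECT servicename, service_account, startup_type_desc, status_desc
FROM sys.dm_server_services;
```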

DECLARE @NamedInstance bit
IF CAST(SERVERPROPERTY('ServerName') AS varchar) LIKE '%\%' SET @NamedInstance = 1 ELSE SET @NamedInstance = 0

DECLARE @ServiceName varchar(50)

IF @NamedInstance = 0
BEGIN
SET @ServiceName = 'MSSQLSERVER'
END
ELSE
BEGIN
SET @ServiceName = 'MSSQL$' + RIGHT(CAST(SERVERPROPERTY('ServerName') AS varchar),LEN(CAST(SERVERPROPERTY('ServerName') AS varchar)) - CHARINDEX('\',CAST(SERVERPROPERTY('ServerName') AS varchar),1))
END

DECLARE @KEY_VALUE varchar(100)
DECLARE @ServiceAccountName varchar(100) 

SET @KEY_VALUE = 'SYSTEM\CurrentControlSet\Services\' + @ServiceName
EXECUTE master..xp_regread 'HKEY_LOCAL_MACHINE', @KEY_VALUE, 'ObjectName', @ServiceAccountName OUTPUT
SELECT @ServiceAccountName as 'SQL Service Account'


IF @NamedInstance = 0
BEGIN
SET @ServiceName = 'SQLSERVERAGENT'
END
ELSE
BEGIN
SET @ServiceName = 'SQLAgent$' + RIGHT(CAST(SERVERPROPERTY('ServerName') AS varchar),LEN(CAST(SERVERPROPERTY('ServerName') AS varchar)) - CHARINDEX('\',CAST(SERVERPROPERTY('ServerName') AS varchar),1))
END

SET @KEY_VALUE = 'SYSTEM\CurrentControlSet\Services\' + @ServiceName

EXECUTE master..xp_regread 'HKEY_LOCAL_MACHINE', @KEY_VALUE, 'ObjectName', @ServiceAccountName OUTPUT
SELECT @ServiceAccountName as 'SQL Agent Account'


Moving system databases in SQL Server 2008…

…has become a lot easier.

In SQL Server 2008, the msdb and model databases can be moved in the same way as tempdb: simple ALTER DATABASE commands to initiate the move, followed by a physical file move and a stop/start of the service. See my post on how to move system databases in SQL 2005 to see the difference.

To check the current location of files in a database:

SELECT name, physical_name AS CurrentLocation
FROM sys.master_files WHERE database_id = DB_ID(N'msdb');
GO

To move the model, msdb and tempdb databases:

Single user mode is no longer necessary, and neither is trace flag 3608. Now it's a simple ALTER command:

--Move Model Database 
USE master; 
GO 
ALTER DATABASE model
MODIFY FILE (NAME = modeldev,FILENAME = 'E:\DATA\model.mdf'); 
GO 
ALTER DATABASE model
MODIFY FILE (NAME = Modellog,FILENAME = 'E:\DATA\modelLog.ldf'); 
GO 

--Move MSDB Database 
USE master; 
GO 
ALTER DATABASE msdb
MODIFY FILE (NAME = MSDBData,FILENAME = 'E:\DATA\MSDBData.mdf'); 
GO 
ALTER DATABASE msdb
MODIFY FILE (NAME = MSDBLog, FILENAME = 'E:\DATA\MSDBLog.ldf'); 
GO 

--Move Tempdb Database 
USE master; 
GO 
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'E:\DATA\tempdb.mdf'); 
GO 
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'E:\DATA\tempLog.ldf'); 
GO

Stop the instance, physically move the model and msdb files to the new location, and restart the instance. The tempdb files do not need to be moved, as they are recreated on startup.
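Once the instance is back up, it is worth re-running the location check to confirm SQL Server is pointing at the new paths:

```sql
-- Confirm the system databases now point at the new location
SELECT name, physical_name AS CurrentLocation
FROM sys.master_files
WHERE database_id IN (DB_ID('model'), DB_ID('msdb'), DB_ID('tempdb'));
```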

To move the master database:

In SQL Server configuration manager, edit the advanced properties for the SQL Server Service.

Change the startup parameters to point at the new locations; the -d flag is the master data file and the -l flag is the master log file:

-dE:\SQLData\master.mdf;-eC:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\LOG\ERRORLOG;-lE:\SQLData\mastlog.ldf

Stop the instance.

Physically move the files to the new location

Start the instance.

Moving the master database is a lot simpler because the instance no longer has to be started in master-only recovery mode; that was only needed when moving the Resource database, which can no longer be relocated. In SQL Server 2008, the Resource database lives in <drive>:\Program Files\Microsoft SQL Server\MSSQL10.<instance_name>\MSSQL\Binn\ and cannot be moved.
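You can confirm the Resource database version and last-updated time (but not change its location) with SERVERPROPERTY:

```sql
-- Resource database details: read-only, the location cannot be changed
SELECT SERVERPROPERTY('ResourceVersion')            AS ResourceVersion,
       SERVERPROPERTY('ResourceLastUpdateDateTime') AS ResourceLastUpdated;
```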

Moving Error Logs

I got a request to add this to the blog, and it makes sense to have it here, so here it is:

To move the location of the SQL Server error log:

1. Stop the SQL Server Service of the SQL Server you want to modify

2. Open SQL Server Configuration Manager. Right-click the SQL Server service and click Properties

3. Select the Advanced Tab

4. Modify the Startup Parameters line and replace the contents of the -e flag with the location of the new error log

Eg: from:

-eC:\Program Files\Microsoft SQL Server\MSSQL10.INST1\MSSQL\Log\ERRORLOG;

To:

-eD:\MSSQL\Log\ERRORLOG;

5. Start the SQL Server Service


To move the SQL Server Agent error log, go to SQL Server Agent -> Error Logs -> right-click -> Configure


You can also change logging level in this location.

Blocked Process Threshold – sp_configure

This configuration option is an easy way to automatically find and alert on blocked processes, without actively monitoring in profiler. The threshold can be specified in seconds between 0 and 86,400. By default, no blocked process reports are generated so this is something that has to be ‘turned on’.

Blocking is not the same as deadlocking – this option generates reports for tasks that have been blocked longer than the threshold, not for deadlocks (those are detected and resolved separately by the deadlock monitor).

The example shows the threshold being changed so that reports are generated after a process has been blocked for 20 seconds.

sp_configure 'show advanced options', 1 ;
GO
RECONFIGURE ;
GO
sp_configure 'blocked process threshold', 20 ;
GO
RECONFIGURE ;
GO

The setting takes effect immediately without restart.
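To double-check the running value (the option name appears as 'blocked process threshold (s)' on SQL Server 2008; on 2005 it is simply 'blocked process threshold'):

```sql
-- Verify the configured and running values of the option
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name LIKE 'blocked process threshold%';
```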

Test Case…

The below script will create the trace:

/****************************************************/
/* Created by: SQL Server Profiler 2005             */
/* Date: 02/15/2011  04:48:09 PM                    */
/****************************************************/

-- Create a Queue
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
set @maxfilesize = 50 
exec @rc = sp_trace_create @TraceID output, 2, N'c:\BlockedProcess', @maxfilesize, NULL 
if (@rc != 0) goto error
-- Client side File and Table cannot be scripted
-- Set the events
declare @on bit
set @on = 1
exec sp_trace_setevent @TraceID, 137, 15, @on
exec sp_trace_setevent @TraceID, 137, 1, @on
exec sp_trace_setevent @TraceID, 137, 13, @on

-- Set the Filters
declare @intfilter int
declare @bigintfilter bigint
-- Set the trace status to start
exec sp_trace_setstatus @TraceID, 1
-- display trace id for future references
select TraceID=@TraceID
goto finish
error: 
select ErrorCode=@rc
finish: 
go

Note: The above can be generated via SQL Profiler by setting the trace via the GUI and then extracting the T-SQL

(File -> Export -> Script Trace Definition)

The event ID for Blocked process report is 137 (In the GUI it can be found under the Errors and Warnings header) – Full List Here. A column list for this event is available here

First, create the table:

CREATE TABLE X(rowid int)
INSERT INTO X VALUES (1)

Then start two sessions, running the following batch in each (the second session's SELECT will be blocked by the first session's open transaction):

SELECT * FROM X

BEGIN TRAN
update X set rowid = rowid + 1
waitfor delay '00:00:30'
ROLLBACK

Once both sessions have completed, clean up the test:

DROP TABLE X

To Stop the trace…

Where 2 is the Trace ID

--stop the trace
exec sp_trace_setstatus 2, 0
--delete the trace but leaves the file on the drive
exec sp_trace_setstatus 2, 2
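If you've lost track of the Trace ID, all server-side traces (including the default trace) can be listed first; the id column is what sp_trace_setstatus expects:

```sql
-- List server-side traces; status 1 = running, 0 = stopped
SELECT id, path, status, max_size
FROM sys.traces;
```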

To Check the Trace output…

--extension of trc is appended automatically
select cast(TextData as xml), SPID, EndTime, Duration/1000/1000
from fn_trace_gettable(N'c:\BlockedProcess.trc', default)
where eventclass = 137

The output is XML, showing both the blocked process and the blocking process:

<blocked-process-report monitorLoop="2173262">
  <blocked-process>
    <process id="processec5ac8" taskpriority="0" logused="0" waitresource="RID: 16:1:176:0" waittime="23437" ownerId="957361298" transactionname="SELECT" lasttranstarted="2011-02-15T17:05:56.693" XDES="0x80053ca8" lockMode="S" schedulerid="2" kpid="18964" status="suspended" spid="60" sbid="0" ecid="0" priority="0" transcount="0" lastbatchstarted="2011-02-15T17:05:56.693" lastbatchcompleted="2011-02-15T17:04:21.787" lastattention="2011-02-15T17:04:20.460" clientapp="Microsoft SQL Server Management Studio - Query" hostname="TCY-SQL01" hostpid="12360" loginname="GROUP\sys.mpatel" isolationlevel="read committed (2)" xactid="957361298" currentdb="16" lockTimeout="4294967295" clientoption1="671090784" clientoption2="390200">
      <executionStack>
        <frame line="1" stmtend="36" sqlhandle="0x02000000aa03d61b27d1e0c406266b410d31c3e5c34ff161" />
      </executionStack>
      <inputbuf>
SELECT * FROM X

BEGIN TRAN
update X set rowid = rowid + 1
waitfor delay '00:00:30'
ROLLBACK   </inputbuf>
    </process>
  </blocked-process>
  <blocking-process>
    <process status="suspended" waittime="25421" spid="58" sbid="0" ecid="0" priority="0" transcount="1" lastbatchstarted="2011-02-15T17:05:54.710" lastbatchcompleted="2011-02-15T17:02:09.270" clientapp="Microsoft SQL Server Management Studio - Query" hostname="TCY-SQL01" hostpid="12360" loginname="GROUP\sys.mpatel" isolationlevel="read committed (2)" xactid="957361297" currentdb="16" lockTimeout="4294967295" clientoption1="671090784" clientoption2="390200">
      <executionStack>
        <frame line="5" stmtstart="126" stmtend="176" sqlhandle="0x02000000aa03d61b27d1e0c406266b410d31c3e5c34ff161" />
      </executionStack>
      <inputbuf>
SELECT * FROM X

BEGIN TRAN
update X set rowid = rowid + 1
waitfor delay '00:00:30'
ROLLBACK   </inputbuf>
    </process>
  </blocking-process>
</blocked-process-report>

Configuring Windows Instant File Initialization for SQL Server

After reading Brad McGehee's ebook 'How to Optimize tempdb Performance', I came across a performance enhancement available to all databases called instant file initialization. The feature is available from Windows Server 2003 onwards, with SQL Server 2005 onwards, and can significantly reduce the amount of time required to create or grow a SQL Server data file.

Note: This applies only to SQL Server data files and not log files, because log files must be completely zeroed out before they can be used.

The problem…

When you create a SQL Server data file, the operating system has to go through and 'zero out' the entire file after it is allocated, for any of the following operations:

  • Create a database.
  • Add files, log or data, to an existing database.
  • Increase the size of an existing file (including autogrow operations).
  • Restore a database or filegroup.

This can take quite a bit of time for a very large file, which can become critical in disaster recovery and restore operations.

Windows instant file initialization removes the need to zero out the file when it is allocated, making the process almost instantaneous. The operating system just allocates the disk space; the contents of the file are whatever data was previously on that part of the disk.

Making use of this feature…

Instant file initialization is only available if the SQL Server (MSSQLSERVER) service account has been granted SE_MANAGE_VOLUME_NAME. Members of the Windows Administrators group have this right and can grant it to other users by adding them to the 'Perform Volume Maintenance Tasks' security policy.

Grant the right to the Windows account that the SQL Server service runs under, by doing the following:

  1. Open Group Policy Editor (or run gpedit.msc)
  2. Expand out the following – Computer Configuration –> Windows Settings –> Security Settings –> Local Policies
  3. Highlight User Rights Assignment
  4. In right hand pane, go to the properties for ‘Perform Volume Maintenance tasks’ and add the service account.
  5. The change will only take effect after the SQL Server instance has been restarted.


MSDN Article here
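A quick way to check whether instant file initialization is actually in effect is to enable trace flags 3004 and 3605 (which log file-zeroing activity to the error log) and then create a throwaway database. If only the log file shows 'Zeroing' messages, the data file was instant-initialized. This is a diagnostic sketch for a test instance; the database name is just an example:

```sql
-- Log zeroing operations to the SQL Server error log
DBCC TRACEON(3004, 3605, -1);
CREATE DATABASE IFI_Test;   -- throwaway test database
-- Inspect the error log: with IFI working, only the .ldf is zeroed
EXEC sp_readerrorlog;
DROP DATABASE IFI_Test;
DBCC TRACEOFF(3004, 3605, -1);
```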

See the difference…

Test it by restoring a large database (>50 GB) on a SQL Server 2005 instance:

  • On a test server, without the permission, restore the database backup and measure the restore time
  • Drop the restored database and stop the SQL Server service
  • Grant the “Perform Volume Maintenance Tasks” permission to the SQL Server service account and restart the service
  • Restore the database backup again and measure the restore time

XDB.DBMS_XDBT Package is INVALID – could mean Oracle Text needs installing

While Oracle XDB and Oracle Text are two independent products, the XDB.DBMS_XDBT package is used to set up the XDB context index, and it requires Oracle Text to be installed.

SQL> alter package XDB.DBMS_XDBT compile body;

Warning: Package Body altered with compilation errors.

SQL> show error
Errors for PACKAGE BODY XDB.DBMS_XDBT:

LINE/COL ERROR
-------- -----------------------------------------------------------------
34/7 PL/SQL: Statement ignored
34/7 PLS-00201: identifier 'CTXSYS.CTX_OUTPUT' must be declared
37/5 PL/SQL: Statement ignored
37/5 PLS-00201: identifier 'CTXSYS.CTX_OUTPUT' must be declared
38/5 PL/SQL: Statement ignored
38/33 PLS-00201: identifier 'CTXSYS.CTX_OUTPUT' must be declared
71/5 PL/SQL: Statement ignored
71/5 PLS-00201: identifier 'CTX_DDL.DROP_PREFERENCE' must be declared
75/5 PL/SQL: Statement ignored
75/5 PLS-00201: identifier 'CTX_DDL.DROP_PREFERENCE' must be declared
79/5 PL/SQL: Statement ignored
79/5 PLS-00201: identifier 'CTX_DDL.DROP_SECTION_GROUP' must be declared
83/5 PL/SQL: Statement ignored
83/5 PLS-00201: identifier 'CTX_DDL.DROP_PREFERENCE' must be declared
86/5 PL/SQL: Statement ignored
86/5 PLS-00201: identifier 'CTX_DDL.DROP_PREFERENCE' must be declared
90/5 PL/SQL: Statement ignored
90/5 PLS-00201: identifier 'CTX_DDL.DROP_PREFERENCE' must be declared
94/5 PL/SQL: Statement ignored
94/5 PLS-00201: identifier 'CTX_DDL.DROP_STOPLIST' must be declared

There are two options: install Oracle Text, or – if you don't plan on using Oracle Text in the future – ignore the invalid object (or drop it).

Installing Oracle Text 11.1.0.7.0

Full guide can be found in note ‘Manual installation, deinstallation and verification of Oracle Text 11gR1 [ID 579601.1]’

sqlplus "/ as sysdba"
spool text_install.txt
@?/ctx/admin/catctx.sql change_on_install SYSAUX TEMP NOLOCK

Where:
change_on_install – is the ctxsys user password
SYSAUX – is the default tablespace for ctxsys
TEMP – is the temporary tablespace for ctxsys
LOCK|NOLOCK – ctxsys user account will be locked or not

The next step is to install appropriate language-specific default preferences – all the available default preferences are in the $ORACLE_HOME/ctx/admin/defaults directory:

connect "CTXSYS"/"change_on_install"
@?/ctx/admin/defaults/drdefuk.sql;

To then fix the invalid package:

grant execute on ctxsys.ctx_ddl to XDB; 
grant execute on ctxsys.ctx_output to XDB; 
grant ctxapp to XDB; 

alter user ctxsys account lock password expire;

@?/rdbms/admin/dbmsxdbt.sql

-- Recompile package

alter package XDB.DBMS_XDBT compile; 
alter package XDB.DBMS_XDBT compile body; 
show errors package body XDB.DBMS_XDBT 

spool off

Validate the Install of Oracle Text for 11.1.0.7

-- List version
SELECT comp_name, status, substr(version,1,10) AS version
FROM dba_registry WHERE comp_id = 'CONTEXT';

SELECT * FROM ctxsys.ctx_version;

SELECT substr(ctxsys.dri_version,1,10) VER_CODE FROM dual;

-- Should be 372 objects
SELECT count(*) FROM dba_objects 
WHERE owner='CTXSYS';

-- invalid objects
SELECT object_name, object_type, status 
FROM dba_objects 
WHERE owner='CTXSYS' and status != 'VALID' 
ORDER BY object_name;

Archive log mode – On and Off

To turn archive logging on, first set log_archive_dest_1 to a location with enough space to hold the logs.

SQL> show parameter log_archive_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest                     string
log_archive_dest_1                   string      LOCATION=/oraarch/DW2

SQL> ALTER SYSTEM SET log_archive_dest_1='location=/u01/MYDW/archive/';

system altered

SQL> show parameter log_archive_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest                     string
log_archive_dest_1                   string      LOCATION=/u01/MYDW/archive/

Then start archive logging:

Shutdown immediate;
startup mount;
ALTER database archivelog;
ALTER SYSTEM ARCHIVE LOG START;
ALTER database open;

To turn off archive logging:

shutdown immediate;
startup mount;
ALTER database noarchivelog;
ALTER database open;

Several ways to check archive logging details:

show parameter archive
archive log list
select log_mode from v$database;

and also via Archive log views:

V$ARCHIVED_LOG
V$ARCHIVE_DEST
V$ARCHIVE_PROCESSES

To Switch the actual archive log:

alter system switch logfile;

To Archive all archive groups:

alter system archive log all;

448 Virtual Log Files (VLFs) on an 11GB file – Hmmmmmm

What is a VLF?

Each transaction log file is divided logically into smaller segments called virtual log files. Virtual log files are the unit of truncation for the transaction log. When a virtual log file no longer contains log records for active transactions, it can be truncated and the space becomes available to log new transactions

Why did this happen?

It seems as though whoever created the database didn't do any capacity planning or size the database accordingly. This has resulted in a lot of 10% autogrows, which in turn has produced some very small VLFs and then some very big VLFs.

Performance Impact?

From Books Online (http://msdn.microsoft.com/en-us/library/ms179355.aspx):

The only time virtual log files affect system performance is if the log files are defined by small size and growth_increment values. If these log files grow to a large size because of many small increments, they will have lots of virtual log files. This can slow down database startup and also log backup and restore operations. We recommend that you assign log files a size value close to the final size required, and also have a relatively large growth_increment value

Read here for Test Case – Performance Impact: a large number of virtual Log files

and also here – Performance Impact: a large number of virtual Log files II

and also here – How a log file structure can affect database recovery time

How are VLF sizes decided?

It all depends on the amount of space that is added, not the total size of the log. For example, a log growth of 1MB (whether manual or autogrow) will create 4 VLFs of 256KB each, but an autogrow of 1GB will create 8 VLFs of 128MB each.

  • chunks less than 64MB = 4 VLFs
  • chunks of 64MB and less than 1GB = 8 VLFs
  • chunks of 1GB and larger = 16 VLFs

How to resolve?

After reading Kimberly Tripp's guide to managing transaction logs – '8 Steps to better Transaction Log throughput' – I started looking through some of my more heavily used databases and found that one of the main databases had an 11GB log file split into 448 VLFs! Put this in the context of what Kimberly suggests:

In general, most transaction logs will only have 20 or 30 VLFs – even 50 could be reasonable depending on the total size of the transaction log.

Then 448 is a large number for a fairly small transaction log (around 10% of the size of the database).

DBCC LOGINFO

Running the above command returns one row per VLF, telling you how many VLFs are in the transaction log of the current database.
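If you want the count as a single number rather than eyeballing the result set, the output can be captured into a temp table. Note the column list below matches SQL Server 2005/2008; later versions add a RecoveryUnitId column, so adjust if needed:

```sql
-- Capture DBCC LOGINFO output; one row per VLF, so COUNT(*) = VLF count
CREATE TABLE #loginfo (
    FileId      int,
    FileSize    bigint,
    StartOffset bigint,
    FSeqNo      int,
    Status      int,
    Parity      tinyint,
    CreateLSN   numeric(25,0)
);
INSERT INTO #loginfo EXEC ('DBCC LOGINFO');
SELECT COUNT(*) AS VLFCount FROM #loginfo;
DROP TABLE #loginfo;
```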

Following the advice given in the article, I truncated the log to 1mb:

USE [mydatabase]
GO
DBCC SHRINKFILE (N'mydatabase_log' , 0, TRUNCATEONLY)
GO

Then I resized it so that the first 1GB is made up of eight 128MB VLFs, and the remaining 7GB of sixteen 448MB VLFs, up to a total log size of 8GB.

So, to get the initial 8 VLFs of 128MB each, I ran:

ALTER DATABASE MyDatabase
MODIFY FILE 
( 
      NAME = 'MyDatabase_log'
    , SIZE = 1024MB
)

Then extend again up to 8GB, which creates the extra 16 VLFs for the 7GB of additional space:

ALTER DATABASE MyDatabase
MODIFY FILE 
( 
      NAME = 'MyDatabase_log'
    , SIZE = 8192MB
)

The end result is 24 far more manageable VLFs instead of 448!