Sqlcmd is dead. Long live Sqlcmd

SQL Server 2008 and higher ship with the invoke-sqlcmd cmdlet, while SQL Server 2005 and higher include the sqlcmd.exe utility. Not wanting to take a dependency on invoke-sqlcmd, a few years ago I wrote a simpler variation I called invoke-sqlcmd2, which I've since updated along with others.

You might think that, with later releases of SQL Server and the ability to create your own Powershell functions, the old sqlcmd.exe utility isn't needed, but you'd be wrong. There are still certain cases where using sqlcmd.exe is a better choice than invoke-sqlcmd or your own Powershell functions.

I reached this conclusion when converting some automated scripts from VBScript to Powershell for use within our job scheduler. What I originally came up with was a simple variation of invoke-sqlcmd2 with some logging and error handling. I called this script invoke-sql.ps1, as posted on PoshCode. The script worked reasonably well for many scheduled jobs, but as I started looking at other existing VBScripts which used sqlcmd, I found they relied on sqlcmd's input and output options.

At first I thought I could reproduce the same output options as sqlcmd.exe, including:

  • Output rows affected
  • Write T-SQL PRINT and RAISERROR statements to an output file
  • The delimited output should not include header rows as export-csv does
  • The delimited output shouldn’t quote all fields as export-csv does
Unfortunately, doing all of the above isn't possible using invoke-sqlcmd, although it is with my invoke-sqlcmd2 function by extending its functionality. The main sticking points with invoke-sqlcmd are that rows affected are not returned, which some folks have gotten used to seeing in their sqlcmd.exe-based scripts, and that T-SQL PRINT and RAISERROR statement output is only available through the -verbose parameter, which is difficult to redirect to a file.
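
For what it's worth, Powershell 3.0 later added verbose-stream redirection, which makes a partial workaround possible. A minimal sketch, assuming V3, the sqlps module loaded, and a hypothetical output path:

#Assumes Powershell 3.0+, where stream 4 (the verbose stream) can be redirected
Invoke-Sqlcmd -ServerInstance "Z001\SQL1" -Query "PRINT 'hello'" -Verbose 4>&1 |
    Out-File -FilePath C:\temp\verbose.txt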

SqlCmd Lives On

So, if I really wanted to get away from running sqlcmd.exe in favor of a Powershell cmdlet or function, I'd have to write a lot of code to duplicate all the output options, I'd have to test it, and even then I'd have to provide fixes for the issues that inevitably surface in production. Just as a developer sees the Base Class Library in .NET as code they don't have to write, I see native Windows console applications (or cmdlets) as portions of my script I don't have to write or worry about working correctly.

sqlcmd.exe isn't dead. The documentation doesn't list sqlcmd as deprecated, and the MSDN page has a nice table showing the features missing from invoke-sqlcmd when compared to sqlcmd. The documentation reinforces my own experience: invoke-sqlcmd lacks functionality found in sqlcmd. The fact that sqlcmd isn't marked for deprecation and is still being enhanced (for example, connecting to the new SQL Server 2012 availability group listener) means I can continue to use it without concern that it will go away in the next release or that new functionality will stop being added.

Invoke-SqlCmdExe

Since sqlcmd.exe is here to stay and solves my output format requirements, I just needed to write a quick wrapper around sqlcmd.exe to handle error handling and logging and to ensure Powershell doesn't trip over the parameters passed to sqlcmd.exe. I came up with a script I call Invoke-SqlCmdExe and posted it on PoshCode.

Invoke-SqlCmdExe Explained

At the top of the script I create a new Eventlog source. If it already exists the command will fail, so I'll suppress and clear the error and move on. As noted in the script, you must run the script as administrator in order to create a new Eventlog source on a server with UAC enabled, but once the Eventlog source is created you no longer need to run as administrator. This is one of the many, many annoyances of UAC running on a server. You could run just this section of code to register a new Eventlog source:

#This must be run as administrator on Windows 2008 and higher!
New-EventLog -LogName Application -Source $Application -EA SilentlyContinue
$Error.Clear()

When running scheduled tasks, my preference is to use the Eventlog for logging messages. I'll either set up a special Eventlog or use the Application log. The nice thing about using the Eventlog is that it's always there, so you don't have to worry about messages not getting logged like you would if you were using a single remote database. Also, a lot of monitoring tools like System Center and scheduling products include Eventlog watchers, so you can build actions to respond to messages written to the Eventlog if needed. As part of using the Eventlog for scheduled task messages, you may want to create unique categories so your Eventlog watcher can look for particular events. The hashtable at the top of the script defines my unique categories:

$events = @{"ApplicationStartEvent" = "31101"; "ApplicationStopEvent" = "31104"; "DatabaseException" = "31725"; "ConfigurationException" = "31705";"BadDataException" = "31760"}
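
Those categories are then used when writing events. Here's a rough sketch of how the script's Write-Message helper might wrap Write-EventLog; the parameter names match the calls shown below, but the body here is illustrative, not the script's exact code:

function Write-Message {
    param($Severity, $Category, $EventId, $ShortMessage, $Context)
    #Category numbers come from the $events hashtable above; $Application is the source registered earlier
    Write-EventLog -LogName Application -Source $Application -EntryType $Severity `
        -EventId $EventId -Category $Category -Message "$ShortMessage`n$Context"
}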

Powershell tends to get a little confused when calling certain native console applications with complex parameters. To avoid this issue with sqlcmd.exe I'll wrap the call using start-process and a here-string. I'll also grab the exit code and send all output to a file:

$exitCode = (Start-Process -FilePath "sqlcmd.exe" -ArgumentList @"
$Options
"@ -Wait -NoNewWindow -RedirectStandardOutput $tempFile -Passthru).ExitCode

If you use native console applications in Powershell you need to check exit codes, as I've done in the script. If the exit code isn't zero I'll throw an error:

if ($ExitCode -eq 0) {
    $msg = "ApplicationStopEvent"
    Write-Message -Severity Information -Category $events.ApplicationStopEvent -Eventid 99 -ShortMessage $msg -Context $Context
}
else {
    throw
}

The catch and finally statement blocks handle writing error messages based on what was written to the output files and clean up any temp files.
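
In outline, the whole call looks something like this sketch (simplified; the real script's messages and cleanup differ in detail):

try {
    #...run sqlcmd.exe via Start-Process as shown above, capturing $exitCode...
    if ($exitCode -ne 0) { throw "sqlcmd.exe failed with exit code $exitCode" }
}
catch {
    $msg = Get-Content $tempFile | Out-String
    Write-Message -Severity Error -Category $events.DatabaseException -Eventid 99 -ShortMessage $msg -Context $Context
}
finally {
    if (Test-Path $tempFile) { Remove-Item $tempFile -Force }
}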

Which Method to Use?

I haven't given up on invoke-sqlcmd/invoke-sqlcmd2, but what I have done is come up with a general rule on when to use one or the other. If I need to run a Powershell command and send the data to SQL Server, or run a SQL command and send the data to Powershell, I'll use invoke-sqlcmd or some variation thereof. If I just need to run a SQL query or produce an output file from a SQL query without needing the data in Powershell, I'll use sqlcmd.exe. Think of it like this: my sqlcmd.exe-based scripts are used for one-way operations, running a query which changes data or produces a file.


Scripting SSIS Package Deployments

Before I delve into the subject of scripting SSIS package deployments, I’m going to take a slight detour and explain why the general subject of automating deployments is important.

Automating Deployments

One of the key areas where you should be looking at automation, possibly through scripting, is deployments. If you're doing deployments and you're not using a scripted, repeatable process, free of GUIs and typing, then you're doing it wrong. I've seen many times in my career where deployments rely on complex instructions rather than a tested, script-based approach. It is inevitable: relying on even the most detailed step-by-step manual instructions will lead to deployment errors because of the human operator factor. When it actually comes time to deploy changes there should be zero typing or clicking. And if it's not a fully automated deployment, then any manual steps should be made as simple as possible, such as "run this script with these parameters through copy and paste and report back results." End detour.

SSIS Package Deployments

My SSIS package deployment requirements:

  1. The solution must support 2005, 2008, 2008 R2 and 2012 because I have a mixed environment
  2. The solution must support deploying a dtsx file to SQL Server data storage in msdb
  3. The solution must include verification of package installation
  4. The solution must be able to create any needed folder structures automatically
  5. The solution must include error handling and detailed output on operations performed
  6. The solution must support a constrained set of parameters based on using the SQL Server data store: a ServerInstance, the dtsx file and the full destination path on the SSIS server

When automating any task I'll first see if there's already a solution, either from Microsoft or third parties. I couldn't find anything out-of-the-box that meets all my requirements, but I did find two approaches which provide partial solutions.

The first is writing Powershell code directly against Microsoft.SqlServer.ManagedDTS, as I've done in the SSIS Powershell module I created for SQL Server Powershell Extensions. There is a function in the SSIS module called Copy-ISItemFileToSQL; however, it provides only part of the solution, and there's a bigger problem of incompatibilities between versions to handle. The assembly for SSIS changes between 2005, 2008/2008 R2 and 2012, which makes crafting a complete solution difficult. I've given up on going down this path because it quickly becomes complex.

The second option, and the one I went with, is to use the command-line utility dtutil.exe. The nice thing about dtutil is that it's included with SQL Server 2005 and higher, well-documented, and removes some of the complexity of coding against the SSIS classes directly. Although dtutil.exe only meets requirements 1 through 3 above, I can fill in the rest with a bit of Powershell code. I present my Powershell script solution, install-ispackage.ps1.

Using Install-ISPackage

To use install-ispackage simply download the script from PoshCode and run it by providing three parameters. Here's an example of installing a dtsx file to my SSIS server:

./install-ispackage.ps1 -DtsxFullName "C:\Users\Public\bin\SSIS\sqlpsx1.dtsx" -ServerInstance "Z001\SQL1" -PackageFullName "SQLPSX\sqlpsx1"
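
Under the covers this maps to a dtutil.exe invocation like the following (it's the same Command string the script records in its output object, shown later):

dtutil /File "C:\Users\Public\bin\SSIS\sqlpsx1.dtsx" /DestServer "Z001\SQL1" /Copy SQL;"SQLPSX\sqlpsx1" /Quiet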

Install-ISPackage Explained

The install-ISPackage script provides an example of how you can approach calling native console applications (exe's) from Powershell. Error handling and output handling differ greatly when calling an exe vs. using cmdlets or .NET code. The former does not trigger errors and instead relies on exit codes defined by the console application developer. You have to check $lastexitcode and read whatever documentation is provided with the console application to determine what the exit codes mean.

I’ll step through a few things to explain:

When I’m dealing with scripts that make changes I like to set $ErrorActionPreference to Stop instead of the default of Continue. This way I can wrap some error handling and logging around any errors and be assured the script won’t proceed to the next step should an error occur.
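
That's a single line at the top of the script:

$ErrorActionPreference = "Stop"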

I also like to make the exit code more user-friendly. I'll do this by reading the documentation for the command-line utility. On the MSDN page for dtutil there's a nice table under dtutil Exit Codes, which I then create as a hashtable at the top of the script:

$exitCode = @{
    0 = "The utility executed successfully."
    1 = "The utility failed."
    4 = "The utility cannot locate the requested package."
    5 = "The utility cannot load the requested package."
    6 = "The utility cannot resolve the command line because it contains either syntactic or semantic errors."
}

I can then return a more useful error message by using the hashtable with the built-in variable $lastexitcode:

throw $exitcode[$lastexitcode]

You’ll notice in the Get-SqlVersion function I’m just using the classic sqlcmd.exe console application to run a query to get the SQL Server version number:

$SqlVersion = sqlcmd -S "$ServerInstance" -d "master" -Q "SET NOCOUNT ON; SELECT SERVERPROPERTY('ProductVersion')" -h -1 -W

I chose to use sqlcmd.exe instead of the invoke-sqlcmd Powershell cmdlet because it's installed on every SQL 2005 machine and it's easier to use when I just want to return a single string:

C:\Users\Public\bin> Get-SqlVersion -ServerInstance Z001\sql1
10.50.2550.0

The Set-DtutilPath function tries to find the "right" dtutil.exe based on the SQL version being deployed to. Although the parameters for dtutil.exe are identical between versions, the utility isn't backwards or forwards compatible. You have to use the 9.0 version for 2005, the 10.0 version for both 2008 and 2008 R2, and the 11.0 version for 2012.
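
A sketch of the idea, assuming default install locations (the real function derives the path from the version string Get-SqlVersion returns):

#Map the major version to the tools folder; paths assume a default install
$dtutilFolder = switch ($SqlVersion.Split('.')[0]) {
    '9'  { '90' }   #SQL Server 2005
    '10' { '100' }  #SQL Server 2008 and 2008 R2
    '11' { '110' }  #SQL Server 2012
}
$Script:dtutil = "$env:ProgramFiles\Microsoft SQL Server\$dtutilFolder\DTS\Binn\dtutil.exe"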

The rest of the functions follow a basic pattern:

Run dtutil.exe and save the output to a $result variable.

$result will be an array of strings, so create a single string separated by newlines:

$result = $result -join "`n"

Rather than returning an error on failure or nothing on success, I instead return an object with details of what was run:

new-object psobject -property @{
    ExitCode = $lastexitcode
    ExitDescription = "$($exitcode[$lastexitcode])"
    Command = "$Script:dtutil /File `"$DtsxFullName`" /DestServer `"$ServerInstance`" /Copy SQL;`"$PackageFullName`" /Quiet"
    Result = $result
    Success = ($lastexitcode -eq 0)
}

I really like using this technique: if there are failures, as part of troubleshooting you can just run the Command property and you get the other relevant details. The key here is you can always get back to the base utility, so if something doesn't work in the script you can prove it's not the script when you get the same error in the utility alone. Note: I have seen errors a few times, usually because a developer created an SSIS package in a later version than the server being deployed to.

Check $lastexitcode after calling the utility and returning the detail object:

if ($lastexitcode -ne 0) {
    throw $exitcode[$lastexitcode]
}

Here I'll use the hashtable defined at the top of the script to return a friendlier error message. If errors occur, between the error returned and the detailed object I can troubleshoot most issues.

The rest of the functions follow a similar pattern. I will point out that a non-zero exit code doesn't necessarily mean an error. Some console application developers will use an exit code of 1 or other numbers to mean something other than an error, as is the case when testing whether a folder path exists with dtutil: if the folder doesn't exist, an exit code of 1 is returned. Sometimes it's hard to determine when a non-zero exit code means something other than an error except through using the utility. Fortunately Powershell cmdlets don't use weird exit codes to return status; they generally return an object or an error object. But if you're going to write Powershell scripts against command-line utilities, you'll need to be aware of exit codes and their specific meanings for the utility you're using.
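
You can see the same pattern with other console utilities; findstr.exe, for example, uses exit code 1 to mean "no match found" rather than failure. A quick sketch (file path hypothetical):

'alpha','beta' | Out-File "$env:TEMP\words.txt" -Encoding ascii
findstr.exe "gamma" "$env:TEMP\words.txt" | Out-Null
if ($lastexitcode -gt 1) { throw "findstr failed" }
$found = ($lastexitcode -eq 0)  #exit code 1 here just means "no match", not an error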

The other thing I'll point out is the logic to create nested folder paths in the Get-FolderList and New-Folder functions. These functions are in place to satisfy my fourth requirement, to automatically create folders if they don't exist.

The main section executes the series of functions in order, wrapped in a try/catch block. Since I set $ErrorActionPreference and check $lastexitcode, throwing an error in each function, the script will stop should an error occur.


Cloning SQL Servers to a Test Environment

I've been involved in a project to set up a full test environment of dozens of SQL Servers. The requirements of the test environment were such that the servers needed to be cloned via V2V or P2V from source servers; the servers would then be renamed and located within a completely isolated network on a separate AD without a trust relationship. What follows are a few scripts I created to deal with some of the issues which arise given these requirements. The issues include:

  • Dealing with orphaned AD SQL Server logins
  • Avoiding connection string changes
  • Avoiding Linked Servers changes

Dealing with Orphaned AD SQL Server Logins

The test environment Active Directory logins and groups are copied from the source environment by AD administrators; however, the SIDs will be different in the test environment, which creates orphaned AD logins in SQL Server. I have to admit I haven't dealt with orphaned AD logins in SQL Server very much. I have dealt with orphaned SQL Server database users, where the SQL Server login SID does not match between servers, which often occurs when restoring databases between servers. In researching this issue I was pleasantly surprised to see that the same ALTER USER … WITH LOGIN syntax (or the old-style sp_change_users_login system stored procedure) I had used to deal with orphaned SQL Server database users also works with AD logins. Here's a T-SQL script to change AD logins from one domain to another and reassociate any database users. The script assumes the AD logins have been created in the test AD with the same names as in the source domain:

SET NOCOUNT ON
 
--#BEGIN LOGIN CREATION
PRINT '--1. BEGIN LOGIN CREATION ' + @@SERVERNAME;
 
DECLARE @name sysname, @default sysname
 
DECLARE loginCursor CURSOR FOR 
SELECT REPLACE(s1.name,'OldDomain','NewDomain'), s1.default_database_name 
FROM sys.server_principals s1
 WHERE s1.type IN ('G','U') AND s1.name LIKE 'OldDomain%'
 AND s1.name NOT IN (SELECT s1.name FROM sys.server_principals s2 WHERE REPLACE(s1.name,'OldDomain','') = REPLACE(s2.name,'NewDomain',''))
 
OPEN loginCursor
FETCH NEXT FROM loginCursor INTO @name,@default
 
WHILE @@FETCH_STATUS = 0
BEGIN
	PRINT 'CREATE LOGIN [' + @name + '] FROM WINDOWS WITH DEFAULT_DATABASE=[' + @default + '];'
	EXEC ('CREATE LOGIN [' + @name + '] FROM WINDOWS WITH DEFAULT_DATABASE=[' + @default + '];')
	FETCH NEXT FROM loginCursor INTO @name,@default
END
 
CLOSE loginCursor
DEALLOCATE loginCursor;
 
PRINT '--1. END LOGIN CREATION ' + @@SERVERNAME;
--#END LOGIN CREATION
 
--#BEGIN SERVER ROLE FIX
PRINT '--2. BEGIN SERVER ROLE FIX ' + @@SERVERNAME;
 
DECLARE @role sysname, @member sysname
 
DECLARE roleCursor CURSOR FOR 
SELECT SUSER_NAME(role_principal_id), REPLACE(SUSER_NAME(member_principal_id),'OldDomain','NewDomain')
 FROM sys.server_role_members
 WHERE SUSER_NAME(member_principal_id) LIKE 'OldDomain%'
 
OPEN roleCursor
FETCH NEXT FROM roleCursor INTO @role,@member
 
WHILE @@FETCH_STATUS = 0
BEGIN
	PRINT 'EXEC sp_addsrvrolemember ''' + @member + ''', ''' + @role + ''';'
	EXEC sp_addsrvrolemember @member, @role;
	FETCH NEXT FROM roleCursor INTO @role,@member
END
 
CLOSE roleCursor
DEALLOCATE roleCursor;
 
PRINT '--2. END SERVER ROLE FIX ' + @@SERVERNAME;
--#END SERVER ROLE FIX
 
--#BEGIN USER FIX
 
EXEC sp_MSforeachdb 
  @command1='PRINT ''--3. BEGIN USER FIX ?'''
, @command2='USE [?] IF DATABASEPROPERTY(''?'',''IsReadOnly'') = 0
BEGIN
	DECLARE @uname sysname, @sname sysname
 
	DECLARE userCursor CURSOR FOR
	SELECT name, REPLACE(SUSER_SNAME(sid),''OldDomain'',''NewDomain'')
	FROM sys.database_principals
	WHERE type IN (''G'',''U'') AND SUSER_SNAME(sid) LIKE ''OldDomain%''
 
	OPEN userCursor
	FETCH NEXT FROM userCursor INTO @uname,@sname
 
	WHILE @@FETCH_STATUS = 0
	BEGIN
		IF @uname = ''dbo''
                BEGIN
                    PRINT ''ALTER AUTHORIZATION ON DATABASE::[?] TO ['' + @sname + ''];''
		    EXEC (''ALTER AUTHORIZATION ON DATABASE::[?] TO ['' + @sname + ''];'')
                END
                ELSE
                BEGIN
		    PRINT ''ALTER USER ['' + @uname + ''] WITH LOGIN=['' + @sname + ''];''
		    EXEC (''ALTER USER ['' + @uname + ''] WITH LOGIN=['' + @sname + ''];'')
                END
 
		FETCH NEXT FROM userCursor INTO @uname,@sname
	END
 
	CLOSE userCursor
	DEALLOCATE userCursor
END'
, @command3 = 'PRINT ''--3. END USER FIX ?'''
--#END USER FIX

Avoiding Connection String Changes

In order to avoid changing connection strings I aliased the old server name to the new server name in a couple of ways: first, by creating entries in each server's C:\Windows\System32\drivers\etc\hosts file, and second, by using SQL Server Client Aliases. I've found that creating aliases using both methods provides the highest likelihood of success. By aliasing the old server name I've avoided changing SSIS connections, linked servers and really anything that runs externally against the particular SQL Servers. There are two scripts I've used. First, a Powershell script to generate hosts file entries given a text file of just server names. The grid output is then copied (ctrl-A, ctrl-C) and pasted into a hosts file. I'll make the hosts file identical for each server in the test environment:

get-content ./servers.txt |
foreach { new-object PSObject -property @{'Computername'=$_; 'IP' = (Test-Connection -ComputerName $_ -Count 1).IPV4Address.IPAddressToString }} |
out-gridview
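
The pasted result is standard hosts-file syntax, one IP and the old server name per line; for example (values hypothetical):

10.20.30.40     SRCSQL01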

To create a SQL Server Client Alias using a GUI you would run cliconfg from the Run command and manually create each SQL Server alias, but in order to script this, I created a script to add the registry entries, which is really all the GUI does. For a single SQL Server Client Alias I've posted a script to PoshCode called Add-SqlClientAlias, but rather than use that script I created a bulk alias script which runs against a SQL Server Central Management Server, also set up in the test environment. The script assumes the old server name is the same as the new server name with a TEST prefix or suffix:

$query = @"
SET NOCOUNT ON;
SELECT DISTINCT s.name
FROM msdb.dbo.sysmanagement_shared_registered_servers s
JOIN msdb.dbo.sysmanagement_shared_server_groups g
ON s.server_group_id = g.server_group_id
"@
 
if (!(test-path 'HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo')) {
    new-item -path 'HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client' -name ConnectTo
}
 
sqlcmd -S myCMServerInstance -d msdb -Q $query -h -1 -W |
foreach { Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo' -Name $($_ -replace 'TEST') -Value "DBMSSOCN,$_" }
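
To spot-check an alias afterward, connect through the old name and confirm you land on the new server (server name hypothetical):

sqlcmd -S SRCSQL01 -Q "SELECT @@SERVERNAME"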

Avoiding Linked Server Changes

If you change the AD logins to the new domain, any explicitly mapped Linked Server logins will break, so you'll need to re-map them using the new domain accounts. This process is a little manual in that the T-SQL script below generates a linked server login creation script, which you'll need to copy and paste into a new query window, filling in passwords for the mapped AD-to-SQL accounts:

SET NOCOUNT ON
 
--#BEGIN LOGIN CREATION
PRINT '--1. BEGIN LINKED SERVER LOGIN CREATION ' + @@SERVERNAME
 
CREATE TABLE #linkedsrvlogin
(
linked_server sysname NULL 
,local_login sysname NULL
,is_self_mapping SMALLINT NULL 
,remote_login sysname NULL
) 
INSERT #linkedsrvlogin
EXEC sp_helplinkedsrvlogin
 
SELECT 'EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N''' + 
linked_server + ''',@useself=N''False'',@locallogin=N''' + 
REPLACE(local_login,'OldDomain','NewDomain') +
''',@rmtuser=N''' + remote_login + 
''',@rmtpassword=''########'''
FROM #linkedsrvlogin s1
WHERE local_login LIKE 'OldDomain%'
AND is_self_mapping = 0
AND remote_login IS NOT NULL
AND local_login NOT IN (SELECT local_login FROM #linkedsrvlogin s2 WHERE REPLACE(s1.local_login,'OldDomain','') = REPLACE(s2.local_login,'NewDomain',''))
 
PRINT '--1. END LINKED SERVER LOGIN CREATION ' + @@SERVERNAME;
--#END LOGIN CREATION
 
SELECT * FROM #linkedsrvlogin

This post isn't meant to be an all-inclusive list of things to consider when cloning SQL Servers to a test environment, but rather just a few Powershell and T-SQL scripts to fix several issues. The scripts are meant for SQL Server 2005 and higher. Some of the assumptions I made are particular to my environment. As always, if you choose to use any of these scripts you should test and adapt them to your needs.


Using Invoke-WebRequest

The website PowershellCommunity.org is moving to a new site, powershell.org, and I wasn't sure the old forum posts would be brought over. Well, I kind of like some of the answers I provided in the SQL Server forum and wanted to download them into a text file in case I ever need to refer to them. It looks like I finally found a use case for the Powershell V3 cmdlet Invoke-WebRequest, which sends web requests and allows you to parse the response.

The following script is purpose-built. Of course, any parsing of HTML is going to be very specific to the problem at hand. Powershell and the invoke-webrequest cmdlet provide an excellent way to explore the details of a webpage so you can quickly craft a custom web-scraping script. Here's the script I came up with; again, it won't be applicable to your particular task, but it shows an example of using invoke-webrequest to parse links and download specific text from various pages.

$outputfile = 'C:\Users\Public\bin\pscommunitysql.txt'
 
1..14 | foreach {
            invoke-webrequest -Uri http://www.powershellcommunity.org/Forums/tabid/54/aff/24/afpg/$_/Default.aspx |
                where { $_.Links | where { $_.title -and $_.title -notlike "*Page" -and $_.title -notlike "PowerShellCommunity*" } |
                    foreach { invoke-webrequest -Uri $_.href } |
                        where { $_.Links | where { $_.href -like "*printmode*" } |
                            foreach { invoke-webrequest -Uri $_.href } |
                                foreach { $_.ParsedHtml.forms |
                                    foreach { "####################`n$($_.innerText)" | out-file $outputfile -append -encoding 'utf8' }
                                }
                        }
                }
        }

Explanation

  1. On line 3, I know there are 14 pages of forum questions, which I determined by looking at the SQL Server forum in the browser with roughly 30 questions per page, so I loop through 1 to 14
  2. On lines 4-5, I loop through each page and grab the link for each question by filtering out links which don't apply. This was something I determined through trial and error by looking at the object invoke-webrequest returned in the previous line
  3. On lines 6-7, I get the web page for each question, and since a question can have multi-page answers, I call invoke-webrequest on the question's print preview mode, which shows the question as a single page. This was something I figured out by looking at the links returned by the invoke-webrequest call and noticing a little printer icon when viewing the same page in the browser
  4. Finally, on lines 8-10, I use invoke-webrequest to get the print preview page for the question, get the parsed forms data and then the inner text for the form, which is appended to a text file.


Upcoming Fall Events

During September/October I’ll be presenting at several in-person events:

Tampa IT Pro Camp

Join system administrators and IT professionals for the Tampa IT Pro Camp on Saturday, September 22nd, 2012.

IT Pro Camps are focused on serving the needs of IT Pros (Windows system administrators). This will be our seventh event since our first IT Pro Camp in March 2011. Check out the IT Pro Camp website for details on all future IT Pro Camp events.

The Tampa IT Pro Camp features 30 one-hour sessions on topics including Powershell, SQL Server, Windows Server, System Center and Data Security. I’m presenting a session on Storing Powershell Output. See the full Tampa IT Pro Camp schedule for details on all sessions.

Be sure to register to attend the free Tampa IT Pro Camp event. A continental breakfast and Italian lunch will be provided. Please tell your colleagues about IT Pro Camps.

SQL Saturday #151—Orlando 2012

Orlando SQL Saturday features 44 sessions on a variety of SQL Server related topics. I'll be presenting a session on Integrating SQL Server PBM with SCOM.

Updated 9/20: unfortunately, I'm unable to attend Orlando SQL Saturday this year.

Check out the full schedule and register to attend this free event.

Oracle OpenWorld

I was scheduled to attend OpenWorld in San Francisco as a panel member/presenter for Real-World Operational Reporting with Oracle GoldenGate: Customer Panel. Unfortunately I'm unable to attend OpenWorld this year; however, I'm sending an even better presenter in my place, Michael Wells (twitter | blog).

–Chad Miller


South Florida IT Pro Camp 2012 Presentation

Presentation and scripts from my Storing PowerShell Output session at South Florida IT Pro Camp 2012.


Jacksonville IT Pro Camp 2012 Presentation

Presentation and scripts from my Integrating SQL Server PBM with SCOM session at Jacksonville IT Pro Camp 2012.


Jacksonville IT Pro Camp

Join system administrators and IT professionals for the Jacksonville IT Pro Camp on Saturday, June 16th, 2012.

IT Pro Camps are focused on serving the needs of IT Pros (Windows system administrators). After having completed IT Pro Camps in Tampa, South Florida, Orlando, and Sarasota, the Jacksonville IT Pro Camp marks the fifth IT Pro community event.

This year begins our second year of IT Pro Camps; we'll be returning to Tampa on 9/22/2012 and to South Florida on 7/21/2012. Check out the IT Pro Camp website for details on all future IT Pro Camp events.

The Jacksonville IT Pro Camp features 24 one-hour sessions on topics including Powershell, BI, Windows Server, System Center and Data Security. I’m presenting a session on Integrating SQL Server Policy-Based Management with System Center. See the full Jacksonville IT Pro Camp schedule at http://itprocamp.com/jacksonville/2012/05/23/schedule/

Be sure to register to attend the free Jacksonville IT Pro Camp event. A continental breakfast and BBQ lunch will be provided. Please tell your colleagues about IT Pro Camps.

–Chad Miller


Appending New Rows Only

I saw a question in the forums related to inserting new rows into a SQL Server table only if they didn't exist. The current solution was using an ADO.NET DataTable, checking for new rows and then pushing the rows back to SQL Server by calling the Update method on the DataAdapter. Although the solution works, the process takes longer each time it runs, since the entire table is retrieved and compared every time.

There are a number of approaches you could take to solve this problem. One solution is to use Table-Valued Parameters, which I've previously blogged about, to push a batch of rows to SQL Server and add only the new rows. This does require creating both a table type and a stored procedure on the SQL Server, and it only works for SQL Server 2008 and higher:

<#
/* FROM SSMS */
USE tempdb
GO
/* Create a Request table for testing purposes*/
CREATE TABLE Request
( PKID INT,
 MessageText varchar(max));
GO
/* Create a RequestList table type */
CREATE TYPE RequestList AS TABLE
( PKID INT,
 MessageText varchar(max));
GO
 
/* Create a procedure to use insert only new rows  */
CREATE PROCEDURE uspSetRequest
    @TVP RequestList READONLY
    AS
    SET NOCOUNT ON
    INSERT Request
    SELECT tvp.PKID, tvp.MessageText
    FROM @TVP tvp
    LEFT JOIN Request r ON
    tvp.PKID = r.PKID
    WHERE r.PKID IS NULL;
 GO
 #>
 
#FROM Powershell
#Create an ADO.NET DataTable matching the RequestList Table Type:
$dt = new-object Data.datatable
$col =  new-object Data.DataColumn
$col.ColumnName = 'PKID'
$col.DataType = [Int32]
$dt.Columns.Add($Col)
$col =  new-object Data.DataColumn
$col.ColumnName = 'MessageText'
$col.DataType = [String]
$dt.Columns.Add($Col)
 
#BEGIN INSERT foreach Loops to add records to DataTable
#Example below inserts only one record
#Add a Row to the DataTable
$dr = $dt.NewRow()
$dr.Item('PKID') = 1
$dr.Item('MessageText') = 'It worked!'
$dt.Rows.Add($dr)
#END INSERT foreach Loops to add records to DataTable 
 
#Connection and Query Info
$serverName="$env:computername\sql1"
$databaseName='tempdb'
$query='uspSetRequest' 
 
#Connect
$connString = "Server=$serverName;Database=$databaseName;Integrated Security=SSPI;"
$conn = new-object System.Data.SqlClient.SqlConnection $connString
$conn.Open()
 
#Create Sqlcommand type and params
$cmd = new-object System.Data.SqlClient.SqlCommand
$cmd.Connection = $conn
$cmd.CommandType = [System.Data.CommandType]"StoredProcedure"
$cmd.CommandText= $query
$null = $cmd.Parameters.Add("@TVP", [System.Data.SqlDbType]::Structured)
$cmd.Parameters["@TVP"].Value = $dt
 
#Execute Query and close connection
$cmd.ExecuteNonQuery() | out-null
$conn.Close()
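
Because uspSetRequest's LEFT JOIN filters out PKIDs that already exist, re-running the Powershell block is harmless; a quick check from SSMS confirms it:

/* FROM SSMS */
SELECT * FROM Request;
/* Returns one row (PKID = 1, 'It worked!') no matter how many times the script has run */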


Backup Database Object

I saw this question in one of the forums on backing up, i.e. scripting out, a database object. The problem is easy to solve, but only if you're familiar with SMO :). Even so, there are some more obscure aspects of SMO, like URNs, which not many people are aware of. If you read the MSDN docs on SMO you'll find URNs referenced in a few places. I haven't used them much, but for this case they make sense. Normally if you want to get to an object in SMO you'd reference the server, then the database, then the object type collection (StoredProcedures, Views, etc.), and then the object; however, if you don't know the object type you can call the EnumObjects method on the database to get a list of objects with their URNs. The URN is like a primary key for objects in SMO. So, here's my solution, with comments…
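
For reference, a URN is an XPath-like string; the vEmployee view scripted in the example below has a URN along these lines:

Server[@Name='Z002']/Database[@Name='AdventureWorks']/View[@Name='vEmployee' and @Schema='HumanResources']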

add-type -AssemblyName "Microsoft.SqlServer.ConnectionInfo, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"
add-type -AssemblyName "Microsoft.SqlServer.Smo, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"
add-type -AssemblyName "Microsoft.SqlServer.SMOExtended, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"
add-type -AssemblyName "Microsoft.SqlServer.SqlEnum, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"
add-type -AssemblyName "Microsoft.SqlServer.Management.Sdk.Sfc, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91"
 
#######################
<#
.SYNOPSIS
Backs up a database object definition.
.DESCRIPTION
The Backup-DatabaseObject function backs up a database object definition by scripting out the object to a .sql text file.
.EXAMPLE
Backup-DatabaseObject -ServerInstance Z002 -Database AdventureWorks -Schema HumanResources -Name vEmployee -Path "C:\Users\Public"
This command backups up the vEmployee view to a .sql file.
.NOTES
Version History
v1.0   - Chad Miller - Initial release
#>
function Backup-DatabaseObject
{
    [CmdletBinding()]
    param(
    [Parameter(Mandatory=$true)]
    [ValidateNotNullorEmpty()]
    [string]$ServerInstance,
    [Parameter(Mandatory=$true)]
    [ValidateNotNullorEmpty()]
    [string]$Database,
    [Parameter(Mandatory=$true)]
    [ValidateNotNullorEmpty()]
    [string]$Schema,
    #Database Object Name
    [Parameter(Mandatory=$true)]
    [ValidateNotNullorEmpty()]
    [string]$Name,
    [Parameter(Mandatory=$true)]
    [ValidateNotNullorEmpty()]
    [string]$Path
    )
 
    $server = new-object Microsoft.SqlServer.Management.Smo.Server($ServerInstance)
    $db = $server.Databases[$Database]
 
    #Create a UrnCollection. URNs are used by SMO as unique identifiers of objects. You can think of URNs like primary keys
    #The URN format is similar to XPath
    $urns = new-object Microsoft.SqlServer.Management.Smo.UrnCollection
 
    #Get a list of database object which match the schema and object name specified
    #New up an URN object and add the URN to the urns collection
    $db.enumobjects() | where {$_.schema -eq $Schema -and  $_.name -eq $Name } |
        foreach {$urn = new-object Microsoft.SqlServer.Management.Sdk.Sfc.Urn($_.Urn);
                 $urns.Add($urn) }
 
    if ($urns.Count -gt 0) {
 
        #Create a scripter object with a connection to the server object created above
        $scripter = new-object Microsoft.SqlServer.Management.Smo.Scripter($server)
 
        #Set some scripting option properties
        $scripter.options.ScriptBatchTerminator = $true
        $scripter.options.FileName = "$Path\BEFORE_$Schema.$Name.sql"
        $scripter.options.ToFileOnly = $true
        $scripter.options.Permissions = $true
        $scripter.options.DriAll = $true
        $scripter.options.Triggers = $true
        $scripter.options.Indexes = $true
        $scripter.Options.IncludeHeaders = $true
 
        #Script the collection of URNs
        $scripter.Script($urns)
 
    }
    else {
        write-warning "Object $Schema.$Name Not Found!"
    }
 
} #Backup-DatabaseObject

And here’s example of sourcing and calling the function:

. ./Backup-DatabaseObject.ps1
Backup-DatabaseObject -ServerInstance Z002 -Database AdventureWorks -Schema HumanResources -Name vEmployee -Path "C:\Users\Public"

I’ve posted the code on PoshCode also.
