
Continuous Integration for Databases with Visual Studio

Provides a framework for predictably compiling, extracting, and deploying a database project.

Introduction

A database is a key component of almost any modern commercial application, yet as a general rule, the industry has neglected database development tooling. The utilities that do exist are generally aimed at DBAs rather than developers, and almost all require some level of manual intervention.

This project provides a mechanism for automatically generating a predictable, repeatable, and professional approach to releasing SQL code from a development environment into production. While the code does not explicitly require Visual Studio Database Edition to work, I strongly recommend it, since it is the first IDE that provides real-time syntax checking of SQL code - validating the integrity of your code and its object references (did you just change a column name?) - as well as static code analysis to highlight those oft-forgotten mistakes (and much, much more ... why Microsoft failed to include much of the functionality that I have added is beyond me).

Some of the issues that I have addressed are:

  1. Predictability: With most existing environments, the scripting of database objects is largely a manual process. Even with tools like the Schema Compare in Visual Studio Database Edition or those provided by RedGate, most developers still find themselves manually scripting existence checks for objects (dropping procedures, checking whether tables exist, interrogating sys.indexes). That process is error prone - it is all too easy to cut and paste the existence check for Object A and then drop Object B. Here, the existence check is built into the script automatically, so deployments won't fail because a table already exists (see the sketch after this list).
  2. Security: The scripting of security has often been a problem in large organizations. I have abstracted all database-level security to database roles (as it is intended to be!), and left the actual user permissions and access up to the DBA, who should rightly have full control of the system and not have to worry about developers accidentally injecting development-environment permissions into the code. Additionally, for developers of large databases, you do not have to manually script each permission for each object - I have provided a mechanism for generating this at build time, so if you forget to add it for that single Stored Procedure, the system will not break when you roll out.
  3. Repeatability: DBAs are human too, and errors occur during a rollout. Connections get dropped, the power goes off, the DBA pastes the content across a remote session and it gets truncated - you name it, it can happen. It must be possible to simply pick up the process from where it stopped, and a repeatable script that checks for existence on every object before creating or altering it provides exactly that.
  4. Flexibility: Other utilities that I have used generate a single, large script. I have always found this inflexible, especially when dealing with a rollout that impacts multiple databases, or (heaven forbid) when there are circular dependencies between databases. Here, the deployment scripts are separated by type (e.g., procedures, tables, etc.), and their execution can be sequenced any way you desire, since you can inject a custom script at any point in the process.
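
To make the predictability and repeatability points concrete, here is a minimal sketch of the guard pattern that the extract builds around each object. It is illustrative only - the table is from the sample project with its columns abbreviated, and the exact text the tool emits may differ - but the idea is that every CREATE is guarded, so the script can be re-run safely:

SQL
-- Illustrative sketch: every CREATE is wrapped in an existence check,
-- so re-running the deployment script never fails on an existing object.
IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
               WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'TableOne')
BEGIN
    CREATE TABLE [dbo].[TableOne] (
        [Col1] INT IDENTITY (1, 1) NOT NULL,
        [Col2] INT NOT NULL
        -- ... remaining columns ...
    );
END
GO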

Setting Up the Sample Project

This is pretty straightforward. Download the source, unzip and then open the SampleDB.dbproj file in the Example VSDB Project folder. Ensure that the following lines in the SampleDB.targets file point to a valid SQL Server instance:

  • Line 45: SqlInstance=".\SQL2005"
  • Line 69: SqlInstance = ".\SQL2005"
  • Line 76: ExtractFromServer = ".\SQL2005" (this only needs to be changed if you need to extract reference data from a static database; the default is set to off)

Detailed comments are provided in the SampleDB.targets file, and in the CustomDeployment\ScriptExecutorConfiguration.xml file.

When ready, open up a Visual Studio Command Prompt in the folder where the SampleDB.dbproj file is located, and run the following command line statement to extract your schema:

MSBuild SampleDB.dbproj /Target:TheExtract

Should you wish to deploy the database as part of the build process, change the target to /Target:DeployDatabase instead. Please note that the sample project is a trivial example, but most of the types of objects you will use are catered for.

The Targets File

The targets file effectively controls the build process. Here is a detailed explanation of the targets file supplied with the sample project:

XML
<?xml version="1.0" encoding="Windows-1252"?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- An example of how to execute this from the command line would be:
       MSBuild SampleDB.dbproj /target:DeployDatabase
       Since DeployDatabase depends on TheExtract, both targets will be executed.
  -->
  <UsingTask TaskName="DeployDatabase" AssemblyFile="lib\SqlDeployment.Build.dll" />
  <UsingTask TaskName="SQLExtract" AssemblyFile="lib\SqlDeployment.Build.dll" />
  <UsingTask TaskName="SqlScriptExecutor" AssemblyFile="lib\SqlDeployment.Build.dll" />
  <!--
    You probably want to reference a common drop point on a build server, 
	instead of a path relative reference
  -->

  <!--
    This item is an enumeration of all "custom" files in the build 
    from the dbpro project that should be included in the build output. 
    Note that to include multiple files, simply use a semi-colon delimiter.
    The ScriptExecutorConfiguration.xml file is the config used by the 
    database deployment task - it is a serialized dictionary
    that is sorted to execute in sequence.
  -->
  <ItemGroup>
    <CustomDeployment
        Include="$(MSBuildProjectDirectory)\Scripts\CustomDeployment\*.sql;$(MSBuildProjectDirectory)\Scripts\CustomDeployment\ScriptExecutorConfiguration.xml"
        Exclude="*.sqlfile;*.sqlpermissions"
        />

    <!--
      This item is an enumeration of ALL project files in the 
      build from the dbpro project that should be included in the build
      Note that *.sql includes files such as .sqlfile and .sqlpermissions, 
      so we exclude them if they exist.
    -->
    <SchemaScripts
      Include="@(Build->'%(FullPath)')"
      Exclude="*.sqlfile;*.sqlpermissions"
        />
  </ItemGroup>
  <!--
      SqlScriptExecutor - execute any ddl against a target database. 
      This one is used to restore a target image prior to build execution
        SqlInstance - required. The SQLInstance against which the scripts will be executed
        SqlUsername - The sql username to use when connecting to the instance. 
			Leave blank to use windows authentication.
        SqlPassword - The password to use with the build account
        DatabaseName - required. The database name against which the scripts 
			will be executed
        ConfigPath - required. The path where the config file for the 
			.sql scripts is to be found.
    -->
  <Target Name="RestoreDatabase">
    <SqlScriptExecutor
      SqlInstance=".\SQL2005"
      SqlUsername = "BuildServiceUser"
      SqlPassword = "BuildServicePassword"
      DatabaseName="master"
      ConfigPath="RestoreBuildScript\RestoreConfiguration.xml"
    />
  </Target>

  <!--
      Database deployment - test a deployment against your daily refresh from production
        SqlInstance - required. The SQLInstance to which the scripts will be deployed
        DatabaseName - required. The database name to which the scripts will be deployed
        SqlUsername - The sql username to use when connecting to the instance. 
			Leave blank to use windows authentication.
        SqlPassword - The password to use with the build account
        ConfigPath - required. The path where the config file for the 
			extracted .sql scripts is to be found.
        OutputCompoundFile - indicates whether or not a single output file 
				should be generated
        OutputCompoundFilePath  - the path of the output file
        PrependUsingDatabaseName  - if a single output file is generated and 
				this is set to "1", a using statement will be 
				placed at the head of the output file, 
				using the name of the database against 
				which the deployment was performed
    -->
  <Target Name="DeployDatabase" DependsOnTargets="TheExtract">
    <DeployDatabase
      SqlInstance=".\SQL2005"
      SqlUsername = "BuildServiceUser"
      SqlPassword = "BuildServicePassword"
      DatabaseName="SampleDB"
      ConfigPath="Build\ScriptExecutorConfiguration.xml"
      OutputCompoundFile="1"
      OutputCompoundFilePath="Build\SampleDB.sql"
      PrependUsingDatabaseName="1"
    />
  </Target>

  <!--
      SQL Extract - predictably extracts .sql files from a dbproject
        SqlInstance - required. The SQLInstance against which the scripts 
			will be tested to assist in determining dependencies
        SqlUsername - The sql username to use when connecting to the instance. 
			Leave blank to use windows authentication.
        SqlPassword - The password to use with the build account
        ProjectGUID - required. The GUID of the db project. 
			Will be included in the test database build name.
        BuildPath - required. The path where the output .sql files will be deployed.
        SchemaScripts - required. The path (and files) for extraction
        StaticFiles - required. The path where the custom files reside.
        SplitDropToSeperateFile - required. Indicates whether or not multiple 
		"object typed" files are created. Currently only the value of 
		"1" is supported.
        ExtractData - Indicates whether or not reference data should be extracted 
			from a source database
        ExtractFromServer - The name of the server from which the source data 
			should be extracted
        ExtractUsername - The sql username to use when connecting to the 
			instance for extracting reference data. 
			Leave blank to use windows authentication.
        ExtractPassword - The password to use with the build account
        ExtractFromDatabase - The name of the database from which the 
				source data should be extracted
        ExtractTargetFile - The file where the extracted data should be saved
        ExtractCommandsFile - The SQL Commands (spGenerateMerge) used for the extract
        ScriptPermissions - Indicates whether or not to script permissions 
				for this database
        PermissionsTemplate - The template - (a serialized "Permissions Map" class - 
				to be used for permission generation
        PermissionsTargetFile - The target file where the scripted permissions 
				will be written
        AlterIfExists - If deploying against a known baseline, this will load up 
			the provider defined in ExistenceCheckerProvider and 
			for StoredProcedures, Functions and Views will NOT 
			generate a drop statement, but will only generate an ALTER.
        ExistenceCheckerProvider - The provider to use to check if an object exists. 
			Can use a file manifest 
			(AdamNachman.Build.SqlExtract.ManifestExistenceChecker, 
			AdamNachman.Build.SqlExtract) - a list of objects in a 
			single file, or can check the schema of an existing 
			database (AdamNachman.Build.SqlExtract.SchemaExistenceChecker, 
			AdamNachman.Build.SqlExtract).
                         	If in file, the format is [schema].[objectname], 
			with one item per line
        ExistenceCheckerConfiguration - The configuration 
				for the ExistenceCheckerProvider - 
				the ManifestExistenceChecker expects a file path, 
				the SchemaExistenceChecker a connection string. 
				If a connection string, ensure that the semi-colons 
				are replaced with %3B
        TreatWarningsAsError - Indicates that warnings should be treated as errors 
				and fail the build process
        ExtractAssemblies - An indicator that CLR assemblies should be extracted
        ExtractAssembliesConfig - The configuration for the assemblies to be extracted
    -->
  <Target Name="TheExtract">
    <SQLExtract
      SqlInstance               = ".\SQL2005"
      SqlUsername               = "BuildServiceUser"
      SqlPassword               = "BuildServicePassword"
      ProjectGUID               = "$(ProjectGuid)"
      BuildPath                 = "Build"
      SchemaScripts             = "@(Build->'%(FullPath)')"
      StaticFiles               = "@(CustomDeployment)"
      SplitDropToSeperateFile   = "1"
      ExtractData               = "1"
      ExtractFromServer         = ".\SQL2005"
      ExtractUsername           = "BuildServiceUser"
      ExtractPassword           = "BuildServicePassword"
      ExtractFromDatabase       = "SampleDBSource"
      ExtractTargetFile         = "$(MSBuildProjectDirectory)\Scripts\CustomDeployment\LoadReferenceData.sql"
      ExtractCommandsFile       = "$(MSBuildProjectDirectory)\ExtractCommands.txt"
      ScriptPermissions         = "1"
      PermissionsTemplate       = "$(MSBuildProjectDirectory)\PermissionsTemplate.xml"
      PermissionsTargetFile     = "$(MSBuildProjectDirectory)\Scripts\CustomDeployment\RolePermissions.sql"
      AlterIfExists             = "0"
      ExistenceCheckerProvider     = "AdamNachman.Build.SqlExtract.SchemaExistenceChecker, AdamNachman.Build.SqlExtract"
      ExistenceCheckerConfiguration  = "Server=.\DEV2005%3BDatabase=mySourceDatabase%3BTrusted_Connection=yes%3B"
      TreatWarningsAsError      = "1"
      ExtractAssemblies         = "1"
      ExtractAssembliesConfig   = "$(MSBuildProjectDirectory)\ExtractAssemblies.xml"

    />
  </Target>

</Project>

The targets model is flexible, so you could extract and publish in a single step by changing the targets to MSBuild SampleDB.dbproj /Target:TheExtract,DeployDatabase. My targets pattern is /Target:Build,TheExtract,RestoreDatabase,DeployToRestoredImage,PublishToShared,Checkin, where DeployToRestoredImage executes the newly extracted script against the restored backup image from production, PublishToShared updates the shared development environment (both of these are simply copies of DeployDatabase with different parameters) and Checkin adds the scripts to the source repository. Each step is only executed if the previous one is successful.

Using the Assemblies in your Project

Essentially, the entire process is controlled from within the MSBuild targets file. For those readers who are unfamiliar with MSBuild, you can think of the targets file as a serialized repository for a class representing the configuration and execution sequence of a series of tasks. In the example project that I have provided, I have already edited the .dbproj file to include the custom MSBuild targets.

The overall process is divided up into the following steps:

  • Code extraction (TheExtract)
    1. Parse the project files, creating a class in memory per schema object
    2. Create a local, test database
    3. Execute the DDL against the local database, checking for dependencies to allow us to sequence the content correctly and eliminate warnings when we deploy
    4. Write the output .sql files to disk in the build folder
    5. Write the static files (custom deployment files) to the build folder
  • Deploying the database (DeployDatabase)
    1. Validate the database connection
    2. Read and sort ScriptExecutorConfiguration.xml
    3. Execute the scripts in sequence, reading them from the build path

Note that I have taken the decision to always drop and replace all Stored Procedures, views (indexed and otherwise), and functions with each deployment. This automates the recompile, and ensures that no incorrectly cached plans will be used. At the same time, it guarantees that the latest version of the code will be deployed into the production environment. Some may argue that this introduces the risk that if a change has been made in production, it will be overwritten and lost for all eternity. My argument is that if it doesn't exist in the source repository with the project in the first place, it doesn't exist anyway, so there is nothing to lose ... :).
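
As a minimal sketch of that pattern (the procedure name and body here are hypothetical, and the generated wrapper text may differ):

SQL
-- Illustrative sketch of the drop-and-recreate pattern applied to every
-- stored procedure, view, and function on each deployment.
IF EXISTS (SELECT * FROM sys.objects
           WHERE object_id = OBJECT_ID(N'[dbo].[GetTableOneRow]') AND type = 'P')
    DROP PROCEDURE [dbo].[GetTableOneRow];
GO
CREATE PROCEDURE [dbo].[GetTableOneRow]
    @Col1 INT
AS
BEGIN
    SELECT Col1, Col2, Col3, Col4 FROM dbo.TableOne WHERE Col1 = @Col1;
END
GO

Because the procedure is recreated on every rollout, a stale cached plan can never survive a deployment.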

In a continuous integration environment, ideally, the target database for the deploy database step would be a daily refresh of a production image. This way, you will know daily whether or not you have broken something, and be able to address it early on in your project. Obviously, in an Enterprise environment, a backup of TBs of live data is impractical - here, you would rely on your DBAs to assist by providing enough sample data and the correct production schema. Also, given the fact that your project would be stored in a source repository (TFS, Subversion, etc.), any on-the-fly production changes that the DBAs make can be applied directly to the project files.

Releasing your Database Project with Updates

Adding New Schema

Assuming that the new schema objects can be added without modifying existing objects (for example, adding a new table, or new stored procedure), simply add the new object to the database project and let the extract process do the rest. As long as the file (e.g. TableFive.table.sql) is included in the project (and marked for BUILD in the file properties), it will automatically be added to the extracted script files, and wrapped with an existence check script.

Updating Schema

In the event that you would like to update existing schema, there are a number of options available to you. You can always add a custom script to the CustomDeployment folder and add a new sequence item in the ScriptExecutorConfiguration.xml configuration file. However, the AlterTables.sql file is already included in the sample project (and referenced in the default ScriptExecutorConfiguration.xml), and can easily be used to extend your project. Let's assume you have a table TableOne, and you want to add Col5. The steps you would follow are:

  1. Change the Create Table statement in the TableOne.table.sql to include the new column.
    SQL
    CREATE TABLE [dbo].[TableOne] (
        [Col1] INT           IDENTITY (1, 1) NOT NULL,
        [Col2] INT           NOT NULL,
        [Col3] NVARCHAR (50) NOT NULL,
        [Col4] VARCHAR (50)  NOT NULL,
        [Col5] INT           NOT NULL
    );

    This will ensure that the column is included for any new databases.

  2. Add the default constraint file (new file DF_TableOne_Col5.defconst.sql) for the new column:
    SQL
    ALTER TABLE [dbo].[TableOne]
        ADD CONSTRAINT [DF_TableOne_Col5] DEFAULT ((0)) FOR [Col5];

    When the extract utility parses the script, it will create the existence check for this constraint automatically, so the script will be repeatable.

  3. Add the code to check for the existence of the column to the \Scripts\CustomDeployment\AlterTables.sql, like this:
    SQL
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
               WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'TableOne')
    BEGIN
      IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS
                     WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'TableOne'
                       AND COLUMN_NAME = 'Col5')
      BEGIN
        ALTER TABLE dbo.TableOne
          ADD [Col5] INT CONSTRAINT [DF_TableOne_Col5] DEFAULT (0) NOT NULL;
      END
    END
    GO
  4. If you have a look in the source, there is a project called "DatabaseDeployment". This is a quick example of how you could write an application to process the generated script files and deploy them to a target database. In this manner, you could use the build process to generate the scripts, and then execute them at your customer's site. Note that there is also an OutputCompoundFile flag on the DeployDatabase target. This instructs the automatic deployment task to generate a single output file, appended in the correct sequence, which you could deploy if you prefer not to send multiple files - I've worked with a few DBAs who will only run updates as a single script file, and won't deploy using anything other than Management Studio to run the scripts.

Extracting Static Data from a Source Database

Create a "source" database somewhere on disk, and use that to populate any other "default" reference data (lookup tables, etc). Change the ExtractData flag to "1", ensure that ExtractFromServer and ExtractFromDatabase are correct.

The correct command to extract data from TableOne is already in the ExtractCommands.txt file by default; if you want to add another reference table, simply add another line. Note that setting the merge flag to 1 instructs the extract to insert rows that do not exist and update those that are already in place; to handle inserts only, set @merge to 0 (see the sketch below).
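
For clarity, this is the insert-or-update shape implied by the merge flag. The table and values here are hypothetical, and the actual statements emitted by spGenerateMerge may be formatted differently:

SQL
-- Illustrative sketch of the behaviour controlled by @merge:
-- with @merge = 1, existing rows are updated and missing rows are inserted;
-- with @merge = 0, rows are only ever inserted, never updated.
IF EXISTS (SELECT 1 FROM dbo.StatusLookup WHERE StatusId = 1)
    UPDATE dbo.StatusLookup SET [Description] = 'Active' WHERE StatusId = 1
ELSE
    INSERT INTO dbo.StatusLookup (StatusId, [Description]) VALUES (1, 'Active');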

Novice (and even intermediate and advanced) users should not need to alter the extract and deployment code to fit their requirements. The possible exception is a new DDL type not covered by the code (or one that I have not seen fit to cover myself). By and large, the existing framework should cover 99% of your requirements.

The bottom line, however, is that the code will not automatically generate a set of "delta" scripts. But because of the way the output is generated, IF you code your alter statements carefully, you will be able to use the same script at new AND old customers, and upgrade pretty much ANY previous version of the schema. The default sequencing of the script files (configured in Scripts\CustomDeployment\ScriptExecutorConfiguration.xml) will ensure that the upgrade is performed correctly. Note that the sequence is open ended, so there is nothing to stop you from inserting your own custom scripts at any point. Just ensure that the item key in the XML is unique and in the sequence position you desire (the deployment assembly orders items by key number, just in case you insert one in the wrong place in the file), and you can extend it as much as you like.

The following items are explicitly not covered by this project:

  1. Files. While file groups are catered for, physical files are ignored. Currently, I view this as a DBA responsibility and not a development one, and file groups will be created without a physical file. The exception is the primary file on a new database (i.e., the Deploy simply creates a new database in the default path). This also applies to files for full-text indexes. However, there is nothing stopping you from modifying the code to allow this.
  2. SQL 2008 specific syntax. While I am sure that the vast majority of the SQL sources will be parseable, I am hesitant to say this is 2008 compliant, since I haven't tested it with any of the new types (spatial objects, etc.). However, since the actual content is intended to be parsed and compiled by Visual Studio, and my code simply applies a regex to the file header to determine the type, it is unlikely to be too much of a problem.
  3. Certificates, and encryption keys. Again, I see this as a DBA function (hell, they have to do something, right?). If there is a real need out there, I am happy to extend the system as needed.
  4. User defined types, and XML schema.

Items that I've considered adding, but never got around to:

  1. Finishing the SQL CLR process (mentioned above)
  2. Compiling the output into a resource assembly instead of simple text files, and then using this to "build version" a database

I've taken an agile approach to the development of this library, and while I don't believe that it's finished yet (will it ever be?), I think that it's time to get it into the public domain and be used by other developers. Functionality can always be added when it's needed.

If you run into trouble, post a response and I'll be happy to assist. If you would like to contribute, please do so - this is intended to benefit the community as a whole.

What If I Don't Use Visual Studio?

There are plenty of people who don't use Visual Studio, for a variety of reasons ranging from "disliking Microsoft" to "using free open source products". These are all valid. It must be noted that the extraction and deployment utilities are NOT bound to Visual Studio - only the MSBuild tasks are. This means that there is nothing stopping you from writing a wrapper that passes all the relevant parameters into these assemblies (e.g., a custom task for CruiseControl.NET, or simply an executable that passes in parameters pointing to .sql files on disk).

To assist with this, let me explain the structure of the existing assemblies (all the C# source code is provided):

Interfaces

An assembly containing the common interfaces used across projects. Currently, just the ILogger interface.

Utilities

An assembly containing the SerializableDictionary - the base object used by the ScriptExecutorConfiguration class to store a dictionary of KeyValuePairs in a human-readable/editable XML format on disk.

SqlExtract

The assembly containing the logic to parse the .sql files on disk. This is the core utility that generates the .sql scripts.

ScriptExecutor

The assembly that executes the SQL files, using the ScriptExecutorConfiguration.xml file as its input.

Build

The assemblies that are called from within MSBuild. If you look closely at the build tasks, you will see that they inherit from the MSBuild Task class, e.g.:

C#
public class SQLExtract : Microsoft.Build.Utilities.Task

The properties map directly to the attributes for the task in the XML, e.g.

C#
public Microsoft.Build.Framework.ITaskItem ExtractData

The build tasks simply wrap the underlying SQLExtract and ScriptExecutor assemblies, passing on those properties when the Execute method is called. So, for example, if you didn't use Visual Studio and simply stored your own .sql files on disk, you could write some code that calls the SQLExtract class directly, setting all the public properties yourself and passing it a List<string> of the .sql files. Alternatively, you could write a CruiseControl.NET task that does the same, or simply call the existing MSBuild targets from TeamCity, TFS Build, or the command line - it's all up to you.

DatabaseDeployment

As previously explained, a sample application that executes the script files on disk in sequence. You could use this for deploying your databases in a production environment. Ideally suited to customers with little or no SQL skills, where you can control the deployment with your own application.

Supported Database Objects

  • Tables
  • Views
  • Indexed Views
  • Functions
  • Filegroups
  • Constraints
  • Indexes (Clustered and Non clustered)
  • Service Broker Queues
  • Contracts
  • Messages
  • Services
  • Routes
  • Fulltext Catalogs
  • Fulltext Indexes
  • Partition Functions
  • Partition Scheme
  • Stored Procedures
  • Database Roles
  • Schema
  • Triggers
  • Assemblies
  • There may be more that I've inadvertently catered for ... if you have any specific requirement, feel free to let me know.

For the record, I am currently using this on more than two dozen disparate database projects, spanning multiple servers with databases ranging in size from a couple of hundred MB to over 5 TB, all integrated into a build server running TeamCity from JetBrains. The key here is consistency. With a predictable and reliable build process, instant (or near instant) failure notifications, and the involvement of the production DBAs in the design phase to minimize risk and encourage communication, we have successfully rolled out change after change in a highly volatile OLTP environment with a very unforgiving user base.

History

Version 1.3

  • Added support for Partition Schemes and functions
  • Added new wrapper for ScriptExecutor to allow for separate task option (sample uses it for restoring a database baseline)
  • Added support for checking for existing schema on extract (from file or db), and generating ALTER statements for functions, stored procedures and views
    Note: This was added because the existing model, while effective, fell down in a 24/7 environment: dropping and recreating objects, and assigning permissions at the end of the script, would cause errors on a system handling thousands of messages per second where significant downtime is not an option. Generating ALTER statements instead allows for a more targeted deployment. To regenerate from scratch, simply set the value of this config option to "0".
  • Added support for SQL authentication as well as Windows Authentication
  • Corrected bug in spGenerateMergeData procedure whereby data > 8000 characters was truncated on the extract
  • Corrected bug in the extract where the generated reference files were copied to the target output folder BEFORE they were regenerated.

Version 1.4

  • Corrected errors on extract of indexes for secondary schema
  • Added option to treat warnings as errors, which allows builds to fail on missing references
  • On request, added schema as an option to the permissions map class, allowing for targeting schema with the permissions generator
  • Migrated to VS 2010
  • Extended example extract command to show use of more complex extract commands
  • Note: With VS 2010, Microsoft's internal compiler now catches most issues with regards to missing variables (including correctly failing on case sensitive collation). Two outstanding issues that I know about include failing to detect an incorrect reference to the column of a table variable, and a call passing n-1 arguments to a stored procedure requiring n arguments is also parsed incorrectly (i.e. it succeeds when it should fail).

Version 1.5

  • On request, I added support for extracting the bitstream from an assembly and updating the assembly SQL file. This allows you to reference external assemblies and automatically extract the source and update your project file (and subsequent deployment).
    Note: I am not automatically handling any dependencies on the assemblies, auto-creating any functions or procedures, or automatically generating an alter statement. This is because I simply don't want to go to the effort of interrogating the interface of the assembly to ensure that it is backward compatible so that SQL rebinds safely - this is something that should come out during testing anyway. If you have an updated assembly, I suggest that you explicitly handle the dropping of the dependent functions/procedures/constraints in a "pre-execution" script yourself. You can simply create your own custom script, add it to the ScriptExecutor configuration file, and give it a low sequence number to force it ahead of the other statements.
  • Added a sample table valued function that uses SqlClr to parse a comma delimited list of integers and spit out a table variable. Wrapped with a T-SQL function called GetIds, and can be used to efficiently parse a delimited input list and allow you to join to it in a stored procedure. This will outperform pretty much any T-SQL parsing implementation, in terms of CPU utilization and memory. Note that I haven't even tried to optimize it - there are faster ones out there. This is just a simple example of efficient use of CLR to perform a task that T-SQL can do, but does poorly.
  • On request, added an option to the deploy target to prepend a using statement to the compound output file, especially to cater to DBAs that forget to select the correct database before execution ... *sigh*
  • Added 70+ unit tests to the parser to prevent a breaking change like my last update from occurring.
  • Improved overall performance of the parser (still using regex instead of a lexer, but since it works well as it is there is no reason to change).
  • Compiled for .NET 4.0
  • Corrected a bug in the extract script that resulted in the exclusion columns list being ignored
  • Build 1.5.3963
    • Corrected a bug introduced on extract from build source in existence checker
    • Corrected a bug executing during the deployment phase
    • Corrected "use" statement when prepending to single file extract
    • Added Visual Studio template - simply copy (DO NOT EXTRACT) the template zip file to My Documents\Visual Studio 2010\Templates\ProjectTemplates

Points of Interest

It's actually quite astounding to me just how many professionals seem to be blissfully unaware of the Database Edition of Visual Studio. Additionally, Microsoft has (retrospectively) made it free with the Developer Edition, since it is included in VS 2010, so the cost is no longer prohibitive for those in the know but without the extra cash. Simply install SP1 on top of Developer Edition, and then the GDR2 from Microsoft.

License

This article, along with any associated source code and files, is licensed under The Apache License, Version 2.0


Written By
Chief Technology Officer
Australia
Emigrated to Sydney, Australia in 2013 from Cape Town, South Africa, and have been writing commercial software since 1997.

Expertise includes MS SQL Server (7 till latest), C#, VB6, VB.NET, VBScript, JavaScript, ASP, HTML, WPF, Angular, Windows Installer and InstallShield (multiple versions), and a partridge in a pear tree. MSBuild, CruiseControl.NET, TFS, Jenkins, TeamCity, ant and nant are all necessary sidelines. Have tinkered with Java and C++.

Experienced with Enterprise level application design and deployment, as well as sizing and scaling high volume OLTP database designs up to tens of thousands of transactions per second and diagnosing application and database performance bottlenecks.
