Treat the Database like Code

  • Evil Kraig F (4/6/2012)


    I am all for the concept of versioning. However, after reading through this, I find no one's addressed a few key issues that I have with traditional software versioning. If I missed it, I'm sorry...

    In traditional software, if you need to go back to a different version, you simply build that version (label, whatever you want to call it) and drop it into the install. Poof, you're good to go.

    In databases you need to rollback your production structure so you don't have data loss. You can't simply rebuild the database and port the data, at least not on large systems. Even then, if you had multiple minute schema changes, you're going to be adjusting for each and every one of them.

    Thus, you need to include roll-forward/rollback scripts with every label/version. This lends itself heavily to the delta model, particularly since one of the concepts is not just documentation, but reversal of rollouts that failed worse than the bug they tried to fix.
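
    For illustration, a minimal roll-forward/rollback pair for one versioned change (table, column, and sizes are all hypothetical); note the rollback refuses to run if it would truncate data:

        -- v1.4 roll-forward: widen a column
        ALTER TABLE dbo.Orders ALTER COLUMN CustomerRef NVARCHAR(50) NOT NULL;

        -- v1.4 rollback: restore the old definition only if no data would be lost
        IF NOT EXISTS (SELECT 1 FROM dbo.Orders WHERE LEN(CustomerRef) > 20)
            ALTER TABLE dbo.Orders ALTER COLUMN CustomerRef NVARCHAR(20) NOT NULL;
        ELSE
            RAISERROR('Rollback blocked: data exceeds the old column width.', 16, 1);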

    Now, in theory, you could use the creation model, build off a model and do a SQL Compare or similar to build your rollback script, thus saving yourself that headache. However, getting your company to spring for the tools you need can be a royal pain some places.

    The other model I've seen is a continuous integration model... For databases, the less said about this the better. Once you've altered an index, it gets altered/rebuilt every time that component goes out in the next build/release.
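
    A guard like the following sketch (hypothetical index and table names) makes the script idempotent, so a re-run leaves an existing index alone instead of rebuilding it:

        -- Create the index only if it is missing; a second run is a no-op.
        IF NOT EXISTS (SELECT 1 FROM sys.indexes
                       WHERE name = N'IX_Orders_CustomerRef'
                         AND object_id = OBJECT_ID(N'dbo.Orders'))
            CREATE INDEX IX_Orders_CustomerRef ON dbo.Orders (CustomerRef);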

    So between the three, Delta/CI/Creationist, I usually prefer Delta. It lends more freedom to independent component rollouts, but you must document any dependencies both consistently and heavily.

    You have to have tools to make this happen. I suppose you could build your own, but as you say, it's a tough problem. The two tools I've done this with, Red Gate & Data Dude, work off a sort of combined model. They each create delta scripts, just in slightly different ways.

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... - Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Regarding the issue of how to implement a rollback when deploying DDL/DML scripts, I'd prefer to deploy major changes (altering or bulk updating large tables) at an off hour with the database in RESTRICTED_USER mode, when that is an option. If a script aborts or the changes fail a smoke test, then a rollback can be performed by a point-in-time restore.

    If the database must be available to users 99.99% of the time, then that's practically not an option. However, I think we would find that allowing database changes to be deployed using the method above would actually result in higher net availability, because it would speed the recovery of a failed deployment.

    In other words, if we don't allow for dedicated deployment windows, we may actually increase the chance that the database could become unavailable for an extended period of time.
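
    A rough sketch of such a deployment window (database name, paths, and times are hypothetical; the point-in-time restore assumes full or bulk-logged recovery and an intact log chain):

        -- Mark a restore point, then lock out regular users for the deployment.
        BACKUP LOG SalesDB TO DISK = N'D:\Backups\SalesDB_predeploy.trn';
        ALTER DATABASE SalesDB SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;

        -- ... run the DDL/DML scripts and smoke tests here ...

        -- Success: reopen the database to all users.
        ALTER DATABASE SalesDB SET MULTI_USER;

        -- Failure: restore to just before the deployment instead.
        -- RESTORE DATABASE SalesDB FROM DISK = N'D:\Backups\SalesDB_full.bak' WITH NORECOVERY;
        -- RESTORE LOG SalesDB FROM DISK = N'D:\Backups\SalesDB_predeploy.trn'
        --     WITH STOPAT = '2012-04-14T01:00:00', RECOVERY;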

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Grant Fritchey (4/5/2012)


    call.copse (4/5/2012)


    This is one area I do find hard to make easy. I do agree it absolutely has to be done, but getting a team of people to use a consistent way of doing it is not easy. Still, we seem to muddle through.

    Anyone got any really foolproof ways to implement this? We would be up for using paid tools if they can help with effective organisation. We tend to have a dev server from which changes are rolled out to test and then to production, pretty standard stuff. Anyone in favour of

    A) the 'delta capture' approach or would everyone go with

    B) record all changes as scripts in the first place

    or perhaps B) with A) as a check?

    The main problem is multiple people recording multiple changes which then need to be rolled up into one change script. Some features then may not make the cut, or be delayed, meaning more slicing and dicing. Should everyone work with one master script (for each version) to be checked in and out as needed?

    Excellent questions. I laid out a lot of my approaches over the years in the book SQL Server Team-Based Development (the e-book is a free download). I wrote it a few years ago, though, and the tools, and therefore the processes, have been refined somewhat since then. It uses Microsoft and Red Gate tools (the same tools I'd recommend now). However, it does lay down a framework for working with either a shared development environment or multiple individual environments. I prefer the latter approach now (I used to prefer the former).

    I strongly recommend using B. All database development is driven through source control, just like app dev. That way, you can more easily support branching in order to have multiple development streams: one for bug fixes to the current release, one for the next release, and one for the release after that (and frequently one or more for an integration release with other apps). That's the toughest problem to fix from a single shared development environment. In fact, I've never been able to support branches from a single shared environment. I don't think it's possible.

    If you have to go with A, you're going to have a harder time identifying a specific point in time in the code, and that's one of the best parts of using source control.
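
    One common way to make B workable (a sketch, not necessarily Grant's exact approach; all names hypothetical) is to have every checked-in change script record itself in a version table, so a deployment can tell which scripts have already run:

        -- Version-tracking table, created once per database.
        CREATE TABLE dbo.SchemaVersion (
            ScriptName NVARCHAR(128) NOT NULL PRIMARY KEY,
            AppliedAt  DATETIME2     NOT NULL DEFAULT SYSDATETIME()
        );

        -- Guard inside each change script, e.g. 0042_add_customer_ref.sql:
        IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion
                       WHERE ScriptName = N'0042_add_customer_ref')
        BEGIN
            ALTER TABLE dbo.Orders ADD CustomerRef NVARCHAR(20) NULL;
            INSERT dbo.SchemaVersion (ScriptName) VALUES (N'0042_add_customer_ref');
        END;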

  • call.copse (4/12/2012)


    Thanks Grant. I'll have a look through your ebook and see where I get to. I guess the tricky bit (from my perspective) is getting the branching right when you have a script for features A, B and C and B needs urgent roll out but A and C are shelved possibly short term, possibly long term. You know, does B rely on bits done for A or C, that kind of thing. It's an ongoing battle with no right answer!

    No, there are a lot of ways to get this done, you're right. But the reason I say to get everything into source control is because, if you branch, you'll have three copies of the code in this example. If you deploy the B copy and it doesn't work right because it needs bits from A or C, you can deploy and test B to identify the issue. If you're just working from sets of differentials, the gods only know what you should be deploying.

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... - Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software
