What is the True Version of Code?

  • mister.magoo (2/3/2016)


    Orlando Colamatteo (2/3/2016)


    By the very nature of how a compiled application's deployment model works, the developer or admin is compelled to go to a VCS where the plain-text class files and Visual Studio (VS) Solution and Project files reside, open the Solution/Project in VS, make the code change, and then, after steps like unit testing, checking code in, verifying the complete build still works, having the code reviewed, etc., build and re-deploy the application to the web server to get that change into play. All of these steps practically write themselves as a byproduct of the code build-and-deploy model. With databases we can just skip all of that, and for those comfortable doing so, signing up for placing a VCS first (always) is a tough sell and will naturally be met with some opposition.

    This build-and-deploy model is actually what SQL Server Data Tools (SSDT) Database Projects implement, and it is the paradigm shift that lends itself to placing a VCS first in the lifecycle. It is not a coincidence that SSDT Database Projects are implemented in Visual Studio. Have you heard some of the vitriolic comments about SSDT Database Projects? It is a threat to a lot of folks because it takes power away from that database professional sitting at the top of the food chain in a lot of shops and shifts that power to the change management folks (i.e. managers) and the development team at large. Lots of the vitriol translates to "not on my watch! they cannot be trusted!" From my perspective it is all rooted in fear and insecurity and is comical, actually.

    You're correct in that the two positions, server as Gold vs. VCS as Gold, are practically polar opposites, and therein lies the debate. It takes discipline to put a VCS first in a database environment because, in effect, it is viewed mostly as optional: it is not introduced early in our academic and professional education and therefore enters later in our learning, after habits have already been formed. In a compiled application environment a VCS is still technically optional, however the build-and-deploy model practically compels app devs to employ one. Imagine if, after a compiled application is deployed, a bug is found and the original source code was only stored on the original developer's workstation, and his machine has since crashed and burned. In this case the only recourse is to find a decompiler and try to recreate the original source code files and Project (build instructions) to recapture something that can be modified and redeployed. Some decompilers are better than others, but this is not really a strategy for managing code; it's a get-out-of-jail-free card, a bailout.

    This right here is why database deployment is different to compiled application deployment. You cannot work without an offline source repository, whatever its nature, when developing and maintaining a compiled application. Even the best .NET decompilation doesn't get you back to the True Version of Code. A database (excluding CLR/extended procedures) contains an (the) online version of the code, the schema, the configuration data. It is self-contained, you can modify it, and as long as someone can modify it, nothing else can ever be 100% trusted to be the True Version of Code - without a quick refresh. You may achieve 99% trust, and you may believe you have 100% trust in a VCS for database code, but the truth is I could change something without you knowing, and that change could affect data in such a way that a deployment from VCS, without comparing to the database first, could make things very broken.

    I believe this is why, until we reach the point where no-one can modify the database without deploying from the VCS, the database will be gold.

    I do not think it has anything to do with removing an existing capability from the server. Even in most compiled applications you have at least one configuration file (e.g. web.config) or other plain-text deployable asset that can, sometimes by drastic measures, control the behavior of an application so how are those assets different than database code in terms of being changeable outside a VCS? Taking it one step further, I don't have the level of fu required but I suppose one could even open a binary editor and actually rearrange an executable or DLL to behave differently if they knew what they were doing. Is that second scenario likely? No, but theoretically possible. Somehow these application development teams manage to leverage their VCS where they can attain trust in the contents being gold and feel confident building and deploying from source, every time, so why can't we?

    That said, I wouldn't work without a VCS - I use it (somewhat haphazardly I admit) for everything - Application Development, SSRS, SSIS, batch files, READMEs, Database scripts, documentation, and I would love to work in the way you describe, I just don't have the tools or the environment to do that right now.

    As things stand, I use VCS in a production database environment mostly as a way of monitoring changes and capturing deployed versions as I don't have 100% trust that it will always be the right code.

    Take SSDT Database Projects as the vehicle... in a Build, Review, then Deploy model (as Andy and I have sort of hashed out), any changes to the model will be there for a DBA to review via the change script produced from the Build phase. Taking it one step further, the candidate Build's change script could be compared to a change script produced by comparing the last known good model in the VCS to the same candidate code, to potentially find unsanctioned changes made to the production database outside the VCS. I am assuming good knowledge of how SSDT models are stored and how Build scripts are produced, so if that did not come across clearly, no problem; the point is the VCS has the goods and we have the ability to do the necessary checks starting with code in the VCS. All I am saying is that placing a VCS first in the database lifecycle is technically possible. Refusing to place the gold label on the VCS is either a choice or is forced by organizational challenges, but by no means are there technical hurdles that cannot be overcome.
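    That comparison step can be sketched in miniature. Below is a hypothetical, tool-agnostic illustration in Python (the object names and definitions are invented, and a real check would script the definitions from sys.sql_modules and from the VCS model): it fingerprints each object's definition on both sides and reports any mismatch as a candidate unsanctioned change.

```python
import hashlib

def fingerprint(definition: str) -> str:
    """Hash a whitespace-normalized definition so versions compare cheaply."""
    normalized = " ".join(definition.split()).lower()
    return hashlib.sha256(normalized.encode()).hexdigest()

# Last known good model from the VCS (hypothetical object names).
vcs_model = {
    "dbo.MyProc": "CREATE PROCEDURE dbo.MyProc @p VARCHAR(25) AS SELECT 1;",
    "dbo.OtherProc": "CREATE PROCEDURE dbo.OtherProc AS SELECT 2;",
}

# The same objects as scripted from the production server.
server_model = {
    "dbo.MyProc": "CREATE PROCEDURE dbo.MyProc @p VARCHAR(25) AS SELECT 1;",
    "dbo.OtherProc": "CREATE PROCEDURE dbo.OtherProc AS SELECT 99;",  # changed outside the VCS
}

# Any mismatch is a candidate unsanctioned change to investigate.
drifted = sorted(name for name in vcs_model
                 if fingerprint(vcs_model[name]) != fingerprint(server_model.get(name, "")))
print(drifted)  # ['dbo.OtherProc']
```

    The normalization is deliberately forgiving of whitespace and case so that cosmetic scripting differences don't register as drift; anything beyond that is a real difference worth a human look.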

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • removed silly post...

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

  • Orlando Colamatteo (2/3/2016)


    removed silly post...

    Your own, presumably? 🙂

    If you haven't even tried to resolve your issue, please don't expect the hard-working volunteers here to waste their time providing links to answers which you could easily have found yourself.

  • Phil Parkin (2/4/2016)


    Orlando Colamatteo (2/3/2016)


    removed silly post...

    Your own, presumably? 🙂

    Yeah, just some misplaced humor I didn't want mistaken for something that might take away from this thread. This written stuff is tricky, sometimes.

    There are no special teachers of virtue, because virtue is taught by the whole community.
    --Plato

    Years ago, I started working for a company where the developers didn't check their .sql scripts into version control, or at least the smaller team I worked with didn't. However, what I did do was enforce a policy where only I and the director of database operations had SYSADMIN or DBO rights in production. Also, whenever a .sql script was submitted for deployment, I would check it into version control (it was Visual SourceSafe at the time). At first only I used it, but over time, as the developers grew tired of going to me to fetch prior versions of their own code (because I required them to create a work ticket which got copied to management), I convinced them to start using it themselves. Eventually it was managed entirely by the development team.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • I'll still stand that allowing someone to modify code on the server is a fundamental design flaw. It's reality, but it's a problem.

    The live code is on the server, but it should always, always, be matched by a copy in a VCS. I chatted with Andy on this, and I understand his concern. How does the DBA know that the code in the VCS is what's on the server? Certainly developers could fail to branch, or work with the wrong branch, and make changes.

    There is also the occasional need to make quick changes: adding a recompile option, maybe changing an index, etc. Those are valid changes, but I'd still say that there should be a copy of the code in a VCS, because that way I have a history.

    All of the argument for making a production change boils down to expediency. I get that. If a new index, or a recompile can fix something in 10 sec, why spend more time going to look for other code, running it elsewhere, etc. There are certainly business pressures to move quickly.

    However, if you make a change at 3am, all of you are assuming you'll do a good job. Or that your change fixes things. Imagine you go to add recompile and accidentally change this:

    CREATE PROCEDURE MyProc
        @importantparameter VARCHAR(25)
    AS

    to this:

    CREATE PROCEDURE MyProc
        @importantparameter VARCHAR(2)
    WITH RECOMPILE
    AS

    And you don't notice you fat-fingered something. What's the previous version of the code? This compiles, and it might not be for minutes or hours that you realize there's an issue. What if there are other logical changes that cause problems you didn't foresee, and you need to compare the version you "fixed" with the version that was running?
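    If the pre-fix version is in a VCS, spotting that fat-fingered edit is a one-line diff. Here is a minimal Python sketch of the idea (standing in for whatever diff view your VCS front-end provides), using the procedure text from the example above:

```python
import difflib

# Version captured in the VCS before the 3am fix.
before = """CREATE PROCEDURE MyProc
    @importantparameter VARCHAR(25)
AS
""".splitlines()

# Version now running on the server, with the fat-fingered edit.
after = """CREATE PROCEDURE MyProc
    @importantparameter VARCHAR(2)
WITH RECOMPILE
AS
""".splitlines()

# A unified diff makes the accidental VARCHAR(25) -> VARCHAR(2) change obvious.
for line in difflib.unified_diff(before, after, fromfile="vcs", tofile="server", lineterm=""):
    print(line)
```

    Without the VCS copy there is nothing to diff against, which is the whole point: the diff is only as good as the last trusted version you kept.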

    My point is that you always, always, always should get code from a VCS. The Redgate tools (disclosure: I work for Redgate) do a few things here. The SSMS add-in, SQL Source Control, compares things with the VCS, so you can see the other versions. Perhaps there's a valid reason for production to be stored in a separate repo that gives you an "audit" of your VCS.

    Our deployment tools in the DLM/CI/CD world are built to be run quickly, but also build a package from a VCS (comparing to production) and then use that package for deployment. However, if production is different from what's in the package at generation time, we throw errors, precisely because drift can cause you issues.

    In a perfect world, I'd run code on a test server, however I've built the code, and that code I ran is stored and a process executes it against staging and production (and other environments). Even if it's a simple:

    delete Sales
    where salescode in (56, 57)

    So that I can't accidentally run "delete Sales" without the WHERE. I don't know a system that covers these kinds of changes, but that's ideally what we want. Not to slow things down, but to take a few minutes and get a sanity check.
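    That kind of sanity check can be partially automated. The sketch below is a deliberately crude first-pass gate written in Python - a regex scan, not a SQL parser, so it will miss plenty - that flags DELETE or UPDATE statements with no WHERE clause before a script is allowed anywhere near production:

```python
import re

def risky_statements(sql_script: str) -> list[str]:
    """Flag DELETE/UPDATE statements that have no WHERE clause.

    A crude, regex-based pre-flight check, not a SQL parser: it splits on
    semicolons and the GO batch separator and inspects each piece.
    """
    flagged = []
    statements = re.split(r";|\bGO\b", sql_script, flags=re.IGNORECASE)
    for stmt in statements:
        s = stmt.strip()
        if re.match(r"(?is)^(delete|update)\b", s) and not re.search(r"(?i)\bwhere\b", s):
            flagged.append(s)
    return flagged

print(risky_statements("delete Sales where salescode in (56, 57)"))  # []
print(risky_statements("delete Sales"))  # ['delete Sales']
```

    A real gate would sit in the deployment pipeline and refuse to run a script until every flagged statement was either fixed or explicitly waved through by a reviewer.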

    If you don't have a large pipeline, ideally you still create a script that is ONLY what you run and execute that against some test server. Express, QA, whatever. I expect that you have somehow figured out a way to get an environment you can test against. You should be able to run the entire script, meaning if you have extra stuff in there for testing or checking, comment it out. Don't highlight anything, but press "execute" and see if it works. If it does, you change connection and do the same thing on production, without changing anything in that window.

  • Speaking as a dev and, therefore, someone who's used to viewing DBAs as a bunch of process Nazis (only kidding - but you know what I mean) seeing so many of you talking about "slamming a change into production" certainly raises an eyebrow.

    I'm primarily a LOB application developer but I've had to be an accidental DBA more than once. I've also been responsible, about 5 or 6 years ago I think, for trialling a bunch of database centric VCS tools (including RedGate's) with a view to introducing proper change management for the database. In the end we decided not to adopt any of them because none of them were seamless enough for us. Every one of them, while promising a better tomorrow, made our life just a little bit more difficult right now.

    I think Eric expressed your goal (and the problem you face) pretty well here:-

    As a DBA, if I get a 2am call that a critical application is down, because a select in a stored procedure is missing a join operator, then I'm making that change myself and then notifying the development team.

    I don't think that's an unreasonable expectation for a DBA to hold. But how about if you could do exactly that, with no extra hoops to jump through, no added complications, however minor, and have the change control just happen? Would you adopt source control if you didn't even have to be aware of its existence? Of course you would.

    That's where us devs are at right now. When I want to make a change I don't worry about whether my "Gold" version is in sync with production. I don't worry about whether I've checked the class out. I don't worry about whether my change will break production or interfere with someone else's changes. I just open up Visual Studio, make my change (which will automatically check the file out if using a checkin-checkout model), check my change in (or merge it, depending on the model that's being used) and walk away. It gets QA'd automatically, its audit trail is maintained, it gets integration-tested against any critical branches and, if I'm working an emergency fix, it's rolled straight up into a patch which I can apply then and there with absolute confidence.

    I don't think the difference between devs and DBAs is one of attitude or culture; you want your change process as properly managed as we do. You sure ain't slapdash or lazy about process, I know that for sure. The difference I found was in the tools and how they integrate into the ways we work. My findings when I trialled a bunch of DB VCS tools were a mixture of:-

    1. They were often very slow to load. I guess they were churning through some pretty big schemas but I don't really care what the reason was. I don't want to have to wait 20 minutes before I can get on with my work.

    2. They didn't integrate into other tools properly. Most of them expected me to go into Windows Explorer to check a file out - I want to do it without leaving SSMS, Visual Studio or whatever other tool I'm working from. In fact, I don't want to have to do it at all, I want it to just happen.

    3. They usually required a change to be scripted. Really? I'm in SSMS with nice graphical designers and you want me to ignore all that GUI goodness and write a query to add a column to a table?

    4. They all seemed to heavily favour the update-merge model and ignore checkin-checkout. We were a small team who weren't likely to tread on each other's toes; CICO was safer for us and much less hassle than merging.

    5. They didn't enforce source control. They relied on human beings not making uncontrolled changes but certainly didn't prevent them from doing so. That means I can never have confidence that the repository really is gold.

    6. None of them had automated testing or continuous delivery functionality. That's the way most dev teams work now and I'd love to see DBAs get to the same place (you'd all be a lot less stressed, believe me).

    I'm bound to say (and not just because I'm posting this on SSC :-)) that the RedGate tool was the best we trialled at the time but even that fell short of the "absolutely seamless" standard that us devs get to work with. Every one of the tools, at some level, forced us to be aware of its existence.

    Of course, my experience was from 6 years ago and I'm willing to bet the tools have improved a lot since. I'd certainly urge anyone to at least take a look at them and try them out because, if they have improved enough to be seamless, they will improve your life hugely.

  • Andy Warren (2/2/2016)


    Steve, I wonder if calling it a design mistake is correct, or fair. It's a different product with a different purpose. I like the ability to make small changes (via VCS or not) such that I know the overall impact is focused and measurable. I bet devs would too! I think that drives a lot of the microservice adoption. Less surface area at risk when you make a change.

    My cynical take is that there isn't enough bang for the buck for most orgs to worry about deploying the db directly from source control. I've seen lots of places where we would update VCS daily based on production because we did want the history and the safety net. Those changes went through QA and were deployed via change management processes, just not directly from VCS. Perfect? Not at all. But reasonable, at least to me!

    Not saying it's a great reason, but to do more than that requires it to be fairly painless/cheap or it's a non-starter. I also wonder if it's just a lack of experience on my part, so that I could reduce the pain/cost to the point that it wasn't a reason to not do it.

    I dunno about design mistake either, but I see where Andy's coming from. However I have to disagree about impact. After all, if you change one small stored procedure it could have a profound impact as the small change causes changes to the data which *other* stored procedures then magnify because they aren't aware of the change.

    Classic dependency problem. 😀

    Just because the piece you change is small doesn't mean it isn't important...

  • Eric M Russell (2/3/2016)


    As a DBA, if I get a 2am call that a critical application is down, because a select in a stored procedure is missing a join operator, then I'm making that change myself and then notifying the development team.

    "As a DBA, if I get a 2am call that a critical application is down, because a select in a stored procedure is missing a join operator, then I'm making that change myself and then notifying the development team."

    Ok, THAT is your problem right there. How in the world did the missing join operator get through testing?

    Something that breaks the system shouldn't be able to make it into production in the first place. If it did, then your development process is severely broken. Either you're trying to shave pennies by having developers be your QA team (a horrifically stupid idea) or your development team hasn't got a representative dataset to test with. (also a horrifically stupid idea).

    I've heard the arguments about "protecting" real data from the prying eyes of developers and it's completely bogus. If you can't trust your developers to see the real data you can't trust your developers, period, and should never have hired them in the first place.

    If it's a matter of the development server not being secure enough then *SECURE IT*. Best practices are best practices.

    Sorry, pet peeve there...

  • roger.plowman (2/5/2016)


    Eric M Russell (2/3/2016)


    As a DBA, if I get a 2am call that a critical application is down, because a select in a stored procedure is missing a join operator, then I'm making that change myself and then notifying the development team.

    "As a DBA, if I get a 2am call that a critical application is down, because a select in a stored procedure is missing a join operator, then I'm making that change myself and then notifying the development team."

    Ok, THAT is your problem right there. How in the world did the missing join operator get through testing?

    Something that breaks the system shouldn't be able to make it into production in the first place. If it did, then your development process is severely broken. Either you're trying to shave pennies by having developers be your QA team (a horrifically stupid idea) or your development team hasn't got a representative dataset to test with. (also a horrifically stupid idea).

    I've heard the arguments about "protecting" real data from the prying eyes of developers and it's completely bogus. If you can't trust your developers to see the real data you can't trust your developers, period, and should never have hired them in the first place.

    If it's a matter of the development server not being secure enough then *SECURE IT*. Best practices are best practices.

    Sorry, pet peeve there...

    Clearly, the 'missing join operator' was just a flippant example.

    What you seem to be implying here is that all production software must be bug free and bullet-proof in its error-handling. It isn't.

    If you haven't even tried to resolve your issue, please don't expect the hard-working volunteers here to waste their time providing links to answers which you could easily have found yourself.

  • Phil Parkin (2/5/2016)


    roger.plowman (2/5/2016)


    Eric M Russell (2/3/2016)


    As a DBA, if I get a 2am call that a critical application is down, because a select in a stored procedure is missing a join operator, then I'm making that change myself and then notifying the development team.

    "As a DBA, if I get a 2am call that a critical application is down, because a select in a stored procedure is missing a join operator, then I'm making that change myself and then notifying the development team."

    Ok, THAT is your problem right there. How in the world did the missing join operator get through testing?

    Something that breaks the system shouldn't be able to make it into production in the first place. If it did, then your development process is severely broken. Either you're trying to shave pennies by having developers be your QA team (a horrifically stupid idea) or your development team hasn't got a representative dataset to test with. (also a horrifically stupid idea).

    I've heard the arguments about "protecting" real data from the prying eyes of developers and it's completely bogus. If you can't trust your developers to see the real data you can't trust your developers, period, and should never have hired them in the first place.

    If it's a matter of the development server not being secure enough then *SECURE IT*. Best practices are best practices.

    Sorry, pet peeve there...

    Clearly, the 'missing join operator' was just a flippant example.

    What you seem to be implying here is that all production software must be bug free and bullet-proof in its error-handling. It isn't.

    Oh of course there will be bugs. Just not show-stopping ones. That's why you have QA in the first place.

    No bug should be so bad it has to be fixed RIGHT NOW OR THE WORLD DIES! Given management's Chicken Little propensities (been on the receiving end more than once) you do as much edge-case testing as you can and hope the QA folks are more inventive than you are. 🙂

    It's very rare in a properly controlled environment that bugs can kill production. That's the *point*, actually...

  • No bug should be so bad it has to be fixed RIGHT NOW OR THE WORLD DIES!

    I would like to live in your fantasy world. It sounds so perfect. Here in the real world bugs happen, and if not fixed at night, at 3 AM sometimes, at least our management would think the world was ending if they didn't have their data.

    -------------------------------------------------------------
    we travel not to escape life but for life not to escape us
    Don't fear failure, fear regret.

  • below86 (2/5/2016)


    No bug should be so bad it has to be fixed RIGHT NOW OR THE WORLD DIES!

    I would like to live in your fantasy world. It sounds so perfect. Here in the real world bugs happen, and if not fixed at night, at 3 AM sometimes, at least our management would think the world was ending if they didn't have their data.

    Oddly enough, I do live in that world, every single day. 😎 It does help that I was able to institute this philosophy from the very get go. I do understand many MANY developers/DBAs are not so fortunate. They inherited a mess and have to live in it because it's *way* too much to fix, with all those 3AM fires (and no, I'm not being sarcastic).

    Roger, it's not that I don't personally trust the devs, but in the real world of PCI and security breaches it's not logical or sane NOT to limit who can see the data, or to let them see data without auditing the use. I know it makes it harder. We will typically elevate a dev if there is a problem that requires prod access, work with them to fix it, then reduce the access. It's not all that sane to let the DBAs see it either! When there's a breach, guess who gets to talk to the men in black? Always Encrypted is the right idea, but I don't have the experience yet to know if it's the right implementation.

  • roger.plowman (2/5/2016)


    below86 (2/5/2016)


    No bug should be so bad it has to be fixed RIGHT NOW OR THE WORLD DIES!

    I would like to live in your fantasy world. It sounds so perfect. Here in the real world bugs happen, and if not fixed at night, at 3 AM sometimes, at least our management would think the world was ending if they didn't have their data.

    Oddly enough, I do live in that world, every single day. 😎 It does help that I was able to institute this philosophy from the very get go. I do understand many MANY developers/DBAs are not so fortunate. They inherited a mess and have to live in it because it's *way* too much to fix, with all those 3AM fires (and no, I'm not being sarcastic).

    Sounds like a great place to work then. So be sure to post any job openings you have, I'm sure you will get a lot of people wanting to sign up when you mention that you don't have to worry about those 3 AM fires. :Wow:

    -------------------------------------------------------------
    we travel not to escape life but for life not to escape us
    Don't fear failure, fear regret.

