Avoiding Stored Procedures

  • I was just suggesting that hardware expenditures (assuming the company has the funds) are quicker than serious code re-engineering AND keep development focused on all of those new features the customers just have to have.

    At least we're getting a dialog going now (where I work) on addressing performance, with some bodies dedicated to the effort.

    Quote from email to Development Lead today -- I don't expect anyone to read this:

    --------------------------------------------------------------

    "The problem is that the database was designed many years ago, with far fewer records. We are now experiencing a variety of slowness issues, such as deadlocks during some pull reports, blocking in the aspstate database, slowness and elevated CPU when a plan created for a small client is re-used for a large client, and a huge number of execution plans used only once.

    Another one that came up recently is making sure parameters are declared at the same size as the character column in the database – not at the length of the string itself. This has a bearing on whether the plan is reused. The alternative, which TMS uses quite a bit, is to declare all character variables as VARCHAR(8000), but tying the declaration to the actual data column length is better.

    DECLARE @DerivedTable01_3311 VARCHAR(58), @DerivedTable01_7957 VARCHAR(13);

    SET @DerivedTable01_3311 = 'Fdi.Workflow.Po.FollowUpConstrainedDealerPhoneCallWorkItem';

    SET @DerivedTable01_7957 = 'Fdi.Po.Client';

    In general, we need to shift the focus away from new features in the application to performance. Along those lines, I’ve been pushing the idea of a QA hurdle. Since QA is very lightly loaded, any query, new or old, must be extremely fast in QA or it does not get promoted to production.

    Lastly, we need to pay attention to adding columns to tables that already have many columns. Service_request is the worst at over 130 columns. I’ve been gradually adding missing indexes, disabling unused indexes, and raising the cost threshold for parallelism – first in QA, then in production, in coordination with the QA/DEV leads. Cost threshold is now 25 in prod, up from the outdated default of 5. In QA some environments are at 25 and some have been raised to 40, which seems to be the most commonly recommended value on modern hardware for a mixed OLTP/reporting application."
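    The two recommendations above – declaring character parameters at the column's defined length rather than the literal's length, and raising the cost threshold for parallelism – can be sketched roughly as follows (the table dbo.WorkItem and column type_name are hypothetical stand-ins, not from the actual schema):

    ```sql
    -- Hypothetical illustration only. Assume a table whose character column
    -- is defined as VARCHAR(58):
    --   CREATE TABLE dbo.WorkItem (type_name VARCHAR(58) NOT NULL /* ... */);

    -- Declaring the parameter at the column's defined length (VARCHAR(58))
    -- rather than at the length of the current value means the parameterized
    -- statement text is identical for every call, so the cached plan can be
    -- reused regardless of how long the actual string is:
    EXEC sp_executesql
        N'SELECT * FROM dbo.WorkItem WHERE type_name = @type_name;',
        N'@type_name VARCHAR(58)',
        @type_name = 'Fdi.Po.Client';

    -- Raising the cost threshold for parallelism from the default of 5 to 25
    -- (an advanced option, so it must be exposed first):
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'cost threshold for parallelism', 25;
    RECONFIGURE;
    ```

    Declaring each parameter at the variable length VARCHAR(8000) would also keep the statement text stable, but matching the column definition avoids implicit size mismatches against the column.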

  • Indianrock (8/31/2016)


    In general, we need to shift the focus away from new features in the application to performance. Along those lines, I’ve been pushing the idea of a QA hurdle. Since QA is very lightly loaded, any query, new or old, must be extremely fast in QA or it does not get promoted to production.

    Lastly, we need to pay attention to adding columns to tables that already have many columns. Service_request is the worst at over 130 columns. I’ve been gradually adding missing indexes, disabling unused indexes, and raising the cost threshold for parallelism – first in QA, then in production, in coordination with the QA/DEV leads. Cost threshold is now 25 in prod, up from the outdated default of 5. In QA some environments are at 25 and some have been raised to 40, which seems to be the most commonly recommended value on modern hardware for a mixed OLTP/reporting application.

    Wouldn't a lightly loaded QA environment be the last place to test for good performance? Now I'd be worried if bad performance showed up there. 😛

  • Absolutely correct. I don't know why we don't have load tools so that a prod-like load can be tested in QA. The need has been there for years.

  • Indianrock (8/31/2016)


    Absolutely correct. I don't know why we don't have load tools so that a prod-like load can be tested in QA. The need has been there for years.

    Unless you have an anti-open-source policy, there is always Gatling to play with.

    There is always the option of pushing a replay trace through your QA environment.

    One of the reasons that REDIS came to the fore was to address bottlenecks caused by people trying to manage session state in RDBMSs. In all honesty, I thought ASPState databases had vanished in the last decade.

  • Jeff Moden (8/31/2016)


    ...What works best for a given situation is what every company and the individual teams in a company should strive for...

    This is why I often say my loyalty lies with the project. I see too much job protection, CV engineering, and bonus-based motivation.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • David.Poole (8/31/2016)


    ...Hardware is cheaper right up until your data centre is full, then your next server costs millions.

    Cloud hardware is cheap provided you design to scale out on the low end kit and don't want it up 24/7/365...

    Code written today should be scalable by default through configuration and deployment. There is rarely a need to do anything different in the code itself, as even a small system that should cater for some data growth would employ the same techniques. There is so much indirection in code and systems that items like caches are often changed with scant regard to the code. E.g. when calling a SQL Server SP: what edition of SQL Server is it? What's its configuration? Is it clustered? Load balanced? Etc.

    Of course, very poorly written code can circumvent much of this. Particularly within a subsystem e.g. a badly written SP can make the SQL Server subsystem unacceptably slow. But then you just have to fix that point of performance failure and move on to the next.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Jeff Moden (8/31/2016)


    ...It also doesn't normally require "sophisticated" code...

    I would go as far as saying it normally requires "unsophisticated" code.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • I haven't read every reply in this rather long thread, but here is my 2 cents.

    As a data architect who also develops applications in Python, the rapid development aspect of what I and other developers need is critical. Regardless of any splitting of hairs on scalability, control, and so forth, SQL developers are a bottleneck just as much as application developers are. Combining both roles into one does, from a business perspective, speed up production and brings huge savings upfront.

    Unfortunately, a lot of DB professionals argue the long-term impact of that approach. I'm not going to disagree that both ORMs and developers can and will make poor design choices going forward, but I do feel the chances of success, even in the face of those poor design choices, are acceptable when looking at the bigger picture. At least that is my experience, because the reality is that it's very hard to sit and wait for someone else to tell you when you can move forward when you can just do it yourself.

    I spend most of my focus on having the discussion before the project, where I can offer to take on the data work with the developer or support that developer with data assistance going forward. Whatever route they choose, I support them, and it seems to be working out just fine, ORM or not. Personally, I'm fine with them handling their own stuff so I can focus on improving the infrastructure, not every piece of SQL that comes my way.

  • Design choices that impact the database matter.

    I made some suggestions about an upcoming product several years ago, indicating that it would have problems scaling after a few years of use and that there were several maintenance issues going forward. Now I get to deal with those issues while the next-generation product is being designed with my suggestions and experience in mind.

    Sometimes I feel like the one-eyed man in the land of the blind. Database and application design isn't rocket surgery, but the number of amateur mistakes that could be avoided is appalling.

  • chrisn-585491 (9/1/2016)


    Design choices that impact the database matter.

    I totally agree. As a data architect, I get frustrated when development managers are only concerned with right now and eschew data designs that are forward thinking. Then, when the predicted issues arise, they only want "minimally invasive" changes to fix things. Anything done to correct the underlying foundation is deemed to be too invasive and is not an option. Then they wonder why so much time is spent on maintenance, fixes, and performance issues. Hmm, I wonder....:ermm:

  • Aaron N. Cutshall (9/1/2016)


    chrisn-585491 (9/1/2016)


    Design choices that impact the database matter.

    I totally agree. As a data architect, I get frustrated when development managers are only concerned with right now and eschew data designs that are forward thinking. Then, when the predicted issues arise, they only want "minimally invasive" changes to fix things. Anything done to correct the underlying foundation is deemed to be too invasive and is not an option. Then they wonder why so much time is spent on maintenance, fixes, and performance issues. Hmm, I wonder....:ermm:

    Let's hardcode lookup values into multiple places across multiple platforms instead of making a lookup table WOOO!!!!!!
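    For contrast, a minimal sketch of the lookup-table alternative being joked about here (all table and column names are hypothetical): the values live in exactly one place, and every platform that touches the database gets them from there.

    ```sql
    -- Hypothetical sketch: one lookup table instead of values hardcoded
    -- in multiple places across multiple platforms.
    CREATE TABLE dbo.OrderStatus (
        status_id   TINYINT     NOT NULL PRIMARY KEY,
        status_name VARCHAR(30) NOT NULL UNIQUE
    );

    CREATE TABLE dbo.CustomerOrder (
        order_id  INT     NOT NULL PRIMARY KEY,
        status_id TINYINT NOT NULL
            REFERENCES dbo.OrderStatus (status_id)  -- single source of truth;
            -- the FK also rejects any status value the lookup table doesn't know
    );
    ```

    Adding or renaming a status then becomes a one-row change rather than a hunt through application code.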

  • ZZartin (9/1/2016)


    Aaron N. Cutshall (9/1/2016)


    chrisn-585491 (9/1/2016)


    Design choices that impact the database matter.

    I totally agree. As a data architect, I get frustrated when development managers are only concerned with right now and eschew data designs that are forward thinking. Then, when the predicted issues arise, they only want "minimally invasive" changes to fix things. Anything done to correct the underlying foundation is deemed to be too invasive and is not an option. Then they wonder why so much time is spent on maintenance, fixes, and performance issues. Hmm, I wonder....:ermm:

    Let's hardcode lookup values into multiple places across multiple platforms instead of making a lookup table WOOO!!!!!!

    An important point is that it doesn't take a DBA to avoid that. Just someone who is knowledgeable. Dev or DBA.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Aaron N. Cutshall (9/1/2016)


    chrisn-585491 (9/1/2016)


    Design choices that impact the database matter.

    I totally agree. As a data architect, I get frustrated when development managers are only concerned with right now and eschew data designs that are forward thinking. Then, when the predicted issues arise, they only want "minimally invasive" changes to fix things. Anything done to correct the underlying foundation is deemed to be too invasive and is not an option. Then they wonder why so much time is spent on maintenance, fixes, and performance issues. Hmm, I wonder....:ermm:

    It's not that they don't matter. They all matter. I mean, you can spend 10 years making an application. At what point do you stop and launch? When is done, done? If the one thing you're worried about is that three years down the road we'll have to address something, but we got three good years out of the work, then what exactly are you complaining about? My car is likely not going to last 20 years. I will eventually have to have an oil change. While I can surely develop new and better ways to prolong or eliminate those things, what would that cost?

    I'm not going to say that we should not strive for perfection and doing things only once, rather than constantly having to fix things over and over again. But, similar to the original post, there are some things you can just live with. Sure, there is always better, but at what cost, or complexity, or whatever? Got to wonder...

    At the end of the day, anything can be done better to someone. When is better good enough?

  • xsevensinzx (9/1/2016)


    ...When is better good enough?

    Some sense there. In the hardware and general manufacturing world they accept faults more readily. They define an acceptable Mean Time To Failure (MTTF) and percentage of failures on delivery. The target numbers vary by product, or often by product sector.

    If a manufacturer measures that 3% of the paperclips they produce are defective and make it into boxes, then you will often find 103 or more in a box labelled as containing 100 paperclips.

    Car manufacturers use these values to calculate whether to do a product recall. I am sure they are not the only industry.
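    The paperclip overfill is straightforward to check: with an assumed 3% defect rate, a box must hold N clips where N × 0.97 ≥ 100, so N = ⌈100 / 0.97⌉ = 104 on expectation, consistent with the "103 or more" figure. In T-SQL form:

    ```sql
    -- Back-of-envelope overfill calculation: smallest box count N such that
    -- the expected number of good clips, N * (1 - defect_rate), is >= 100.
    DECLARE @defect_rate DECIMAL(4, 2) = 0.03;  -- assumed 3% failure rate
    DECLARE @good_needed INT = 100;             -- count printed on the box

    SELECT CEILING(@good_needed / (1.0 - @defect_rate)) AS clips_per_box;  -- 104
    ```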

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • xsevensinzx (9/1/2016)


    At what point do you stop and launch? When is done, done?

    Don't get me wrong. I'm not saying that we need to spend lots of time in planning and design just to launch the perfect product. If that were the case then nothing would ever get done. All work can be done in phases, with one building upon the next. What I was referring to were shortcuts that are effective only for now, this phase, just to get the product out the door, but cannot be built upon for the next phase. This is especially problematic when you know what the next phase is, and preferably what the next several phases will be as well. To use your auto analogy: if manufacturers had to redesign and retool plants just to make next year's model, autos would be prohibitively expensive. The same would be true if the engine compartment were designed for only a single engine and no one was willing to redesign the compartment for different engines. Even if requirements called for a bigger engine, we'd be in trouble if we were not allowed to redesign the compartment. There are only so many tweaks that can be made to an existing engine to improve performance without a bigger one.

