Performance tuning and design

  • Dear Experts,

    We have a new project that requires some modeling, design changes, and performance tuning. At the moment I am on the administration side and have only a basic idea of the topics above, so I need some help, advice, and suggestions.

    1) We have a huge database (about 100 GiB) where much of the data is historic, and we run reports on old records that are no longer actively being updated. How can we improve the performance of both reads and writes in this situation? At the moment none of the tables are partitioned and everything is in a single filegroup. Please share some best practices and ideas.

    2) In which situations would we de-normalize a database?

    3) When de-normalizing, how can we ensure data consistency?

    4) Can we de-normalize the above tables for better read/write performance?

    5) When a new release includes changes to the production database, how do you ensure data integrity and minimize downtime during deployment? What are the best practices for this?

    Thanks

    MJ

  • MJ-1115929 (8/20/2014)


    2) In which situations would we de-normalize a database?

    Once you have tested thoroughly and have determined that there is no way to meet performance requirements without denormalising. It's a step after tuning queries and ensuring useful indexes.

    Even then, you don't denormalise the database, you denormalise the specific, focused area where the problem is.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (8/20/2014)


    MJ-1115929 (8/20/2014)


    2) In which situations would we de-normalize a database?

    Once you have tested thoroughly and have determined that there is no way to meet performance requirements without denormalising. It's a step after tuning queries and ensuring useful indexes.

    Even then, you don't denormalise the database, you denormalise the specific, focused area where the problem is.

    Thanks Gail, I understand that. Can you please also shed some light on my other queries as well, such as the performance and deployment sides?

  • MJ-1115929 (8/20/2014)


    Dear Experts,

    We have a new project that requires some modeling, design changes, and performance tuning. At the moment I am on the administration side and have only a basic idea of the topics above, so I need some help, advice, and suggestions.

    1) We have a huge database (about 100 GiB) where much of the data is historic, and we run reports on old records that are no longer actively being updated. How can we improve the performance of both reads and writes in this situation? At the moment none of the tables are partitioned and everything is in a single filegroup. Please share some best practices and ideas.

    In general, the best thing to do is break the OLTP aspects of the system apart from the reporting aspects. That way you can optimize one design for OLTP and the other for reporting. Generally this is done in two different databases, not in the same database with different partitions. Other than that, all you can really do is design your indexes appropriately, being especially careful about the choice of your clustered index.
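    As a rough sketch of that idea (the database and table names below are invented, so treat it purely as an illustration), moving records that are no longer updated out of the OLTP database and into a separate reporting database could look something like this:

    -- Illustration only: Sales = hypothetical OLTP database, SalesReporting = hypothetical
    -- reporting database; dbo.Orders holds the live rows, dbo.OrdersHistory the reporting copy.
    DECLARE @CutoffDate date = DATEADD(YEAR, -2, SYSDATETIME());  -- "no longer updated" threshold

    -- One-time copy of the static, historic rows into the reporting database
    INSERT INTO SalesReporting.dbo.OrdersHistory (OrderID, CustomerID, OrderDate, Amount)
    SELECT OrderID, CustomerID, OrderDate, Amount
    FROM Sales.dbo.Orders
    WHERE OrderDate < @CutoffDate;

    -- Then remove them from the OLTP table in small batches to limit log growth and blocking
    DECLARE @Rows int = 1;
    WHILE @Rows > 0
    BEGIN
        DELETE TOP (5000)
        FROM Sales.dbo.Orders
        WHERE OrderDate < @CutoffDate;

        SET @Rows = @@ROWCOUNT;
    END;

    After the move, the reporting copy can be indexed purely for the report queries, while the OLTP table stays small and write-friendly.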

    2) In which situations would we de-normalize a database?

    I am very averse to denormalizing databases for performance. Most of the time this backfires. If a single giant table were better for performance, why would we normalize at all (other than for some data integrity issues that we could resolve through coding)? Normalization actually helps performance. Here's just one example.

    3) When de-normalizing, how can we ensure data consistency?

    You can't, and there's the problem. You're completely dependent on the application getting it right.

    4) Can we de-normalize the above tables for better read/write performance?

    Without seeing the tables, who knows; possibly, but possibly not. Again, denormalization is frequently a crutch for people who have either made really poor design choices, especially on the clustered index, or are writing bad T-SQL, especially in their JOIN criteria. So they throw everything into one giant table and suddenly it's "faster", but not as fast as it really could be.

    5) When a new release includes changes to the production database, how do you ensure data integrity and minimize downtime during deployment? What are the best practices for this?

    Thanks

    MJ

    Deployments are a giant process. The best single resource discussing this topic that I know of currently can be found here.

    Since you've never done this type of work before, I strongly suggest you hire a consultant with some experience to walk you through it. There are so many possible mistakes to be made. You want to minimize those as much as possible.
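    As a very small illustration of the kind of thing a deployment script needs to get right (the table and column names below are invented), release scripts are generally written to be re-runnable and wrapped in a transaction so that a failure leaves the database exactly as it was:

    -- Illustration only: hypothetical dbo.Customers table gaining a new LoyaltyTier column.
    SET XACT_ABORT ON;        -- any run-time error rolls back the whole transaction
    BEGIN TRANSACTION;

    -- Idempotent: only add the column if a previous run hasn't already done so
    IF COL_LENGTH('dbo.Customers', 'LoyaltyTier') IS NULL
    BEGIN
        ALTER TABLE dbo.Customers
            ADD LoyaltyTier tinyint NOT NULL
                CONSTRAINT DF_Customers_LoyaltyTier DEFAULT (0);
        -- On newer Enterprise editions, adding a NOT NULL column with a default is a
        -- metadata-only change, which keeps blocking and downtime to a minimum.
    END;

    COMMIT TRANSACTION;

    Pair scripts like this with a verified backup (or a database snapshot) taken immediately before the release, and with row-count or checksum comparisons afterwards, and you cover both the integrity and the downtime sides of the question.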

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Hello Grant,

    Thanks for shedding some light; I highly appreciate the help provided, and it is nice to meet you. I have read your book on performance tuning and I am so glad to meet you here; I didn't know experts like you were on this forum. 🙂 Once again, thanks to you, and to Gail too.

    Thanks

    MJ

  • MJ-1115929 (8/20/2014)


    Hello Grant,

    Thanks for shedding some light; I highly appreciate the help provided, and it is nice to meet you. I have read your book on performance tuning and I am so glad to meet you here; I didn't know experts like you were on this forum. 🙂 Once again, thanks to you, and to Gail too.

    Thanks

    MJ

    Thank you very much for the kind words. Many of the people here, Gail included, know more than I do, which is a big part of why I hang out here. I'm learning all the time.

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood... Theodore Roosevelt
    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

    See if it's possible to summarize the data and get rid of some of the details. Yes, they always say they need "everything", but then they frequently roll things up and ignore a lot anyway. Let's face it, humans can't process 100 GB of data, so they have to summarize it to make sense of it.
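    A tiny illustration of that rollup idea (the table names are invented): a scheduled job that collapses detail rows into a small summary table the reports hit instead of the full detail:

    -- Illustration only: hypothetical dbo.SalesDetail source and dbo.SalesDailySummary target.
    IF OBJECT_ID('dbo.SalesDailySummary') IS NULL
    BEGIN
        CREATE TABLE dbo.SalesDailySummary
        (
            SaleDate    date          NOT NULL,
            ProductID   int           NOT NULL,
            TotalQty    int           NOT NULL,
            TotalAmount decimal(19,4) NOT NULL,
            CONSTRAINT PK_SalesDailySummary PRIMARY KEY (SaleDate, ProductID)
        );
    END;

    -- Rebuild only yesterday's slice, so the job is safe to re-run
    DECLARE @Day date = DATEADD(DAY, -1, CAST(SYSDATETIME() AS date));

    DELETE FROM dbo.SalesDailySummary WHERE SaleDate = @Day;

    INSERT INTO dbo.SalesDailySummary (SaleDate, ProductID, TotalQty, TotalAmount)
    SELECT CAST(sd.SaleDateTime AS date), sd.ProductID, SUM(sd.Qty), SUM(sd.Amount)
    FROM dbo.SalesDetail AS sd
    WHERE sd.SaleDateTime >= @Day
      AND sd.SaleDateTime <  DATEADD(DAY, 1, @Day)
    GROUP BY CAST(sd.SaleDateTime AS date), sd.ProductID;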

    Also, yes, the clustered index is the most critical factor for good performance here. You'll need to look at actual data requests, missing index stats, index usage and cardinality (at a minimum) to determine which column(s) are best to cluster on. No, it's often not an easy or automatic process (although it can be for certain data).
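    One common starting point for that investigation is the built-in DMVs; their numbers reset when the instance restarts, so treat them as rough hints rather than gospel:

    -- Missing-index suggestions recorded by the optimizer
    SELECT TOP (25)
           mid.statement            AS table_name,
           mid.equality_columns,
           mid.inequality_columns,
           mid.included_columns,
           migs.user_seeks,
           migs.avg_total_user_cost,
           migs.avg_user_impact
    FROM sys.dm_db_missing_index_details     AS mid
    JOIN sys.dm_db_missing_index_groups      AS mig  ON mig.index_handle  = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs ON migs.group_handle = mig.index_group_handle
    ORDER BY migs.user_seeks * migs.avg_total_user_cost * migs.avg_user_impact DESC;

    -- How the existing indexes are actually used: seeks/scans/lookups vs. the cost of updating them
    SELECT OBJECT_NAME(ius.object_id) AS table_name,
           i.name                     AS index_name,
           ius.user_seeks, ius.user_scans, ius.user_lookups, ius.user_updates
    FROM sys.dm_db_index_usage_stats AS ius
    JOIN sys.indexes                 AS i
      ON i.object_id = ius.object_id
     AND i.index_id  = ius.index_id
    WHERE ius.database_id = DB_ID()
    ORDER BY ius.user_updates DESC;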

    SQL DBA, SQL Server MVP (07, 08, 09)
    "Money can't buy you happiness." Maybe so, but it can make your unhappiness a LOT more comfortable!
