SQL Cluster - relatively low-cost SAN recommendation?

  • Hi all....  I've been researching on the web for some time now and I'm hoping one of you can give me a hand.  We've got 2 main production SQL Servers at my company.  They're running on some really old hardware and are due for an overhaul.  I want to cluster the two together as active/active.  The databases on each of these don't get a whole lot of transactions going through them, nor are they particularly huge - around 15GB of total database size between the two servers.  But we are a growing company and need to plan for the future...

    I think the budget for the hardware is around 30 grand.  That doesn't include licenses - we already own a handful of SQL Server Enterprise Edition licenses, so we're good there.  I'm thinking maybe 15 to 20 grand for a SAN, and $7,500 each for 2 new servers.

    Does anybody have any experience with some of these less expensive SANs?  I've read about ones from Compellent, HP, and MPC that all seem to be right around in that price range.  I'm not traditionally a hardware guy and have been sort of thrust into this role.  Here is the first option I came up with for the SAN configuration:

    | Contents                             | Drive Letter | RAID Level | # Disks | Total Size |
    |--------------------------------------|--------------|------------|---------|------------|
    | Quorum Drive                         | Q            | 1          | 2x36GB  | 36GB       |
    | SQL Server A Data Files and TempDB   | R            | 1          | 2x144GB | 144GB      |
    | SQL Server A Transaction Log         | S            | 0+1 or 10  | 4x36GB  | 72GB       |
    | SQL Server B Data Files and TempDB   | T            | 1          | 2x144GB | 144GB      |
    | SQL Server B Transaction Log         | U            | 0+1 or 10  | 4x36GB  | 72GB       |
    | Backups and Bulk/Imported Data Files | Z            | 1          | 2x144GB | 144GB      |

    TOTAL: 16 drives

    And, here was my more economical option:

     

    | Contents                                | Drive Letter | RAID Level | # Disks | Total Size |
    |-----------------------------------------|--------------|------------|---------|------------|
    | Quorum Drive                            | Q            | 1          | 2x36GB  | 36GB       |
    | SQL Server A Data Files, TempDB and Log | R            | 0+1 or 10  | 4x72GB  | 144GB      |
    | SQL Server B Data Files, TempDB and Log | S            | 0+1 or 10  | 4x72GB  | 144GB      |
    | Backups and Bulk/Imported Data Files    | Z            | 1          | 2x144GB | 144GB      |

    TOTAL: 12 drives
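    For what it's worth, the RAID arithmetic behind both layouts can be sanity-checked with a few lines of Python (a hypothetical helper, nothing vendor-specific - it just applies the usual capacity rules for mirrored and parity arrays):

```python
# Sanity-check usable capacity for each proposed array.
# RAID 1/10 (and 0+1) mirror half the spindles; RAID 5 loses one disk to parity.

def usable_gb(raid: str, disks: int, disk_gb: int) -> int:
    """Usable capacity in GB for a simple array (illustrative helper)."""
    if raid in ("1", "10", "0+1"):
        return (disks // 2) * disk_gb   # half the spindles hold mirror copies
    if raid == "5":
        return (disks - 1) * disk_gb    # one disk's worth of capacity goes to parity
    raise ValueError(f"unknown RAID level: {raid}")

# Rows from the "economical" 12-drive layout above:
layout = [
    ("Q quorum",  "1",   2, 36),
    ("R server A", "0+1", 4, 72),
    ("S server B", "0+1", 4, 72),
    ("Z backups", "1",   2, 144),
]

for name, raid, disks, size in layout:
    print(f"{name}: {usable_gb(raid, disks, size)}GB")

total_disks = sum(d for _, _, d, _ in layout)
print(f"total spindles: {total_disks}")
```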

    Am I on the right track here?  Should I just start calling some of these vendors and see what they suggest?  I always feel like these salespeople don't really know the technical side as much, so I'd like to learn as much as I can before bringing any vendors in to give me the dog and pony show...

    Do any of you have experience in creating a cluster using a SAN that's in the <= $20,000 price range, and can offer any words of wisdom?

    Thanks for any help you may be able to provide!!

  • We're currently looking at a Dell AX100 for our QA environment with 1.5TB of storage, and that's coming in below your budget.

    Whilst it might not meet your needs, it should give you a place to start looking.

    I'd use RAID 1 for your transaction logs and either RAID 5 or 10 for the data (depending upon how much money you have - I don't think the AX allows 10, but the next model up might).

  • I admit up front I'm not a lover of SAN technology, but there's very little that proves to me that DAS will not outperform even a very expensive SAN. My best advice would be to steer clear.

    SAN technology is very expensive and I remain unconvinced of any performance gains. I've previously always used RAID 1 and RAID 10 for SQL Server; RAID 5 is fine if it's a read-only database, but if you're writing, its performance is very poor.
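    To put a rough number on that write penalty: each logical write costs about 4 backend I/Os on RAID 5 (read-modify-write) versus 2 on RAID 1/10. A back-of-the-envelope sketch - the ~125 random IOPS per 10K spindle and the 50/50 read/write mix are assumptions, not figures from this thread:

```python
# Rough effective IOPS for an array, using the usual RAID write penalties:
# RAID 0 = 1, RAID 1/10 = 2 backend I/Os per logical write, RAID 5 = 4.

def effective_iops(spindles: int, disk_iops: int,
                   write_penalty: int, write_frac: float) -> float:
    backend = spindles * disk_iops
    # Each logical read costs 1 backend I/O; each logical write costs write_penalty.
    return backend / ((1 - write_frac) + write_frac * write_penalty)

# Assumed: 4 spindles at ~125 random IOPS each, 50/50 read/write OLTP mix.
print(round(effective_iops(4, 125, 2, 0.5)))  # RAID 10 -> 333
print(round(effective_iops(4, 125, 4, 0.5)))  # RAID 5  -> 200
```

    Same four disks, but RAID 5 delivers roughly 40% fewer logical IOPS once writes dominate - which is the "very poor" write performance in concrete terms.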

    It's difficult without knowing your exact applications, but I would have thought 20k for a SAN was maybe optimistic - I seem to remember a single external array box with 14 disks was around 10k.

    Microsoft recommends separate drives for the O/S, data, logs and backups; you might want TempDB separate too.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Will a DAS setup work as seamlessly with MSCS and SQL Server as a SAN?  Can I do all the failover and whatnot just as easily with DAS as with a SAN?

    I thought I remembered reading somewhere that DAS was a technology that's on its way out.  Not true?

    I'll have to look into those a bit more...  I'll also check out that Dell that you mention, Mike.  Thanks...

     
  • I believe you have to use Fibre Channel to use clustering, or so I am told - but I suspect this might not be absolutely the case. You do need shared drives, but all the array boxes I've used allow for two connections.

    Some companies, I am told, are moving high performance servers away from SAN.

    I guess if you have a dedicated SAN you'll be OK. I asked about some of the SAN issues I have at a user group meeting, and I think the main issue from the DBA point of view is that the config and settings are not so clear. It's obvious to me that, to a large extent, the config and tuning of your storage is in the hands of a third party, which isn't always a good thing. From the vendor's point of view it's good of course, as each call will probably entail a charge - or am I just cynical??? < grin >

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • Definitely cynical, and there's nothing wrong with that at all (it's right up towards the top of the DBA traits list, along with paranoid, suspicious and loads of others).

    We use a dedicated SAN for our main DB system, which works fine; also, I control all of the setup and ongoing configuration, so maybe I'm just lucky?

    At a previous company we used an early SAN (we're talking years ago here) to consolidate storage across several of our servers. It was a good idea, but what actually happened was that instead of one server being taken out when the SAN needed bouncing, they all were.

    I'm happy to say that things have definitely improved hardware-wise, and so long as you know precisely what you want and stand over the guys doing the initial setup (with a big stick), then it's not too painful.

  • Hi

    I had cause to investigate SAN technology last year from the HA/DR aspect rather than performance.

    However, performance was the aspect that I ended up testing quite heavily just to be sure.

    Our 3 OLTP servers had an average combined throughput of around 25MB/sec.  Not excessive, but we wanted to make sure that peak loads of 400% of average could easily be coped with.  The servers were configured with DAS and mixed RAID 0 and 0+1, as recommended.

    I used the same basic test throughout (Iometer) and configured the 14 disks in the SAN in multiple RAID setups (0+1 and 1; all 0+1; all 5).  Each coped well with the load, but 0+1 was obviously better for the data files.

    In the end, there was little benefit to be seen in splitting data and txlog onto separate RAID types, as the reduction in spindles seemed to offset the performance gain.  We opted for 0+1 for all partitions.

    The servers have been running on this for a year now with no problems.  Some rogue code even pushed throughput on one server up to 80MB/sec sustained without bringing the system to its knees.

    The 'budget' SAN we used came in at around £15k for 1.5TB and was a Eurologic SANBloc.

    btw, we ended up adopting Legato CoStandby Advanced as a clustering solution, as it allowed shared-nothing clustering (using 2 SANs) on SQL Standard licenses.  Again, it was tested quite heavily before trusting it with the crown jewels.

    I'd always shied away from SANs for most of the reasons already stated, but the manageability of large amounts of DAS and exponential data growth forced me to re-evaluate.

    Andy

     

    Life as a DBA: Living in a box, but thinking outside of it.

    Blog: www.chilledsql.com

  • Yeah, I think the problem I run into is that the SANs I have SQL Servers connected to have many servers attached, and the physical RAIDs are carved up so that I don't even get dedicated spindles for the SQL Server(s). I'm sure that a dedicated SAN would probably run OK.

    In fairness, I've normally run a prod server with 30 or more spindles (excluding binaries and O/S), so I'd actually see 14 as being suitable for one data array only. I also have lots of RAID 5, whilst I've always used RAID 10 before. I/Os per second are probably the more limiting factor rather than bandwidth. A modern DAS SCSI bus should support a theoretical 320MB/sec (you'd probably get around 260 in reality).
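    A quick sketch of why IOPS tends to bite before bandwidth on random OLTP work. The ~150 random IOPS per 15K spindle is an assumed figure; the 8KB page size is SQL Server's, and the 260MB/sec real-world bus limit is from the numbers above:

```python
# Random 8KB page reads from a handful of spindles move far less data
# than the SCSI bus can carry, so spindle IOPS runs out first.

DISK_RANDOM_IOPS = 150   # assumed per-spindle random IOPS (15K disk)
PAGE_KB = 8              # SQL Server page size
BUS_MB_S = 260           # realistic U320 SCSI throughput (~320 theoretical)

def random_throughput_mb(spindles: int) -> float:
    """MB/sec of purely random 8KB I/O an array of this many spindles can sustain."""
    return spindles * DISK_RANDOM_IOPS * PAGE_KB / 1024

print(random_throughput_mb(14))  # a 14-disk array
print(random_throughput_mb(30))  # 30 spindles, as above
```

    Even 30 spindles of pure random I/O come in at roughly 35MB/sec - an order of magnitude below the bus limit - so on an OLTP box the spindle count is the wall you hit, not the cable.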

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • For clustering you must use a SAN, as DAS will not work because the drives need to be shared. Your Q: drive is way too large - it only needs to be 1GB, not 36GB. Remember, if you go active/active you need 2 times the memory used on both nodes, so that when there is a failover one node has enough memory to run both instances of SQL.
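    That memory point is simple arithmetic - each node has to be sized so a lone survivor can run both instances plus the OS. The figures below are purely illustrative, not from this thread:

```python
# Active/active sizing: after a failover, one node hosts BOTH SQL instances,
# so size every node for the combined workload, not just its own half.

def node_ram_gb(instance_a_gb: float, instance_b_gb: float, os_gb: float = 2.0) -> float:
    """RAM each node should carry so a single survivor can run both instances."""
    return instance_a_gb + instance_b_gb + os_gb

# Illustrative example: 4GB per SQL instance, 2GB reserved for Windows.
print(node_ram_gb(4, 4))  # -> 10.0 GB per node, on BOTH nodes
```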
