2 SQL FCIs and 1 HAG

  • So, as a final response (I hope): my goal is not to reduce shared storage. In fact, my goal is to decrease redundant storage (in this case only). Also, I don't need automatic failover for the HAG; in fact, I want it manual. All that being said, do you think the suggestion mentioned about having 1 node passive for both, but 2 nodes as the active for each, is a good way to get a majority setup without a quorum witness for my current situation? Otherwise, if I go with the original plan of 4 nodes all in the same data center/subnet, is a file share the way to go? I only ask because I have read in several places that a file share witness does not hold the full cluster database, only some metadata, and that if the nodes are not multi-site, a disk witness is suggested.

    I would say yes, and using the disk quorum should be fine. If you have shared storage in a WSFC shared among one or more FCIs, and the quorum disk comes from the same storage pool, then losing storage means you have bigger issues: all of your disks are gone, regardless of whether the quorum disk is up. In that case you'd still have three nodes but no quorum disk, so the cluster would effectively be up, but your SQL Server services would be down from the lack of storage. In my experience, it is unlikely that you'd lose only the quorum disk and still have all other disks presented and operating as expected.

    I think you'll still have redundant storage, depending on how you define redundant, because you're looking at having a second copy of the database for read activity from your legacy application. As Perry suggested, there are some things to consider when using AGs on FCIs. In my opinion, it is manageable if you are staying within one data center.

    Given your requirements, I will stand by the 3-node WSFC, two FCIs of SQL Server within that cluster, and then you can establish an availability group between the two FCIs.
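    For what it's worth, here is a minimal T-SQL sketch of that last piece, the availability group between the two FCIs, with manual failover and the secondary readable for the legacy application. All of the names (LegacyAG, LegacyDB, FCI1NET\INST1, FCI2NET\INST2, the endpoint URLs) are placeholders rather than anything from this thread, and note Perry's caveat further down about replica placement when the FCIs share possible owner nodes.

    ```sql
    -- Run on the instance that will be the primary replica (placeholder names throughout).
    -- Assumes database mirroring endpoints already exist on both FCIs on port 5022.
    CREATE AVAILABILITY GROUP [LegacyAG]
    FOR DATABASE [LegacyDB]
    REPLICA ON
        N'FCI1NET\INST1' WITH (
            ENDPOINT_URL      = N'TCP://fci1net.contoso.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE     = MANUAL),                -- manual only for FCI-hosted replicas
        N'FCI2NET\INST2' WITH (
            ENDPOINT_URL      = N'TCP://fci2net.contoso.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE     = MANUAL,
            SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));  -- let the legacy app read the secondary

    -- Then, on the secondary FCI, join it to the group:
    -- ALTER AVAILABILITY GROUP [LegacyAG] JOIN;
    ```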

  • SQLKnowItAll (1/28/2015)


    So, as a final response (I hope): my goal is not to reduce shared storage. In fact, my goal is to decrease redundant storage (in this case only). Also, I don't need automatic failover for the HAG; in fact, I want it manual.

    If you're happy with manual failover in the group, then using FCIs really won't cause you any issues. Don't forget to set the session timeout value on the group replicas to account for how long the FCIs take to fail over.
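    To make that concrete, assuming the same placeholder names as the sketch above (LegacyAG, FCI1NET\INST1, FCI2NET\INST2), something along these lines run on the primary sets the session timeout per replica; pick a value that comfortably exceeds a typical FCI failover so the replicas don't declare each other dead mid-failover.

    ```sql
    -- SESSION_TIMEOUT is in seconds; the default is 10, which an FCI failover can easily exceed.
    ALTER AVAILABILITY GROUP [LegacyAG]
    MODIFY REPLICA ON N'FCI1NET\INST1' WITH (SESSION_TIMEOUT = 60);

    ALTER AVAILABILITY GROUP [LegacyAG]
    MODIFY REPLICA ON N'FCI2NET\INST2' WITH (SESSION_TIMEOUT = 60);
    ```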

    SQLKnowItAll (1/28/2015)


    All that being said, do you think the suggestion mentioned about having 1 node passive for both, but 2 nodes as the active for each, is a good way to get a majority setup without a quorum witness for my current situation?

    No, for the reason I've specified below in my response to S. Kusen: it breaks the AlwaysOn group ruleset.

    SQLKnowItAll (1/28/2015)


    Otherwise, if I go with the original plan of 4 nodes all in the same data center/subnet, is a file share the way to go? I only ask because I have read in several places that a file share witness does not hold the full cluster database, only some metadata, and that if the nodes are not multi-site, a disk witness is suggested.

    You are correct, a file share witness does not hold a full copy of the cluster database; it's really designed for stretched clusters. Since you're not planning that here, you may not need a file share witness.

    Using the 4 nodes, and I think I said this previously, you can still employ Node Majority rather than a disk-based witness by removing a vote from a specific node. In other words, Node1, Node2 and Node3 would all have a cluster vote, whereas Node4 would have its vote removed; you don't require a witness then. Quorum disk witnesses are so last century 😀
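    If you go that route, the vote change itself is made at the WSFC level (Failover Cluster Manager or the cluster PowerShell cmdlets), but as a quick sanity check from the SQL Server side, a query like this (node names are just the placeholders used above) shows the quorum model and per-node votes the cluster is reporting:

    ```sql
    -- Cluster-wide quorum model and state as SQL Server sees it (expect NODE_MAJORITY here).
    SELECT cluster_name, quorum_type_desc, quorum_state_desc
    FROM sys.dm_hadr_cluster;

    -- Per-member votes: Node1-Node3 should show 1 vote each and Node4 should show 0.
    SELECT member_name, member_type_desc, member_state_desc, number_of_quorum_votes
    FROM sys.dm_hadr_cluster_members;
    ```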

    As I said previously, the smart choice is to use Windows Server 2012 R2; there are some enhancements to the clustering subsystem that provide greater resilience.

    S. Kusen (1/28/2015)


    I will stand by the 3-node WSFC, two FCIs of SQL Server within that cluster, and then you can establish an availability group between the two FCIs.

    This won't work. The AlwaysOn group prerequisites require that each replica resides on a separate node from the other replicas in the same group. With 3 nodes and 2 FCIs, at least one node must be a possible owner of both FCIs, and therefore able to host both replicas, which breaches the AlwaysOn group ruleset.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • S. Kusen (1/28/2015)


    I will stand by the 3-node WSFC, two FCIs of SQL Server within that cluster, and then you can establish an availability group between the two FCIs.

    This won't work. The AlwaysOn group prerequisites require that each replica resides on a separate node from the other replicas in the same group. With 3 nodes and 2 FCIs, at least one node must be a possible owner of both FCIs, and therefore able to host both replicas, which breaches the AlwaysOn group ruleset.

    Good to know. Thanks for that information. I've only done cross-data center AO groups where there are two replicas in separate data centers, and thus on separate nodes. Appreciate the lesson on that.

  • Perry Whittle (1/28/2015)


    You are correct, a file share witness does not hold a full copy of the cluster database; it's really designed for stretched clusters. Since you're not planning that here, you may not need a file share witness.

    Using the 4 nodes, and I think I said this previously, you can still employ Node Majority rather than a disk-based witness by removing a vote from a specific node. In other words, Node1, Node2 and Node3 would all have a cluster vote, whereas Node4 would have its vote removed; you don't require a witness then. Quorum disk witnesses are so last century 😀

    As I said previously, the smart choice is to use Windows Server 2012 R2; there are some enhancements to the clustering subsystem that provide greater resilience.

    Brilliant Perry! Moving forward with this approach. 🙂

    Jared
    CE - Microsoft
