SQL Server 2012 Multi-Subnet Cluster Part 1



Problem

In a previous tip, Install SQL Server 2008 on a Windows Server 2008 Cluster Part 1, we saw how to install and configure SQL Server 2008 on a Windows Server 2008 failover cluster. We now have a new requirement to deploy a SQL Server 2012 instance on a multi-subnet cluster that spans different geographical locations. How do I install and configure SQL Server 2012 on a multi-subnet cluster?

Solution

Setting up a SQL Server multi-subnet cluster has been a much-talked-about topic ever since Windows Server 2008 came out, because Windows Server 2008 failover clusters support nodes that are in different subnets. However, SQL Server 2008 R2 still does not support a multi-subnet cluster: you have to implement a stretched virtual local area network (VLAN) to span SQL Server failover clusters across different geographical locations. One major challenge here is the network configuration. Most SQL Server DBAs have no control over the network topology that underpins a SQL Server failover cluster; in most cases, the topology has already been implemented and is managed by the network engineers. Because of its complexity, implementing and managing a dedicated VLAN just for database servers would be yet another reason for network engineers and DBAs to not get along well.

SQL Server 2012 is the first version of SQL Server that natively supports multi-subnet clusters, thus eliminating the need for a stretched VLAN. This allows SQL Server DBAs to implement clusters not only within the same data center for high availability, but also across data centers for disaster recovery purposes. Because SQL Server depends so heavily on the Windows Failover Cluster infrastructure, it is important to get the cluster working properly before installing SQL Server 2012. The goal of this series of tips is to prepare the SQL Server DBA for planning and implementing a multi-subnet SQL Server 2012 failover cluster instance.
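A client-side detail worth noting early: a multi-subnet clustered instance has a virtual IP address in each subnet, and depending on the cluster's DNS registration settings, clients may get more than one address back for the virtual server name. Client stacks that understand the MultiSubnetFailover connection keyword (.NET Framework 4.5 and later, and SQL Server Native Client 11) try the returned addresses in parallel, which speeds up connections after a failover. Below is a minimal sketch; the virtual network name SQLCLUSTER is a placeholder for whatever you name the clustered instance later in this series.

    # Minimal sketch, assuming a client with .NET Framework 4.5+ (whose SqlClient
    # understands MultiSubnetFailover) and a placeholder virtual network name.
    $connectionString = "Server=SQLCLUSTER;Database=master;" +
                        "Integrated Security=SSPI;MultiSubnetFailover=True"
    $connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
    $connection.Open()                        # tries the per-subnet IPs in parallel
    $cmd = $connection.CreateCommand()
    $cmd.CommandText = "SELECT @@SERVERNAME"  # confirm which node answered
    Write-Output $cmd.ExecuteScalar()
    $connection.Close()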

NOTE: The installation and configuration process outlined below needs to be done on all of the nodes in the SQL Server 2012 Failover Cluster unless otherwise specified.

Design Your Network Architecture

Proper network architecture design is key to successfully implementing a multi-subnet SQL Server 2012 failover cluster instance. Enlist the help of your network engineers to make sure that your design complies with your corporate standards and is implemented appropriately (this is also the perfect time to build good rapport with your network engineers). Below is the network diagram that I will use to implement my multi-subnet SQL Server 2012 failover cluster.

[Network architecture diagram]

In the diagram above, I have two domain controllers - DC1 and DC2 - in the same Active Directory domain. All servers acting as nodes in a failover cluster are required to reside in the same Active Directory domain, whether or not they are in the same subnet. In my environment, the domain controllers are in different network subnets, each in a dedicated Active Directory site, and replicate on a regular basis. Cluster nodes SQLCLUSTER1 and SQLCLUSTER2 have three network adapters each - one for production traffic, one for heartbeat communication and one for the iSCSI storage. I am using a product called StarWind iSCSI SAN for my clustered storage as it provides storage-level replication, or what they call data mirroring. Technically, there is no shared storage in a multi-subnet cluster because each node has its own storage subsystem; however, the storage subsystem used by one node is an exact replica of the storage subsystem used by the other nodes. Talk to your storage vendors regarding support for storage replication for use in a multi-subnet failover cluster. In my environment, storage system SAN1 is replicated over to SAN2 via a TCP/IP connection. A breakdown of the servers, storage systems and IP addresses is shown in the table below.

Hostname        IP Address      Purpose
DC1             172.16.0.100    Domain Controller/DNS Server
DC2             192.168.0.100   Domain Controller/DNS Server
SQLCLUSTER1     172.16.0.111    Cluster Node 1 - public traffic
                197.160.0.111   Heartbeat communication
                10.0.0.111      iSCSI communication to SAN1
SQLCLUSTER2     192.168.0.111   Cluster Node 2 - public traffic
                197.160.1.111   Heartbeat communication
                10.0.1.111      iSCSI communication to SAN2
SAN1            10.0.0.100      iSCSI communication to SQLCLUSTER1/Replication traffic to SAN2
SAN2            10.0.1.100      iSCSI communication to SQLCLUSTER2/Replication traffic to SAN1
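Before building anything on top of this, it helps to confirm basic cross-subnet reachability and name resolution from each node. Below is a minimal sketch, run from SQLCLUSTER1, using the addresses from the table above; adjust the names and IPs for your environment.

    # Minimal sketch: verify cross-subnet reachability and name resolution
    # from SQLCLUSTER1 (addresses taken from the table above).
    Test-Connection -ComputerName 192.168.0.111 -Count 2   # SQLCLUSTER2, public NIC
    Test-Connection -ComputerName DC2 -Count 2             # domain controller in the remote subnet
    nslookup SQLCLUSTER2                                   # DNS resolution across sites
    Test-ComputerSecureChannel -Verbose                    # this node's AD secure channel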

A bit of a side note on the network diagram: there are some discussions online about whether or not to have a dedicated network for heartbeat communication. Since the Windows Server 2008 Failover Cluster Validation Wizard only flags having a single network adapter as a warning and not an error, a cluster with just one network adapter is technically supported. That doesn't mean it's OK, though. Understand that the reason we build clusters is to make them highly available: if that one network adapter fails, your cluster is gone. SQL Server MVP and Microsoft Certified Master Brent Ozar (blog | Twitter) wrote about having multiple network adapters configured in a teamed pair, and that's a great idea. But here's my take on it: even if we have multiple NICs per node that are teamed up, how sure are we that the network switches are redundant and highly available? I've seen DR exercises where only the servers are tested but not the underlying network architecture; only when the network switches themselves fail do teams realize that they are not at all highly available. I still recommend having a dedicated network for the heartbeat communication. If the customer can guarantee that the network layer - switches, routers, etc. - is highly available, then I'll be happy with a NIC teaming implementation.

Carve Out Your Disks

Similar to the previous tip, you need to provision your clustered storage based on your requirements. Since this involves configuring replicated storage, enlist the assistance of your storage vendors and engineers to configure it properly. If you are only installing the database engine, you do not need the MS DTC clustered resource, saving you a clustered disk allocation. At a minimum, make sure you have three clustered volumes for SQL Server's use - one for the system databases, one for the data files and one for the log files. You might be wondering why, unlike in the previous tip, I didn't specify a quorum disk for this configuration. I will cover that in detail when we get to the section on Configuring the Cluster Quorum Settings.

Present Your Disks To The Cluster Nodes

Note that I didn't say "shared disks." That is because, technically, the storage subsystems are not shared by the cluster nodes, as I've previously mentioned. This can be confirmed as part of the clustered disk configuration. The process of connecting the servers to the iSCSI storage is the same as in the previous tip, with a few things to consider.

  1. In the Discovery tab of the iSCSI Initiator Properties page, the IP address of the target portal will be different on each cluster node. In the screenshot below, the IP address of the target portal for SQLCLUSTER1 is 10.0.0.100 while the IP address of the target portal for SQLCLUSTER2 is 10.0.1.100, confirming that the cluster nodes indeed do not share the same storage subsystems.

    [Screenshot: iSCSI Initiator Properties - Discovery tab]

  2. Multi-path I/O needs to be enabled on all of the cluster nodes. This allows the use of iSCSI drives with MPIO for redundancy and fault tolerance. To enable multi-path I/O, it has to be added using the Add Features Wizard (a scripted equivalent for this and the previous step is sketched after these steps).
    1. Open the Server Manager console and select Features.
    2. Click the Add Features link. This will run the Add Features Wizard.

      [Screenshot: Server Manager - Features]

    3. In the Select Features dialog box, select the Multipath I/O checkbox and click Next (and while you're at it, you can include the .NET Framework 3.5.1 Features and the Failover Clustering feature since these will be used by the SQL Server 2012 failover cluster installation).

      [Screenshot: Select Features - Multipath I/O checkbox]

    4. In the Confirm Installation Selections dialog box, click Install to confirm the selection and proceed to do the installation of the Multipath I/O feature.

      [Screenshot: Confirm Installation Selections dialog box]

    5. Once the installation is complete, open the Administrative Tools and select MPIO.

      [Screenshot: Administrative Tools - MPIO]

    6. In the MPIO Properties page, select the Discover Multi-Paths tab and click the check box for Add support for iSCSI devices. Click the Add button. This will prompt you to reboot the server.

      [Screenshot: MPIO Properties - Discover Multi-Paths tab]
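For reference, both of the steps above can also be scripted. The sketch below is a rough equivalent for Windows Server 2008 R2 using the built-in iscsicli.exe and mpclaim.exe utilities; the portal address comes from the table earlier, while the target IQN shown is purely a placeholder (run ListTargets first and substitute the real one). Feature names can be confirmed with Get-WindowsFeature.

    # Rough sketch (Windows Server 2008 R2, elevated PowerShell prompt).
    # Step 1 equivalent: register the target portal and log in to a target.
    iscsicli QAddTargetPortal 10.0.0.100   # SAN1 from this node; use 10.0.1.100 on SQLCLUSTER2
    iscsicli ListTargets                   # enumerate the IQNs the portal exposes
    iscsicli QLoginTarget iqn.2008-08.com.starwindsoftware:san1-disk1   # placeholder IQN

    # Step 2 equivalent: install the features, then claim iSCSI devices for MPIO.
    Import-Module ServerManager
    Add-WindowsFeature NET-Framework-Core, Multipath-IO, Failover-Clustering
    mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"   # -r reboots the node when done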

Once the cluster nodes have been rebooted, we can now proceed to configure the disks.

Initialize And Format The Disks

I want to walk through the disk initialization process even though it is something most Windows administrators already know, because it is a great way to validate whether the storage replication works per your vendor's specification. Disk configuration changes made on one of the cluster nodes should be replicated over to the other nodes in your cluster.

Any new disk added to a Windows Server 2008 system has to first be brought online and initialized before any partitions can be created on it. This can be done via the Disk Management console from within Server Manager.

  1. From within Server Manager, expand the Storage section and select Disk Management.

    [Screenshot: Server Manager - Disk Management]

  2. Right-click any of the disks that you want to configure and select Online. Once the disk is brought online, it is marked as Not Initialized.

    [Screenshot: disk online, marked as Not Initialized]

  3. To initialize, right-click on the disk and select Initialize Disk. The Initialize Disk dialog box will appear.

    [Screenshot: Initialize Disk menu option]

  4. In the Initialize Disk dialog box, make sure that the correct disk is selected for initialization and then choose whether to initialize the disk using the MBR or GPT partition styles. The most common partition style for disks used in SQL Server instances is MBR, which is selected by default. Click OK to select the MBR partition style.

    [Screenshot: Initialize Disk dialog box - MBR partition style selected]

  5. To create a disk partition, right-click on the unallocated space and select New Simple Volume.

    [Screenshot: New Simple Volume context menu]

  6. In the Welcome to the New Simple Volume Wizard dialog box, click Next.

    [Screenshot: New Simple Volume Wizard welcome page]

  7. In the Specify Volume Size dialog box, enter the volume size and click Next.

    [Screenshot: Specify Volume Size dialog box]

  8. In the Assign Drive Letter or Path dialog box, specify the drive letter you would like to use and click Next.

    [Screenshot: Assign Drive Letter or Path dialog box]

  9. In the Format Partition dialog box:
    • Make sure that the file system selected is NTFS.
    • To follow Microsoft best practices on allocation unit size, select 64K.
    • In the Volume label: text box, enter the appropriate name. In my example, I used M_Cluster_Drive. Take note of this volume label because we will use it to verify the configuration on the other cluster node.

    Click Next.

      [Screenshot: Format Partition dialog box]

  10. In the Completing the New Simple Volume Wizard dialog box, review the settings you have made and click Finish.

      [Screenshot: Completing the New Simple Volume Wizard]

Repeat these steps on all of the disks that you want to configure as part of your cluster. If you have several disks to prepare, the whole sequence can also be scripted, as sketched below.
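Here is a minimal sketch that drives diskpart from PowerShell, mirroring steps 2 through 9 above. The disk number, volume label and drive letter are assumptions from my example, so confirm the disk number with diskpart's list disk command before running anything destructive; the script also assumes C:\Temp exists.

    # Minimal sketch: script the online/initialize/format sequence with diskpart.
    # Disk number, label and drive letter are assumptions - verify with "list disk".
    $diskpartScript = @"
    select disk 1
    online disk
    attributes disk clear readonly
    convert mbr
    create partition primary
    format fs=ntfs unit=64k label=M_Cluster_Drive quick
    assign letter=M
    "@
    $diskpartScript | Out-File -Encoding ASCII C:\Temp\init-disk.txt
    diskpart /s C:\Temp\init-disk.txt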

Verify The Storage Replication Process

As mentioned in the previous section, the reason I walked through everything from bringing the disk online to formatting a partition is to verify that the storage replication works per the vendor's specification. To verify, simply bring all of the disks on the other cluster nodes online. If the storage replication works, you will see that the volume names we assigned have been propagated to all of the cluster nodes. In my example, I named my clustered disks M_Cluster_Drive, L_Cluster_Drive and S_Cluster_Drive on SQLCLUSTER1. After bringing the disks online, the same volume properties appeared on SQLCLUSTER2. The drive letters were not the same, though, but we will be removing them anyway since we will define the drive letters from within the Windows Failover Cluster.

[Screenshot: volume labels replicated to SQLCLUSTER2]
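To compare the volumes between nodes without clicking through Disk Management, here is a quick sanity check from PowerShell (a minimal sketch; the label pattern matches the example names above):

    # Minimal sketch: list volume labels on this node so the output can be
    # compared between SQLCLUSTER1 and SQLCLUSTER2.
    Get-WmiObject Win32_Volume |
        Where-Object { $_.Label -like "*_Cluster_Drive" } |
        Select-Object Label, DriveLetter,
            @{ Name = "SizeGB"; Expression = { [math]::Round($_.Capacity / 1GB, 1) } }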

    There are other ways to verify the storage replication process as this is totally dependent on the features of your storage platform. This is just a simple way to verify if it indeed works as expected. Work with your storage vendors and engineers to make sure that this verification step has been done and that all potential issues have been addressed prior to moving to the next step. It will be more difficult to troubleshoot and address issues down the road as more components are added to the stack.

    Next Steps
    • Review the previous tips on Install SQL Server 2008 on a Windows Server 2008 Cluster Part 1, Part 2, Part 3 and Part 4.
    • Download and install an Evaluation copy of Windows Server 2008 R2 for this tip.
    • Start working on building your test environment in preparation for setting up a SQL Server 2012 multi-subnet cluster on Windows Server 2008 R2. This is a great opportunity to learn more about networking concepts and fundamentals as a SQL Server DBA.





    About the author
    Edwin M Sarmiento is a Microsoft SQL Server MVP and Microsoft Certified Master from Ottawa, Canada specializing in high availability, disaster recovery and system infrastructures.





    Comments For This Article




    Tuesday, September 3, 2019 - 7:24:47 PM - bass_player

    Mayank,
    Did you check the network? A simple PING and TELNET test will tell you if a firewall rule is blocking traffic to the virtual IP. Start with that.
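    For example (a minimal sketch; 10.0.0.27 is the virtual IP from your description and 1433 is an assumed default instance port - adjust as needed):

        ping 10.0.0.27
        telnet 10.0.0.27 1433    # requires the Telnet Client feature to be installed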


    Monday, September 2, 2019 - 10:05:10 AM - Mayank Singh

    I have set up a two-node cluster. Following are the IP addresses that I am using:
    DC-10.0.0.16
    node1-10.0.0.15

    node2-10.0.0.10
    I created the failover cluster on node1. For failover cluster I gave IP- 10.0.0.32
    Then I installed sql enterprise on node1. For that I gave IP- 10.0.0.27
    Then I joined node 2 to sql server.
    The failover cluster is working properly, but the problem is when I try to remotely access MS SQL, I am not able to do that. I am using the SQL IP (10.0.0.27) to connect to my SQL Server. Nothing is happening; I am getting an error. Neither node1's IP nor node2's is working, and neither is the cluster's IP (10.0.0.32).
    Help me with this. How can I remotely access my SQL cluster?


    Saturday, January 12, 2019 - 12:18:39 PM - bass_player

    sakamboh,

    It is very difficult to provide guidance without knowing more about the environment and the requirements.

    1. Is this a SQL Server 2012 Failover Clustered instance?
    2. Why is there a need to replicate data from SAN1 to SAN2?
    3. How is data from SAN1 being replicated to SAN2?
    4. How many cluster nodes do you have on the primary site?
    5. What are your DR recovery objectives and service level agreements?

    Saturday, December 22, 2018 - 6:12:34 AM - sakamboh

     Hi All,

    I have a SQL 2012 cluster running on Windows 2008 on our primary site. The data is stored on SAN1 on the primary site and is already getting replicated to our SAN2 on the DR site. The requirement is to add a node in the DR site which can run from SAN2 in case the primary site has issues. Please advise how I can add a node from the DR site to the existing running environment. Your support in this regard will be much appreciated.


    Monday, May 30, 2016 - 3:55:52 AM - Firoz Attar

    Thanks a lot for the information.

    Do you know how large a vSAN disk StarWind supports for mirroring? My application recommends a disk size of 200 GB for the database.

    Do customers use vSAN mirroring in production?


    Friday, May 6, 2016 - 1:21:26 PM - bass_player

    You can use this guide to configure StarWind Virtual SAN on two nodes across data centers. The only challenge you will face here is the network stack.

    https://www.starwindsoftware.com/technical_papers/Starwind_Virtual_San_Installation_and_Configuration_of_Hyper_Converged_2_Nodes_With_Hyper_V_Cluster.pdf

    Here's a more comprehensive document on building the SQL Server failover clustered instance with StarWind Virtual SAN in a multi-subnet configuration:

    https://www.starwindsoftware.com/installing-and-configuring-sql-server-2014-multi-subnet-cluster-on-windows-server-2012-r2


    Friday, May 6, 2016 - 12:13:47 PM - Firoz Attar

    Can you share a document, or a link to one, on how to configure StarWind vSAN on two nodes across two data centers, so that geo-redundancy for the vSAN is also achieved?


    Friday, December 18, 2015 - 10:16:22 AM - bass_player

    I believe you've asked this question as well on the Microsoft forums. Great that you already got it resolved.

    https://social.technet.microsoft.com/Forums/windowsserver/en-US/97631e41-b6ff-4a69-9713-b544b4be359b/multi-site-clusterring-setup-with-different-subnet?forum=winserverClustering#bc7e9b77-1c90-47c2-b8b7-5a5ccfc8f34b


    Monday, November 23, 2015 - 8:24:05 AM - ananda

    I have configured multi-subnet WSFC clustering and multiple IPs are displayed in my core cluster resource group.

    I am not using shared disks like SAN or NAS storage. I also have not created a file share witness quorum for failover in the setup below, and have not enabled the Multipath I/O feature at the OS level.

    Total of three (Hyper-V) nodes: 2 nodes for local HA and 1 node for DR. OS: Windows 2012 Standard Edition and SQL 2012.

    I tried manually moving the cluster resource to the DR datacenter in another subnet:

    Select the cluster name -> More Actions -> Move Core Cluster Resources -> Select Node -> choose the DR node.
    It is working fine with no issues. The DR node's cluster IP address comes online (green).

    But the local datacenter's core cluster resource failed with the errors below. I think this is the normal behavior of a multi-subnet cluster group, working per the cluster's OR dependency logic: which cluster IP comes online depends on which node currently hosts the group.

    Please correct me if I have understood this wrong.

    Warning: No matching network interface found for resource 'Cluster IP Address' IP address 'XX.XX.44.25' (return code was '5035'). If your cluster nodes span different subnets, this may be normal.

    Error: Cluster resource 'Cluster IP Address' of type 'IP Address' in clustered role 'Cluster Group' failed. Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it. Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.

    Thanks


    Friday, July 25, 2014 - 11:31:14 AM - bass_player

    Hi Peter,

    With Windows Server Failover Clustering, the SAN is just a component in the list of cluster resources. For a multi-instance (previously active/active) cluster, both servers can own the cluster resources (for example, Database1 and Database2). But since a cluster resource group cannot have ownership of a clustered resource in another group, there will be no overlapping between the storage. Here's how what you are describing would look: a Windows Server Failover Cluster with 2 nodes will run Database1 with Storage1 and Database2 with Storage2. Database1 with Storage1 can be owned by either of the nodes in the cluster; same thing with Database2 with Storage2. But Storage1 can never be shared with Database2, nor Storage2 with Database1. That is the main purpose of the cluster resource groups. Of course, since the SAN appears as a logical storage entity in the cluster, what happens in the physical layer is abstracted. It's really a matter of proper configuration to make sure that both the physical and logical architecture are consistent to avoid storage-level issues.


    Friday, July 25, 2014 - 7:54:39 AM - Peter

    Thank you for your detailed explanation; it's so useful.

    Other databases provide an architecture where Database1 placed in Server1 can use Storage1 (and it is recommended to do it) and Database2 placed in Server2 can use Storage2 and so on, although the different SANs are constantly replicating their data.

    Is this possible with SQL Server technology? In other words, can we get an active-active cluster configuration at the SAN level as well?


    Thursday, July 24, 2014 - 6:40:52 PM - bass_player

    That really depends on how the SAN replication is configured. Since each cluster resource group will have its own set of shared storage, there will be a one-to-one mapping between the storage on the production data center versus the storage on the DR data center. The SQL Server instance will reside on one cluster resource group with its own set of shared storage that gets replicated on the other side and vice versa. You cannot share LUNs between two different cluster resource groups so a write on one LUN in the production data center is different than a write on the LUN in the DR data center.

    I wouldn't do a two-way storage replication for a multi-instance (previously active/active) cluster, especially on a multi-subnet cluster where the other node is on a different data center. The goal for having the other node is for DR purposes. Plus, the more features you require for your SAN, the more expensive it becomes.


    Thursday, July 24, 2014 - 1:35:33 PM - DBA

    Does this configuration allow for an active/active configuration?

    If so, what would happen if the SAN link went down between the 2 sites and both servers got writes? What would happen when the link came up?


    Wednesday, May 28, 2014 - 10:06:23 PM - bass_player

    Hi Charles,

    Traditional SQL Server clustering requires shared storage which, in the case of a geographically-dispersed cluster, needs to be replicated across the nodes. With the introduction of SQL Server 2012 Availability Groups, you can implement a geographically dispersed Windows cluster without the need for shared storage, since the databases are the ones getting replicated and not the storage.

    To learn more about SQL Server clustering, check out these articles. While they were written for SQL Server 2008 on Windows Server 2008, they still apply to SQL Server 2012 on Windows Server 2012

    http://www.mssqltips.com/sqlservertip/1687/install-sql-server-2008-on-a-windows-server-2008-cluster-part-1/

    http://www.mssqltips.com/sqlservertip/1698/install-sql-server-2008-on-a-windows-server-2008-cluster-part-2/

    http://www.mssqltips.com/sqlservertip/1709/install-sql-server-2008-on-a-windows-server-2008-cluster-part-3/

    http://www.mssqltips.com/sqlservertip/1721/install-sql-server-2008-on-a-windows-server-2008-cluster-part-4/


    Tuesday, May 27, 2014 - 4:23:12 PM - Charles

    Edwin -

    Your article here is fantastic and it has just prompted a number of questions.  Here's what I'm wondering.

    To do clustering, do you have to be using storage on SANs where the data is being replicated between the two SANs?

    Is the clustering of SQL Server really a clustering of the Windows OS, with SQL Server just happening to reside on the mirrored/replicated partitions?

    So as to not clutter up your comments section and since I am a newbie at this type of thing, where is a good place to start learning about SQL Server Clustering?

    Thank you,

    Charles


    Friday, May 31, 2013 - 4:28:20 PM - bass_player

    Hi Carlo,

    Thanks for the feedback. I agree with you on the fact that dedicated heartbeat networks don't exist that much. The reason I recommend a dedicated network is because of high availability. The main reason you implement failover clustering is because you want high availability. Any component that becomes a single point of failure needs to be addressed. I have yet to see a DR exercise where everything - from servers to network switches to the entire data center facilities - is tested. But that is the ultimate goal.


    Friday, May 31, 2013 - 1:19:09 PM - Carlo

    Rethink your networking strategy.  Current server architectures such as UCS and VFlex use single or dual 10/40/80 Gb/s NICs on standalone blades that are provisioned into vNICs or trunked to the OS level.  There is no other network, no other NICs, no other wires (we are talking about 10GB-KR backplane traces only).  Dedicated heartbeat networks can't exist, dedicated heartbeat NICs are an illusion, and dedicated heartbeat subnets are a last-gasp attempt to hold on to some security blanket that doesn't exist.  Your cluster availability is dependent on the upstream network, period.  Sorry for the wake-up call, but there is a reason why MS gave up on it as a requirement and we all should too.


    Tuesday, October 2, 2012 - 2:12:21 PM - Raofu

    I have read this article and completed everything done here for one node, but with an FC SAN. This SAN configuration (replication) allows only one node to see the SAN drive at a given time; this is accomplished by making a particular node the primary.

    If one node fails, I will make the other node primary in the SAN and want to run SQL Server there.

    How can I start SQL 2012 clustering from this point? Or is it the same as 2008, as described in Part 1 to Part 4?


    Monday, October 1, 2012 - 3:51:01 PM - bass_player

    You can simply perform an in-place upgrade of your existing SQL Server 2008 R2 cluster to SQL Server 2012. This will convert your SQL Server 2008 R2 instance to SQL Server 2012. You can also run another instance on the existing cluster, but that is not without impact on the current production environment. Remember that this is no different from running multiple instances on a single server, aside from the fact that you have the flexibility of running one instance on one node and another on the second node. But if one of the nodes crashes, you are back to the concept of having multiple instances on a single server.


    Monday, October 1, 2012 - 2:47:54 PM - Dave

    Hi all, 
    I need to know if the following is possible or not.

    I have a SQL 2K8 R2 cluster running, active/passive, with OS W2K8 64-bit.

    I have two questions:

    1. Can I build SQL 2012 on it, migrate 2K8 to it, and then get rid of 2K8?

    What are the steps I need to follow?

    2. Is it possible to run another instance from the cluster (without affecting the current prod environment)?
    What are the steps? Anything special I need to consider? Any performance issues with #2?

    Please let me know, thanks.


    Wednesday, July 18, 2012 - 6:26:52 PM - Simon Tran

    Hi,

    I'm sorry - I used the RAID 1 mirrored disk option instead of the HA option. I'm good to go now. Thanks for your quick response.

    Simon


    Wednesday, July 18, 2012 - 4:12:33 PM - bass_player

    Simon,

    What do you mean StarWind does not support failback? What issues are you experiencing? Any specific error message from the StarWind log file? How did you configure your iSCSI SAN replication? I used StarWind for this series of articles (hopefully the succeeding ones get published soon) and everything works fine for me. The very first thing that I do after installation is failover and failback and I've done it more than a dozen times. 


    Wednesday, July 18, 2012 - 1:45:06 PM - Simon Tran

    Nice article. Thanks Edwin!

    I'm experimenting with a multi-subnet cluster for SQL 2012. I've gotten to a point where I can fail over to a node on the second subnet, but it seems like the StarWind software doesn't support failback yet. Would anybody have any suggestions, perhaps a different iSCSI SAN software?

    Thanks

    Simon


    Monday, July 16, 2012 - 10:34:50 PM - bass_player

    SQL Server DR implementation will depend on your requirements, which should guide the selection of which technology to use. If you intend to use EMC's SRDF technology, how will you be using it - for multi-subnet clusters, or simple SAN replication? It is important to first define your recovery objectives prior to implementing the technology.


    Thursday, July 12, 2012 - 6:22:18 AM - ESL

    Hi,

    How do you build the DR SQL Server with SRDF technology?


    Wednesday, July 11, 2012 - 10:26:21 AM - bass_player

    An availability group listener appears as a cluster resource. This means that, for a multi-subnet cluster, you will have multiple IP addresses for each virtual server name. This will be covered in future parts of this article where I'll talk about creating the Windows Failover Cluster and configuring network settings in the cluster.

    That being said, it is important to understand and define your high availability and disaster recovery requirements prior to designing your infrastructure. Let your requirements dictate your design. My design philosophy is to keep it simple while addressing the requirements. 
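    In the meantime, once the cluster and listener exist, a quick way to inspect those per-subnet IP resources from PowerShell is sketched below; the network name resource shown is hypothetical, as resource names vary per cluster.

        # Minimal sketch: list IP address resources and show the OR dependency
        # of a network name resource (the resource name below is hypothetical).
        Import-Module FailoverClusters
        Get-ClusterResource | Where-Object { $_.ResourceType -like "*IP Address*" }
        Get-ClusterResourceDependency "SQL Network Name (SQLCLUSTER)"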


    Wednesday, July 11, 2012 - 9:41:26 AM - Earl Cooper

    I am about to read the article and look forward to it eagerly, but I am hoping the author will follow up and discuss how an availability group listener should be configured in SQL Server 2012 if nodes in the cluster are in different subnets. Does the listener need more than one IP address in DNS? What is involved?














