Continuously Available File Server (CAFS) CLUSTER DOCUMENTATION
Contents
1 – Server components and network configuration
2 – Quorum disk configuration
2.1 – Quorum disk mapping on CAFS1
2.2 – Quorum disk mapping on CAFS2
3 – Cluster feature deployment
3.1 – Add the Cluster Feature on CAFS1
3.2 – Add the Cluster Feature on CAFS2
4 – Cluster configuration validation
5 – Creation of the CAFS cluster
1 – Server components and network configuration
Server components review:
Server name: CAFS1
Operating system: Windows Server 2012 R2
CPU: 2 vCPU
RAM: 4 GB
Network interfaces: 2 (Production / Private)
Disks: 1 (System drive – 60 GB)
Server name: CAFS2
Operating system: Windows Server 2012 R2
CPU: 2 vCPU
RAM: 4 GB
Network interfaces: 2 (Production / Private)
Disks: 1 (System drive – 60 GB)
NetBIOS names:
CAFS = cluster name
CAFS1 = node 1 name
CAFS2 = node 2 name
The cluster needs a minimum of two separate networks:
– 1 public network interface for clustered services
– 1 private network for cluster communication (heartbeat)
Public IP Addresses (network name “Production”):
10.5.239.107/24 CAFS1
10.5.239.108/24 CAFS2
10.5.239.109/24 CAFS (cluster virtual IP)
Private IP Addresses for cluster communication (network name “Private”):
172.20.10.1 CAFS1
172.20.10.2 CAFS2
The network card named “Ethernet” will be joined to the public (production) network and the network card named “Ethernet2” to the private network used for cluster communication.
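For reference, the same addressing can be applied from an elevated PowerShell prompt. A minimal sketch for CAFS1, assuming the adapter names “Ethernet” and “Ethernet2” listed above and a /24 mask on the private network (the private mask is not stated in this document); the default gateway and DNS servers are configured separately:

    # Public (Production) interface of CAFS1
    New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 10.5.239.107 -PrefixLength 24
    # Private (heartbeat) interface of CAFS1 - /24 prefix is an assumption
    New-NetIPAddress -InterfaceAlias "Ethernet2" -IPAddress 172.20.10.1 -PrefixLength 24

Repeat on CAFS2 with 10.5.239.108 and 172.20.10.2.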
2 – Quorum disk configuration
In a VMware environment, the quorum disk is mapped and shared as “Raw Device Mapping” (RDM) storage.
The quorum is a shared virtual SCSI disk on a shared SCSI bus amongst the virtual machines.
The shared disk is small (witness role only, no storage): 1 GB is required in our scenario.
The shared disk has been created in the SAN with the name “cafs-quorum” (storage pool n°3).
2.1 – Quorum disk mapping on CAFS1
NOTE: the virtual machine must be shut down before mapping the quorum shared disk.
In the virtual machine settings, click on “Add” at the top of the window.
Select “Hard disk” and click on “Next”.
Select “Raw device mappings” and click on “Next”.
Select the “DGC Fibre Channel disk” pointing to the quorum shared disk with the size of 1 GB then click on “Next”.
Select “Store with the virtual machine” and click on “Next”.
Select “Virtual” for the compatibility mode and click on “Next”.
Set the virtual device node to “SCSI (1:0)” (a new SCSI controller will be created) and click on “Next” then click on “Finish”.
Back in the virtual machine settings, you will see a new SCSI bus and a new hard drive with the description “Mapped RAW LUN”.
Click on the new SCSI controller and set “SCSI Bus Sharing” to “Physical: virtual disks can be shared between virtual machines on any server”.
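The same mapping can be scripted with VMware PowerCLI. A sketch, assuming PowerCLI is already connected to the vCenter server; the naa device path below is a placeholder for the “cafs-quorum” LUN:

    # Attach the shared LUN to CAFS1 as a virtual-compatibility RDM (placeholder device path)
    $vm = Get-VM -Name "CAFS1"
    $rdm = New-HardDisk -VM $vm -DiskType RawVirtual -DeviceName "/vmfs/devices/disks/naa.xxxxxxxx"
    # Move the new disk to a dedicated controller with physical bus sharing
    New-ScsiController -HardDisk $rdm -Type VirtualLsiLogic -BusSharingMode Physical

PowerCLI assigns the virtual device node automatically; verify in the VM settings that it matches the “SCSI (1:0)” placement described above.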
Boot the server CAFS1. In Disk Management you should now see the quorum disk (Disk 1 – 1.00 GB). Right-click the disk and select “Initialize Disk”.
Select “MBR” for the partition table then click “OK”.
Back in Disk Management, right-click the quorum disk and create a new simple volume.
On the welcome screen, click “Next”. Use the maximum disk size, then click “Next”.
Select “Do not assign a drive letter or drive path” then click on “Next”.
Quick format the disk and set the volume label to “QUORUM”. Click “Next” and then click on “Finish”.
Back in Disk Management, the disk is now online and initialized with an NTFS volume.
The quorum disk is now ready for a cluster deployment on CAFS1.
Close the disk management window.
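The same disk preparation can be done in PowerShell on CAFS1, assuming the quorum LUN shows up as disk number 1 as above:

    # Bring the new quorum disk online and initialize it with an MBR partition table
    Set-Disk -Number 1 -IsOffline $false
    Initialize-Disk -Number 1 -PartitionStyle MBR
    # One partition spanning the disk, no drive letter, quick-formatted as NTFS
    New-Partition -DiskNumber 1 -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "QUORUM"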
2.2 – Quorum disk mapping on CAFS2
NOTE: the virtual machine must be shut down before mapping the quorum shared disk.
In the virtual machine settings, click on “Add” at the top of the window.
Select “Hard disk” and click on “Next”.
Select “Use an existing virtual disk” and click on “Next”.
Browse to the “CAFS1” virtual machine storage and select the disk “CAFS1.vmdk” with the size of “1 GB” and click “Open”.
Confirm the path to the VMDK file and click “Next”.
Set the virtual device node to “SCSI (1:1)” (a new SCSI controller will be created) and click on “Next” then click on “Finish”.
NOTE: it is very important that the SCSI bus ID (the first number) is the same on every cluster node sharing the cluster disk, and that the disk ID (the second number) is unique among those nodes. In our scenario CAFS1 uses the SCSI configuration “1:0”, so on the second node CAFS2 we use “1:1” (same bus, different disk ID).
Back in the virtual machine settings, you will see a new SCSI bus and a new hard drive with the description “Mapped RAW LUN”.
Click on the new SCSI controller and set “SCSI Bus Sharing” to “Physical: virtual disks can be shared between virtual machines on any server”.
Now you can boot the server. In Disk Management you will see the QUORUM disk initialized but “offline”.
This is the normal standby state for the cluster: the disk should not be brought online from the Disk Management console, but only by the Failover Cluster Manager console.
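As on CAFS1, the attach can be scripted with VMware PowerCLI. A sketch, assuming the datastore path of the RDM pointer file created for CAFS1 (the datastore name below is a placeholder):

    # Attach CAFS1's existing RDM pointer file to CAFS2 (placeholder datastore path)
    $vm2 = Get-VM -Name "CAFS2"
    $disk2 = New-HardDisk -VM $vm2 -DiskPath "[datastore] CAFS1/CAFS1.vmdk"
    # Same physical bus sharing requirement as on CAFS1
    New-ScsiController -HardDisk $disk2 -Type VirtualLsiLogic -BusSharingMode Physical

Again, check in the VM settings that the disk landed on the shared bus with a unique disk ID (“SCSI (1:1)” here).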
3 – Cluster feature deployment
The Failover Clustering feature will be installed on the CAFS1 and CAFS2 servers.
3.1 – Add the Cluster Feature on CAFS1
Open Server Manager. Click on “Manage” then click on “Add Roles and Features”.
Click on “Next” at the Welcome Screen.
Select “Role-based or feature-based installation” then click on “Next”.
The destination server is the local server by default, in this case “CAFS1”. Click on “Next”.
At the Roles page, do not select any role. Click on “Next”.
At the Features page, select “Failover Clustering”.
Add the requested features: tick “Include management tools”, then click on “Add Features”.
Back on the Features page, click on “Next”.
Click on “Install”.
Wait for the end of the installation process then click on “Close”.
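The feature can also be installed from PowerShell. Run it locally on CAFS1, then once more with -ComputerName to cover CAFS2 (assuming remote management is enabled):

    # Failover Clustering plus its management tools on the local node (CAFS1)
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
    # Same installation pushed to the second node
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools -ComputerName CAFS2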
3.2 – Add the Cluster Feature on CAFS2
Repeat the same step-by-step process done for CAFS1 in chapter 3.1 (or use the PowerShell alternative above, which already covers CAFS2 with the -ComputerName parameter).
4 – Cluster configuration validation
Before you begin the cluster validation, make sure that both servers CAFS1 and CAFS2 can communicate over the dedicated cluster network and the public network, that the shared quorum disk is visible in Disk Management, and that the cluster feature has been successfully installed on both nodes.
Open the Server Manager on CAFS1. From the “Tools” menu open the “Failover Cluster Manager”.
In the Failover Cluster Manager, “Management” section, click on “Validate Configuration”.
At the Welcome Screen click on “Next”.
Add the two servers “CAFS1” and “CAFS2” then click on “Next”.
Check “Run all tests” then click on “Next”.
Click on “Next” to begin all tests. Verify that all tests complete successfully.
You can read “The configuration appears to be suitable for clustering”. Tick the “Create the cluster now using the validated nodes” checkbox then click on “Next”.
The cluster will now be created.
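The same validation can be launched from PowerShell on either node:

    # Run the full validation test suite against both nodes (a report file is produced)
    Test-Cluster -Node CAFS1, CAFS2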
5 – Creation of the CAFS cluster
After you run the “Validate Configuration” wizard, the cluster creation wizard should pop up. Click on “Next” to begin the installation.
The Cluster Name is “CAFS” and the virtual IP address is set to “10.5.239.109”. Click on “Next”.
Tick the checkbox “Add all eligible storage to the cluster” then click on “Next”.
Wait for the cluster to be created.
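Equivalently, the cluster can be created from PowerShell:

    # Create the CAFS cluster with its virtual IP; eligible storage is added by default
    New-Cluster -Name CAFS -Node CAFS1, CAFS2 -StaticAddress 10.5.239.109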
When the cluster creation is finished, you can now see a new entry named “CAFS” in the Failover Cluster Management window. If you click on “Nodes” you can see the two servers “CAFS1” and “CAFS2” up and running.
If you click on “Disks” under the “Storage” menu you can see the quorum disk available and which server owns the cluster shared disk.
You can ping the virtual IP “10.5.239.109” and check that the cluster is responding to the network requests.
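The same checks can be run from PowerShell:

    # Both nodes should report "Up"
    Get-ClusterNode -Cluster CAFS
    # The quorum disk and the node that currently owns it
    Get-ClusterResource -Cluster CAFS |
        Where-Object ResourceType -eq "Physical Disk" |
        Format-Table Name, State, OwnerNode
    # The cluster virtual IP should answer on the network
    Test-Connection 10.5.239.109 -Count 2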
In Active Directory, a virtual computer object has been created with the cluster’s name (CAFS).
A new host (A) record has also been automatically added to the DNS zone.
The cluster deployment is now finished. We can now add clustered services (file server, DHCP, etc.).