Situation
It is not required, but it is very good practice to keep management networks separate from VM networks. Since I had four ports on each server, I went with the following design:
Port 1 on each server — Management Network
This is what I provided to Proxmox at installation time (192.168.3.x range)
A Linux bridge (vmbr0) was created by Proxmox automatically
Port 2 on each server — VM Network (Data Plane)
In my case, this is connected to a different switch/subnet (192.168.4.x range)
We will now create a Linux bridge to use this port
Linux Bridge: “A Linux Bridge allows the physical network interface to act as a virtual switch inside our server. This helps connect our virtual machines to different network segments.”
Log on to one node and go to the Network tab (this process needs to be done on all nodes):
I am going to use eno2, which is connected to a separate subnet, and dedicate this network to VMs. The first bridge, created by Proxmox, is named vmbr0, so we will name the new one vmbr1. In the Comment field, I put ‘vm-net4’ to represent the usage and subnet, but you can put any description here:
Leave the Gateway blank here. Click Apply Configuration; otherwise, the changes will only take effect after the next restart.
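For reference, the GUI writes this bridge into /etc/network/interfaces. A minimal sketch of what the resulting stanza looks like, assuming the NIC is eno2 as above (the exact file on your nodes may differ):

```bash
# /etc/network/interfaces (excerpt) -- sketch matching the GUI steps above
auto vmbr1
iface vmbr1 inet manual          # no IP/gateway on the bridge itself
        bridge-ports eno2        # the physical NIC dedicated to VM traffic
        bridge-stp off
        bridge-fd 0

# "Apply Configuration" in the GUI amounts to reloading the network config:
# ifreload -a
```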
Next time we create a VM, we will pick the new network we just created:
We can also change it for existing VMs:
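The same change can be made from the shell with qm set; a sketch, using a hypothetical VM ID of 100:

```bash
# Point an existing VM's first network device at the new bridge
# (VM ID 100 is a placeholder -- substitute your own)
qm set 100 --net0 virtio,bridge=vmbr1
```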
Even though we have not set up the cluster yet, we can define shared storage at the Datacenter level rather than the node level. I have access to an NFS share on my Synology, but as we can see below, there are many options to pick from:
In the Content field above, we decide what this shared storage can be used for: VM disks, ISO images, etc. We defined one for virtual machine disks above; let us define one for backups as well:
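The same storage definitions can be added with the pvesm CLI; a sketch in which the storage names, server IP, and export paths are all placeholders for your environment:

```bash
# Shared NFS storage for VM disks (names, IP, and paths are placeholders)
pvesm add nfs nfs-vm --server 192.168.3.50 --export /volume1/proxmox-vm --content images

# A second NFS storage dedicated to backups
pvesm add nfs nfs-backup --server 192.168.3.50 --export /volume1/proxmox-backup --content backup
```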
With shared storage configured, we can now leverage it to seamlessly move VMs between nodes in the cluster. This flexibility is a key benefit of clustering, as it allows for workload balancing and high availability.
At this stage, we have three separate Proxmox hosts. We have set up two networks on each and added shared storage to one of them. We are now ready to create the cluster.
Let us create the cluster now:
Pick a cluster name and give it the node's IP:
Now we have a cluster with one node. By clicking the Join Information button, we can grab the details we need to take to the other nodes to add them to the same cluster:
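On the command line, the same flow would look roughly like this (the cluster name and IP are placeholders):

```bash
# On the first node: create the cluster
pvecm create prx-cluster

# On each additional node: join by pointing at an existing member's IP
pvecm add 192.168.3.10

# Verify membership from any node
pvecm status
```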
Now we can refresh the page and see all nodes in the cluster:
Also, in the view above, we can see that the shared storage is accessible to all nodes even though we added it on only one node.
Virtual Machine Migrations
We can test our cluster by migrating a VM:
This VM was created using a local disk. Migrating a VM with a local disk:
It took a few minutes to migrate over the 1 Gbit network. The VM stayed responsive during this time.
Now we can see that the VM has moved to the prx-2 node:
We can also verify this from the logs at the bottom of the screen:
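The same migration can be triggered from the shell; a sketch assuming a hypothetical VM ID of 100 and the target node prx-2:

```bash
# Live-migrate a running VM whose disk sits on node-local storage;
# --with-local-disks copies the disk to the target node as part of the move
qm migrate 100 prx-2 --online --with-local-disks
```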
Now let us create a VM using shared storage. The process is the same except that we select a disk from the shared storage:
Migrating a VM with shared storage:
It is very quick, since only the VM's memory state has to be transferred; the disk already lives on storage that every node can reach:
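From the CLI this is the same command minus the local-disk flag, since no disk data needs to move (the VM ID is again a placeholder):

```bash
# Live-migrate a shared-storage VM; only RAM state is transferred
qm migrate 101 prx-2 --online
```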
So far we have migrated VMs manually, and there are use cases for doing so. But if we want VMs to migrate automatically when there is an issue with a given node, then we need High Availability.
- Click Add under Resources
- Select the VM to add to HA
We can leave Max Restart and Max Relocate at 1. We do not want the migration to be retried multiple times; at that point we would rather investigate the underlying issue first. The VM was stopped when it was added, so it is starting now because the Requested State above was ‘started’:
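The equivalent ha-manager command would look something like this (the VM ID is a placeholder):

```bash
# Register the VM as an HA resource with the settings described above
ha-manager add vm:100 --state started --max_restart 1 --max_relocate 1

# Check HA resource and node states
ha-manager status
```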
Now, if something happens to the node on which the VM is running, it will be moved to another node automatically (hence HA). I tested this by unplugging the network cable from the server the VM was running on, and the VM moved to another node.
While High Availability (HA) is a powerful feature with clear use cases, it’s important to assess whether it’s necessary based on specific requirements. In scenarios where redundancy is already built into the application layer, such as Elasticsearch or Kafka clusters, we might prioritize high-speed local storage over fast VM migration, even if that means reduced mobility for the virtual machines. Ensuring data consistency often takes precedence in such cases. Conversely, for services like NGINX acting as a load balancer, HA can be a valuable option, as configuration updates are infrequent and the service benefits more from uptime and reliability than instantaneous data replication.