Deploying a guest network with Client/AP isolation

A wireless Guest Network is a very common feature on access point devices. In this article, I’ll explain why we need such a network and how exactly it is implemented in a typical access point.

As the name suggests, a Guest Network is for guests. Nowadays almost all organisations, if not all, provide a wireless network in the office for their employees to connect to and access various network resources. However, when external customers, employees’ friends or relatives, interviewees, etc. come to visit the office, how can the organisation provide them with internet connectivity? Handing them the employee network credentials is out of the question; that would be a grave security concern. We need a way to allow guests internet access while denying them any access to LAN resources.

Let’s consider the above topology. A real office topology can be far more complex or simpler depending on the size of the organisation. My intention here is just to make it complex enough to demonstrate the concept.

An employee connected to emp_ssid should be able to reach server1, server2, server3, and server4, as well as the internet. The firewall can do the routing when server2 or server3 is accessed, or there can be an L3 switch in the mix, or some router for routing. Let’s assume some magic box routes between the resources on the switch1/switch2 side and the switch3 side. With this assumption, let’s state the requirements for the Guest Network (guest_ssid).

  1. A guest should be able to connect to the guest_ssid network with the authentication configured on guest_ssid.
  2. A guest shouldn’t be able to reach any resource on the LAN (server1, server2, server3, server4, or any employee device connected to emp_ssid).
  3. If the admin wants, they may grant access to a specific IP:port on the LAN. Let’s say server4:1111 is made accessible.
  4. A guest should be able to get an internet connection.
  5. One guest device shouldn’t be able to reach another guest device.

Whenever we think about traffic isolation, the default option that comes to mind is VLANs. We can isolate guest traffic using a VLAN without using the guest network feature of the access point. Most access points, if not all, support VLAN configuration for a specific network (a.k.a. SSID). We can put the guest_ssid network on a VLAN other than the ones where the LAN resources sit. But there is a catch here: we have to be very careful about the routing in the magic box, and make sure the guest VLAN subnet is never routed to the switch3 side.

Even though a VLAN can handle guest isolation, it requires a certain level of competency. That is why most access points also provide a very simple, one-click solution for the same.


Sophos AP, single click guest network

A single click fulfils all of the requirements, but let’s see how those requirements are actually achieved. I have seen two kinds of implementations in access points.

This feature makes sure that clients connected to the same AP can’t talk to each other. Typically it’s a single checkbox, and when the guest network option is chosen, it is enabled by default.


The underlying implementation is done at the wireless driver level. I found some wireless driver code on GitHub and am linking to it here. Selecting Client Isolation results in the IEEE80211_F_NOBRIDGE flag being set in the driver. In the wireless driver’s data delivery path there is a check on this flag. If it is not set (i.e. client isolation is disabled), bridging within the wireless interface is attempted: if the destination is connected to the same interface, the frame is retransmitted on that interface without pushing the packet up to the network stack. If the destination doesn’t belong to the same interface, or the IEEE80211_F_NOBRIDGE flag is set, packets are pushed up to the kernel network stack.
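For reference, on access points built around hostapd the same behaviour is exposed as a single configuration option. Below is a sketch of a hostapd.conf fragment; the interface name and SSID are hypothetical:

```
interface=wlan0
ssid=guest_ssid
# Prevent wireless clients on this BSS from talking to each other;
# frames are never bridged back out of the wireless interface.
ap_isolate=1
```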

Now the biggest question is: which method should we use to secure the network so that guests can’t access it?

VLANs are a fundamental element of networking used for traffic isolation, but they require some technical competency. If that competency is available, one should use a VLAN to isolate guest traffic. On the other hand, a guest network combined with Client Isolation is also a robust mechanism to deny guests any other access, and the configuration is just a few clicks that a layman can perform. Any attempt to access the private network is dropped at the access point itself.
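On access points that implement the guest restrictions with netfilter, requirements 2 and 3 together might look roughly like the following sketch; the wlan_guest interface name and the RFC 1918 addresses are hypothetical:

```
# Allow the one whitelisted service first (requirement 3)
iptables -A FORWARD -i wlan_guest -d 192.168.1.44 -p tcp --dport 1111 -j ACCEPT
# Then drop everything else from guests to the private LAN ranges (requirement 2)
iptables -A FORWARD -i wlan_guest -d 192.168.0.0/16 -j DROP
iptables -A FORWARD -i wlan_guest -d 10.0.0.0/8 -j DROP
# Remaining guest traffic (i.e. the internet) falls through and is forwarded (requirement 4)
```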

In short: use the guest network feature, and if the competency is there, a guest network combined with a VLAN is the best option.


How to safely and permanently delete your files in Windows 10/11 with Cipher

On Windows 11, deleted files can still be recoverable. Use the Cipher tool to remove them permanently from the hard drive.

  • To erase deleted files beyond recovery on Windows 11 (or 10), use the “cipher /w:DRIVE-LETTER:\FOLDER-PATH\” or “cipher /w:DRIVE-LETTER:\” command.

On Windows 11, you can use the “Cipher” tool to wipe out deleted data from the hard drive to make it unrecoverable without formatting the entire storage, and in this guide, I’ll walk you through the steps to use this tool.

Cipher.exe is a command-line tool that has been around for a long time in the client and server versions of the operating system. Microsoft designed the utility to encrypt and decrypt data from drives using the NTFS file system. However, you can also use it to overwrite deleted data to prevent recovery.

When you delete a file or folder, the system does not immediately remove the data from the hard drive. Instead, it marks the data for deletion and keeps it available until other data overwrites it. It’s why you can recover accidentally deleted data and why it is always best to stop using the device immediately after accidental deletion to improve your chances of recovery using special software.

If you have deleted data beyond the Recycle Bin and want to ensure it’s unrecoverable, you can use the Cipher tool in Command Prompt to overwrite it with zeros and ones, making it difficult to recover.

Use Cipher to overwrite deleted data on Windows 11

To wipe out deleted data from the drive with Cipher on Windows 11 (or 10), use these steps:

  1. Open Start on Windows 11.
  2. Search for Command Prompt, right-click the top result, and choose the Run as administrator option.
  3. Type the following command to securely erase deleted data and press Enter:
    cipher /w:DRIVE-LETTER:\FOLDER-PATH\

    Cipher overwrite deleted data command

    In the command, replace “DRIVE-LETTER” with the drive letter with the deleted content and “FOLDER-PATH” with the path to the folder to completely erase from the hard drive. For example, this command uses Cipher to wipe out the “aws-rclone-test” folder that I previously deleted: cipher /w:c:\aws-rclone-test

  4. Type the following command to securely erase the free space that may contain deleted data information and press Enter:
    cipher /w:DRIVE-LETTER:\

    In the command, replace “DRIVE-LETTER” with the drive letter of the storage whose free space you want to wipe. For example, this command wipes only the available free space of the “C:\” drive that may contain recoverable data: cipher /w:c:\

  5. (Optional) Type the following command to overwrite deleted data with multiple passes and press Enter:
    cipher /w:DRIVE-LETTER:\ /p3

    In the command, replace “DRIVE-LETTER” with the drive letter of the storage whose free space you want to wipe. You can also change “3” to the number of passes you wish to use. The greater the number, the longer the process will take.

Once you complete the steps, Cipher will overwrite the deleted data, making it very difficult for anyone to use recovery software to reconstruct and restore the files and folders from the hard drive. Cipher only overwrites free available space where deleted data may still reside. It doesn’t wipe out the existing and accessible data. You can also run this tool in the “C:\” drive where the operating system is installed.


Generating a localhost SSL certificate for local development

Sometimes people want to get a certificate for the hostname “localhost”, either for use in local development, or for distribution with a native application that needs to communicate with a web application. Let’s Encrypt can’t provide certificates for “localhost” because nobody uniquely owns it, and it’s not rooted in a top level domain like “.com” or “.net”. It’s possible to set up your own domain name that happens to resolve to 127.0.0.1, and get a certificate for it using the DNS challenge. However, this is generally a bad idea and there are better options.

For local development

If you’re developing a web app, it’s useful to run a local web server like Apache or Nginx, and access it via http://localhost:8000/ in your web browser. However, web browsers behave in subtly different ways on HTTP vs HTTPS pages. The main difference: On an HTTPS page, any requests to load JavaScript from an HTTP URL will be blocked. So if you’re developing locally using HTTP, you might add a script tag that works fine on your development machine, but breaks when you deploy to your HTTPS production site. To catch this kind of problem, it’s useful to set up HTTPS on your local web server. However, you don’t want to see certificate warnings all the time. How do you get the green lock locally?

The best option: Generate your own certificate, either self-signed or signed by a local root, and trust it in your operating system’s trust store. Then use that certificate in your local web server. See below for details.

For native apps talking to web apps

Sometimes developers want to offer a downloadable native app that can be used alongside a web site to offer extra features. For instance, the Dropbox and Spotify desktop apps scan for files from across your machine, which a web app would not be allowed to do. One common approach is for these native apps to offer a web service on localhost, and have the web app make requests to it via XMLHTTPRequest (XHR) or WebSockets. The web app almost always uses HTTPS, which means that browsers will forbid it from making XHR or WebSockets requests to non-secure URLs. This is called Mixed Content Blocking. To communicate with the web app, the native app needs to provide a secure web service.

Fortunately, modern browsers consider http://127.0.0.1:8000/ to be a “potentially trustworthy” URL because it refers to a loopback address. Traffic sent to 127.0.0.1 is guaranteed not to leave your machine, and so is considered automatically secure against network interception. That means if your web app is HTTPS, and you offer a native app web service on 127.0.0.1, the two can happily communicate via XHR. Unfortunately, localhost doesn’t yet get the same treatment. Also, WebSockets don’t get this treatment for either name.

You might be tempted to work around these limitations by setting up a domain name in the global DNS that happens to resolve to 127.0.0.1 (for instance, localhost.example.com), getting a certificate for that domain name, shipping that certificate and corresponding private key with your native app, and telling your web app to communicate with https://localhost.example.com:8000/ instead of http://127.0.0.1:8000/. Don’t do this. It will put your users at risk, and your certificate may get revoked.

By introducing a domain name instead of an IP address, you make it possible for an attacker to Man in the Middle (MitM) the DNS lookup and inject a response that points to a different IP address. The attacker can then pretend to be the local app and send fake responses back to the web app, which may compromise your account on the web app side, depending on how it is designed.

The successful MitM in this situation is possible because in order to make it work, you had to ship the private key to your certificate with your native app. That means that anybody who downloads your native app gets a copy of the private key, including the attacker. This is considered a compromise of your private key, and your Certificate Authority (CA) is required to revoke your certificate if they become aware of it. Many native apps have had their certificates revoked for shipping their private key.

Unfortunately, this leaves native apps without a lot of good, secure options to communicate with their corresponding web site. And the situation may get trickier in the future if browsers further tighten access to localhost from the web.

Also note that exporting a web service that offers privileged native APIs is inherently risky, because web sites that you didn’t intend to authorize may access them. If you go down this route, make sure to read up on Cross-Origin Resource Sharing, use Access-Control-Allow-Origin, and make sure to use a memory-safe HTTP parser, because even origins you don’t allow access to can send preflight requests, which may be able to exploit bugs in your parser.

Making and trusting your own certificates

Anyone can make their own certificates without help from a CA. The only difference is that certificates you make yourself won’t be trusted by anyone else. For local development, that’s fine.

The simplest way to generate a private key and self-signed certificate for localhost is with this openssl command:

openssl req -x509 -out localhost.crt -keyout localhost.key \
  -newkey rsa:2048 -nodes -sha256 \
  -subj '/CN=localhost' -extensions EXT -config <( \
   printf "[dn]\nCN=localhost\n[req]\ndistinguished_name = dn\n[EXT]\nsubjectAltName=DNS:localhost\nkeyUsage=digitalSignature\nextendedKeyUsage=serverAuth")

You can then configure your local web server with localhost.crt and localhost.key, and install localhost.crt in your list of locally trusted roots.
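Before trusting the certificate, you may want to sanity-check it by printing just its subjectAltName extension (this assumes OpenSSL 1.1.1 or newer for the -ext flag):

```shell
# Print only the subjectAltName extension of the certificate generated above
openssl x509 -in localhost.crt -noout -ext subjectAltName
# The output should include a DNS:localhost entry
```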

If you want a little more realism in your development certificates, you can use minica to generate your own local root certificate, and issue end-entity (aka leaf) certificates signed by it. You would then import the root certificate rather than a self-signed end-entity certificate.

You can also choose to use a domain with dots in it, like www.localhost, by adding it to /etc/hosts as an alias to 127.0.0.1. This subtly changes how browsers handle cookie storage.
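For example, the relevant /etc/hosts line might look like this (www.localhost being an alias name you choose yourself):

```
127.0.0.1   localhost www.localhost
```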


Setting up a Proxmox cluster in your homelab

It is not required, but it is very good practice to keep management networks separate from VM networks. As I had four ports on each server, I went with the following design:

Port 1 on each server — Management Network
This is what I provided to Proxmox at installation time, 192.168.3.x range
Linux Bridge was created by Proxmox

Port 2 on each server — VM Network (Data Plane)
In my case, this is connected to different switch/subnet, 192.168.4.x range
We’ll now create Linux Bridge to use this port

Linux Bridge: “A Linux Bridge allows the physical network interface to act as a virtual switch inside our server. This helps connect our virtual machines to different network segments.”

Log on to one node and go to the Network tab (this process needs to be done on all nodes):

Proxmox Network — Setup Bridge

I am going to use eno2, which is connected to a separate subnet, and use this network for VMs. The first bridge created by Proxmox is named vmbr0, so we’ll name the new one vmbr1. In the comment I put ‘vm-net4’ to indicate the usage and subnet, but you can put any description here:

Create Linux Bridge

Leave Gateway blank here. Click Apply Configuration; otherwise the changes will only take effect on the next restart.

Apply Network Configuration
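For reference, applying the configuration writes a stanza like the following to /etc/network/interfaces on the node; this is a sketch assuming eno2 as the bridge port and no IP address on the bridge itself:

```
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
#vm-net4
```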

Next time we create VMs, we’ll pick this new network that we just created:

Select network for VM creation

We can change it for existing VMs also:

Even though we have not set up the cluster yet, we can define shared storage at the Datacenter level rather than the node level. I have access to an NFS share from my Synology, but as we can see below, there are many options to pick from:

Add Shared Storage to Proxmox

Under Content above, we decide what this shared storage can be used for, such as VM disks, ISO images, etc. We have defined one for virtual machines above; let us define one for backups also:

With shared storage configured, we can now leverage it to seamlessly move VMs between nodes in the cluster. This flexibility is a key benefit of clustering, as it allows for workload balancing and high availability.

At this stage, we have three separate Proxmox hosts. We have set up two networks on each and added shared storage. We are now ready to create the cluster.

Let us Create the cluster now:

Create Cluster

Pick a cluster name and give it node IP:

Create Cluster — Output

Now we have a cluster with one node. By clicking the Join Information button, we can grab the information we need to take to the other nodes to add them to the same cluster:

Now we can refresh page and see all nodes in cluster:

Also, in above we can see that Shared storage is accessible to all nodes even though we added it to only one node.

Virtual Machine Migrations

We can test our cluster by Migrating VM:

This VM was created using a local disk. Migrating a VM with a local disk:

It took a few minutes to migrate over a 1 Gb/s network. The VM remained responsive during this time.

Now we can see that the VM has moved to the prx-2 node:

We can also verify this from the logs at the bottom of the screen:

Now let us create a VM using shared storage. The process is the same, except that we select a disk from the shared storage:

Migrate VM with shared storage:

It is very quick:

So far we have migrated VMs manually, and there are use cases for doing so. But if we want VMs to migrate automatically when there is an issue with a given node, then we need High Availability.

  • Click on Add under Resources
  • And select VM to add to HA

We can leave Max Restarts and Max Relocate at 1. We do not want the migration to be retried many times; we would rather learn about the underlying issue first. The VM was stopped when it was added, so it is starting now because the Requested State above was Started:

Now, if something happens to the node on which the VM is running, it’ll be moved to another node automatically (hence HA). I tested this by unplugging the network cable from the server on which the VM was running, and it moved to another node.
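The same HA resource can also be managed from the shell with Proxmox’s ha-manager tool; a sketch, assuming the VM has ID 100:

```
# Add the VM as an HA resource with the limits discussed above
ha-manager add vm:100 --state started --max_restart 1 --max_relocate 1
# Check where HA resources are currently running
ha-manager status
```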

While High Availability (HA) is a powerful feature with clear use cases, it’s important to assess whether it’s necessary based on specific requirements. In scenarios where redundancy is already built into the application layer, such as Elasticsearch or Kafka clusters, we might prioritize high-speed local storage over fast VM migration, even if that means reduced mobility for the virtual machines. Ensuring data consistency often takes precedence in such cases. Conversely, for services like NGINX acting as a load balancer, HA can be a valuable option, as configuration updates are infrequent and the service benefits more from uptime and reliability than instantaneous data replication.


Self-hosting your own VPN on a Proxmox LXC

Highly regarded in the networking crowd as one of the best modern alternatives to OpenVPN, WireGuard combines rock-solid performance with a simple UI and tip-top security provisions. As with any other self-hosted service, there are a couple of ways you can create and run WireGuard in Proxmox, though we’ll keep things simple by choosing the ultra-easy Proxmox VE Helper Script created by developer tteck.

  1. Select your Proxmox node and navigate to its Shell tab.
    Heading to the Shell tab in the Proxmox web UI
  2. Paste the following command into the Shell and tap the Enter key.
    bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/ct/wireguard.sh)"
    The command to install the WireGuard VPN in Proxmox
  3. Choose Yes when Proxmox asks for your approval to create an LXC container for WireGuard.
    Choosing Yes when Proxmox asks the user to create a new LXC WireGuard container
  4. Pick Yes when you’re prompted to use the Default Settings.
    Choosing Yes when Proxmox asks for the user's confirmation to use the default settings when creating an LXC container for WireGuard

    If you encounter network issues when running the script, you can run the apt update and apt dist-upgrade commands in the script. Alternatively, you can try setting the IPv4 and Gateway addresses in the Advanced installation mode instead of going with the Default Settings.

  5. Wait for Proxmox to create and deploy the WireGuard container.

Configuring the WireGuard container

Since the WireGuard script installs the WGDashboard GUI, you can create a private VPN server without messing around with terminal commands.

  1. Open the URL generated by the WireGuard script on your web browser.
    The URL for the WireGuard VPN
  2. Type admin as the Username and Password.
    Logging into the WGDashboard
  3. Create a new Username and Password for the WireGuard container before tapping Next.
    Creating a WireGuard account
  4. (Optional) If you want extra security, you can set up 2FA using your favorite authenticator app.
  5. Press the Configuration button inside the WGDashboard.
    The Configuration button in the WGDashboard
  6. Pick a Name for your WireGuard VPN config and choose the Listen port for the tunnel.
    Entering a name and listen port for the WireGuard VPN tunnel
  7. Enter your preferred IP address & Range and click on Save Configuration.
    Setting the IP address for the WireGuard VPN tunnel

Connecting clients to your WireGuard VPN

With the WireGuard configuration properly set up, it’s time to pair some clients with the VPN server.

  1. Tap the Arrow button next to your freshly created WireGuard configuration.
    Accessing the newly created WireGuard configuration
  2. Click on the + Peer icon.
    Adding new peers in the WireGuard VPN
  3. Simply enter a Name for the new Peer and hit Add while leaving the other settings at their default values.
    Adding a name for the peer in WGDashboard
  4. Download the official WireGuard app on the platform of your choice.
  5. Switch to the client device and tap the + icon inside the WireGuard application.
  6. Head back to the WireGuard web GUI and open the Triple Dot menu next to the Peer.
    Connecting a new peer to the Proxmox WireGuard server
  7. Depending on the client device, you can either use the QR code, .conf file, or join link to connect to the WireGuard VPN.

Maintaining your online privacy with a self-hosted WireGuard container

The WGDashboard with two VPN tunnels set up

If you followed everything correctly, you should be able to connect to the VPN from all your local devices. To take this project to the next level, you can combine the WireGuard VPN with an ad-blocking Pi-Hole container and enjoy an ad-free experience while surfing the web anonymously. However, you’ll need to set up port-forwarding on your router to access the VPN server from external networks. For a truly anonymous experience, you’ll have to configure the WireGuard container to route all the traffic through a third-party VPN provider, preferably one that has servers in different countries if you want the added benefits of location-spoofing.

Besides WireGuard, there are a bunch of other projects you can host on your Proxmox server, including the document organization tool Paperless-ngx, private cloud CasaOS, and Network Video Recorder ZoneMinder. Alternatively, you might want to check out some insane project ideas if you want to build fun things using your Proxmox machine.


Installing the AdGuard Home DNS proxy in Proxmox

AdGuard Home DNS Proxy is a local DNS server that acts as an intermediary between your devices and upstream DNS servers (such as Cloudflare, Google DNS, etc.).

What the DNS proxy does:

Main function:

  • Receives DNS requests from the devices on your network
  • Forwards them to the configured upstream DNS servers
  • Returns the responses back to the devices

Advantages of using a local DNS proxy:

Filtering and security:

  • Blocks ads, trackers, and malware domains
  • Applies custom filter lists
  • Protects against phishing

Control and monitoring:

  • See every DNS request on your network
  • Detailed statistics about DNS traffic
  • The ability to allow or block specific domains

Performance:

  • Local DNS cache (faster answers for frequently accessed domains)
  • Reduces latency compared to remote DNS servers
  • Load balancing across multiple upstream servers

Flexible configuration:

  • You can use DNS-over-HTTPS or DNS-over-TLS for security
  • Different settings for different devices
  • Custom DNS rewrite rules

In essence, AdGuard Home DNS Proxy turns your network into a safer and faster environment by filtering DNS traffic at the network level, before requests ever reach public DNS servers.

Now that we know what it is and how it would improve our home network security, let’s get to the implementation.

I will take the simplest approach: a helper script that practically does everything by itself. All that remains is the configuration afterwards.

You will need a type 1 hypervisor (Proxmox, VMware ESXi, etc.) that is online 24/7. A second-hand mini PC worth a few hundred lei should run this LXC (container) without problems. On top of that, you can add more lightweight services to extend your lab.

Hardware requirements:

  • 512 MB of RAM is more than enough for your home network.
  • 1 CPU core.
  • 2 GB of storage.

The official GitHub link from which we will copy the script: https://community-scripts.github.io/ProxmoxVE/scripts?id=adguard

Do not run scripts from unverified sources without doing some research first!

We will use a Proxmox node on Debian as the example (my current setup). The script in the link above is strictly for Proxmox Virtual Environment, but if you use something else there are alternative scripts (Windows, VMware, etc.):

  1. Log in via SSH to the node on which you want to install/host the AdGuard container.
  2. As good practice, run:

    sudo apt update && sudo apt upgrade -y

  3. Run the script from GitHub:

    bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/adguard.sh)"

  4. The location of the config file (in case you want to configure from the CLI instead of the AdGuard dashboard): /opt/AdGuardHome/AdGuardHome.yaml
  5. This script will automatically install the container with a basic configuration.
  6. When it finishes, you have two options: Default Settings, where the initial configuration is automated; or Advanced Settings (which I recommend), where you can set the exact static IP you want, the name, and other initial settings.
  7. At the end you will get a link that looks like this: http://YOUR-IP:3000.
  8. Open that link in your browser; from there you can start configuring the blocklists and the Upstream DNS Servers (I recommend Cloudflare and Google with DNSSEC enabled, for speed and security).
  9. For everything to work, you will have to log in to your router and change its DNS servers. The primary server will be the IP you set for the AdGuard container; the secondary server will be 1.1.1.1 (Cloudflare) or 8.8.8.8 (Google), so that if something happens to your node/container you still have internet access.
  10. For the initial configuration (DNS Settings), I recommend either following a step-by-step tutorial, or taking each section in turn and looking it up online, so you can configure it according to your preferences and needs and understand the principles behind it.

Once you have configured everything, you should start seeing the DNS queries generated on your network in the Dashboard.

In the “Filters” section you will have the default AdGuard list, which blocks ads, Google/Netflix analytics, and so on. I recommend leaving it checked. Besides that list, you can add whatever you want to block on your network under “Custom filtering rules”.
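A few example entries for Custom filtering rules (the hostnames are hypothetical; AdGuard Home accepts both adblock-style and hosts-style syntax, with “!” starting a comment):

```
! Block this domain and all its subdomains
||ads.example.com^
! Exception rule: never block this domain
@@||allowed.example.org^
! Hosts-style rules are also accepted
127.0.0.1 tracker.example.net
```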


How to create a free and secure Cloudflare DDNS on Linux

DDNS is a service that automatically updates your DNS records whenever your public IP address changes. It ensures that your chosen domain name always points to your current public IP address, even if it’s dynamic.

You need it if you want to access your home LAN from outside. Most ISPs provide you with a dynamic IP (an IP that changes periodically). That means every time your home IP changes, you have to find the new IP and manually update every DNS entry. You can get a permanent static IP from your provider, but that comes at an extra cost.
We don’t want that.
Cloudflare provides us with their awesome free plan.
All we need is a domain; they don’t accept some very specific “free” TLDs that are historically associated with high abuse rates (e.g., .tk, .ml, .ga, .cf, .gq). A purchased domain can cost a few dollars a year, far less expensive than a static IP and more useful (DDNS is just one of the free benefits provided by Cloudflare).

1. Generate a token under My Profile → API Tokens → Create Token, with the Zone.DNS permission for the selected DNS zone.*

Note that it is not possible to scope the token to a single subdomain; the token will have permission to change all DNS entries in the zone.
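The script below needs a zone ID and a DNS record ID. Here is a sketch of how you might look them up with the same token (example.com and home.example.com are placeholders for your own names):

```
# Zone ID for your domain
curl -s "https://api.cloudflare.com/client/v4/zones?name=example.com" \
     -H "Authorization: Bearer YOUR_TOKEN_HERE" | jq -r '.result[0].id'

# ID of the A record you want to keep updated
curl -s "https://api.cloudflare.com/client/v4/zones/DNS_ZONE_ID/dns_records?name=home.example.com" \
     -H "Authorization: Bearer YOUR_TOKEN_HERE" | jq -r '.result[0].id'
```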

2. Use the following script to update DNS entry automatically on your server and save it as /usr/local/bin/ddns:

#!/bin/bash

# Check the current external IP (requires dig from dnsutils, and jq below)
IP=$(dig +short txt ch whoami.cloudflare @1.0.0.1 | tr -d '"')

# Set Cloudflare API
URL="https://api.cloudflare.com/client/v4/zones/DNS_ZONE_ID/dns_records/DNS_ENTRY_ID"
TOKEN="YOUR_TOKEN_HERE"
NAME="DNS_ENTRY_NAME"

# Connect to Cloudflare
cf() {
curl -X ${1} "${URL}" \
     -H "Content-Type: application/json" \
     -H "Authorization: Bearer ${TOKEN}" \
      ${2} ${3}
}

# Get current DNS data
RESULT=$(cf GET)
IP_CF=$(jq -r '.result.content' <<< ${RESULT})

# Compare IPs
if [ "$IP" = "$IP_CF" ]; then
    echo "No change."
else
    RESULT=$(cf PUT --data "{\"type\":\"A\",\"name\":\"${NAME}\",\"content\":\"${IP}\"}")
    echo "DNS updated."
fi

3. Make the script executable (chmod +x /usr/local/bin/ddns) and add it to crontab so it is executed every minute:

# crontab -e
* * * * * /usr/local/bin/ddns > /dev/null 2>&1

How to create a Proxmox cluster

Once you have all the nodes ready, access them in a web browser, and log in to the Proxmox VE web GUI. I connected to my first Proxmox server (pmox-host1). Click the Datacenter option at the top, choose Cluster, and then click the Create Cluster button.

Create Cluster option in Proxmox VE web interface

Give a unique name to your Proxmox cluster, and select a link for the cluster network. I will keep the default link (0) selected. It is a good idea to choose a network link that isn’t used for other high-traffic needs, such as network storage.

Create a cluster in Proxmox VE

You can click the Add button to add a failover link if multiple network links are available on your node. Click the Create button, and you will see a task viewer window showing the status of cluster creation.

Viewing cluster creation progress in Proxmox VE

Your Proxmox cluster has now been created.

Add nodes to the cluster

When adding nodes to your Proxmox cluster, make sure no VM or container is running on the node. When you add a node to the cluster, it will inherit the cluster configuration, which will overwrite all current local configurations. If you have an important VM or container running, you can create a backup, remove it, and import it later after joining the cluster.

To join a node to the cluster, you need to copy some information from the first node where you created the cluster. To do that, navigate to Datacenter > Cluster and then click the Join Information button, as shown in the screenshot.

Viewing the join information for a cluster in Proxmox VE

Click the Copy Information button to copy the cluster join information to your clipboard.

Copying the join information for a cluster in Proxmox VE

Now, connect the other Proxmox node that you want to join to the cluster. Navigate to Datacenter > Cluster, and click the Join Cluster button, as shown below:

Join Cluster option in Proxmox VE

In the Information box, paste the join information, type the root password of the first node, and click the Join button.

Add a node to the Proxmox cluster

The second node will now be added to the Proxmox cluster. As soon as the node is added to the cluster, the server certificate is changed, so you need to reload the page and log in again to the Proxmox VE web interface. The screenshot below shows that our proxmox-lab cluster now has two nodes: pmox-host1 and pmox-host2.

Viewing the cluster information in Proxmox VE

The information about cluster nodes is listed under the Cluster option. In addition, any shared storage that is attached to the first node is automatically attached to the new node.
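The join can also be performed from a shell on the new node with `pvecm add`. This is a sketch under the assumption that 192.168.1.10 is the address of the node where the cluster was created; substitute your first node's real IP.

```shell
# Run on the node being added (pmox-host2). You will be prompted
# for the first node's root password and asked to accept its
# certificate fingerprint.
pvecm add 192.168.1.10

# Confirm membership from any node in the cluster.
pvecm nodes
```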

Once you have a Proxmox cluster up and running, you can start migrating your VMs or containers when needed, as shown in the screenshot below:

Performing live VM migration in a Proxmox cluster

The migration is pretty fast, particularly when you’re using shared storage (e.g., NAS or Ceph), so your VM will experience little to no downtime at all. The cool feature you get with a Proxmox cluster is that you can still migrate your VM (or container), even if you’re using local storage. Of course, it takes longer to migrate, but it works.
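Migrations can also be triggered from the CLI with `qm` (for VMs) or `pct` (for containers). The VM/container IDs and target node below are illustrative placeholders.

```shell
# Live-migrate VM 100 to pmox-host2 while it keeps running.
# With shared storage only the RAM state is transferred; with
# local storage, add --with-local-disks to copy the disks too.
qm migrate 100 pmox-host2 --online

# Containers are migrated with pct; --restart stops the container
# and restarts it on the target node.
pct migrate 101 pmox-host2 --restart
```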

Problem with a 2-node cluster

It is entirely possible to set up a Proxmox cluster with just two nodes, as we did above. However, as mentioned earlier, the Proxmox cluster depends on quorum votes. For a quorum to exist in a cluster, the majority of nodes need to be online. With just two nodes in a cluster, both get an equal number of votes, so when either node goes down, the quorum will be lost, and the Proxmox cluster switches to read-only mode. As a result, the cluster will stop functioning as expected. This makes server maintenance hard in a two-node cluster setup.

To deal with such a situation, there are two options:

  1. Manually set the expected votes to 1 by running the pvecm expected 1 command. This changes the expected votes only temporarily; the setting reverts after a restart. It helps maintain quorum so that your Proxmox cluster can continue to work with just a single node while you perform maintenance on the other node.
  2. Alternatively, you could set up a corosync-qdevice on a Raspberry Pi or in a Docker container and use it as a tiebreaker for quorum votes when one of the two Proxmox nodes is down. The purpose of the qdevice is to vote for the active node to maintain quorum when there is an even number of nodes (such as two) in the Proxmox cluster. Using a qdevice is not recommended when there is an odd number of nodes, as it could hurt cluster stability.

Considering the above situation, it is clear that it is a good idea to have at least three nodes in a Proxmox cluster, particularly in a production environment. Furthermore, you will be able to use the additional features, such as high availability, when there are three or more nodes in the cluster.
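As a sketch of option 2, a QDevice is registered with `pvecm`. The external host's IP below is an assumption, and corosync-qnetd must already be installed and running on that external machine (the Raspberry Pi or container acting as tiebreaker).

```shell
# On both Proxmox nodes: install the qdevice client package.
apt install corosync-qdevice

# On one node: register the external qnetd host (assumed here to
# be reachable at 192.168.1.50) as the quorum tiebreaker.
pvecm qdevice setup 192.168.1.50

# The QDevice should now appear in the votequorum information.
pvecm status
```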


Creating a cloudflare tunnel (for safely accessing your SOHO from outside your LAN)

This is useful if you want to access your internal network applications from outside by creating a secure tunnel linked to your domain

1. Create a tunnel

  • Log in to Zero Trust and go to Networks > Tunnels
  • Select Create a tunnel
  • Choose Cloudflared for the connector type and select Next
  • Enter a name for your tunnel. We suggest choosing a name that reflects the type of resources you want to connect through this tunnel (for example, enterprise-VPC-01)
  • Select Save tunnel
  • Next, you will need to install cloudflared and run it. To do so, check that the environment under Choose an environment reflects the operating system on your machine, then copy the command in the box below and paste it into a terminal window. Run the command.
  • Once the command has finished running, your connector will appear in Zero Trust, as shown below:

Connector appearing in the UI after cloudflared has run

  • Select Next.
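On a Debian/Ubuntu host, the connector install command copied from the dashboard boils down to something like the following sketch. `<TOKEN>` is a placeholder for the tunnel token Cloudflare generates for you; do not type it literally.

```shell
# Download and install the cloudflared package, then register it
# as a service using the token from the Zero Trust dashboard.
curl -L --output cloudflared.deb \
  https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
sudo dpkg -i cloudflared.deb
sudo cloudflared service install <TOKEN>

# Check that the connector service is running.
systemctl status cloudflared
```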

The next steps depend on whether you want to connect an application or connect a network.

2a. Connect an application

Before you connect an application through your tunnel, your domain must be active on Cloudflare.

Follow these steps to connect an application through your tunnel. If you are looking to connect a network instead, skip to the Connect a network section.

  1. In the Public Hostnames tab, select Add a public hostname.
  2. Enter a subdomain and select a domain from the dropdown menu. Optionally, specify any path information.
  3. Specify a service, for example https://localhost:8000.
  4. Under Additional application settings, specify any parameters you would like to add to your tunnel configuration.
  5. Select Save hostname.

The application is now publicly available on the Internet. To allow or block specific users, create an Access application.
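For reference, a locally managed tunnel expresses the same hostname-to-service mapping in cloudflared's config file instead of the dashboard (dashboard-managed tunnels store this configuration remotely). In this sketch, the tunnel UUID and app.example.com are placeholders, and the final catch-all rule is required as the last ingress entry.

```shell
# Sketch for a locally managed tunnel; <TUNNEL-UUID> and the
# hostname are placeholders for your own values.
cat > ~/.cloudflared/config.yml <<'EOF'
tunnel: <TUNNEL-UUID>
credentials-file: /root/.cloudflared/<TUNNEL-UUID>.json
ingress:
  - hostname: app.example.com
    service: https://localhost:8000
  - service: http_status:404
EOF

# Start the tunnel with this configuration.
cloudflared tunnel run <TUNNEL-UUID>
```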

2b. Connect a network

Follow these steps to connect a private network through your tunnel.

  1. In the Private Networks tab, add an IP or CIDR.
  2. Select Save tunnel.
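The same private-network route can be added from the CLI. The CIDR below is a placeholder for your LAN's subnet, and the tunnel name reuses the illustrative enterprise-VPC-01 name from step 1.

```shell
# Advertise an internal subnet through the tunnel.
# 10.0.0.0/8 is a placeholder for your actual LAN CIDR.
cloudflared tunnel route ip add 10.0.0.0/8 enterprise-VPC-01

# List the routes currently advertised through your tunnels.
cloudflared tunnel route ip show
```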

To configure Zero Trust policies and connect as a user, refer to Connect private networks.

3. View your tunnel

After saving the tunnel, you will be redirected to the Tunnels page. Look for your new tunnel to be listed along with its active connector.

Tunnel appearing in the Tunnels table


Creating a ZFS pool in Proxmox-VE

We can use ZFS as a local or external storage option for VMs and containers in Proxmox VE. To do so, we follow these steps:

Steps to add ZFS storage to Proxmox

1. Set up Proxmox VE on the servers we intend to use for managing VMs and containers.

2. Create a ZFS pool on the disks we want to assign to VMs and containers when using local storage on the VE server. If we use external ZFS storage from a different server, make sure the pool is already established and reachable from the Proxmox VE host.

3. A standard Proxmox VE installation uses LVM as its default local storage backend, but ZFS support ships with Proxmox VE out of the box, so no additional packages are needed. To use ZFS as a storage option, we simply need a ZFS pool available on the host.

4. With ZFS available, create a ZFS dataset (or use the pool root) for Proxmox VE to use as storage. The storage entry points to a specific dataset path within the pool.

5. We’ll add this dataset as storage using the VE web interface (GUI). Go to Datacenter > Storage > Add and choose “ZFS” as the storage type. After that, provide the relevant details, such as the dataset path and the storage options (for example, which content types it should hold).

6. To give VE the access rights necessary to read and write data to the storage, we may need to adjust the permissions of the ZFS dataset.

7. After setting up the ZFS storage, it’s crucial to make sure that Proxmox VE can access and use the ZFS dataset as needed.

8. Finally, create virtual machines and containers that use the ZFS storage. Now that the ZFS storage has been configured, we can select it in the storage location option when creating a VM or container.
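The steps above can be sketched from a Proxmox VE shell. The pool name, disk devices, and storage ID below are assumptions; double-check the disks with lsblk first, because creating a pool destroys any existing data on them.

```shell
# Create a mirrored pool named "tank" on two spare disks.
# /dev/sdb and /dev/sdc are placeholders for your actual devices.
zpool create -f tank mirror /dev/sdb /dev/sdc

# Create a dataset within the pool for Proxmox to use.
zfs create tank/vmdata

# Register it as a ZFS storage entry (equivalent to the GUI's
# Datacenter > Storage > Add > ZFS dialog).
pvesm add zfspool vmdata --pool tank/vmdata --content images,rootdir

# Confirm the new storage is active.
pvesm status
```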
