Operating System Configuration
RetroArch on Raspberry Pi – Complete Installation & Configuration Guide
Step 1: Set Up Raspberry Pi OS
Flash Raspberry Pi OS:
- Use Raspberry Pi Imager to install Raspberry Pi OS.
- Enable SSH and Wi-Fi in advanced settings (optional but useful).
First Boot:
- Insert the microSD, power up your Pi, and complete the OS setup.
- Update your system (see the command below).
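A typical full update on Raspberry Pi OS looks like this:

```bash
sudo apt update && sudo apt full-upgrade -y
```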
Step 2: Install RetroArch
There are two main options to install RetroArch:
Option A: Install via RetroPie (Recommended for Ease + Full Emulation Suite)
RetroPie bundles RetroArch + EmulationStation and makes configuration easier.
- Install Git.
- Clone and install RetroPie.
- Choose Basic Install – installs RetroArch, EmulationStation, and core scripts.
- After the install, reboot.
The commands for these steps are sketched below.
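A sketch of the usual RetroPie setup commands (the repository URL is RetroPie's official one):

```bash
sudo apt install -y git
git clone --depth=1 https://github.com/RetroPie/RetroPie-Setup.git
cd RetroPie-Setup
sudo ./retropie_setup.sh   # choose Basic Install from the menu
sudo reboot                # after the install completes
```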
Option B: Install RetroArch Standalone from Source
If you want only RetroArch:
- Install dependencies.
- Clone RetroArch.
- Build and launch RetroArch.
A sketch of these steps follows.
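A minimal sketch of a source build; the dependency list is indicative and may need adjusting for your OS version:

```bash
# build dependencies (indicative set)
sudo apt install -y build-essential git libasound2-dev libudev-dev \
    libgl1-mesa-dev libsdl2-dev

# clone and build RetroArch from the official libretro repository
git clone https://github.com/libretro/RetroArch.git
cd RetroArch
./configure
make -j"$(nproc)"
sudo make install

# launch
retroarch
```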
You’ll need to install and manage cores and frontends manually if you choose Option B.
Step 3: Install Emulator Cores
From within RetroArch:
- Launch RetroArch.
- Navigate to: Main Menu > Online Updater > Core Downloader
- Select and download cores (emulators) such as:
  - NES: FCEUmm, Nestopia
  - SNES: SNES9x
  - GBA: mGBA
  - PS1: PCSX ReARMed (best for Raspberry Pi)
Step 4: Add ROMs
- Create ROM folders (see the sketch below).
- Transfer ROMs: use SFTP (via FileZilla) or a USB stick.
- File path: ~/RetroPie/roms/[system]
- Legal note: only use ROMs you legally own.
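A sketch of creating system folders and copying a ROM over SFTP; the system names, file name, and hostname are examples:

```bash
# create folders for a couple of systems
mkdir -p ~/RetroPie/roms/nes ~/RetroPie/roms/snes

# copy a ROM from another machine (file name and hostname are examples)
scp mygame.nes pi@raspberrypi.local:~/RetroPie/roms/nes/
```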
Step 5: Configure Controllers
Auto-Configuration:
- On first launch, RetroArch will detect most gamepads.
- Follow the on-screen prompts to map buttons.
Manual Configuration:
- Main Menu > Settings > Input > Port 1 Binds
- Save the autoconfig profile: Input > Save Autoconfig
Step 6: Enable Video and Shaders
Settings > Video:
- Enable Threaded Video
- Set Scaling > Aspect Ratio to Core Provided or 4:3
Shaders (for CRT filters):
- Settings > Shaders > Load Shader Preset
- Try crt-pi.glslp or crt-geom.glslp
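The same settings can also live in retroarch.cfg; a minimal sketch, where the shader path is an assumption that depends on your install location:

```
# excerpt from retroarch.cfg
video_threaded = "true"
video_shader_enable = "true"
# path is an assumption; adjust to where your shaders are installed
video_shader = "~/.config/retroarch/shaders/crt-pi.glslp"
```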
Step 7: Save Configurations
Make sure to save settings: Main Menu > Configuration File > Save Current Configuration
Or save a per-core config: Quick Menu > Overrides > Save Core Overrides
Step 8: Autostart RetroArch
To launch RetroArch on boot, edit a startup file and add the launch command at the end; one common approach is sketched below.
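A minimal sketch, assuming the pi user auto-logs into the console and you want RetroArch to start at login:

```bash
# append the launch command to ~/.bashrc (assumes console autologin is enabled)
echo 'retroarch' >> ~/.bashrc
```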
Or use EmulationStation (from RetroPie) as the frontend.
Optional Enhancements
Add Hotkeys
- Assign a "Hotkey Enable" button (e.g., Select)
- Combine with:
  - Hotkey + Start = Exit
  - Hotkey + R = Reset
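These binds can also be set directly in retroarch.cfg; a sketch, where the button numbers are assumptions that vary per gamepad:

```
# excerpt from retroarch.cfg (button numbers are pad-specific assumptions)
input_enable_hotkey_btn = "8"
input_exit_emulator_btn = "9"
input_reset_btn = "0"
```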
RetroAchievements
- Enable in Settings > Achievements if you're logged into RetroAchievements.org
Overclock (Advanced)
- Use raspi-config > Overclock
- Improves performance, but watch temperatures.
Network OS installation issue – resynchronizing the clock to the exact second (x64)
When attempting to install an operating system via network boot, we may notice that the process does not start – one possible cause is the time (more precisely, the minute and second) set on the affected workstation. In this case, synchronize the clock automatically or manually so that the network systems allow a successful connection to the OS installation server.
What is Bazzite and why you should install it as a gaming OS
Bazzite is a customized, gaming-focused variant of Fedora Atomic Desktops (specifically Kinoite), developed by the open-source team at Universal Blue. It is designed to deliver an optimized out-of-the-box gaming experience for both desktop PCs and handheld devices like the Steam Deck.
Create a Bootable USB Drive
Use a tool like:
- Rufus (Windows)
- dd (Linux/macOS)
- balenaEtcher (cross-platform)
Example with dd:
```bash
sudo dd if=bazzite-xyz.iso of=/dev/sdX bs=4M status=progress && sync
```
Replace /dev/sdX with the path to your USB stick.
Boot and Install
- Boot from the USB stick (adjust the BIOS boot order if needed).
- Bazzite will boot into a live environment.
- Follow the Anaconda installer process:
  - Choose language, disk, and partitions.
  - Set up a user and password.
  - Install.
Warning: This will overwrite your target drive unless you're dual-booting. Back up important data.
Post-Install Configuration
Once installed and rebooted into Bazzite:
1. First Boot Tasks
- Log into your user account.
- Perform an initial update (if prompted) via GNOME Software or the CLI:
```bash
rpm-ostree upgrade
```
2. Steam Setup
Steam is preinstalled, but you can:
- Log in to your account.
- Enable Proton Experimental in settings for broader compatibility.
- Add non-Steam games via Lutris/Bottles (already installed).
3. System Management Tools
Flatpak is your default app store:
```bash
flatpak install flathub com.discordapp.Discord
```
The Nix package manager is also supported (optional but powerful):
```bash
curl -L https://nixos.org/nix/install | sh
```
4. Rebase to Another Variant (Optional)
Want to switch between KDE, GNOME, etc.?
```bash
rpm-ostree rebase ostree-unverified-registry:ghcr.io/ublue-os/bazzite-gnome:latest
```
Then reboot:
```bash
systemctl reboot
```
Useful Bazzite CLI Commands

| Command | Description |
| --- | --- |
| rpm-ostree upgrade | Check for and apply system updates |
| rpm-ostree install <pkg> | Layer in additional RPM packages |
| rpm-ostree status | View deployment status |
| rpm-ostree rollback | Roll back to the previous working deployment |
| flatpak install <app> | Install Flatpak apps |
| bazzite-device-setup | (Steam Deck) Reconfigures device-specific tweaks |
Setting up a Proxmox cluster in your homelab
It is not required, but it is very good practice to keep management networks separate from VM networks. As I had four ports on each server, I went with the following design:
- Port 1 on each server — Management Network
  - This is what I provided to Proxmox at installation time (192.168.3.x range)
  - A Linux Bridge was created by Proxmox
- Port 2 on each server — VM Network (Data Plane)
  - In my case, this is connected to a different switch/subnet (192.168.4.x range)
  - We'll now create a Linux Bridge to use this port
Linux Bridge: “A Linux Bridge allows the physical network interface to act as a virtual switch inside our server. This helps connect our virtual machines to different network segments.”
Log on to one node and go to the Network tab (this process needs to be done on all nodes):
I am going to use eno2, which is connected to a separate subnet, and use this network for VMs. The first bridge created by Proxmox is named vmbr0, so we'll name the new one vmbr1. In the comment field, I put 'vm-net4' to represent the usage and subnet, but you can put any description here:
Leave Gateway blank here. Click Apply Configuration; otherwise, the changes will only take effect after the next restart.
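For reference, the bridge the GUI creates ends up in /etc/network/interfaces; a sketch of what it typically looks like (eno2 from above, no IP address since the bridge only carries VM traffic):

```bash
# /etc/network/interfaces (excerpt)
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
```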
The next time we create a VM, we'll pick the new network we just created:
We can also change it for existing VMs:
Even though we have not set up the cluster yet, we can define shared storage at the Datacenter level rather than the node level. I have access to an NFS share from my Synology, but as we can see below, there are many options to pick from:
In the Content field above, we decide what this shared storage can be used for, such as VM disks, ISO images, etc. We have defined one for virtual machines above; let us define one for backups as well:
With shared storage configured, we can now leverage it to seamlessly move VMs between nodes in the cluster. This flexibility is a key benefit of clustering, as it allows for workload balancing and high availability.
At this point, we have three separate Proxmox hosts. We have set up two networks on each and added shared storage to one. We are now ready to create the cluster.
Let us create the cluster now:
Pick a cluster name and give it a node IP:
Now we have a cluster with one node. By clicking the Join Information button, we can grab the information we need to take to the other nodes in order to add them to the same cluster:
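The same steps can be done from the shell with pvecm; a sketch, where the cluster name and IP are examples:

```bash
# on the first node
pvecm create homelab-cluster

# on each additional node (IP of the first node is an example)
pvecm add 192.168.3.11

# verify cluster membership
pvecm status
```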
Now we can refresh the page and see all the nodes in the cluster:
We can also see above that the shared storage is accessible to all nodes, even though we added it on only one node.
Virtual Machine Migrations
We can test our cluster by migrating a VM:
This VM was created using a local disk. Migrating a VM with a local disk:
It took a few minutes to migrate over a 1 Gbps network. The VM was responsive during this time.
Now we can see that the VM has moved to the prx-2 node:
We can also verify this from the logs at the bottom of the screen:
Now let us create a VM using shared storage. The process is the same, except that we select a disk from the shared storage:
Migrating a VM with shared storage:
It is very quick:
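Migrations can also be triggered from the shell; a sketch, where the VM ID 100 and the target node prx-2 are examples:

```bash
# live-migrate VM 100 to node prx-2
qm migrate 100 prx-2 --online
```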
So far, we have migrated VMs manually, and there are use cases for doing so. But if we want VMs to migrate automatically when there is an issue with a given node, then we need High Availability (HA):
- Click on Add under Resources
- Select the VM to add to HA
We can leave Max Restarts and Max Relocate at 1. We do not want the migration to be retried multiple times; we would rather find out about the underlying issue first. The VM was stopped when it was added, so it is starting now because the Requested State above was 'started':
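The same can be configured from the shell with ha-manager; a sketch, where the VM ID is an example:

```bash
# add VM 100 as an HA resource with the settings above
ha-manager add vm:100 --state started --max_restart 1 --max_relocate 1

# check HA status
ha-manager status
```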
Now, if something happens to the node on which the VM is running, the VM will be moved to another node automatically (hence HA). I tested this by unplugging the network cable from the server the VM was running on, and the VM moved to another node.
While High Availability (HA) is a powerful feature with clear use cases, it’s important to assess whether it’s necessary based on specific requirements. In scenarios where redundancy is already built into the application layer, such as Elasticsearch or Kafka clusters, we might prioritize high-speed local storage over fast VM migration, even if that means reduced mobility for the virtual machines. Ensuring data consistency often takes precedence in such cases. Conversely, for services like NGINX acting as a load balancer, HA can be a valuable option, as configuration updates are infrequent and the service benefits more from uptime and reliability than instantaneous data replication.