Persistent System Configuration for Proxmox VE 9
Persisting ethtool Settings
Method 1: Using /etc/network/interfaces (Recommended)
Edit the network configuration:

```
nano /etc/network/interfaces
```

Add to your interface configuration (replace `eno1` with your interface):

```
auto eno1
iface eno1 inet manual
    post-up /sbin/ethtool -K eno1 gso off gro off tso off
```

Apply the changes:

```
ifdown eno1 && ifup eno1
# or
systemctl restart networking
```
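As a quick sanity check after the restart, a small helper like the one below can confirm all three offloads are off. This is a sketch: `offloads_ok` parses `ethtool -k`-style text from stdin, and the sample string stands in for live output.

```shell
# offloads_ok: read `ethtool -k`-style output on stdin and succeed only if
# gso, gro and tso all report "off". The long feature names are what
# ethtool prints for these three offloads.
offloads_ok() {
    ! grep -E '^(generic-segmentation-offload|generic-receive-offload|tcp-segmentation-offload):' \
        | grep -q ': on'
}

# Example against a captured sample (on a live host: ethtool -k eno1 | offloads_ok)
sample='generic-segmentation-offload: off
generic-receive-offload: off
tcp-segmentation-offload: off'
echo "$sample" | offloads_ok && echo "offloads disabled"
```

On a live host this would exit non-zero (and print nothing) if any of the three offloads were still enabled, which makes it easy to wire into a monitoring check.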
Method 2: Using systemd Service
Create the service file:

```
nano /etc/systemd/system/ethtool-settings.service
```

Add this content (replace `eno1` with your interface):

```
[Unit]
Description=Apply ethtool settings
After=network.target

[Service]
Type=oneshot
ExecStart=/sbin/ethtool -K eno1 gso off gro off tso off
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

Enable and start the service:

```
systemctl daemon-reload
systemctl enable ethtool-settings.service
systemctl start ethtool-settings.service
```
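Before enabling, it can help to sanity-check the unit file for the directives this oneshot service relies on. This is only a sketch that catches obvious omissions; systemd's own `systemd-analyze verify` does real validation. The `/tmp` copy below is just for demonstration.

```shell
# unit_ok: succeed if the unit file (path in $1) contains the directives
# this oneshot service needs; print the first missing one otherwise.
unit_ok() {
    for key in 'Type=oneshot' 'ExecStart=' 'RemainAfterExit=yes' 'WantedBy=multi-user.target'; do
        grep -q "^$key" "$1" || { echo "missing: $key"; return 1; }
    done
    echo "unit file looks complete"
}

# Example against a scratch copy
# (on the host: unit_ok /etc/systemd/system/ethtool-settings.service)
cat > /tmp/ethtool-settings.service <<'EOF'
[Unit]
Description=Apply ethtool settings
After=network.target

[Service]
Type=oneshot
ExecStart=/sbin/ethtool -K eno1 gso off gro off tso off
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
EOF
unit_ok /tmp/ethtool-settings.service
```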
Persisting Kernel Boot Parameter
Edit the GRUB configuration:

```
nano /etc/default/grub
```

Modify this line:

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_aspm=off"
```

Update GRUB and reboot:

```
update-grub
reboot
```
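After the reboot, the parameter should appear in `/proc/cmdline`. A small helper (a sketch; the sample string below stands in for a real command line) can check for it explicitly:

```shell
# has_param: succeed if the kernel command line on stdin contains the
# given parameter as a whole word.
has_param() {
    grep -qw -- "$1"
}

# Example with a sample command line
# (on a live host: has_param pcie_aspm=off < /proc/cmdline)
sample='BOOT_IMAGE=/boot/vmlinuz root=/dev/mapper/pve-root ro quiet pcie_aspm=off'
echo "$sample" | has_param 'pcie_aspm=off' && echo "pcie_aspm=off is active"
```

`grep -w` avoids false positives from parameters that merely contain the string you are looking for.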
Verification
Check the ethtool settings (note that `ethtool -k` prints the long feature names, so grepping for `gso|gro|tso` would match nothing):

```
ethtool -k eno1 | grep -E 'generic-segmentation-offload|generic-receive-offload|tcp-segmentation-offload'
```

Check the kernel parameters:

```
cat /proc/cmdline
```
Important Notes
- Replace `eno1` with your actual network interface name
- Test changes in a non-production environment first
- Backup configuration files before modification
- These methods work for Proxmox VE 9, which is based on Debian 13 (Trixie)
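The backup note above can be as simple as a timestamped copy before each edit. A minimal sketch, demonstrated on a scratch file (on the host you would back up the real paths, e.g. `/etc/default/grub`):

```shell
# backup_cfg: copy a config file to <name>.bak-YYYY-MM-DD before editing,
# preserving ownership and permissions.
backup_cfg() {
    cp -a "$1" "$1.bak-$(date +%F)"
}

# Example on a scratch file (on the host: backup_cfg /etc/default/grub)
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' > /tmp/grub-demo
backup_cfg /tmp/grub-demo
ls /tmp/grub-demo.bak-*
```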
PCI Express Passthrough on a NUC 10
To perform PCI Express passthrough on a NUC 10 with Proxmox, you must enable Intel's VT-d (IOMMU) in the NUC's BIOS, configure Proxmox by adding intel_iommu=on to the GRUB boot parameters, load the necessary vfio modules, and then assign the desired PCI(e) device to the virtual machine. A NUC 10, with its Intel CPU, is generally compatible with this process.

1. Enable Virtualization in BIOS
- Access BIOS: Reboot your NUC 10 and enter its BIOS/UEFI settings.
- Enable VT-d/IOMMU: Look for settings related to Intel VT-d (Virtualization Technology for Directed I/O) or IOMMU and enable them.

2. Configure Proxmox Host
- Connect via SSH or Shell: Access the Proxmox host's shell or terminal.
- Edit GRUB Configuration: Open the GRUB configuration file with `nano /etc/default/grub`.
- Add intel_iommu=on: Find the GRUB_CMDLINE_LINUX_DEFAULT line and add intel_iommu=on to the existing parameters.
- Update GRUB: Run `update-grub` to apply the changes.
- Edit Modules File: Add the necessary vfio modules to /etc/modules:
```
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
```

(On recent kernels, including the one shipped with Proxmox VE 9, the vfio_virqfd functionality is built into the core vfio module and the separate entry can be omitted.)
- Update Initramfs: Execute `update-initramfs -u -k all` to integrate the new kernel modules.
- Reboot: Reboot the Proxmox VE host to apply all the changes.

3. Verify IOMMU and Device Isolation
- Check IOMMU status: Use the command `dmesg | grep -e DMAR -e IOMMU` to verify IOMMU is enabled.
- List PCI devices: Use the Proxmox command `pvesh get /nodes/{nodename}/hardware/pci --pci-class-blacklist ""` (replace {nodename} with your node's name) to see available devices, and check their IOMMU groups to ensure the device you want to pass through is in its own isolated group.

4. Assign the PCI Express Device to a VM
- Add PCI Device: In the Proxmox web interface, select the VM, go to the "Hardware" tab, and click "Add" > "PCI Device".
- Select Your Device: Choose the desired PCI/PCIe device from the list. Ensure you have selected the correct device and checked "All Functions" if needed.
- Add the Device: Click "Add" to assign the device to the virtual machine.
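The isolation check in step 3 can also be done straight from sysfs. The loop below is a sketch: it reads /sys/kernel/iommu_groups (which only exists once IOMMU is enabled) and prints each group with its devices, so you can see whether the device you want shares a group; the optional `$1` override and the mock `/tmp` tree exist only so the sketch can be demonstrated off-host.

```shell
# list_iommu_groups: print "group N: <pci address>" for every device, so
# you can spot devices that share an IOMMU group with the one you want
# to pass through. $1 optionally overrides the sysfs root for testing.
list_iommu_groups() {
    root="${1:-/sys/kernel/iommu_groups}"
    for dev in "$root"/*/devices/*; do
        [ -e "$dev" ] || continue
        group=${dev#"$root"/}   # strip the sysfs root prefix
        group=${group%%/*}      # keep only the group number
        echo "group $group: $(basename "$dev")"
    done
}

# Example against a mock sysfs tree (on the host, just run: list_iommu_groups)
mkdir -p /tmp/iommu/7/devices/0000:00:02.0
list_iommu_groups /tmp/iommu
```

A device is safe to pass through on its own only if no other device (apart from bridges) appears under the same group number.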
