Table of Contents
https://ubuntu.com/blog/nested-containers-in-lxd
build openwrt lxd
password
lxc exec first -- /bin/bash
passwd ubuntu
exit
- Debian
lxc image info images:debian/11/cloud
lxc launch images:debian/11/cloud debian
- CPU limit
https://www.jamescoyle.net/how-to/2532-setting-cpu-resource-limits-with-lxc
backup
move lxc container to different storage
lxc stop container-name
lxc move container-name temp-container-name -s new-storage-pool
lxc move temp-container-name container-name
lxc start container-name
clustering
https://discuss.linuxcontainers.org/t/ha-failover-replication/8628
iptables not loading
use builds created from https://github.com/mikma/lxd-openwrt
security
root@ubuntu:/home/ubuntu# lxc config set openwrt01 security.privileged true
root@ubuntu:/home/ubuntu# lxc config set openwrt01 security.nesting true
arm raspberry pi
10.11.13.244
important: use a static IP
root@ubuntu:/home/ubuntu# cat /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by the datasource. Changes
# to it will not persist across an instance reboot. To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
ethernets:
eth0:
dhcp4: no
optional: true
eth1:
dhcp4: no
optional: true
bridges:
br0:
macaddress: 00:e0:4c:12:09:83
interfaces: [eth0]
dhcp4: false
addresses: [10.11.13.244/24]
gateway4: 10.11.13.1
nameservers:
addresses: [10.11.13.1]
br1:
interfaces: [eth1]
dhcp4: false
version: 2
* profile
config: {}
description: 2 interfaces
devices:
eth0:
name: eth0
nictype: bridged
parent: stageingbr0
type: nic
eth1:
name: eth1
nictype: bridged
parent: stageingbr1
type: nic
eth2:
name: eth2
nictype: bridged
parent: stageingmgmt
type: nic
root:
path: /
pool: default
type: disk
name: twointf
used_by:
https://downloads.openwrt.org/releases/19.07.4/targets/armvirt/64/openwrt-19.07.4-armvirt-64-default-rootfs.tar.gz
sudo apt-get install snapd bridge-utils
sudo snap install core lxd
lxc image import openwrt-19.07.4-armvirt-64-default-rootfs.tar.gz --alias openwrt-19.07.04
Permanent solution: edit /etc/environment and add /snap/bin to PATH
openwrt x86 build
apt install lxc
apt install bridge-utils
mkdir lxd
cd lxd/
wget https://archive.openwrt.org/releases/19.07.2/targets/x86/generic/openwrt-19.07.2-x86-generic-generic-rootfs.tar.gz
or
wget https://downloads.openwrt.org/releases/19.07.3/targets/x86/generic/openwrt-19.07.3-x86-generic-generic-rootfs.tar.gz
ls -la
nano metadata.yaml
architecture: "amd64"
creation_date: 1592666616
tar cvf openwrt-meta.tar metadata.yaml
lxc image import openwrt-meta.tar openwrt-19.07.2-x86-generic-generic-rootfs.tar.gz --alias openwrt_19.07
lxc launch openwrt_19.07 openwrt
lxc exec openwrt ash
LXD networking
- lxd should not change iptables
lxc network set lxdbr0 ipv4.nat false
lxc network set lxdbr0 ipv6.nat false
lxc network set lxdbr0 ipv6.firewall false
lxc network set lxdbr0 ipv4.firewall false
openwrt setup
Linux Containers with OpenWrt
by Craig Miller
Virtual Network in the Palm of your hand
In Linux Containers on the Pi I described how to run LXC/LXD on SBCs (Small Board Computers), including the Raspberry Pi.
Linux Containers Part 2
Although you can turn your Pi into an OpenWrt router, it never appealed to me, since the Pi has so few (2) interfaces. But after playing with LXD, and transparent bridged access for the containers, it made sense that it might be useful. And after creating a server farm on a Raspberry Pi, I can see that there are those who would want a firewall in front of the servers to reduce the threat surface.
Docker attempts this by fronting the containers with the dockerd daemon, but the networking is kludgy at best. If you choose to go it on your own and use Docker's routing, you will quickly find yourself in the 90s, where everything must be manually configured (address range, gateway addresses, static routes to get into and out of the container network). The other option is to use NAT44 and NAT66, which is just wrong, and results in losing true peer-to-peer connectivity, limited server access (since only one server can be on port 80 or 443), and a whole host of other NAT brokenness.
OpenWrt, on the other hand, is a widely used open-source router software project, running on hundreds of different routers. It includes excellent IPv6 support, including DHCPv6-PD (prefix delegation, for automatic addressing of the container network, plus route insertion), an easy-to-use firewall web interface, and full routing protocol support (such as RIPng or OSPF) if needed.
Going Virtual
The goal is to create a virtual environment which not only has the excellent network management of LXC, but also an easy-to-use router/firewall via the OpenWrt web interface (called LuCI), all running on the Raspberry Pi (or any Linux machine).
Virtual router Network
Motivation
The OpenWrt project does an excellent job of creating images for hundreds of routers. I wanted to take an existing generic image and make it work on LXD without recompiling or building OpenWrt from source.
Additionally, I wanted it to run on a Raspberry Pi (ARM processor). Most implementations of OpenWrt in virtual environments run on x86 machines.
If you would rather build OpenWrt, please see the github project https://github.com/mikma/lxd-openwrt (x86 support only)
Installing LXD on the Raspberry Pi
Unfortunately, the default Raspbian image does not support the namespaces or cgroups used to isolate Linux Containers. Fortunately, there is an Ubuntu 18.04 image available for the Pi which does.
UPDATE: Sept 2019: Raspbian Buster now supports Linux Containers (LXD) via snapd. To install LXD on Raspbian:
sudo apt-get install snapd bridge-utils
sudo snap install core lxd
Add your userid to the lxd group, and run lxd init. Done!
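The group and init steps above can be sketched as follows (a sketch; `lxd init --auto` accepts LXD's default storage and network answers, while a plain `lxd init` prompts interactively):

```shell
# Add the current user to the lxd group (log out and back in for it to take effect)
sudo usermod -aG lxd "$USER"

# Initialize LXD non-interactively with default settings
sudo lxd init --auto
```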
If you haven't already installed LXD on your Raspberry Pi, please look at Linux Containers on the Pi blog post.
Creating an LXD Image
NOTE: Unless otherwise stated, all commands are run on the Raspberry Pi.
Using lxc image import, an image can be pulled into LXD. The steps are:
1. Download the OpenWrt rootfs tarball
2. Create a metadata.yaml file, and place it in a tar file
3. Import the rootfs tarball and metadata tarball to create an image
Getting the OpenWrt rootfs
The OpenWrt project not only provides squashfs and ext4 images, but also simple tar.gz files of the rootfs. The current release is 18.06.1, and I recommend starting with it.
The ARM-virt rootfs tarball can be found at OpenWrt
Download the OpenWrt 18.06.1 rootfs tarball for Arm.
The x86 rootfs is here
Create a metadata.yaml file
Although the yaml file can contain quite a bit of information, the minimum requirement is architecture and creation_date. Use your favourite editor to create a file named metadata.yaml
architecture: "armhf"
creation_date: 1544922658
The creation date is the current time (in seconds) since the Unix epoch (1 Jan 1970). The easiest way to get this value is to find it on the web, such as at the EpochConverter.
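Rather than looking the timestamp up on the web, it can be generated locally with date:

```shell
# Seconds since the Unix epoch (1 Jan 1970)
date +%s

# Write a minimal metadata.yaml using the current timestamp
printf 'architecture: "armhf"\ncreation_date: %s\n' "$(date +%s)" > metadata.yaml
```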
Once the metadata.yaml file is created, tar it up and name it anything that makes sense to you.
tar cvf openwrt-meta.tar metadata.yaml
Import the image into LXD
Place both tar files (metadata & rootfs) in the same directory on the Raspberry Pi, and use the following command to import the image:
lxc image import openwrt-meta.tar default-root.tar.gz --alias openwrt_armhf
Starting up Virtual OpenWrt
Unfortunately, the imported image won't boot as-is. A helper script has been developed to create devices in /dev so that OpenWrt will boot properly.
The steps to get your virtual OpenWrt up and running are (init.sh is not required if you are using OpenWrt 19.07 or later; omit steps 3-6):
1. Create the container
2. Adjust some of the parameters of the container
3. Download the init.sh script from github
4. Copy the init.sh script to /root on the image
5. Log into the OpenWrt container and execute sh init.sh
6. Validate that OpenWrt has completed booting
Create the OpenWrt Container
I use router as the name of the OpenWrt container
lxc init local:openwrt_armhf router
lxc config set router security.privileged true
In order for init.sh to run the mknod command, the container must run as privileged.
Adjust some parameters for the OpenWrt container
Since this is going to be a router, it is useful to have two interfaces (for WAN & LAN), so a profile for this network config must be created. Create the profile, and edit it to match the config below (assuming you have br0 as WAN and lxdbr0 as LAN).
lxc profile create twointf
lxc profile edit twointf
config: {}
description: 2 interfaces
devices:
eth0:
name: eth0
nictype: bridged
parent: lxdbr0
type: nic
eth1:
name: eth1
nictype: bridged
parent: br0
type: nic
root:
path: /
pool: default
type: disk
name: twointf
Then edit the router container to have 2 interfaces. The only line you need to add is the volatile.eth1.hwaddr line, and be sure to use a unique MAC address (or just increment the eth0 MAC).
lxc config edit router

architecture: armv7l
config:
  image.architecture: armhf
  image.description: 'OpenWrt 18.06.1 from armvirt/32'
  image.os: openwrt
  image.release: 18.06.1
  ...
  volatile.eth0.hwaddr: 00:16:3e:72:44:b5
  volatile.eth1.hwaddr: 00:16:3e:72:44:b6
  ...
Now assign the twointf profile to the router container, and remove the default profile (which only has one interface)
lxc profile assign router twointf
lxc profile remove router default
Download init.sh from the OpenWrt-LXD open source project
The init.sh script is open source and resides on github. To download it to your Pi, use curl (you may have to install curl first).
curl https://raw.githubusercontent.com/cvmiller/openwrt-lxd/master/init.sh > init.sh
UPDATE: Sept 2019: OpenWrt release 19.07 no longer requires init.sh. Skip down to Managing the Virtual OpenWrt router.
Copy init.sh to the OpenWrt container
In order to use the lxc file push command, the container must be running, so we'll start the router.
lxc start router
Then copy the init.sh script to the container
lxc file push init.sh router/root/
Log into the OpenWrt container and execute the init.sh script
With the container started, the OpenWrt boot will stall after running procd (think init in Linux). Running init.sh lets the boot process continue, and OpenWrt should come up.
Log into the router container using the lxc exec command, and run the init.sh script.
lxc exec router sh
# sh init.sh
Validating OpenWrt is up and running
You can see if OpenWrt is up and running by looking at the processes. An unhappy container will only have about three; a happy container will have about 12. Typing ps inside the container should show something like this:
~ # ps
PID   USER     VSZ  STAT COMMAND
1     root     1324 S    /sbin/procd
78    root     1064 S    sh
107   root     1000 S    /sbin/ubusd
196   root     1016 S    /sbin/logd -S 64
213   root     1328 S    /sbin/rpcd
322   root     1512 S    /sbin/netifd
357   root     1228 S    /usr/sbin/odhcpd
409   root     828  S    /usr/sbin/dropbear -F -P /var/run/dropbear.1.pid -p 22 -K 300 -T 3
467   root     820  S    odhcp6c -s /lib/netifd/dhcpv6.script -Ntry -P0 -t120 eth1
469   root     1064 S    udhcpc -p /var/run/udhcpc-eth1.pid -s /lib/netifd/dhcp.script -f -t 0 -i eth1 -x hostname:router
508   root     1116 S    /usr/sbin/uhttpd -f -h /www -r OpenWrt -x /cgi-bin -t 60 -T 30 -k 20 -A 1 -n 3 -N 100 -R -p 0.0.
850   dnsmasq  1152 S    /usr/sbin/dnsmasq -C /var/etc/dnsmasq.conf.cfg01411c -k -x /var/run/dnsmasq/dnsmasq.cfg01411c.pi
1191 root 1064 R ps
Additionally, if you have connected the router up the right way (e.g. WAN=eth1/br0, LAN=eth0/lxdbr0), the WAN and LAN interfaces should have addresses. Use ip addr to view them. (Note the IP address of the WAN interface, for management later.)
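From the LXD host, the same check can be done without logging into the container (the container name router follows the example above):

```shell
# Show the WAN interface address from outside the container
lxc exec router -- ip addr show eth1

# Or list the addresses of all containers at once
lxc list
```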
Managing the Virtual OpenWrt router
The LuCI web interface is blocked on the WAN interface by default. In order to manage the router from the outside, a firewall rule allowing web access from the WAN must be inserted.
The standard way is to add the following to the bottom of the /etc/config/firewall file within the OpenWrt container.
lxc exec router sh
# vi /etc/config/firewall
...
config rule
option target 'ACCEPT'
option src 'wan'
option proto 'tcp'
option dest_port '80'
option name 'ext_web'
Save the file and then restart the firewall within the OpenWrt container.
/etc/init.d/firewall restart
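As a scripted alternative to hand-editing the file, the same rule can be created with OpenWrt's uci tool inside the container; this is a sketch mirroring the ext_web rule above:

```shell
# Append a new firewall rule section and set its options
uci add firewall rule
uci set firewall.@rule[-1].name='ext_web'
uci set firewall.@rule[-1].src='wan'
uci set firewall.@rule[-1].proto='tcp'
uci set firewall.@rule[-1].dest_port='80'
uci set firewall.@rule[-1].target='ACCEPT'

# Persist the change and reload the firewall
uci commit firewall
/etc/init.d/firewall restart
```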
Now you should be able to point your web browser at the WAN address (see the eth1 address in the ip addr output) and log in; the password is blank.
http://[2001:db8:ebbd:2080::93b]/
Follow the instructions to set a password, and configure the firewall as you like.
auto start lxc containers
$ lxc config set {vm-name} {key} {value}
$ lxc config set {vm-name} boot.autostart {true|false}
$ lxc config set {vm-name} boot.autostart.priority integer
$ lxc config set {vm-name} boot.autostart.delay integer
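For example, to auto-start a container named router (name assumed) early in the boot order, waiting 10 seconds before the next container starts:

```shell
lxc config set router boot.autostart true
lxc config set router boot.autostart.priority 10
lxc config set router boot.autostart.delay 10
```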
storage
zfs list
- new storage
root@ubupi4node01:~# truncate -s 50G /mnt/ssd/lxd/pool-ssd.img
root@ubupi4node01:~# zpool create pool-ssd /mnt/ssd/lxd/pool-ssd.img -m none
root@ubupi4node01:~# lxc storage create pool-ssd zfs source=pool-ssd
lxd bind address
lxc config set core.https_address "[::]:8443"
remove lxd
lxc list
lxc delete <whatever came from list>
lxc image list
lxc image delete <whatever came from list>
lxc network list
lxc network delete <whatever came from list>
echo '{"config": {}}' | lxc profile edit default
lxc storage volume list default
lxc storage volume delete default <whatever came from list>
lxc storage delete default
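The list-then-delete steps above can be scripted with lxc's csv output format (a sketch; this force-stops and deletes every container and image, so use with care):

```shell
# Delete all containers (--force stops running ones first)
for c in $(lxc list --format csv -c n); do
    lxc delete --force "$c"
done

# Delete all images by fingerprint
for f in $(lxc image list --format csv -c f); do
    lxc image delete "$f"
done
```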

