Forester is in an early stage of development. Only the insecure HTTP protocol is supported, there is no SecureBoot support yet, and passwords are stored in clear text!
Documentation of the Forester Project.
Forester is a bare-metal, image-based, unattended provisioning service for Red Hat Anaconda (Fedora, RHEL, CentOS Stream, Alma Linux…) with simplicity of configuration and use in mind. It uses the Redfish API and PXE (BIOS/EFI) or UEFI HTTP Boot to deploy images created by Image Builder through Anaconda.
An example workflow for installation and operation is illustrated by two diagrams: the installation workflow, and the provisioning workflow with Forester.
The following talks give a brief overview of Forester. A quick introduction to Forester from the Red Hat Console Q3 Hackathon 2023:
Demo of Forester provisioning libvirt VMs (this is only useful for development purposes):
Full lightning talk from DevConf 2023 (poor audio quality).
Download and extract the CLI for your architecture and try connecting to the controller:
forester-cli --url http://forester:8000 appliance list
If the service is running on localhost port 8000, you can omit the --url argument.
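For repeated use, the URL can be baked into a small shell wrapper. This is a sketch; the FORESTER_URL variable is our own convention, not something forester-cli itself reads:

```shell
# Wrapper that injects --url so the remaining arguments pass through
# unchanged. FORESTER_URL is an illustrative convention, not a CLI feature.
forester() {
  forester-cli --url "${FORESTER_URL:-http://localhost:8000}" "$@"
}
# Usage: FORESTER_URL=http://forester:8000 forester appliance list
```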
The recommended way to run Forester is a Podman pod. The following commands create three new volumes (for the PostgreSQL database, images, and logs) and a new pod named forester with two containers, forester-pg and forester-api, exposing port 8000 for the REST API and files.
podman volume create forester-pg
podman volume create forester-img
podman volume create forester-log
podman pod create --name forester -p 8000:8000 -p 8514:8514
podman run -d --name forester-pg --pod forester \
-e POSTGRESQL_USER=forester -e POSTGRESQL_PASSWORD=forester -e POSTGRESQL_DATABASE=forester \
-v forester-pg:/var/lib/pgsql/data:Z quay.io/fedora/postgresql-15; sleep 5s
podman run -d --name forester-api --pod forester \
-e DATABASE_USER=forester -e DATABASE_PASSWORD=forester \
-v forester-img:/images:Z -v forester-log:/logs:Z quay.io/forester/controller:latest
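The `sleep 5s` above is a crude way to wait for PostgreSQL to come up; a small retry helper is more robust. This is a sketch — the `pg_isready` call inside the container is an assumption about the postgres image:

```shell
# retry CMD ARGS...: run CMD until it succeeds, up to $RETRIES times
# (default 30), sleeping one second between attempts.
retry() {
  for _ in $(seq 1 "${RETRIES:-30}"); do
    "$@" && return 0
    sleep 1
  done
  return 1
}
# Instead of `sleep 5s` after starting forester-pg:
#   retry podman exec forester-pg pg_isready -U forester
```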
The TFTP service, which is required for PXE, must be started separately with the --network host option. Since the service port is below 1024, it must either be executed as root, or the unprivileged port boundary can be lowered:
sudo sysctl net.ipv4.ip_unprivileged_port_start=69
To start the TFTP-HTTP proxy, which proxies TFTP downloads to http://localhost:8000 by default (configurable via options):
podman run -d --name forester-proxy --network host quay.io/forester/controller:latest /forester-proxy --verbose
In order for TFTP to work through NAT, connection tracking modules must be loaded too:
sudo modprobe nf_conntrack_tftp
sudo modprobe nf_nat_tftp
To make both changes permanent on a Fedora or Red Hat system:
echo "net.ipv4.ip_unprivileged_port_start=69" > /etc/sysctl.d/10-unprivileged_ports.conf
echo "nf_conntrack_tftp" > /etc/modules-load.d/tftp.conf
echo "nf_nat_tftp" >> /etc/modules-load.d/tftp.conf
To uninstall everything, including data and images:
podman rm -f forester-pg forester-api forester-proxy
podman pod rm -f forester
podman volume rm forester-img forester-pg forester-log
To start Forester via docker-compose or podman-compose:
curl https://raw.githubusercontent.com/foresterorg/forester/main/compose.yaml > compose.yaml
docker-compose up -d
Compilation is described in the Contributing chapter.
Images can be built in Red Hat Console or locally. Follow the instructions below to perform the build on your own hardware.
Install osbuild-composer and composer-cli, then start the build service:
sudo dnf install osbuild-composer composer-cli
sudo systemctl enable --now osbuild-composer.socket
For more information on how to set up permissions for non-root accounts, read the project documentation.
Create a blueprint file named base-image-with-vim.toml with the following contents:
name = "base-image-with-vim"
description = "A base system with vim"
version = "0.0.1"
[[packages]]
name = "vim"
version = "*"
Read the osbuild documentation for more information on customizing the image. Now upload the blueprint file to the service:
composer-cli blueprints push base-image-with-vim.toml
To build a Fedora/RHEL installation image:
composer-cli compose start base-image-with-vim image-installer
Wait until it is done and then download the result tarball:
composer-cli compose results UUID
You may proceed to the next chapter. To build a Fedora IoT (or RHEL for Edge) installation image, create a blueprint file named empty.toml with the following contents:
name = "empty"
description = "Empty blueprint"
version = "0.0.1"
Push the empty blueprint:
composer-cli blueprints push empty.toml
Build and download a new ostree repository:
composer-cli compose start base-image-with-vim iot-commit
composer-cli compose results UUID
The ostree repository needs to be published over the HTTP protocol; this is not covered in this tutorial.
composer-cli compose start-ostree --ref "fedora/38/x86_64/iot" --url http://zzzap.tpb.lab.eng.brq.redhat.com/~lzap/f38-iot-commit/repo/ empty iot-installer
composer-cli compose results UUID
When building RHEL instead of Fedora, replace iot-installer with edge-installer. The ref will also be different, typically rhel/8/x86_64/edge.
Use the following command to watch the progress of the build:
composer-cli compose info UUID
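Rather than re-running the command manually, the status can be polled in a loop. This is a sketch; it assumes the words FINISHED or FAILED appear in the `compose info` output once a compose settles:

```shell
# wait_for_compose UUID: poll `composer-cli compose info` every 30 seconds
# until the output reports FINISHED or FAILED.
wait_for_compose() {
  while ! composer-cli compose info "$1" | grep -qE 'FINISHED|FAILED'; do
    sleep 30
  done
}
# Usage: wait_for_compose UUID
```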
Once the build is finished, download the ISO file with:
composer-cli compose results UUID
Composes can be deleted to save disk space using the delete command.
The first step is to upload the image:
forester-cli image upload --name Fedora37 f37-minimal-image.iso
Check it:
forester-cli image list
Image ID Image Name
1 RHEL9
2 Fedora37
forester-cli image show Fedora37
Attribute Value
ID 2
Name Fedora37
To create a Redfish appliance, use the kind named “redfish”:
forester-cli appliance create --kind redfish --name dellr350 --uri https://root:calvin@dr350-a14.local
To create an appliance for hacking and development, a good choice is libvirt over a local UNIX socket; an example for the system session:
forester-cli appliance create --kind libvirt --name system --uri qemu:///system
An example for a user session (the URI accepts both unix socket and qemu paths):
sudo usermod -a -G libvirt $(whoami)
forester-cli appliance create --kind libvirt --name session --uri qemu:///session
Or via TCP connection (TLS is not supported):
forester-cli appliance create --kind libvirt --name remote --uri tcp://host.containers.internal:16509
Replace host.containers.internal with the Forester hostname; this special name works from within Podman containers to reach the host system. For libvirt to accept TCP connections, it must be configured to do so:
grep '^[^#]' /etc/libvirt/libvirtd.conf
auth_tcp = "none"
And:
systemctl enable --now libvirtd-tcp.socket
It is possible to create a no-operation (noop) appliance, which does nothing, or a redfish_manual appliance, which detects systems but performs no power operations. With a noop appliance, systems need to be registered manually, whereas with redfish_manual systems can be detected automatically.
Warning: username and password are currently stored as clear text and fully readable through the API.
forester-cli appliance list
ID Name Kind URI
1 system 1 unix:///var/run/libvirt/libvirt-sock
2 session 1 qemu:///session
3 dellr350 2 https://root:calvin@dr350-a14.local
Discover the system, or multiple blades in a chassis:
forester-cli appliance enlist dellr350
One or more systems are available now; each system has a unique ID, one or more MAC addresses, and a randomly generated name. A system can be referenced via both its MAC address and its random name:
forester-cli system list
ID Name Hw Addresses Acquired Facts
1 Lynn Viers 6c:fe:54:70:60:10 (4) false Dell Inc. PowerEdge R350
To show more details of a system:
forester-cli system show Viers
Attribute Value
ID 1
Name Lynn Viers
Acquired false
Acquired at Mon Sep 4 14:40:50 2023
Image ID 1
MAC 6c:fe:54:70:60:10
MAC c4:5a:b1:a0:f2:b5
MAC 6c:fe:54:70:60:11
MAC c4:5a:b1:a0:f2:b4
Appliance Name dell
Appliance Kind 2
Appliance URI https://root:calvin@dr350-a14.local
UID 4c4c4544-004c-3510-804c-c4c04f435731
Fact Value
baseboard-asset-tag
baseboard-manufacturer Dell Inc.
baseboard-product-name 0MTYYT
baseboard-serial-number .DL5XXXX.MXWSG0000000HE.
baseboard-version A02
bios-release-date 11/14/2022
bios-revision 1.5
bios-vendor Dell Inc.
bios-version 1.5.1
chassis-asset-tag Not Specified
chassis-manufacturer Dell Inc.
chassis-serial-number DL5XXXX
chassis-type Rack Mount Chassis
chassis-version Not Specified
cpuinfo-processor-count 4
firmware-revision
memory-bytes 8201367552
processor-family Xeon
processor-frequency 2800 MHz
processor-manufacturer Intel
processor-version Intel(R) Xeon(R) E-2314 CPU @ 2.80GHz
redfish_asset_tag
redfish_description Computer System which represents a machine (physical or virtual) and the local resources such as memory, cpu and other devices that can be accessed from that machine.
redfish_manufacturer Dell Inc.
redfish_memory_bytes 8589934592
redfish_model PowerEdge R350
redfish_name System
redfish_oid /redfish/v1/Systems/System.Embedded.1
redfish_part_number 0MTYYTA02
redfish_pcie_dev_count 9
redfish_processor_cores 4
redfish_processor_count 1
redfish_processor_model Intel(R) Xeon(R) E-2314 CPU @ 2.80GHz
redfish_serial_number MXWSJ0032100HI
redfish_sku DL5XXXX
serial DL5XXXX
system-family PowerEdge
system-manufacturer Dell Inc.
system-product-name PowerEdge R350
system-serial-number DL5XXXX
system-sku-number SKU=0A94;ModelName=PowerEdge R350
system-uuid 4c4c4544-004c-3510-804c-c4c04f435731
system-version Not Specified
Facts which start with redfish were recognized via the Redfish API; other facts can be discovered by booting the system into Anaconda while in a released state:
forester-cli appliance bootnet lynn
An alternative way of registering systems is to enter them manually; this is useful for the noop appliance:
forester-cli system register --name my-system-13 --hwaddrs AA:BB:CC:DD:EE:FF --facts test=1 --appliance noop --uid unique_uuid
Every system must have an appliance assigned; a noop (no operation) appliance can be used, together with a unique string (typically a UUID or a random number).
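Manual registration is easy to script when there are many machines. A minimal sketch, assuming a comma-separated file with our own name,mac,uid column layout (not a Forester format):

```shell
# register_batch FILE: register one system per "name,mac,uid" line,
# attaching each to the noop appliance.
register_batch() {
  while IFS=, read -r name mac uid; do
    forester-cli system register --name "$name" --hwaddrs "$mac" \
      --appliance noop --uid "$uid"
  done < "$1"
}
# Usage: register_batch systems.csv
```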
Installation can be further customized using Anaconda Kickstart syntax via snippets. Several kinds of snippets can be optionally associated with a system, for example disk, rootpw, locale, or post (a %post installation snippet). To list all snippets:
forester-cli snippet list
ID Name Kind
1 SingleVolume disk
2 SharedPass rootpw
3 InstallAnsible post
To create a snippet:
forester-cli snippet create --name MySnippet --kind locale
An editor (chosen via the $EDITOR environment variable) is launched; when the file is saved, the snippet is uploaded to the service. Example content:
lang cs_CZ.UTF-8
keyboard 'cs'
timezone Europe/Prague --utc
To edit or delete a snippet, use the appropriate edit or delete CLI subcommands.
A system can have any number of snippets associated with it, plus one extra “custom snippet” which can be used to provide one or more ad-hoc Kickstart lines without storing anything in the database.
Deploying is done via the deploy command:
forester-cli system deploy lynn --image RHEL9
To customize installation with snippets:
forester-cli system deploy lynn --image RHEL9 --snippets SingleVolume SharedPass --customsnippet "%pre\necho Hello\n%end\n"
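For reference, the escaped custom snippet above represents the Kickstart lines below. Whether the "\n" escapes are expanded by the shell or by the CLI is not covered here; printf shows the intended result:

```shell
# The three Kickstart lines the custom snippet stands for
# (prints: %pre / echo Hello / %end, one per line):
printf '%%pre\necho Hello\n%%end\n'
```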
Warning: There is no authentication or authorization in the API, anyone can deploy or even add new appliances.
Both hardware discovery and installation are handled by Anaconda, which takes a Kickstart file as input. To view the current Kickstart of an existing system:
forester-cli system kickstart lynn
Depending on the state of the system, it will be either the discovery Kickstart or the installation Kickstart. The Kickstart can be checked for syntax errors using the ksvalidator command.
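The two commands combine naturally into a small check helper (a sketch; ksvalidator ships in the pykickstart package):

```shell
# ks_check SYSTEM: dump the rendered Kickstart to a temporary file and
# run ksvalidator on it to catch syntax errors.
ks_check() {
  tmp=$(mktemp) &&
  forester-cli system kickstart "$1" > "$tmp" &&
  ksvalidator "$tmp"
}
# Usage: ks_check lynn
```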
The Anaconda installer is configured to send all logs via the syslog protocol to Forester. To list all available logs of a system:
forester-cli system logs lynn
To view log contents of an installation:
forester-cli system logs lynn -d f-1-06fe9588-b50e-4ee7-8e81-1f87a8-b265e1.log
Logs can be viewed as they arrive in the service; there is no “watch” feature available, though.
Both the service and the CLI are written in Go; we format the code with go fmt and stick with widely adopted Go best practices.
Build the project; the script will also install the CLI tools required for code generation and database migration:
git clone https://github.com/foresterorg/forester
cd forester
./build.sh
When you start the backend for the first time, it will migrate the database (create tables). By default, it connects to the “forester” database on “localhost” as the “postgres” user.
./forester-controller
Configure the libvirt environment for booting from the network via UEFI HTTP Boot by adding the “dnsmasq” options below into the “default” libvirt network. Also, optionally configure PXEv4 and PXEv6 to return a non-existent file (“USE_HTTP” in the example) so the OVMF firmware falls back to HTTPv4 faster:
Edit the default network
sudo virsh net-edit default
Change it to the following (keep your own UUID and IP configuration):
<network xmlns:dnsmasq='http://libvirt.org/schemas/network/dnsmasq/1.0'>
<name>default</name>
<uuid>9f3e4377-3d33-42df-b34c-7880295d24ee</uuid>
<forward mode='nat'/>
<bridge name='virbr0' zone='trusted' stp='on' delay='0'/>
<mac address='52:54:00:7a:00:01'/>
<ip address='192.168.122.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.122.2' end='192.168.122.254'/>
<bootp file='USE_HTTP'/>
</dhcp>
</ip>
<dnsmasq:options>
<dnsmasq:option value='dhcp-vendorclass=set:bios,PXEClient:Arch:00000'/>
<dnsmasq:option value='dhcp-vendorclass=set:efi,PXEClient:Arch:00007'/>
<dnsmasq:option value='dhcp-vendorclass=set:efix64,PXEClient:Arch:00009'/>
<dnsmasq:option value='dhcp-vendorclass=set:efihttp,HTTPClient:Arch:00016'/>
<dnsmasq:option value='dhcp-option-force=tag:efihttp,60,HTTPClient'/>
<dnsmasq:option value='dhcp-match=set:ipxe-http,175,19'/>
<dnsmasq:option value='dhcp-match=set:ipxe-https,175,20'/>
<dnsmasq:option value='dhcp-match=set:ipxe-menu,175,39'/>
<dnsmasq:option value='dhcp-match=set:ipxe-pxe,175,33'/>
<dnsmasq:option value='dhcp-match=set:ipxe-bzimage,175,24'/>
<dnsmasq:option value='dhcp-match=set:ipxe-iscsi,175,17'/>
<dnsmasq:option value='dhcp-match=set:ipxe-efi,175,36'/>
<dnsmasq:option value='tag-if=set:ipxe-ok,tag:ipxe-http,tag:ipxe-menu,tag:ipxe-iscsi,tag:ipxe-pxe,tag:ipxe-bzimage'/>
<dnsmasq:option value='tag-if=set:ipxe-ok,tag:ipxe-http,tag:ipxe-menu,tag:ipxe-iscsi,tag:ipxe-efi'/>
<dnsmasq:option value='dhcp-boot=tag:bios,bootstrap/ipxe/undionly.kpxe,,192.168.122.1'/>
<dnsmasq:option value='dhcp-boot=tag:!ipxe-ok,tag:efi,bootstrap/ipxe/ipxe-snponly-x86_64.efi,,192.168.122.1'/>
<dnsmasq:option value='dhcp-boot=tag:!ipxe-ok,tag:efi64,bootstrap/ipxe/ipxe-snponly-x86_64.efi,,192.168.122.1'/>
<dnsmasq:option value='dhcp-boot=tag:!ipxe-ok,tag:efihttp,http://192.168.122.1:8000/bootstrap/ipxe/ipxe-snponly-x86_64.efi'/>
<dnsmasq:option value='dhcp-boot=tag:ipxe-ok,tag:!efihttp,bootstrap/ipxe/chain.ipxe,,192.168.122.1'/>
<dnsmasq:option value='dhcp-boot=tag:ipxe-ok,tag:efihttp,http://192.168.122.1:8000/bootstrap/ipxe/chain.ipxe'/>
</dnsmasq:options>
</network>
Make sure to update the HTTP address in case you want to use a different network than “default” (which is 192.168.122.0/24). Restart the network to make the DHCP server settings effective:
sudo virsh net-destroy default
sudo virsh net-start default
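If you do use a different subnet, the addresses in the dnsmasq options must be rewritten consistently; as a sketch, sed can do this on a saved copy of the network XML before importing it again with virsh net-define (192.168.150.0/24 is only an example subnet):

```shell
# Rewrite the 192.168.122.0/24 addresses to another subnet; run the same
# sed over the full network XML, shown here on a single sample line:
sed 's/192\.168\.122\./192.168.150./g' <<'EOF'
<ip address='192.168.122.1' netmask='255.255.255.0'>
EOF
```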
When testing, create three VMs, one for each boot path configured above (BIOS PXE, UEFI PXE, and UEFI HTTP Boot).
Note that you cannot use libvirt for real provisioning: due to an issue in the libvirt driver, the machine does not boot into the OS after restart: https://github.com/foresterorg/forester/issues/6
The built-in TFTP server runs on port 6969 by default to allow development under non-root accounts. To start and debug the application’s built-in TFTP listener as a regular (developer) account, simply forward the traffic:
sudo iptables -t nat -I PREROUTING -p udp --dport 69 -j REDIRECT --to-ports 6969
When using firewalld:
sudo firewall-cmd --add-forward-port=port=69:proto=udp:toport=6969
Make sure to add --zone=libvirt when testing Forester via libvirt, and --permanent to make the rule permanent.
Note that this will not work when running the application via Podman or Docker; use it only for local development.
There are several emulators available.
The official one unfortunately does not allow updates (reboots), but you can still try it out:
podman run --rm -p 5000:5000 dmtf/redfish-interface-emulator:latest
However, there is another one from DMTF, which does support updates:
podman run --rm -p 5000:8000 dmtf/redfish-mockup-server:latest