Re: [yocto-autobuilder2][RFC][PATCH] Add multi-node content, extra config info

Trevor Gamblin

On 2021-04-05 2:55 p.m., Michael Halstead wrote:


On Mon, Apr 5, 2021 at 9:23 AM Trevor Gamblin <trevor.gamblin@...> wrote:
Signed-off-by: Trevor Gamblin <trevor.gamblin@...>
--- | 94 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 94 insertions(+)

diff --git a/ b/
index 21dd7c1..8558c48 100644
--- a/
+++ b/
@@ -43,6 +43,16 @@ yocto-controller/yoctoabb

+Before proceeding, make sure that the following is added to the
+pokybuild3 user's exports (e.g. in .bashrc), or builds will fail after
+being triggered:
+export LC_ALL=en_US.UTF-8
+export LANG=en_US.UTF-8
+export LANGUAGE=en_US.UTF-8

On the AB at only LANG=en_US.UTF-8 is set. I don't know why LC_ALL or LANGUAGE need to be set on your cluster for builds to succeed.
You're right, I don't need the others. Fixing this for the next revision.

 Next, we need to update the `yocto-controller/yoctoabb/master.cfg` towards the bottom where the `title`, `titleURL`, and `buildbotURL` are all set.  This is also where you would specify a different password for binding workers to the master.
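As a sketch, the settings in question are standard Buildbot master configuration entries; the title and URL values below are placeholders, not the project's actual ones:

```python
# Hypothetical excerpt from yocto-controller/yoctoabb/master.cfg.
# In the real file "c" is the BuildmasterConfig dictionary; an empty
# dict stands in for it here so the sketch runs standalone.
c = {}
c['title'] = "Example Yocto Autobuilder"
c['titleURL'] = "https://autobuilder.example.com/"
# buildbotURL must be the externally reachable URL of the web UI:
c['buildbotURL'] = "https://autobuilder.example.com/"
```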

 Then, we need to update the `yocto-controller/yoctoabb/` to include our worker.  In that file, find the line where `workers` is set and add: ["example-worker"].  _NOTE:_ if your worker's name is different, use that here.  Section 3.1 discusses how to further refine this list of workers.
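For illustration, the worker-list change might look like the following; the worker names are examples, and the grouping into a second list is an assumption based on Section 3.1 rather than something the patch requires:

```python
# Hypothetical sketch of the worker lists in the yoctoabb config module.
# Names must match the worker name passed to "buildbot-worker create-worker".
workers = ["example-worker"]

# Additional lists can group dedicated workers (refined in Section 3.1):
workers_wrlx = ["ala-blade51"]
all_workers = workers + workers_wrlx
```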
@@ -112,6 +122,90 @@ sudo /home/pokybuild3/yocto-worker/qemuarm/build/scripts/runqemu-gen-tapdevs \

 In the above command, we assume a build named qemuarm failed.  The value of 8 is the number of tap interfaces to create on the worker.

+### 1.3) Adding Dedicated Worker Nodes
+Running both the controller and the worker together on a single machine
+can quickly result in long build times and an unresponsive web UI,
+especially if you plan on running any of the more comprehensive builders
+(e.g. a-full). Additional workers can be added to the cluster by
+following the steps given above, except that the yocto-controller steps
+do not need to be repeated. For example, to add a new worker
+"ala-blade51" to an Autobuilder cluster with a yocto-controller at the
+IP address
+1. On the yocto-controller host, add the name of the new worker to a worker
+list (or create a new one), e.g. 'workers_wrlx = ["ala-blade51"]', and
+make sure that it is added to the "workers" list.
+2. On the new worker node:
+sudo apt-get install gawk wget git-core diffstat unzip texinfo \
+gcc-multilib build-essential chrpath socat cpio python python3 \
+python3-pip python3-pexpect xz-utils debianutils iputils-ping \
+libsdl1.2-dev xterm

Should we link to for the current package set as well as listing this information here?
The beginning of mentions that the user should reference the Yocto Manual for the required packages, so maybe copying the list here is inconsistent. I'll put the link near the top of the doc and we can look at a better way to do this if/when a new version of this guide makes it into the Manual.
+sudo pip3 install buildbot buildbot-www buildbot-waterfall-view \
+buildbot-console-view buildbot-grid-view buildbot-worker
+useradd -m --system pokybuild3
+cd /home/pokybuild3
+mkdir -p git/trash
+buildbot-worker create-worker -r --umask=0o22 yocto-worker ala-blade51 pass
+chown -R pokybuild3:pokybuild3 /home/pokybuild3
+ > Note 1: The URL/IP given to the create-worker command must match the
+host running the yocto-controller.
+ > Note 2: The "pass" argument given to the create-worker command must
+match the common "worker_pass" variable set in yocto-controller/yoctoabb/
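Once the worker directory is created and owned by pokybuild3, it can be started with the standard buildbot-worker CLI. This is a sketch, not part of the patch; the paths assume the layout created above:

```shell
# Run the worker as the pokybuild3 user from its home directory.
sudo -u pokybuild3 bash -c 'cd /home/pokybuild3 && buildbot-worker start yocto-worker'
# Check the log to confirm the worker connected to the controller:
tail /home/pokybuild3/yocto-worker/twistd.log
```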
+### 1.4) Configuring NFS for the Autobuilder Cluster
+The Yocto Autobuilder relies on NFS to distribute a common sstate cache
+and other outputs between nodes. A similar configuration can be
+deployed by performing the steps given below, which were written for
+Ubuntu 18.04. In order for both the controller and worker nodes to be able
+to access the NFS share without issue, the "pokybuild3" user on all
+systems must have the same UID/GID, or sufficient permissions must be
+granted on the /srv/autobuilder path (or wherever you modified the config
+files to point to). The following instructions assume a controller node
+at and a single worker node at, but
+additional worker nodes can be added as needed (see the previous section).
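One way to keep the pokybuild3 UID/GID consistent is to create the user with an explicit, identical UID/GID on every node. This is a sketch; 2000 is an arbitrary example value, so pick any ID that is free on all machines:

```shell
# Hypothetical sketch: create pokybuild3 with a fixed UID/GID so it
# matches on every node in the cluster; run as root.
groupadd --gid 2000 pokybuild3
useradd -m --system --uid 2000 --gid 2000 pokybuild3
id pokybuild3   # verify the uid/gid are identical across all nodes
```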
+1. On the NFS host:
+sudo apt install -y nfs-kernel-server
+sudo mkdir -p /srv/autobuilder/
+sudo chown -R pokybuild3:pokybuild3 /srv

Let's only chown the directories we intend to export. Other data may be present in /srv and leaving its owner intact is desirable.

Good point. Fixing this for the next patch.

Thanks again for your review!

- Trevor

+2. Add the following to /etc/exports, replacing the path and IP fields
+   as necessary for each client node:
+3. Run
+sudo systemctl restart nfs-kernel-server
+4. Adjust the firewall (if required). Example:
+sudo ufw allow from to any port nfs
+5. On the client node(s):
+sudo mkdir -p /srv/autobuilder/
+sudo chown -R pokybuild3:pokybuild3 /srv/autobuilder/
+sudo mount /srv/autobuilder/
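The /etc/exports entry referenced in step 2 was not preserved above. As a hedged sketch only (the path, client address/netmask, and options are generic examples, not the values from the original patch), a typical NFS export line looks like:

```
# Hypothetical /etc/exports entry; substitute your export path and the
# address/netmask of each client node. One line per exported directory.
/srv/autobuilder/  192.168.1.0/24(rw,sync,no_subtree_check)
```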
 ## 2) Basics

 This section is an overview of operation and a few basic configuration file relationships.  See Section 3 for more detailed instructions.

Michael Halstead
Linux Foundation / Yocto Project
Systems Operations Engineer
