
[meta-cgl][PATCH 10/20] core-image-cgl-*: Move to recipe directory

Jeremy Puhlman
 

* lsb content has been moved out to meta-lsb.
* Configure the image to build with or without the lsb layer present.
* Add a warning about CGL compliance requiring lsb, and an option
to squelch the warning.

Signed-off-by: Jeremy A. Puhlman <jpuhlman@...>
---
.../{ => recipes-core}/images/core-image-cgl-initramfs.bb | 0
meta-cgl-common/{ => recipes-core}/images/core-image-cgl.bb | 10 +++++++++-
2 files changed, 9 insertions(+), 1 deletion(-)
rename meta-cgl-common/{ => recipes-core}/images/core-image-cgl-initramfs.bb (100%)
rename meta-cgl-common/{ => recipes-core}/images/core-image-cgl.bb (54%)

diff --git a/meta-cgl-common/images/core-image-cgl-initramfs.bb b/meta-cgl-common/recipes-core/images/core-image-cgl-initramfs.bb
similarity index 100%
rename from meta-cgl-common/images/core-image-cgl-initramfs.bb
rename to meta-cgl-common/recipes-core/images/core-image-cgl-initramfs.bb
diff --git a/meta-cgl-common/images/core-image-cgl.bb b/meta-cgl-common/recipes-core/images/core-image-cgl.bb
similarity index 54%
rename from meta-cgl-common/images/core-image-cgl.bb
rename to meta-cgl-common/recipes-core/images/core-image-cgl.bb
index 86bf7d4..4a7d4f7 100644
--- a/meta-cgl-common/images/core-image-cgl.bb
+++ b/meta-cgl-common/recipes-core/images/core-image-cgl.bb
@@ -1,6 +1,14 @@
-require recipes-extended/images/core-image-lsb.bb
+require ${@bb.utils.contains("BBFILE_COLLECTIONS", "lsb", "recipes-lsb/images/core-image-lsb.bb", "recipes-core/images/core-image-base.bb", d)}


+LSB_WARN ?= "1"
+python () {
+ lsb_warn = d.getVar("LSB_WARN")
+ if bb.utils.contains("BBFILE_COLLECTIONS", "lsb", "1", "0", d) == "0" and lsb_warn == "1":
+ bb.warn("CGL compliance requires lsb, and meta-lsb is not included.\n" + \
+ "To disable this warning set LSB_WARN='0'")
+}
+
VALGRIND ?= ""
VALGRIND_powerpc ?= "valgrind"
VALGRIND_e500v2 ?= ""
--
2.13.3
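For reference, the conditional require in the hunk above selects core-image-lsb.bb only when the lsb collection is configured. A minimal sketch of the bb.utils.contains() selection logic (a simplified re-implementation for illustration — the real function takes a variable *name* plus the datastore `d` and calls d.getVar() itself):

```python
def contains_sketch(variable_value, checkvalues, truevalue, falsevalue):
    """Simplified model of bb.utils.contains(): return truevalue when
    every word in checkvalues occurs in the space-separated value."""
    present = set(variable_value.split())
    wanted = set(checkvalues.split())
    return truevalue if wanted.issubset(present) else falsevalue

# Mirrors the require line: pick the lsb image only when meta-lsb is configured.
collections = "core cgl-common lsb"
print(contains_sketch(collections, "lsb",
                      "recipes-lsb/images/core-image-lsb.bb",
                      "recipes-core/images/core-image-base.bb"))
# prints recipes-lsb/images/core-image-lsb.bb
```

The anonymous python block in the patch uses the same test, warning only when the collection is absent and LSB_WARN is left at its default of "1".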


[meta-cgl][PATCH 07/20] cluster-glue: Update to current

Jeremy Puhlman
 

From: Jeremy Puhlman <jpuhlman@...>

* Fix various multilib issues.
* Fix python3 issues.
* License checksum updates reflect the FSF's change of address.

Signed-off-by: Jeremy A. Puhlman <jpuhlman@...>
---
.../cluster-glue/0001-Update-for-python3.patch | 260 +++++++++++++++++++++
.../cluster-glue/cluster-glue_1.0.12.bb | 20 +-
2 files changed, 273 insertions(+), 7 deletions(-)
create mode 100644 meta-cgl-common/recipes-cgl/cluster-glue/cluster-glue/0001-Update-for-python3.patch

diff --git a/meta-cgl-common/recipes-cgl/cluster-glue/cluster-glue/0001-Update-for-python3.patch b/meta-cgl-common/recipes-cgl/cluster-glue/cluster-glue/0001-Update-for-python3.patch
new file mode 100644
index 0000000..e089dc4
--- /dev/null
+++ b/meta-cgl-common/recipes-cgl/cluster-glue/cluster-glue/0001-Update-for-python3.patch
@@ -0,0 +1,260 @@
+From 3ac95d9da4e207f5d1db14ecbf9c10c13247dd45 Mon Sep 17 00:00:00 2001
+From: Jeremy Puhlman <jpuhlman@...>
+Date: Wed, 19 Feb 2020 22:35:51 +0000
+Subject: [PATCH] Update for python3
+
+Upstream-Status: Inappropriate
+---
+ lib/plugins/stonith/external/dracmc-telnet | 10 +++++-----
+ lib/plugins/stonith/external/ibmrsa-telnet | 8 ++++----
+ lib/plugins/stonith/external/riloe | 30 +++++++++++++++---------------
+ lib/plugins/stonith/ribcl.py.in | 20 ++++++++++----------
+ 4 files changed, 34 insertions(+), 34 deletions(-)
+
+diff --git a/lib/plugins/stonith/external/dracmc-telnet b/lib/plugins/stonith/external/dracmc-telnet
+index 78c01453..7fbed86b 100644
+--- a/lib/plugins/stonith/external/dracmc-telnet
++++ b/lib/plugins/stonith/external/dracmc-telnet
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ # vim: set filetype=python
+ #######################################################################
+ #
+@@ -74,7 +74,7 @@ class DracMC(telnetlib.Telnet):
+
+ def _get_timestamp(self):
+ ct = time.time()
+- msecs = (ct - long(ct)) * 1000
++ msecs = (ct - int(ct)) * 1000
+ return "%s,%03d" % (time.strftime("%Y-%m-%d %H:%M:%S",
+ time.localtime(ct)), msecs)
+
+@@ -170,7 +170,7 @@ class DracMCStonithPlugin:
+
+ def _get_timestamp(self):
+ ct = time.time()
+- msecs = (ct - long(ct)) * 1000
++ msecs = (ct - int(ct)) * 1000
+ return "%s,%03d" % (time.strftime("%Y-%m-%d %H:%M:%S",
+ time.localtime(ct)), msecs)
+
+@@ -200,7 +200,7 @@ class DracMCStonithPlugin:
+ self._parameters['cyclades_port'])
+ c.login(self._parameters['username'],
+ self._parameters['password'])
+- except Exception, args:
++ except Exception as args:
+ if "Connection reset by peer" in str(args):
+ self._echo_debug("Someone is already logged in... retry=%s" % tries)
+ c.close()
+@@ -362,7 +362,7 @@ class DracMCStonithPlugin:
+ func = getattr(self, cmd, self.not_implemented)
+ rc = func()
+ return(rc)
+- except Exception, args:
++ except Exception as args:
+ self.echo_log("err", 'Exception raised:', str(args))
+ if self._connection:
+ self.echo_log("err", self._connection.get_history())
+diff --git a/lib/plugins/stonith/external/ibmrsa-telnet b/lib/plugins/stonith/external/ibmrsa-telnet
+index adb2a3eb..0a3ce3c2 100644
+--- a/lib/plugins/stonith/external/ibmrsa-telnet
++++ b/lib/plugins/stonith/external/ibmrsa-telnet
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python
++#!/usr/bin/env python3
+ # vim: set filetype=python
+ #######################################################################
+ #
+@@ -71,7 +71,7 @@ class RSABoard(telnetlib.Telnet):
+
+ def _get_timestamp(self):
+ ct = time.time()
+- msecs = (ct - long(ct)) * 1000
++ msecs = (ct - int(ct)) * 1000
+ return "%s,%03d" % (time.strftime("%Y-%m-%d %H:%M:%S",
+ time.localtime(ct)), msecs)
+
+@@ -149,7 +149,7 @@ class RSAStonithPlugin:
+
+ def _get_timestamp(self):
+ ct = time.time()
+- msecs = (ct - long(ct)) * 1000
++ msecs = (ct - int(ct)) * 1000
+ return "%s,%03d" % (time.strftime("%Y-%m-%d %H:%M:%S",
+ time.localtime(ct)), msecs)
+
+@@ -305,7 +305,7 @@ class RSAStonithPlugin:
+ func = getattr(self, cmd, self.not_implemented)
+ rc = func()
+ return(rc)
+- except Exception, args:
++ except Exception as args:
+ self.echo_log("err", 'Exception raised:', str(args))
+ if self._connection:
+ self.echo_log("err", self._connection.get_history())
+diff --git a/lib/plugins/stonith/external/riloe b/lib/plugins/stonith/external/riloe
+index 412873f5..370fd57f 100644
+--- a/lib/plugins/stonith/external/riloe
++++ b/lib/plugins/stonith/external/riloe
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ #
+ # Stonith module for RILOE Stonith device
+ #
+@@ -35,7 +35,7 @@ import os
+ import socket
+ import subprocess
+ import xml.dom.minidom
+-import httplib
++import http.client
+ import time
+ import re
+
+@@ -163,12 +163,12 @@ info = {
+ }
+
+ if cmd in info:
+- print info[cmd]
++ print(info[cmd])
+ sys.exit(0)
+
+ if cmd == 'getconfignames':
+ for arg in [ "hostlist", "ilo_hostname", "ilo_user", "ilo_password", "ilo_can_reset", "ilo_protocol", "ilo_powerdown_method", "ilo_proxyhost", "ilo_proxyport"]:
+- print arg
++ print(arg)
+ sys.exit(0)
+
+ if not rihost:
+@@ -257,7 +257,7 @@ def read_resp(node):
+ '''
+ msg = ""
+ str_status = ""
+- for attr in node.attributes.keys():
++ for attr in list(node.attributes.keys()):
+ if attr == A_STATUS:
+ str_status = node.getAttribute(attr)
+ elif attr == A_MSG:
+@@ -285,7 +285,7 @@ def read_power(node):
+ variable correspondingly.
+ '''
+ global power
+- for attr in node.attributes.keys():
++ for attr in list(node.attributes.keys()):
+ if attr == A_POWER_STATE:
+ power_state = node.getAttribute(attr).upper()
+ else:
+@@ -339,18 +339,18 @@ def open_ilo(host):
+ fatal("Error status=: %s" %(response))
+ import ssl
+ sock = ssl.wrap_socket(proxy)
+- h=httplib.HTTPConnection('localhost')
++ h=http.client.HTTPConnection('localhost')
+ h.sock=sock
+ return h
+ else:
+- return httplib.HTTPSConnection(host)
+- except socket.gaierror, msg:
++ return http.client.HTTPSConnection(host)
++ except socket.gaierror as msg:
+ fatal("%s: %s" %(msg,host))
+- except socket.sslerror, msg:
++ except socket.sslerror as msg:
+ fatal("%s for %s" %(msg,host))
+- except socket.error, msg:
++ except socket.error as msg:
+ fatal("%s while talking to %s" %(msg,host))
+- except ImportError, msg:
++ except ImportError as msg:
+ fatal("ssl support missing (%s)" %msg)
+
+ def send_request(req,proc_f):
+@@ -364,7 +364,7 @@ def send_request(req,proc_f):
+ c = open_ilo(rihost)
+ try:
+ c.send(req+'\r\n')
+- except socket.error, msg:
++ except socket.error as msg:
+ fatal("%s, while talking to %s" %(msg,rihost))
+ t_end = time.time()
+ my_debug("request sent in %0.2f s" % ((t_end-t_begin)))
+@@ -377,7 +377,7 @@ def send_request(req,proc_f):
+ if not reply:
+ break
+ result.append(reply)
+- except socket.error, msg:
++ except socket.error as msg:
+ if msg[0] == 6: # connection closed
+ break
+ my_err("%s, while talking to %s" %(msg,rihost))
+@@ -393,7 +393,7 @@ def send_request(req,proc_f):
+ reply = re.sub("<(RIBCL.*)/>", r"<\1>", reply)
+ try:
+ doc = xml.dom.minidom.parseString(reply)
+- except xml.parsers.expat.ExpatError,msg:
++ except xml.parsers.expat.ExpatError as msg:
+ fatal("malformed response: %s\n%s"%(msg,reply))
+ rc = proc_f(doc)
+ doc.unlink()
+diff --git a/lib/plugins/stonith/ribcl.py.in b/lib/plugins/stonith/ribcl.py.in
+index 0733bb24..3533dee3 100644
+--- a/lib/plugins/stonith/ribcl.py.in
++++ b/lib/plugins/stonith/ribcl.py.in
+@@ -1,4 +1,4 @@
+-#!@TRAGET_PYTHON@
++#!/usr/bin/env python3
+
+
+ #
+@@ -18,7 +18,7 @@
+
+ import sys
+ import socket
+-from httplib import *
++from http.client import *
+ from time import sleep
+
+
+@@ -29,7 +29,7 @@ try:
+ host = argv[1].split('.')[0]+'-rm'
+ cmd = argv[2]
+ except IndexError:
+- print "Not enough arguments"
++ print("Not enough arguments")
+ sys.exit(1)
+
+
+@@ -66,7 +66,7 @@ try:
+ else:
+ acmds.append(login + todo[cmd] + logout)
+ except KeyError:
+- print "Invalid command: "+ cmd
++ print("Invalid command: "+ cmd)
+ sys.exit(1)
+
+
+@@ -88,13 +88,13 @@ try:
+ sleep(1)
+
+
+-except socket.gaierror, msg:
+- print msg
++except socket.gaierror as msg:
++ print(msg)
+ sys.exit(1)
+-except socket.sslerror, msg:
+- print msg
++except socket.sslerror as msg:
++ print(msg)
+ sys.exit(1)
+-except socket.error, msg:
+- print msg
++except socket.error as msg:
++ print(msg)
+ sys.exit(1)
+
+--
+2.13.3
+
diff --git a/meta-cgl-common/recipes-cgl/cluster-glue/cluster-glue_1.0.12.bb b/meta-cgl-common/recipes-cgl/cluster-glue/cluster-glue_1.0.12.bb
index 749ce8c..d9df83b 100644
--- a/meta-cgl-common/recipes-cgl/cluster-glue/cluster-glue_1.0.12.bb
+++ b/meta-cgl-common/recipes-cgl/cluster-glue/cluster-glue_1.0.12.bb
@@ -4,8 +4,8 @@ is not the cluster messaging layer (Heartbeat), nor the cluster resource manager
(Pacemaker), nor a Resource Agent."
HOMEPAGE = "http://clusterlabs.org/"
LICENSE = "GPLv2 & LGPLv2.1"
-LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe \
- file://COPYING.LIB;md5=243b725d71bb5df4a1e5920b344b86ad \
+LIC_FILES_CHKSUM = "file://COPYING;md5=b70d30a00a451e19d7449d7465d02601 \
+ file://COPYING.LIB;md5=c386bfabdebabbdc1f28e9fde4f4df6d \
"

DEPENDS = "libxml2 libtool glib-2.0 bzip2 util-linux net-snmp openhpi"
@@ -14,14 +14,15 @@ SRC_URI = " \
git://github.com/ClusterLabs/${BPN}.git \
file://0001-don-t-compile-doc-and-Error-Fix.patch \
file://0001-ribcl.py.in-Warning-Fix.patch \
+ file://0001-Update-for-python3.patch \
file://volatiles \
file://tmpfiles \
"
SRC_URI_append_libc-uclibc = " file://kill-stack-protector.patch"

-SRCREV = "1bc77825c0cfb0c80f9c82a061af7ede68676cb4"
+SRCREV = "fd5a3befacd23d056a72cacd2b8ad6bba498e56b"

-inherit autotools useradd pkgconfig systemd
+inherit autotools useradd pkgconfig systemd multilib_script multilib_header

SYSTEMD_SERVICE_${PN} = "logd.service"
SYSTEMD_AUTO_ENABLE = "disable"
@@ -30,6 +31,7 @@ HA_USER = "hacluster"
HA_GROUP = "haclient"

S = "${WORKDIR}/git"
+PV = "1.0.12+git${SRCPV}"

PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'systemd', d)}"
PACKAGECONFIG[systemd] = "--with-systemdsystemunitdir=${systemd_system_unitdir},--without-systemdsystemunitdir,systemd"
@@ -48,6 +50,8 @@ USERADD_PARAM_${PN} = "--home-dir=${localstatedir}/lib/heartbeat/cores/${HA_USER
"
GROUPADD_PARAM_${PN} = "-r ${HA_GROUP}"

+MULTILIB_SCRIPTS = "${PN}:${sbindir}/cibsecret"
+
do_configure_prepend() {
ln -sf ${PKG_CONFIG_SYSROOT_DIR}/usr/include/libxml2/libxml ${PKG_CONFIG_SYSROOT_DIR}/usr/include/libxml
}
@@ -57,6 +61,8 @@ do_install_append() {
install -m 0644 ${WORKDIR}/volatiles ${D}${sysconfdir}/default/volatiles/04_cluster-glue
install -d ${D}${sysconfdir}/tmpfiles.d
install -m 0644 ${WORKDIR}/tmpfiles ${D}${sysconfdir}/tmpfiles.d/${PN}.conf
+
+ oe_multilib_header heartbeat/glue_config.h
}

pkg_postinst_${PN} () {
@@ -86,9 +92,9 @@ PACKAGES =+ "\
${PN}-plugin-interfacemgr-dbg \
${PN}-plugin-interfacemgr-staticdev \
${PN}-lrmtest \
- ${PN}-plugin-compress \
- ${PN}-plugin-compress-dbg \
- ${PN}-plugin-compress-staticdev \
+ ${PN}-plugin-compress \
+ ${PN}-plugin-compress-dbg \
+ ${PN}-plugin-compress-staticdev \
"

FILES_${PN} = "${sysconfdir} /var ${libdir}/lib*.so.* ${sbindir} ${datadir}/cluster-glue/*sh ${datadir}/cluster-glue/*pl\
--
2.13.3
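The recurring `_get_timestamp` hunks in the patch above only swap Python 2's `long()` for `int()`; the surrounding format string is unchanged. As a standalone illustration (the function body is copied from the patched plugins, lifted out of its class):

```python
import time

def get_timestamp():
    # Same scheme as the stonith plugins' _get_timestamp() after the patch:
    # int() truncates the float epoch, leaving the fractional milliseconds.
    ct = time.time()
    msecs = (ct - int(ct)) * 1000
    return "%s,%03d" % (time.strftime("%Y-%m-%d %H:%M:%S",
                                      time.localtime(ct)), msecs)
```

The remaining hunks are the other mechanical py2-to-py3 conversions: `except E, v` becomes `except E as v`, `print x` becomes `print(x)`, and `httplib` becomes `http.client`.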


[meta-cgl][PATCH 06/20] pacemaker: fix depend issues for py2 removal

Jeremy Puhlman
 

From: Jeremy Puhlman <jpuhlman@...>

Signed-off-by: Jeremy A. Puhlman <jpuhlman@...>
---
meta-cgl-common/recipes-cgl/pacemaker/pacemaker_1.1.21.bb | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/meta-cgl-common/recipes-cgl/pacemaker/pacemaker_1.1.21.bb b/meta-cgl-common/recipes-cgl/pacemaker/pacemaker_1.1.21.bb
index df02f40..3a8db77 100644
--- a/meta-cgl-common/recipes-cgl/pacemaker/pacemaker_1.1.21.bb
+++ b/meta-cgl-common/recipes-cgl/pacemaker/pacemaker_1.1.21.bb
@@ -11,7 +11,7 @@ HOMEPAGE = "http://www.clusterlabs.org"
LICENSE = "GPLv2+ & LGPLv2.1+"
LIC_FILES_CHKSUM = "file://COPYING;md5=000212f361a81b100d9d0f0435040663"

-DEPENDS = "corosync libxslt libxml2 gnutls resource-agents libqb python-native"
+DEPENDS = "corosync libxslt libxml2 gnutls resource-agents libqb python3-native"

SRC_URI = "git://github.com/ClusterLabs/${BPN}.git;branch=1.1 \
file://0001-pacemaker-fix-xml-config.patch \
@@ -95,7 +95,7 @@ FILES_${PN} += " ${datadir}/snmp \
${libdir}/${PYTHON_DIR}/site-packages \
"
FILES_${PN}-dbg += "${libdir}/corosync/lcrso/.debug"
-RDEPENDS_${PN} = "bash python perl libqb ${PN}-cli-utils"
+RDEPENDS_${PN} = "bash python3-core perl libqb ${PN}-cli-utils"

SYSTEMD_AUTO_ENABLE = "disable"

--
2.13.3


[meta-cgl][PATCH 04/20] pacemaker: fix parse errors due to python2 removal

Jeremy Puhlman
 

From: Jeremy Puhlman <jpuhlman@...>

Signed-off-by: Jeremy A. Puhlman <jpuhlman@...>
---
meta-cgl-common/recipes-cgl/pacemaker/pacemaker_1.1.21.bb | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meta-cgl-common/recipes-cgl/pacemaker/pacemaker_1.1.21.bb b/meta-cgl-common/recipes-cgl/pacemaker/pacemaker_1.1.21.bb
index feed53d..df02f40 100644
--- a/meta-cgl-common/recipes-cgl/pacemaker/pacemaker_1.1.21.bb
+++ b/meta-cgl-common/recipes-cgl/pacemaker/pacemaker_1.1.21.bb
@@ -30,7 +30,7 @@ SRC_URI_append_libc-musl = "file://0001-pacemaker-fix-compile-error-of-musl-libc

SRCREV = "f14e36fd4336874705b34266c7cddbe12119106c"

-inherit autotools-brokensep pkgconfig systemd python-dir useradd
+inherit autotools-brokensep pkgconfig systemd python3-dir useradd

S = "${WORKDIR}/git"

--
2.13.3


[meta-cgl][PATCH 03/20] crmsh: fix parse errors due to python2 removal

Jeremy Puhlman
 

From: Jeremy Puhlman <jpuhlman@...>

Signed-off-by: Jeremy A. Puhlman <jpuhlman@...>
---
meta-cgl-common/recipes-cgl/crmsh/crmsh_3.0.3.bb | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meta-cgl-common/recipes-cgl/crmsh/crmsh_3.0.3.bb b/meta-cgl-common/recipes-cgl/crmsh/crmsh_3.0.3.bb
index 040b4d3..6d2902c 100644
--- a/meta-cgl-common/recipes-cgl/crmsh/crmsh_3.0.3.bb
+++ b/meta-cgl-common/recipes-cgl/crmsh/crmsh_3.0.3.bb
@@ -20,7 +20,7 @@ SRC_URI = "git://github.com/ClusterLabs/${BPN}.git;branch=crmsh-3.0 \

SRCREV = "41845ca5511b844593cf25ae4eb7f307aa78c5be"

-inherit autotools-brokensep distutils-base
+inherit autotools-brokensep distutils3-base

export HOST_SYS
export BUILD_SYS
--
2.13.3


[meta-cgl][PATCH 05/20] cluster-glue: fix depend issues for py2 removal

Jeremy Puhlman
 

From: Jeremy Puhlman <jpuhlman@...>

Signed-off-by: Jeremy A. Puhlman <jpuhlman@...>
---
meta-cgl-common/recipes-cgl/cluster-glue/cluster-glue_1.0.12.bb | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/meta-cgl-common/recipes-cgl/cluster-glue/cluster-glue_1.0.12.bb b/meta-cgl-common/recipes-cgl/cluster-glue/cluster-glue_1.0.12.bb
index e0aa2b1..749ce8c 100644
--- a/meta-cgl-common/recipes-cgl/cluster-glue/cluster-glue_1.0.12.bb
+++ b/meta-cgl-common/recipes-cgl/cluster-glue/cluster-glue_1.0.12.bb
@@ -137,6 +137,6 @@ FILES_${PN}-lrmtest = "${datadir}/cluster-glue/lrmtest/"

RDEPENDS_${PN} += "perl"
RDEPENDS_${PN}-plugin-stonith2 += "bash"
-RDEPENDS_${PN}-plugin-stonith-external += "bash python perl"
-RDEPENDS_${PN}-plugin-stonith2-ribcl += "python"
+RDEPENDS_${PN}-plugin-stonith-external += "bash python3-core perl"
+RDEPENDS_${PN}-plugin-stonith2-ribcl += "python3-core"
RDEPENDS_${PN}-lrmtest += "${VIRTUAL-RUNTIME_getopt} ${PN}-plugin-raexec"
--
2.13.3


[meta-cgl][PATCH 01/20] monit: upgrade 5.25.2 -> 5.26.0

Jeremy Puhlman
 

From: Changqing Li <changqing.li@...>

Signed-off-by: Changqing Li <changqing.li@...>
Signed-off-by: Adrian Dudau <adrian.dudau@...>
---
.../recipes-cgl/monit/{monit_5.25.2.bb => monit_5.26.0.bb} | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
rename meta-cgl-common/recipes-cgl/monit/{monit_5.25.2.bb => monit_5.26.0.bb} (90%)

diff --git a/meta-cgl-common/recipes-cgl/monit/monit_5.25.2.bb b/meta-cgl-common/recipes-cgl/monit/monit_5.26.0.bb
similarity index 90%
rename from meta-cgl-common/recipes-cgl/monit/monit_5.25.2.bb
rename to meta-cgl-common/recipes-cgl/monit/monit_5.26.0.bb
index ab9e922..6ec1a21 100644
--- a/meta-cgl-common/recipes-cgl/monit/monit_5.25.2.bb
+++ b/meta-cgl-common/recipes-cgl/monit/monit_5.26.0.bb
@@ -9,7 +9,7 @@ HOMEPAGE = "http://mmonit.com/monit/"
LICENSE = "AGPLv3"
LIC_FILES_CHKSUM = "file://COPYING;md5=ea116a7defaf0e93b3bb73b2a34a3f51"

-DEPENDS = "openssl zlib"
+DEPENDS = "openssl zlib virtual/crypt"

SRC_URI = "\
http://mmonit.com/monit/dist/${BP}.tar.gz \
@@ -17,8 +17,8 @@ SRC_URI = "\
file://init \
"

-SRC_URI[md5sum] = "890df599d6c1e9cfbbdd3edbacb7db81"
-SRC_URI[sha256sum] = "aa0ce6361d1155e43e30a86dcff00b2003d434f221c360981ced830275abc64a"
+SRC_URI[md5sum] = "9f7dc65e902c103e4c5891354994c3df"
+SRC_URI[sha256sum] = "87fc4568a3af9a2be89040efb169e3a2e47b262f99e78d5ddde99dd89f02f3c2"

INITSCRIPT_NAME = "monit"
INITSCRIPT_PARAMS = "defaults 99"
--
2.13.3


[meta-cgl][PATCH 02/20] Add zeus to compat list

Jeremy Puhlman
 

From: Jeremy Puhlman <jpuhlman@...>

Signed-off-by: Jeremy A. Puhlman <jpuhlman@...>
---
meta-cgl-common/conf/layer.conf | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/meta-cgl-common/conf/layer.conf b/meta-cgl-common/conf/layer.conf
index 894d6c4..de64205 100644
--- a/meta-cgl-common/conf/layer.conf
+++ b/meta-cgl-common/conf/layer.conf
@@ -13,6 +13,6 @@ BBFILE_PRIORITY_cgl-common = "7"

LAYERDEPENDS_cgl-common = "core openembedded-layer networking-layer perl-layer filesystems-layer security selinux"

-LAYERSERIES_COMPAT_cgl-common = "warrior"
+LAYERSERIES_COMPAT_cgl-common = "warrior zeus"

require conf/distro/include/cgl_common_security_flags.inc
--
2.13.3


Re: What are the key factors for yocto build speed?

Ross Burton <ross@...>
 

On 18/03/2020 14:09, Mike Looijmans wrote:
Hard disk speed has very little impact on your build time. It helps with the "setscene" parts, but doesn't affect actual compile time at all. I recall someone did a build from RAM disks only on a rig, and it was only about 1 minute faster on a one-hour build compared to rotating disks.
My build machine has lots of RAM and I do builds in a 32GB tmpfs with rm_work (and no, I don't build webkit, which would make this impractical).

As you say, with sufficient RAM the build speed is practically the same as on disks due to the caching (especially if you tune the mount options), so I'd definitely spend money on more RAM instead of super-fast disks. I just prefer doing tmpfs builds because it saves my spinning rust. :)

Ross


Re: How to PROVIDE boost-python

Emily
 

Hi Laurent and Quentin - 

Thank you both so much for your help! 

I did just end up patching the source code for my recipe - I had to both add the 3 and remove the -mt, and the OS does build now! Whether the sw does what I need it to is another question, but we shall see. 

Thanks again! 
Emily

On Wed, Mar 18, 2020 at 10:37 AM Quentin Schulz <quentin.schulz@...> wrote:
Hi Emily,

On Wed, Mar 18, 2020 at 10:16:23AM -0500, Emily wrote:
> Hi Laurent -
>
> Unfortunately I don't have full control over the repo that's using
> boost_python-mt so I'm not sure I can switch it right now.
>

You can create a patch for it. You can use devtool modify <your-recipe>
and create the patch from there, since you have access to the sources
that way. Then devtool build <your-recipe> to check it builds okay.

> I realize this is not a long-term solution, as I'll need to update that
> code (and my OS) to python3 soon, but for now I've just copied the boost
> recipe from this commit
> <https://github.com/openembedded/openembedded-core/commit/ef603f41b5df4772bb598ec9d389dd5f858592af#diff-9c24742c4bfe7eb2853f86cce86b91c6>
> to
> my own layer, and added a BBMASK to the openembedded-core's boost recipe.
> This seems to work, except I get a QA error from the commit I'm using for
> the boost recipe now:
>

I don't think there is a need for BBMASK, you should be able to set
PREFERRED_VERSION_boost = "1.63.0" in local.conf or your machine
configuration file.

> ERROR: boost-1.63.0-r1 do_package: QA Issue: boost: Files/directories were
> installed but not shipped in any package:
>   /usr/lib/libboost_numpy.so.1.63.0
> Please set FILES such that these items are packaged. Alternatively if they
> are unneeded, avoid installing them or delete them within do_install.
> boost: 1 installed and not shipped files. [installed-vs-shipped]
> ERROR: boost-1.63.0-r1 do_package: Fatal QA errors found, failing task.
> ERROR: boost-1.63.0-r1 do_package: Function failed: do_package
>
> I found a log
> <https://www.yoctoproject.org/irc/%23yocto.2017-03-09.log.html> that
> mentions this exact error, and also mentions a patch that fixes it - I've
> tried and thus far failed to find that patch. I'm not sure if this will
> actually work, but I thought I'd check and see if anyone had any ideas.
>

If all of this is really temporary you can install it in some package,
or even create a new package for it. (PACKAGES =+ "boost-numpy",
FILES_${PN}-numpy = "/usr/lib/libboost_numpy.so.1.63.0").
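Quentin's suggestion as a recipe fragment might look like this (a hypothetical bbappend sketch; the wildcarded FILES glob is illustrative — his mail pins the exact versioned filename):

```
# hypothetical boost_%.bbappend fragment, per the suggestion above
PACKAGES =+ "${PN}-numpy"
FILES_${PN}-numpy = "${libdir}/libboost_numpy.so.*"
```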

No more ideas tbh. I would try to patch "your" SW first and see if it
gets you somewhere, but that's just what I would do; to each their own.

Quentin


Re: <EXT> Re: [yocto] What are the key factors for yocto build speed?

Srini
 

Some of these started 2 years ago. Probably Krogoth timeframe thru rocko. (Did I get the names right!)

 

Not downloading from the net. But we had an inhouse git mirror of yocto.

 

We tried being a bit more adventurous but bitbake did not always seem to pick up the fact that some sources or options changed. I don't remember the details, but we didn't have time to investigate.

 

The same system now builds in about 2.5 hours the last time I looked. Different server farm with a much better storage architecture!

 

The key learning for me, which I recommend highly, is to invest the time in the SDK, and in tools to update your final image without having to rebuild. These factors alone improved our team's productivity manyfold.

 

Again YMMV. srini

 

From: Stewart, David C <david.c.stewart@...>
Sent: Wednesday, March 18, 2020 1:48 PM
To: Srinivasan, Raja <rsrinivasan@...>; mikko.rapeli@...; mike.looijmans@...
Cc: yocto@...
Subject: Re: <EXT> Re: [yocto] What are the key factors for yocto build speed?

 

4 hours seems extremely long to me for an 8-core system, but I have not tried this in a while. Were you removing all the sources and re-downloading them for every build?

 

From: <yocto@...> on behalf of Srini <rsrinivasan@...>
Date: Wednesday, March 18, 2020 at 10:13 AM
To: "mikko.rapeli@..." <mikko.rapeli@...>, "mike.looijmans@..." <mike.looijmans@...>
Cc: "yocto@..." <yocto@...>
Subject: Re: <EXT> Re: [yocto] What are the key factors for yocto build speed?

 

My own experience (pardon me if already discussed)

Fought the build times for several months - ending up eventually at 8 cores (but specifying 16 threads in poky builds). Best times for my build about 4 hours. Clearly impractical during engineering.

Generated an sdk and used it for app development. Each build is now a minute or 2.

Using a homegrown utility, updated the image file with applications in a jiffy - to produce burnable sdcard image.

Complete build required only for the final release -- or made major changes like python2 to python3!

YMMV.

srini

-----Original Message-----
From: yocto@... <yocto@...> On Behalf Of Mikko Rapeli via Lists.Yoctoproject.Org
Sent: Wednesday, March 18, 2020 11:52 AM
To: mike.looijmans@...
Cc: yocto@...
Subject: <EXT> Re: [yocto] What are the key factors for yocto build speed?

On Wed, Mar 18, 2020 at 04:09:39PM +0100, Mike Looijmans wrote:
> On 18-03-2020 15:49, Adrian Bunk via Lists.Yoctoproject.Org wrote:
> > On Wed, Mar 18, 2020 at 10:12:26AM -0400, Jean-Marie Lemetayer wrote:
> > > ...
> > > For example one of our build servers is using:
> > > - AMD Ryzen 9 3900X
> > > ...
> > > - 32Go DDR4 3200 MHZ CL14
> > > ...
> > > It is a really good price / build time ratio configuration.
> >
> > Depends on what you are building.
> >
> > Building non-trivial C++ code (e.g. webkitgtk) with 24 cores but
> > only 32 GB RAM will not work, for such code you need more than 2
> > GB/core.
>
> Seems a bit excessive to buy hardware just to handle a particular
> corner case. Most of OE/Yocto code is plain C, not even C++.
>
> My rig only has 8GB but doesn't run into memory issues during big GUI
> builds. The only thing that made it swap was the populate_sdk task
> that created a 1.1GB file and needed 20GB of RAM to compress that.
> Took a few minutes more due to swapping.
> I submitted a patch today to fix that in OE.
>
> Your mileage may vary. But RAM is easy to add.

Well, I can't build with under 2 gigs per core or I run out of physical memory and the kernel oom-killer kicks in to kill the build. I also can't run with the yocto default parallel settings, which only take the number of cores into account, and thus have a custom script which caps the threads so that 2 gigs of RAM are available for each.

Though I'm sure plain C and plain poky projects have less requirements for RAM.

-Mikko
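Mikko's RAM-aware thread cap could be sketched like this (a hypothetical helper; the 2 GB/thread figure is from his mail, everything else is illustrative):

```shell
#!/bin/sh
# Cap BitBake parallelism so each build thread gets ~2 GB of RAM,
# whichever of RAM or core count is the tighter limit.
cap_threads() {
    mem_gb=$1   # total RAM in GB
    cpus=$2     # CPU core count
    by_mem=$((mem_gb / 2))
    if [ "$by_mem" -lt "$cpus" ]; then echo "$by_mem"; else echo "$cpus"; fi
}

# e.g. 32 GB of RAM on a 24-core box caps at 16 threads:
echo "BB_NUMBER_THREADS = \"$(cap_threads 32 24)\""
echo "PARALLEL_MAKE = \"-j $(cap_threads 32 24)\""
```

The two echoed lines are the settings one would drop into local.conf instead of relying on the core-count-only defaults.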

________________________________

CONFIDENTIALITY NOTICE: This email message and any attachments are confidential and may be privileged and are meant to be read by the intended recipient only. If you are not the intended recipient, please notify sender immediately and destroy all copies of this message and any attachments without reading or disclosing their contents. Thank you







Re: <EXT> Re: [yocto] What are the key factors for yocto build speed?

Yann Dirson
 

anyway this sounds like a complete rebuild not taking advantage of the shared-state cache

Le mer. 18 mars 2020 à 18:47, David Stewart <david.c.stewart@...> a écrit :

4 hours seems extremely long to me for an 8-core system, but I have not tried this in a while. Were you removing all the sources and re-downloading them for every build?

 

From: <yocto@...> on behalf of Srini <rsrinivasan@...>
Date: Wednesday, March 18, 2020 at 10:13 AM
To: "mikko.rapeli@..." <mikko.rapeli@...>, "mike.looijmans@..." <mike.looijmans@...>
Cc: "yocto@..." <yocto@...>
Subject: Re: <EXT> Re: [yocto] What are the key factors for yocto build speed?

 

My own experience (pardon me if already discussed)

Fought the build times for several months - ending up eventually at 8 cores (but specifying 16 threads in poky builds). Best times for my build about 4 hours. Clearly impractical during engineering.

Generated an sdk and used it for app development. Each build is now a minute or 2.

Using a homegrown utility, updated the image file with applications in a jiffy - to produce burnable sdcard image.

Complete build required only for the final release -- or made major changes like python2 to python3!

YMMV.

srini

-----Original Message-----
From: yocto@... <yocto@...> On Behalf Of Mikko Rapeli via Lists.Yoctoproject.Org
Sent: Wednesday, March 18, 2020 11:52 AM
To: mike.looijmans@...
Cc: yocto@...
Subject: <EXT> Re: [yocto] What are the key factors for yocto build speed?

On Wed, Mar 18, 2020 at 04:09:39PM +0100, Mike Looijmans wrote:
> On 18-03-2020 15:49, Adrian Bunk via Lists.Yoctoproject.Org wrote:
> > On Wed, Mar 18, 2020 at 10:12:26AM -0400, Jean-Marie Lemetayer wrote:
> > > ...
> > > For example one of our build servers is using:
> > > - AMD Ryzen 9 3900X
> > > ...
> > > - 32Go DDR4 3200 MHZ CL14
> > > ...
> > > It is a really good price / build time ratio configuration.
> >
> > Depends on what you are building.
> >
> > Building non-trivial C++ code (e.g. webkitgtk) with 24 cores but
> > only 32 GB RAM will not work, for such code you need more than 2
> > GB/core.
>
> Seems a bit excessive to buy hardware just to handle a particular
> corner case. Most of OE/Yocto code is plain C, not even C++.
>
> My rig only has 8GB but doesn't run into memory issues during big GUI
> builds. The only thing that made it swap was the populate_sdk task
> that created a 1.1GB file and needed 20GB of RAM to compress that.
> Took a few minutes more due to swapping.
> I submitted a patch today to fix that in OE.
>
> Your mileage may vary. But RAM is easy to add.

Well, I can't build with under 2 gigs per core or I run out of physical memory and the kernel oom-killer kicks in to kill the build. I also can't run with the yocto default parallel settings, which only take the number of cores into account, and thus have a custom script which caps the threads so that 2 gigs of RAM are available for each.

Though I'm sure plain C and plain poky projects have less requirements for RAM.

-Mikko

________________________________





--
Yann Dirson <yann@...>
Blade / Shadow -- http://shadow.tech


Re: <EXT> Re: [yocto] What are the key factors for yocto build speed?

David Stewart
 

4 hours seems extremely long to me for an 8-core system, but I have not tried this in a while. Were you removing all the sources and re-downloading them for every build?

 

From: <yocto@...> on behalf of Srini <rsrinivasan@...>
Date: Wednesday, March 18, 2020 at 10:13 AM
To: "mikko.rapeli@..." <mikko.rapeli@...>, "mike.looijmans@..." <mike.looijmans@...>
Cc: "yocto@..." <yocto@...>
Subject: Re: <EXT> Re: [yocto] What are the key factors for yocto build speed?

 

My own experience (pardon me if already discussed)

Fought the build times for several months, ending up eventually at 8 cores (but specifying 16 threads in poky builds). Best time for my build was about 4 hours. Clearly impractical during engineering.

Generated an SDK and used it for app development. Each build is now a minute or two.

Using a homegrown utility, updated the image file with applications in a jiffy, to produce a burnable SD card image.

A complete build is required only for the final release, or when making major changes like python2 to python3!

YMMV.

srini

-----Original Message-----
From: yocto@... <yocto@...> On Behalf Of Mikko Rapeli via Lists.Yoctoproject.Org
Sent: Wednesday, March 18, 2020 11:52 AM
To: mike.looijmans@...
Cc: yocto@...
Subject: <EXT> Re: [yocto] What are the key factors for yocto build speed?

On Wed, Mar 18, 2020 at 04:09:39PM +0100, Mike Looijmans wrote:
> On 18-03-2020 15:49, Adrian Bunk via Lists.Yoctoproject.Org wrote:
> > On Wed, Mar 18, 2020 at 10:12:26AM -0400, Jean-Marie Lemetayer wrote:
> > > ...
> > > For example one of our build servers is using:
> > > - AMD Ryzen 9 3900X
> > > ...
> > > - 32 GB DDR4 3200 MHz CL14
> > > ...
> > > It is a really good price / build time ratio configuration.
> >
> > Depends on what you are building.
> >
> > Building non-trivial C++ code (e.g. webkitgtk) with 24 cores but
> > only 32 GB RAM will not work, for such code you need more than 2
> > GB/core.
>
> Seems a bit excessive to buy hardware just to handle a particular
> corner case. Most of OE/Yocto code is plain C, not even C++.
>
> My rig only has 8GB but doesn't run into memory issues during big GUI
> builds. The only thing that made it swap was the populate_sdk task
> that created a 1.1GB file and needed 20GB of RAM to compress that.
> Took a few minutes more due to swapping.
> I submitted a patch today to fix that in OE.
>
> Your mileage may vary. But RAM is easy to add.

Well, I can't build with under 2 gigs per core, or I run out of physical memory and the kernel OOM killer kicks in to kill the build. I also can't use the Yocto default parallel settings, which only take the number of cores into account, so I have a custom script that caps the thread count so that 2 gigs of RAM are available for each thread.

Though I'm sure plain C and plain poky projects have lower RAM requirements.

-Mikko



Re: <EXT> Re: [yocto] What are the key factors for yocto build speed?

Srini
 

My own experience (pardon me if already discussed)

Fought the build times for several months, ending up eventually at 8 cores (but specifying 16 threads in poky builds). Best time for my build was about 4 hours. Clearly impractical during engineering.

Generated an SDK and used it for app development. Each build is now a minute or two.

Using a homegrown utility, updated the image file with applications in a jiffy, to produce a burnable SD card image.

A complete build is required only for the final release, or when making major changes like python2 to python3!

YMMV.

srini
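Srini's SDK-based workflow, spelled out as commands, would look roughly like this. The image name and installer filename below are illustrative assumptions; the exact installer name depends on the distro, image, and machine:

```shell
# 1. One slow full build also produces a relocatable cross SDK installer
#    (this is the only step that needs the multi-hour bitbake run):
#
#        bitbake core-image-minimal -c populate_sdk
#
# 2. Install the SDK once; the installer lands under tmp/deploy/sdk/:
#
#        ./tmp/deploy/sdk/<distro>-<image>-toolchain-<version>.sh -d "$HOME/sdk"
#
# 3. Day-to-day application builds then bypass bitbake entirely:
#
#        . "$HOME"/sdk/environment-setup-*
#        $CC -O2 hello.c -o hello    # cross-compiles in seconds
```

Updated applications can then be copied into the image file (or deployed to the target directly), and a full image rebuild is only needed for releases.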

-----Original Message-----
From: yocto@... <yocto@...> On Behalf Of Mikko Rapeli via Lists.Yoctoproject.Org
Sent: Wednesday, March 18, 2020 11:52 AM
To: mike.looijmans@...
Cc: yocto@...
Subject: <EXT> Re: [yocto] What are the key factors for yocto build speed?

On Wed, Mar 18, 2020 at 04:09:39PM +0100, Mike Looijmans wrote:
> On 18-03-2020 15:49, Adrian Bunk via Lists.Yoctoproject.Org wrote:
> > On Wed, Mar 18, 2020 at 10:12:26AM -0400, Jean-Marie Lemetayer wrote:
> > > ...
> > > For example one of our build servers is using:
> > > - AMD Ryzen 9 3900X
> > > ...
> > > - 32 GB DDR4 3200 MHz CL14
> > > ...
> > > It is a really good price / build time ratio configuration.
> >
> > Depends on what you are building.
> >
> > Building non-trivial C++ code (e.g. webkitgtk) with 24 cores but
> > only 32 GB RAM will not work, for such code you need more than 2
> > GB/core.
>
> Seems a bit excessive to buy hardware just to handle a particular
> corner case. Most of OE/Yocto code is plain C, not even C++.
>
> My rig only has 8GB but doesn't run into memory issues during big GUI
> builds. The only thing that made it swap was the populate_sdk task
> that created a 1.1GB file and needed 20GB of RAM to compress that.
> Took a few minutes more due to swapping.
> I submitted a patch today to fix that in OE.
>
> Your mileage may vary. But RAM is easy to add.

Well, I can't build with under 2 gigs per core, or I run out of physical memory and the kernel OOM killer kicks in to kill the build. I also can't use the Yocto default parallel settings, which only take the number of cores into account, so I have a custom script that caps the thread count so that 2 gigs of RAM are available for each thread.

Though I'm sure plain C and plain poky projects have lower RAM requirements.

-Mikko




Re: What are the key factors for yocto build speed?

Mikko Rapeli
 

On Wed, Mar 18, 2020 at 04:09:39PM +0100, Mike Looijmans wrote:
On 18-03-2020 15:49, Adrian Bunk via Lists.Yoctoproject.Org wrote:
On Wed, Mar 18, 2020 at 10:12:26AM -0400, Jean-Marie Lemetayer wrote:
...
For example one of our build servers is using:
- AMD Ryzen 9 3900X
...
- 32 GB DDR4 3200 MHz CL14
...
It is a really good price / build time ratio configuration.
Depends on what you are building.

Building non-trivial C++ code (e.g. webkitgtk) with 24 cores
but only 32 GB RAM will not work, for such code you need
more than 2 GB/core.
Seems a bit excessive to buy hardware just to handle a particular corner
case. Most of OE/Yocto code is plain C, not even C++.

My rig only has 8GB but doesn't run into memory issues during big GUI
builds. The only thing that made it swap was the populate_sdk task that
created a 1.1GB file and needed 20GB of RAM to compress that. Took a few
minutes more due to swapping.
I submitted a patch today to fix that in OE.

Your mileage may vary. But RAM is easy to add.
Well, I can't build with under 2 gigs per core, or I run out of physical
memory and the kernel OOM killer kicks in to kill the build. I also can't
use the Yocto default parallel settings, which only take the number of
cores into account, so I have a custom script that caps the thread count
so that 2 gigs of RAM are available for each thread.

Though I'm sure plain C and plain poky projects have lower RAM requirements.

-Mikko


Re: What are the key factors for yocto build speed?

Martin Jansa
 

On Wed, Mar 18, 2020 at 05:52:37AM -0700, Oliver Westermann wrote:
Hey,

We're currently using a VM on Windows and it's a lot slower than the native linux build (which is expected).
We're looking into getting a dedicated build server for our team (basically a self-build tower PC). Any suggestions what to put in that build to get the most out of it?

Currently we're looking at a big Ryzen, 64G of RAM and one or multiple SSDs on a "consumer grade" board like the X570.

Suggestions, hints and links welcome :)
Other replies look good to me, here are few additions:

If you want to compare how your current VM stacks up against some
other builders, you can use:
https://github.com/shr-project/test-oe-build-time
I wouldn't be surprised if your VM performs even worse than a +-200USD
Ryzen 1600 system.

I would be happy to apply pull requests from other people in this
thread with their suggestions.

You didn't mention the budget, but a big Ryzen is definitely a good choice.

You also didn't mention how "big" your typical builds are; if you're
building something as big as the web engines used in test-oe-build-time,
then it might be worth spending a bit extra on a 3970X Threadripper if
the budget allows.

I'm still looking for someone with access to an Epyc (ideally a 7702P or
7502P), because it's only a bit more expensive than the corresponding
Threadripper, but without the unfortunate limitation of 8 DIMM slots
and hard-to-buy 256GB sets:
https://www.gskill.com/community/1502239313/1574739775/G.SKILL-Announces-New-High-Performance,-Ultra-Capacity-DDR4-Memory-Kits-for-HEDT-Platforms
will be nice when it finally becomes available. On Epyc, on the other
hand, you get 8 channels instead of 4 and far fewer issues finding a
compatible kit (even with ECC support). If the performance is
significantly better than the 3990X, then the 7702P might be a much
better option for a "professional" builder, as long as you can cool a
server motherboard in a tower PC efficiently enough.

Cheers,


Re: How to PROVIDE boost-python

Quentin Schulz
 

Hi Emily,

On Wed, Mar 18, 2020 at 10:16:23AM -0500, Emily wrote:
Hi Laurent -

Unfortunately I don't have full control over the repo that's using
boost_python-mt so I'm not sure I can switch it right now.
You can create a patch for it. You can use devtool modify <your-recipe>
and create the patch from there; you have access to the sources that
way. Then run devtool build <your-recipe> to check it builds okay.

I realize this is not a long-term solution, as I'll need to update that
code (and my OS) to python3 soon, but for now I've just copied the boost
recipe from this commit
<https://github.com/openembedded/openembedded-core/commit/ef603f41b5df4772bb598ec9d389dd5f858592af#diff-9c24742c4bfe7eb2853f86cce86b91c6>
to
my own layer, and added a BBMASK to the openembedded-core's boost recipe.
This seems to work, except I get a QA error from the commit I'm using for
the boost recipe now:
I don't think there is a need for BBMASK; you should be able to set
PREFERRED_VERSION_boost = "1.63.0" in local.conf or your machine
configuration file.

ERROR: boost-1.63.0-r1 do_package: QA Issue: boost: Files/directories were
installed but not shipped in any package:
/usr/lib/libboost_numpy.so.1.63.0
Please set FILES such that these items are packaged. Alternatively if they
are unneeded, avoid installing them or delete them within do_install.
boost: 1 installed and not shipped files. [installed-vs-shipped]
ERROR: boost-1.63.0-r1 do_package: Fatal QA errors found, failing task.
ERROR: boost-1.63.0-r1 do_package: Function failed: do_package

I found a log
<https://www.yoctoproject.org/irc/%23yocto.2017-03-09.log.html> that
mentions this exact error, and also mentions a patch that fixes it - I've
tried and thus far failed to find that patch. I'm not sure if this will
actually work, but I thought I'd check and see if anyone had any ideas.
If all of this is really temporary you can install it in some package,
or even create a new package for it. (PACKAGES =+ "boost-numpy",
FILES_${PN}-numpy = "/usr/lib/libboost_numpy.so.1.63.0").
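As a recipe fragment, Quentin's packaging suggestion might look like the following. This is a hypothetical boost_%.bbappend in your own layer, sketched from the inline snippet above, not taken from the upstream recipe:

```
# boost_%.bbappend (hypothetical) -- ship the stray numpy library in its
# own package instead of failing the installed-vs-shipped QA check.
PACKAGES =+ "${PN}-numpy"
FILES_${PN}-numpy = "${libdir}/libboost_numpy.so.*"
```

With the library assigned to a package, the do_package QA check no longer sees an installed-but-unshipped file.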

No more ideas, to be honest. I would try to patch "your" software first
and see if it gets you somewhere, but that's just what I would do; to
each their own.

Quentin


Re: How to PROVIDE boost-python

Emily
 

Hi Laurent - 

Unfortunately I don't have full control over the repo that's using boost_python-mt so I'm not sure I can switch it right now. 

I realize this is not a long-term solution, as I'll need to update that code (and my OS) to python3 soon, but for now I've just copied the boost recipe from this commit  to my own layer, and added a BBMASK to the openembedded-core's boost recipe. This seems to work, except I get a QA error from the commit I'm using for the boost recipe now:

ERROR: boost-1.63.0-r1 do_package: QA Issue: boost: Files/directories were installed but not shipped in any package:
  /usr/lib/libboost_numpy.so.1.63.0
Please set FILES such that these items are packaged. Alternatively if they are unneeded, avoid installing them or delete them within do_install.
boost: 1 installed and not shipped files. [installed-vs-shipped]
ERROR: boost-1.63.0-r1 do_package: Fatal QA errors found, failing task.
ERROR: boost-1.63.0-r1 do_package: Function failed: do_package

I found a log that mentions this exact error, and also mentions a patch that fixes it - I've tried and thus far failed to find that patch. I'm not sure if this will actually work, but I thought I'd check and see if anyone had any ideas. 

Thanks,
Emily

On Tue, Mar 17, 2020 at 5:24 PM Laurent Gauthier <laurent.gauthier@...> wrote:
Also as Quentin suggested it might be needed to remove the "-mt"
suffix, but I see it in the do_install() method of the "boost" recipe
for revision 1.64.

Kind regards, Laurent.

On Tue, Mar 17, 2020 at 11:17 PM Laurent Gauthier via
Lists.Yoctoproject.Org
<laurent.gauthier=soccasys.com@...> wrote:
>
> Hi again Emily,
>
> You have put your finger on the core issue there.
>
> The name of the boost python library which is linked in your
> "opc-ua-server-gfex" recipe is "boost_python-mt" where it seems that
> it should be "boost_python3-mt".
>
> I guess you should have a look at the CMakeLists.txt of the
> "opc-ua-server-gfex" software package to see how to switch to linking
> with "boost_python3-mt" instead as a quick review of the boost recipe
> seems to reveal no python2 support.
>
> Kind Regards, Laurent.
>
>
> On Tue, Mar 17, 2020 at 7:19 PM Emily S <easmith5555@...> wrote:
> >
> > Hi Laurent -
> >
> > Thanks for the suggestion. It looks like there are a few packages that provide libboost_python3:
> >
> > find tmp/work/*/*/*/packages-split -name libboost_python\*
> > tmp/work/aarch64-poky-linux/boost/1.64.0-r0/packages-split/boost-dev/usr/lib/libboost_python3.so
> > tmp/work/aarch64-poky-linux/boost/1.64.0-r0/packages-split/boost-python/usr/lib/libboost_python3.so.1.64.0
> > tmp/work/aarch64-poky-linux/boost/1.64.0-r0/packages-split/boost-staticdev/usr/lib/libboost_python3.a
> > tmp/work/aarch64-poky-linux/boost/1.64.0-r0/packages-split/boost-dbg/usr/lib/.debug/libboost_python3.so.1.64.0
> >
> > Adding boost-python to RDEPENDS doesn't help, I have tried it before also. This sort of made sense to me as I thought RDEPENDS was for run time dependencies, but perhaps I have misunderstood.
> >
> > I am working with the rocko branch and python2, so perhaps that's the problem?
> >
> > Thanks,
> > Emily
> >
> >
> > On Tue, Mar 17, 2020 at 1:02 PM Laurent Gauthier <laurent.gauthier@...> wrote:
> >>
> >> Hi Emily,
> >>
> >> To find the solution to your issue in a rational and deterministic way
> >> we need to start from the error message.
> >>
> >> What I understand is that while build recipe "opc-ua-server-gfex" you
> >> get an error message that says in short "ld: cannot find
> >> -lboost_python-mt".
> >>
> >> Therefore you need to determine which recipe (and which package in
> >> that recipe) provides the "boost_python-mt" library.
> >>
> >> One way to determine this is to run something like this:
> >>
> >>     find /local/d6/easmith5/rocko_bitbake/poky/build/tmp/work/*/*/*/packages-split
> >> -name libboost_python\*
> >>
> >> This should show you which recipe, and which package produced by that
> >> recipe has the library you are looking for.
> >>
> >> The names of the first-level directories inside "packages-split" are
> >> packages names.
> >>
> >> Based on things you have mentioned in your previous e-mail my guess
> >> would be that adding an RDEPENDS = "boost-python" should help, but I
> >> might be wrong.
> >>
> >> I hope this will help you move in the right direction.
> >>
> >> Kind regards, Laurent.
> >>
> >>
> >> On Tue, Mar 17, 2020 at 6:07 PM Emily <easmith5555@...> wrote:
> >> >
> >> > Hi Quentin -
> >> >
> >> > That's what I tried originally! DEPENDS on boost and not boost-python gives me the following error when trying to build my original recipe:
> >> >
> >> > /local/d6/easmith5/rocko_bitbake/poky/build/tmp/work/aarch64-poky-linux/opc-ua-server-gfex/1.0+gitAUTOINC+921c563309-r0/recipe-sysroot-native/usr/bin/aarch64-poky-linux/../../libexec/aarch64-poky-linux/gcc/aarch64-poky-linux/7.3.0/ld:
> >> > cannot find -lboost_python-mt
> >> >
> >> > So I tried a bunch of other things, which also didn't seem to work.
> >> >
> >> > Thanks for the response!
> >> > Emily
> >> >
> >> >
> >> > On Tue, Mar 17, 2020 at 11:04 AM Quentin Schulz <quentin.schulz@...> wrote:
> >> >>
> >> >> Hi Emily,
> >> >>
> >> >> On Tue, Mar 17, 2020 at 10:44:10AM -0500, Emily wrote:
> >> >> > Hi all -
> >> >> >
> >> >> > I'm trying to build an opca recipe (
> >> >> > https://github.com/kratsg/meta-l1calo/blob/add/opcServer/recipes-core/opc-ua/opc-ua-server-gfex_git.bb)
> >> >> > and it's giving me a build error like:
> >> >> >
> >> >> > |
> >> >> > /local/d6/easmith5/rocko_bitbake/poky/build/tmp/work/aarch64-poky-linux/opc-ua-server-gfex/1.0+gitAUTOINC+921c563309-r0/recipe-sysroot-native/usr/bin/aarch64-poky-linux/../../libexec/aarch64-poky-linux/gcc/aarch64-poky-linux/7.3.0/ld:
> >> >> > cannot find -lboost_python-mt
> >> >> >
> >> >> > Which seems to indicate I need to add boost-python to my list of DEPENDS.
> >> >> > When I do that I get the error:
> >> >> >
> >> >> > ERROR: Nothing PROVIDES 'boost-python' (but
> >> >> > /local/d6/easmith5/rocko_bitbake/meta-l1calo/recipes-core/opc-ua/
> >> >> > opc-ua-server-gfex_git.bb DEPENDS on or otherwise requires it). Close
> >> >> > matches:
> >> >> >   boost RPROVIDES boost-python
> >> >> >
> >> >> > I've tried adding PACKAGECONFIG_pn-boost="python" to both my local.conf and
> >> >> > to the image definition as I saw this online, but neither seems to work. It
> >> >> > seems like boost-python is being built (I can see it in the list when I run
> >> >> > oe-pkgdata-utils list-pkgs -p boost) but it doesn't seem to be available at
> >> >> > build for this recipe.
> >> >> >
> >> >> > If I remove boost from my list of DEPENDS I get an error about that, so
> >> >> > obviously boost itself is available at build for the original recipe. I've
> >> >> > also tried adding boost-native to DEPENDS, also did not work.
> >> >> >
> >> >> > Is there something obvious I'm missing, or some trick to making
> >> >> > boost-python available at build for this other recipe?
> >> >> >
> >> >>
> >> >> Ok so two things.
> >> >>
> >> >> Yes, I'd say you need python in PACKAGECONFIG for boost. However, it
> >> >> seems it's already part of the default value of PACKAGECONFIG, c.f.
> >> >> http://cgit.openembedded.org/openembedded-core/tree/meta/recipes-support/boost/boost.inc?h=master#n45
> >> >> so if no bbappend is overriding it, I'd say you're safe. Better check
> >> >> with the version in your layer that it's there.
> >> >> If you don't want to check manually, try `bitbake -e boost | grep -e
> >> >> "^PACKAGECONFIG="`
> >> >>
> >> >> DEPENDS contains only recipes (well, PROVIDES but PROVIDES has the name
> >> >> of the recipe in it at all times) while RDEPENDS_* (usually ${PN}*) contains
> >> >> only packages.
> >> >>
> >> >> So you need to DEPENDS on boost, not boost-python. That should be enough
> >> >> hopefully!
> >> >>
> >> >> Quentin
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Laurent Gauthier
> >> Phone: +33 630 483 429
> >> http://soccasys.com
>
>
>
> --
> Laurent Gauthier
> Phone: +33 630 483 429
> http://soccasys.com
>



--
Laurent Gauthier
Phone: +33 630 483 429
http://soccasys.com


Re: What are the key factors for yocto build speed?

Mike Looijmans
 

On 18-03-2020 15:49, Adrian Bunk via Lists.Yoctoproject.Org wrote:
On Wed, Mar 18, 2020 at 10:12:26AM -0400, Jean-Marie Lemetayer wrote:
...
For example one of our build servers is using:
- AMD Ryzen 9 3900X
...
- 32Go DDR4 3200 MHZ CL14
...
It is a really good price / build time ratio configuration.
Depends on what you are building.
Building non-trivial C++ code (e.g. webkitgtk) with 24 cores
but only 32 GB RAM will not work, for such code you need
more than 2 GB/core.
Seems a bit excessive to buy hardware just to handle a particular corner case. Most of OE/Yocto code is plain C, not even C++.

My rig only has 8GB but doesn't run into memory issues during big GUI builds. The only thing that made it swap was the populate_sdk task that created a 1.1GB file and needed 20GB of RAM to compress that. Took a few minutes more due to swapping.
I submitted a patch today to fix that in OE.

Your mileage may vary. But RAM is easy to add.

On Wed, Mar 18, 2020 at 05:52:37AM -0700, Oliver Westermann wrote:
...
Any suggestions what to put in that build to get the most out of it?

Currently we're looking at a big Ryzen, 64G of RAM and one or multiple
SSDs on a "consumer grade" board like the X570.
...
I would buy 128 GB RAM to not run into problems due to lack of RAM,
and Linux will also automatically use unused RAM as disk cache.
As long as you aren't running out of RAM or disk space all that matters
is CPU speed, Ryzen 9 3950X with 128 GB RAM would be my choice unless
you are on a tight budget.
Of course he's on a tight budget. He wouldn't need to ask for advice otherwise...

Most consumer boards support up to 64GB RAM. Pushing to 128 may suddenly double the price of the mobo as well. I'd go for 32 (as 2x16GB) and do an easy upgrade to 64 when there's trouble. Even with 4x16GB that's not a bad investment; if it turns out to be a bottleneck, 16GB modules will be easy to sell (contrary to smaller modules).


Re: What are the key factors for yocto build speed?

Adrian Bunk
 

On Wed, Mar 18, 2020 at 10:12:26AM -0400, Jean-Marie Lemetayer wrote:
...
For example one of our build servers is using:
- AMD Ryzen 9 3900X
...
- 32Go DDR4 3200 MHZ CL14
...
It is a really good price / build time ratio configuration.
Depends on what you are building.

Building non-trivial C++ code (e.g. webkitgtk) with 24 cores
but only 32 GB RAM will not work, for such code you need
more than 2 GB/core.

On Wed, Mar 18, 2020 at 05:52:37AM -0700, Oliver Westermann wrote:
...
Any suggestions what to put in that build to get the most out of it?

Currently we're looking at a big Ryzen, 64G of RAM and one or multiple
SSDs on a "consumer grade" board like the X570.
...
I would buy 128 GB RAM to not run into problems due to lack of RAM,
and Linux will also automatically use unused RAM as disk cache.

As long as you aren't running out of RAM or disk space all that matters
is CPU speed, Ryzen 9 3950X with 128 GB RAM would be my choice unless
you are on a tight budget.

cu
Adrian
