Mikko Rapeli
On Tue, Feb 18, 2020 at 07:25:15AM +0000, Georgi Georgiev via Lists.Yoctoproject.Org wrote:
> OK, looks good to me. We have only one svn repo in the whole project :-)

Lucky you :)

-Mikko
|
|
Re: Issue while adding the support for TLS1.3 in existing krogoth yocto
#yocto
#apt
#raspberrypi
Mikko Rapeli
Hi,
On Tue, Feb 18, 2020 at 01:20:25PM +0530, amaya jindal wrote:
> Thanks for your prompt reply. But is not there any way similar to add

openssl is tricky to update and requires backporting fixes for many, many recipes to get builds passing. Depending on project size, it may be possible to update only those components which you use, e.g. backport commits from poky master or release branches like warrior. The number of backported changes will be large. I've ported openssl 1.1.1d patches to yocto 2.5 sumo but it wasn't pretty. A strategy with regular yocto updates is much better, and it forces you to think about your dependencies and patches much harder.

Hope this helps,

-Mikko
|
|
Re: Debugging gdb built by Yocto
Richard Purdie
On Tue, 2020-02-18 at 11:26 -0500, Patrick Doyle wrote:
> Does anybody have any tips or tricks for how I might debug the

Do you perhaps want gdb-cross-mipsel? cross-canadian is designed to be run as part of the SDK.

Cheers,

Richard
|
|
Re: Creating a build system which can scale.
#yocto
Mikko Rapeli
Hi,
Good pointers in this thread already. Here are mine:

* Share the sstate mirror and download cache from release builds to developer topic builds. NFS, a web server, or rsync before calling bitbake will work.

* I've added buildhistory and the prserv database as extra files to the sstate mirror and use them to initiate new developer topic and release builds. This way we don't add the prserv or buildhistory git trees to the critical path in builds, but still get the benefits of QA checks, binary package versions, full history etc.

* Don't use virtual machines or clouds to build. Bare-metal throw-away machines are much faster and more reliable. We've broken all clouds.

* Use rm_work to reduce disk space usage during builds.

* Tune build machines to keep things in memory and not flush to disk all the time, since bitbake tmp, images etc. are going to be tar'ed as build output anyway. If they fit in the page cache in RAM, you can avoid a lot of IO and save disks/SSDs. Linux kernel vm tuning does this:

$ cat /etc/sysctl.d/99-build_server_fs_ops_to_memory.conf
# fs cache can use 90% of memory before system starts io to disk,
# keep as much as possible in RAM
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 90
# keep stuff for 12h in memory before writing to disk,
# allows reusing data as much as possible between builds
vm.dirty_expire_centisecs = 4320000
vm.dirtytime_expire_seconds = 432000
# allow single process to use 60% of system RAM for file caches, e.g. image build
vm.dirty_bytes = 0
vm.dirty_ratio = 60
# disable periodic background writes, only write when running out of RAM
vm.dirty_writeback_centisecs = 0

* Finding the optimal cost and power combination for build slaves is tricky. Track CPU, memory, IO and network usage for your project and find out which one is the bottleneck. For us it was RAM. CPUs are not effectively used by bitbake builds, except when all hell breaks loose with C++ projects and their templates. Lots of CPU time is wasted when running single-threaded bitbake tasks and creating images. Avoiding IO to disk and caching in RAM helps. I've not seen benefits from having more than 64 gigs of RAM or more than 32 CPUs (with hyper-threading). Also, projects evolve over time and may suddenly start eating more RAM and triggering the kernel OOM killer, shivers..

Hope this helps,

-Mikko
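Some of the sharing points above can be sketched as a local.conf fragment. This is only an illustration: the mirror URL and shared download path are made-up placeholders, not values from this thread.

```bitbake
# local.conf sketch (hypothetical hostnames and paths)

# reuse release-build sstate in developer builds;
# PATH is a literal keyword expanded by bitbake, not a placeholder
SSTATE_MIRRORS ?= "file://.* http://sstate.example.com/PATH;downloadfilename=PATH"

# share one download cache between builds
DL_DIR ?= "/mnt/shared/downloads"

# delete per-recipe work directories as soon as a recipe is built
INHERIT += "rm_work"
```

With a warm sstate mirror and shared DL_DIR, a developer's first build mostly reuses prebuilt artifacts instead of compiling from scratch.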
|
|
Re: Debugging gdb built by Yocto
On 2/18/20 8:26 AM, Patrick Doyle wrote:
> Does anybody have any tips or tricks for how I might debug the

It's perhaps due to the fact that the host where this will run is the SDK host and not your normal build host, so perhaps you can use the uninative-tarball-provided glibc to run it. If your SDK host is similar to the build host, that might work.

--wpd
|
|
Re: do_rootfs task took long time to finish
On 2/17/20 11:55 PM, Marek Belisko wrote:
Hi,

How big is the image it's creating? That might affect the time. Secondly, use some system perf monitor to see which tool is taking so long.

BR,
|
|
Mikko Rapeli
Hi,
(let's keep this on the list too)

On Wed, Feb 19, 2020 at 04:51:18PM +0100, Armando Hernandez wrote:
> Hi Mikko, [...] Is it possible to do so? Or do I come up with another recipe of the sama

You can add _class-[target|native|nativesdk] to all variables to override defaults. Verify with "bitbake -e".

Hope this helps,

-Mikko
|
|
[meta-updater] [meta-updater-raspberrypi] ERROR: No recipes available for: ..../meta-updater-raspberrypi/recipes-bsp/u-boot/libubootenv_%.bbappend
Greg Wilson-Lindberg
I'm trying to add support for ostree to our boot2qt yocto warrior build for the Raspberry Pi 4. I've added meta-updater & meta-updater-raspberrypi to the build, and when I start bitbake I get the following error:

ERROR: No recipes available for: .../sources/meta-updater-raspberrypi/recipes-bsp/u-boot/libubootenv_%.bbappend

I've tried downloading the HereOtaConnect sample project but it doesn't have the libubootenv recipe. I researched where libubootenv is from and it is part of the sbabic swupdate system. Does this mean that I need to install meta-swupdate to get libubootenv? Does libubootenv automatically disable the fw* utilities from u-boot? Is there something else that I'

Regards,

Greg Wilson-Lindberg
|
|
Martin Jansa
> DEPENDS_class-target += "systemd"

You surely meant DEPENDS_append_class-target = " systemd" here.
On Wed, Feb 19, 2020 at 10:48 PM Mikko Rapeli <mikko.rapeli@...> wrote: Hi,
|
|
Mikko Rapeli
Hi,
On Wed, Feb 19, 2020 at 01:37:19AM -0800, Armando Hernandez wrote:
> Hello,

Make the systemd dependency target-only, e.g. DEPENDS_class-target += "systemd" etc. There may be relevant use cases for building some systemd components or tools for native or nativesdk targets too. In that case, add BBCLASSEXTEND += "nativesdk" etc. in a bbappend to systemd.

Hope this helps,

-Mikko
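A minimal sketch of a target-only dependency in a recipe. The recipe name is hypothetical; note that the append form with a leading space avoids clobbering any DEPENDS override set elsewhere:

```bitbake
# myapp_1.0.bb (hypothetical recipe)

# pull in systemd only when building for the target,
# not for -native or -nativesdk variants of this recipe
DEPENDS_append_class-target = " systemd"

# allow this recipe itself to be built as native/nativesdk variants too
BBCLASSEXTEND = "native nativesdk"
```

Checking the expanded value with "bitbake -e myapp | grep ^DEPENDS=" confirms which variant picked up the dependency.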
|
|
Re: Opentk Support
Ross Burton
On Wed, 19 Feb 2020 at 21:45, Sheraz Ali <sheraz.ali@iwavesystems.com> wrote:
> Does anyone know how to enable opentk in yocto (i.e. is it available)?

https://layers.openembedded.org/layerindex/branch/master/recipes/?q=opentk says that there are no known recipes, so you'll have to write one yourself.

Ross
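For orientation, a new recipe usually starts from a bare skeleton like the one below. Everything here is a placeholder to be filled in (license checksum, revision, build steps); it is not a working opentk recipe:

```bitbake
# opentk_git.bb -- hypothetical skeleton only, not a tested recipe
SUMMARY = "OpenTK, the Open Toolkit library"
LICENSE = "MIT"
# md5 of the actual license file must be filled in
LIC_FILES_CHKSUM = "file://LICENSE.md;md5=<fill in>"

SRC_URI = "git://github.com/opentk/opentk.git;protocol=https"
SRCREV = "<fill in>"
S = "${WORKDIR}/git"

# do_compile/do_install depend on the project's build system
# and are left out of this sketch
```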
|
|
Re: [OE-core] oe-core recipe for defining directories in /
Quentin Schulz
Hi JH,
On Wed, Feb 19, 2020 at 09:12:08PM +1100, JH wrote:
> Hi,

None. Or all of them, depending on how one sees it. You just create a directory in do_install of a recipe. You then make sure this directory is part of a package by checking it's in one of the recipe's generated packages' FILES_&lt;PACKAGE&gt;.

Quentin
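A minimal sketch of that advice, using a hypothetical recipe and a hypothetical top-level /data directory:

```bitbake
# example-dirs_1.0.bb (hypothetical) -- create a directory in the
# rootfs and make sure the main package ships it
do_install() {
    # ${D} is the staging destination for this recipe's files
    install -d ${D}/data
}

# without this, an empty packaged path may be dropped or flagged by QA
FILES_${PN} += "/data"
```

Adding the package (here example-dirs) to IMAGE_INSTALL then gets the directory into the image.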
|
|
Zeus failed DHCP
JH
Hi,
My connman build on Thud works on WiFi, but on Zeus it does not: connman could not get a WiFi DHCP response and puts a link-local IP address 169.254.24.188 on my WiFi interface. Has anyone seen this problem in Zeus, or is it just me, maybe missing some packages or configuration? What packages could cause DHCP to stop working?

Here is the working Thud log:

# systemctl status connman -l
* connman.service - Connection service
   Loaded: loaded (/lib/systemd/system/connman.service; enabl)
   Active: active (running) since Thu 2020-02-13 03:18:51 UTC; 6 days ago
 Main PID: 131 (connmand)
   CGroup: /system.slice/connman.service
           `-131 /usr/sbin/connmand -n
Feb 18 22:37:16 solar connmand[131]: rp_filter set to 2 (loose mode routing), old value was 1
Feb 18 22:37:16 solar connmand[131]: mlan0 {add} address 192.168.0.100/24 label mlan0 family 2
Feb 18 22:37:16 solar connmand[131]: mlan0 {add} route 192.168.0.0 gw 0.0.0.0 scope 253 <LINK>
Feb 18 22:37:16 solar connmand[131]: mlan0 {add} route 192.168.0.1 gw 0.0.0.0 scope 253 <LINK>
Feb 18 22:37:16 solar connmand[131]: mlan0 {add} route 212.227.81.55 gw 192.168.0.1 scope 0 <UNIVERSE>
Feb 18 22:37:17 solar connmand[131]: mlan0 {del} route 212.227.81.55 gw 192.168.0.1 scope 0 <UNIVERSE>
Feb 18 22:37:17 solar connmand[131]: wwan0 {del} route 0.0.0.0 gw 10.114.57.126 scope 0 <UNIVERSE>
Feb 18 22:37:17 solar connmand[131]: mlan0 {add} route 0.0.0.0 gw 192.168.0.1 scope 0 <UNIVERSE>
Feb 18 22:37:17 solar connmand[131]: mlan0 {add} route 212.227.81.55 gw 192.168.0.1 scope 0 <UNIVERSE>
Feb 18 22:37:17 solar connmand[131]: mlan0 {del} route 212.227.81.55 gw 192.168.0.1 scope 0 <UNIVERSE>

# systemctl status wpa_supplicant -l
* wpa_supplicant.service - WPA supplicant
   Loaded: loaded (/lib/systemd/system/wpa_supplicant.service; disabled; vendor preset: enabled)
   Active: active (running) since Thu 2020-02-13 03:18:53 UTC; 6 days ago
 Main PID: 503 (wpa_supplicant)
   CGroup: /system.slice/wpa_supplicant.service
           `-503 /usr/sbin/wpa_supplicant -u
Feb 19 07:37:29 solar wpa_supplicant[503]: mlan0: WPA: Group rekeying completed with 34:08:04:12:b1:a2 [GTK=TKIP]
Feb 19 07:47:29 solar wpa_supplicant[503]: mlan0: WPA: Group rekeying completed with 34:08:04:12:b1:a2 [GTK=TKIP]
Feb 19 07:57:29 solar wpa_supplicant[503]: mlan0: WPA: Group rekeying completed with 34:08:04:12:b1:a2 [GTK=TKIP]
Feb 19 08:07:29 solar wpa_supplicant[503]: mlan0: WPA: Group rekeying completed with 34:08:04:12:b1:a2 [GTK=TKIP]
Feb 19 08:17:29 solar wpa_supplicant[503]: mlan0: WPA: Group rekeying completed with 34:08:04:12:b1:a2 [GTK=TKIP]
Feb 19 08:19:01 solar wpa_supplicant[503]: mlan0: CTRL-EVENT-SIGNAL-CHANGE above=0 signal=-85 noise=-97 txrate=1000
Feb 19 08:27:29 solar wpa_supplicant[503]: mlan0: WPA: Group rekeying completed with 34:08:04:12:b1:a2 [GTK=TKIP]
Feb 19 08:37:29 solar wpa_supplicant[503]: mlan0: WPA: Group rekeying completed with 34:08:04:12:b1:a2 [GTK=TKIP]
Feb 19 08:47:29 solar wpa_supplicant[503]: mlan0: WPA: Group rekeying completed with 34:08:04:12:b1:a2 [GTK=TKIP]
Feb 19 08:57:29 solar wpa_supplicant[503]: mlan0: WPA: Group rekeying completed with 34:08:04:12:b1:a2 [GTK=TKIP]

Here is the Zeus build that did not work:

# systemctl status connman -l
* connman.service - Connection service
   Loaded: loaded (/lib/systemd/system/connman.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2020-02-18 00:47:43 UTC; 1 day 9h ago
 Main PID: 184 (connmand)
   CGroup: /system.slice/connman.service
           `-184 /usr/sbin/connmand -n
Feb 19 10:27:33 solar connmand[184]: mlan0 {newlink} index 3 operstate 5 <DORMANT>
Feb 19 10:27:33 solar connmand[184]: mlan0 {add} route ff00:: gw :: scope 0 <UNIVERSE>
Feb 19 10:27:33 solar connmand[184]: mlan0 {add} route fe80:: gw :: scope 0 <UNIVERSE>
Feb 19 10:27:33 solar connmand[184]: mlan0 {RX} 10 packets 1650 bytes
Feb 19 10:27:33 solar connmand[184]: mlan0 {TX} 256 packets 96668 bytes
Feb 19 10:27:33 solar connmand[184]: mlan0 {update} flags 102467 <UP,RUNNING,LOWER_UP>
Feb 19 10:27:33 solar connmand[184]: mlan0 {newlink} index 3 address D4:CA:6E:9A:7E:29 mtu 1500
Feb 19 10:27:33 solar connmand[184]: mlan0 {newlink} index 3 operstate 6 <UP>
Feb 19 10:28:13 solar connmand[184]: mlan0 {add} address 169.254.241.106/16 label mlan0 family 2
Feb 19 10:28:14 solar connmand[184]: mlan0 {add} route 169.254.0.0 gw 0.0.0.0 scope 253 <LINK>

# systemctl status wpa_supplicant -l
* wpa_supplicant.service - WPA supplicant
   Loaded: loaded (/lib/systemd/system/wpa_supplicant.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2020-02-18 00:47:48 UTC; 1 day 9h ago
 Main PID: 263 (wpa_supplicant)
   CGroup: /system.slice/wpa_supplicant.service
           `-263 /usr/sbin/wpa_supplicant -u
Feb 19 09:57:33 solar wpa_supplicant[263]: mlan0: CTRL-EVENT-SIGNAL-CHANGE above=0 signal=-47 noise=-92 txrate=65000
Feb 19 10:05:36 solar wpa_supplicant[263]: mlan0: CTRL-EVENT-SIGNAL-CHANGE above=0 signal=-52 noise=-92 txrate=72200
Feb 19 10:07:32 solar wpa_supplicant[263]: mlan0: CTRL-EVENT-DISCONNECTED bssid=34:08:04:12:b1:a2 reason=2
Feb 19 10:07:32 solar wpa_supplicant[263]: dbus: wpa_dbus_property_changed: no property SessionLength in object /fi/w1/wpa_supplicant1/Interfaces/0
Feb 19 10:07:33 solar wpa_supplicant[263]: mlan0: Trying to associate with 34:08:04:12:b1:a2 (SSID='Jupiter' freq=2437 MHz)
Feb 19 10:07:33 solar wpa_supplicant[263]: mlan0: Associated with 34:08:04:12:b1:a2
Feb 19 10:07:33 solar wpa_supplicant[263]: mlan0: CTRL-EVENT-SUBNET-STATUS-UPDATE status=0
Feb 19 10:07:33 solar wpa_supplicant[263]: mlan0: WPA: Key negotiation completed with 34:08:04:12:b1:a2 [PTK=CCMP GTK=TKIP]
Feb 19 10:07:33 solar wpa_supplicant[263]: mlan0: CTRL-EVENT-CONNECTED - Connection to 34:08:04:12:b1:a2 completed [id=0 id_str=]
Feb 19 10:07:33 solar wpa_supplicant[263]: mlan0: CTRL-EVENT-SIGNAL-CHANGE above=0 signal=-51 noise=-92 txrate=65000

Thanks.
|
|
Re: [OE-core] [yocto] Change RO rootfs failed RF Kill Switch Status and Failed to start Run pending postinsts
Mikko Rapeli
On Tue, Feb 18, 2020 at 08:43:01PM +1100, JH wrote:
> Hi Mikko,

Well, I have zeus and am using a read-only rootfs with volatile binds, and I did not need anything extra. I would dig into this /var/log thing and patch it away. I use the systemd journal so there is no need for syslogs.

Cheers,

-Mikko
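For context, a read-only rootfs is normally switched on through an image feature. A minimal sketch (the variable and feature names are standard Poky; placing it in local.conf rather than the image recipe is just one option):

```bitbake
# local.conf sketch -- build the rootfs read-only; the volatile-binds
# machinery then provides tmpfs-backed writable paths such as /var/volatile
EXTRA_IMAGE_FEATURES += "read-only-rootfs"
```

With this feature enabled, anything that still insists on writing outside the tmpfs-backed paths shows up as a boot-time failure like the ones in this thread.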
|
|
Re: [opkg-devel] [opkg-utils PATCH] Makefile: add opkg-feed to UTILS
Alejandro del Castillo
LGTM, merged!
On 2/17/20 5:57 PM, Alex Stewart wrote:
* Add the opkg-feed script to UTILS so that it is installed with a `make --
Cheers, Alejandro
|
|
Re: Change RO rootfs failed RF Kill Switch Status and Failed to start Run pending postinsts
JH
It also seems mwifiex_sdio tried to write to the RO rootfs, failed, and triggered RF Kill. Does mwifiex_sdio need some system directories to be RW?

[   26.636845] mwifiex_sdio mmc0:0001:1: mwifiex_process_cmdresp: cmd 0x242 fai
         Starting Load/Save RF Kill Switch Status...
[   26.852990] mwifiex_sdio mmc0:0001:1: info: MWIFIEX VERSION: mwifiex 1.0 (14
[   26.861518] mwifiex_sdio mmc0:0001:1: driver_version = mwifiex 1.0 (14.68.36
[FAILED] Failed to start Load/Save RF Kill Switch Status.
See 'systemctl status systemd-rfkill.service' for details.
         Starting Load/Save RF Kill Switch Status...
[FAILED] Failed to start Load/Save RF Kill Switch Status.
See 'systemctl status systemd-rfkill.service' for details.
         Starting Load/Save RF Kill Switch Status...
[FAILED] Failed to start Load/Save RF Kill Switch Status.
See 'systemctl status systemd-rfkill.service' for details.
         Starting Load/Save RF Kill Switch Status...
On 2/18/20, JH <jupiter.hce@gmail.com> wrote:
Hi,
|
|
Re: Change RO rootfs failed RF Kill Switch Status and Failed to start Run pending postinsts
Marek Belisko
Hi,
On Tue, Feb 18, 2020 at 2:00 AM JH <jupiter.hce@gmail.com> wrote:

Can you please provide the output of:

systemctl status systemd-rfkill

There should be some more info there on what the issue is. Please provide this one also:

systemctl status run-postinsts

BR,

marek
|
|
Re: Change RO rootfs failed RF Kill Switch Status and Failed to start Run pending postinsts
JH
Hi Belisko,
Thanks for your response.

On 2/18/20, Belisko Marek <marek.belisko@gmail.com> wrote:
> Can you pls provide output of systemctl status systemd-rfkill

Failed at step STATE_DIRECTORY spawning /lib/systemd/systemd-rfkill: Read-only file system. Did it try to write something in /lib/systemd? How should I fix it?

# systemctl status systemd-rfkill -l
* systemd-rfkill.service - Load/Save RF Kill Switch Status
   Loaded: loaded (/lib/systemd/system/systemd-rfkill.service; static; vendor preset: disabled)
   Active: failed (Result: exit-code) since Tue 2020-02-18 00:47:30 UTC; 1min 59s ago
     Docs: man:systemd-rfkill.service(8)
  Process: 149 ExecStart=/lib/systemd/systemd-rfkill (code=exited, status=238/STATE_DIRECTORY)
 Main PID: 149 (code=exited, status=238/STATE_DIRECTORY)
Feb 18 00:47:30 solar systemd[1]: Starting Load/Save RF Kill Switch Status...
Feb 18 00:47:30 solar systemd[149]: systemd-rfkill.service: Failed to set up special execution directory in /var/lib: Read-only file system
Feb 18 00:47:30 solar systemd[149]: systemd-rfkill.service: Failed at step STATE_DIRECTORY spawning /lib/systemd/systemd-rfkill: Read-only file system
Feb 18 00:47:30 solar systemd[1]: systemd-rfkill.service: Main process exited, code=exited, status=238/STATE_DIRECTORY
Feb 18 00:47:30 solar systemd[1]: systemd-rfkill.service: Failed with result 'exit-code'.
Feb 18 00:47:30 solar systemd[1]: Failed to start Load/Save RF Kill Switch Status.
Feb 18 00:47:30 solar systemd[1]: systemd-rfkill.service: Start request repeated too quickly.
Feb 18 00:47:30 solar systemd[1]: systemd-rfkill.service: Failed with result 'exit-code'.
Feb 18 00:47:30 solar systemd[1]: Failed to start Load/Save RF Kill Switch Status.

> Pls this one also: systemctl status run-postinsts

[FAILED] Failed to start Run pending postinsts.

# systemctl status run-postinsts -l
* run-postinsts.service - Run pending postinsts
   Loaded: loaded (/lib/systemd/system/run-postinsts.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Tue 2020-02-18 00:47:37 UTC; 6min ago
  Process: 153 ExecStart=/usr/sbin/run-postinsts (code=exited, status=0/SUCCESS)
  Process: 159 ExecStartPost=/bin/systemctl --no-reload disable run-postinsts.service (code=exited, status=1/FAILURE)
 Main PID: 153 (code=exited, status=0/SUCCESS)
Feb 18 00:47:36 solar systemd[1]: Starting Run pending postinsts...
Feb 18 00:47:36 solar run-postinsts[153]: Configuring packages on first boot....
Feb 18 00:47:36 solar run-postinsts[153]: (This may take several minutes. Please do not power off the machine.)
Feb 18 00:47:36 solar run-postinsts[153]: /usr/sbin/run-postinsts: eval: line 1: can't create /var/log/postinstall.log: nonexistent directory
Feb 18 00:47:36 solar run-postinsts[153]: Removing any system startup links for run-postinsts ...
Feb 18 00:47:37 solar systemctl[159]: Failed to disable unit: File /etc/systemd/system/sysinit.target.wants/run-postinsts.service: Read-only file system
Feb 18 00:47:37 solar systemd[1]: run-postinsts.service: Control process exited, code=exited, status=1/FAILURE
Feb 18 00:47:37 solar systemd[1]: run-postinsts.service: Failed with result 'exit-code'.
Feb 18 00:47:37 solar systemd[1]: Failed to start Run pending postinsts.

Was the problem writing to /var/log, that /var/volatile does not have a log directory?

# ls -l /var
drwxr-xr-x 2 1000 1000 160 Feb 18  2020 backups
drwxr-xr-x 5 1000 1000 100 Feb 18 00:47 cache
drwxr-xr-x 9 1000 1000 180 Feb 18 00:47 lib
drwxr-xr-x 3 1000 1000 224 Feb 18  2020 local
lrwxrwxrwx 1 1000 1000  11 Feb 18  2020 lock -> ../run/lock
lrwxrwxrwx 1 1000 1000  12 Feb 18 00:52 log -> volatile/log
lrwxrwxrwx 1 1000 1000   6 Feb 18  2020 run -> ../run
drwxr-xr-x 3 1000 1000  60 Feb 18  2020 spool
lrwxrwxrwx 1 1000 1000  12 Feb 18  2020 tmp -> volatile/tmp
drwxrwxrwt 8 root root 160 Feb 18 00:47 volatile

# ls -l /var/volatile/
drwxr-xr-x 5 1000 1000 100 Feb 18 00:47 cache
drwxr-xr-x 9 1000 1000 180 Feb 18 00:47 lib
drwxr-xr-x 3 1000 1000  60 Feb 18  2020 spool

All system mounts are the same as on the original RW rootfs; did both write to a non-standard RW system mount? Here are the system mounts defined in fstab:

proc   /proc          proc    defaults                            0 0
devpts /dev/pts       devpts  mode=0620,gid=5                     0 0
tmpfs  /run           tmpfs   mode=0755,nodev,nosuid,strictatime  0 0
tmpfs  /var/volatile  tmpfs   defaults                            0 0

Here is the mount output:

# mount
ubi0:rootfs-volume on / type ubifs (ro,relatime,assert=read-only,ubi=0,vol=2)
devtmpfs on /dev type devtmpfs (rw,relatime,size=84564k,nr_inodes=21141,mode=755)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
tmpfs on /etc/machine-id type tmpfs (ro,mode=755)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
tmpfs on /var/volatile type tmpfs (rw,relatime)
ubi0:data-volume on /data type ubifs (rw,noatime,assert=read-only,ubi=0,vol=3)
tmpfs on /var/spool type tmpfs (rw,relatime)
tmpfs on /var/cache type tmpfs (rw,relatime)
tmpfs on /var/lib type tmpfs (rw,relatime)
tracefs on /sys/kernel/debug/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)

How should I fix it? Thank you.

Kind regards,

- jh
|
|
Re: [OE-core] [yocto] Change RO rootfs failed RF Kill Switch Status and Failed to start Run pending postinsts
Mikko Rapeli
(trimming lists to yocto only)
Hi,

I think you may be missing the volatile-binds package and service from your image. See poky/meta/recipes-core/volatile-binds/volatile-binds.bb

It may be missing /var/log, but with systemd there should be no need to write to that location after the image is built...

-Mikko
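For reference, volatile-binds generates its bind-mount services from the VOLATILE_BINDS variable, so a missing /var/log mapping could in principle be added in a bbappend. A minimal sketch; the append file name and the extra entry are assumptions for illustration, not from this thread:

```bitbake
# volatile-binds_%.bbappend (hypothetical) -- entries are
# "<writable tmpfs path> <read-only mount point>" pairs, one per line
VOLATILE_BINDS_append = "\
    /var/volatile/log /var/log\n\
"
```

Each pair turns into a systemd unit that bind-mounts the tmpfs-backed path over the read-only location at boot.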
|
|
Debugging gdb built by Yocto
Patrick Doyle <wpdster@...>
Does anybody have any tips or tricks for how I might debug the
(cross-canadian) gdb built by Yocto's SDK? I need to add some printf's to the gdb code to help track down why something isn't working, but none of my traditional get-ready-to-debug-this-code techniques are working. How can I run the gdb that I just built?

Note that I am presuming that I can

$ bitbake gdb-cross-canadian-mipsel -ccompile -f

to rebuild gdb after I add a printf or two to it... but I can't figure out how to run gdb without going through the sdk installation step.

$ bitbake gdb-cross-canadian-mipsel -cdevshell
# ../build-mipsel-poky-linux/gdb/gdb
bash: ../build-mipsel-poky-linux/gdb/gdb: No such file or directory
# file ../build-mipsel-poky-linux/gdb/gdb
../build-mipsel-poky-linux/gdb/gdb: ELF 64-bit LSB shared object, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /opt/iro, BuildID[sha1]=7f985bbe4cb6c97558b159860b2498f6389b254e, for GNU/Linux 3.2.0, not stripped
# ldd ../build-mipsel-poky-linux/gdb/gdb
../build-mipsel-poky-linux/gdb/gdb: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ../build-mipsel-poky-linux/gdb/gdb)
        linux-vdso.so.1 => (0x00007fff8a0c2000)
...
# LD_LIBRARY_PATH=../recipe-sysroot-native/usr/libexec:../recipe-sysroot-native/usr/lib ../build-mipsel-poky-linux/gdb/gdb
bash: ../build-mipsel-poky-linux/gdb/gdb: No such file or directory

None of the techniques from my bag of tricks works. I guess I could go grab the source myself, manually apply the patches myself, build it, and see if that works. Or I could sit down real hard and think about why I am trying to debug the canadian-cross built tool on my development host... perhaps I should try debugging the native (cross-)gdb on my native host. I'll go try that now, but, in the meantime, I thought it was about time for me to ask others for some clues.

Any clues or pointers?

--wpd
|
|