Re: Bugzilla Changes

Xu, Jiajun <jiajun.xu@...>
 


We need to review the existing Bugzilla and update the Products and
Categories to reflect the projects correctly. Please review this
email and make comments, suggestions for moving forward with a better
Bugzilla categorization.

Currently we have "Core OS" with the following Components:
General
Graphics Driver
Kernel
Tool Chain
Along with "Poky" which contains:
General
SDK Tools
There are also product categories for "Runtime Distribution", "Sato"
and "SDK Plugins", along with other infrastructure items.

I would propose that we clearly define some new products and move
bugs as appropriate:

Poky Build System - for Poky class and configuration issues
User Space - for user space, patching and runtime failures
Tool Chain - break it down to compiler, tools, libraries, and general
Kernel - break it down to Arch / Config components
SDK - for all SDK related issues, with components for plugin, tools, ...
Sato - as it exists today
Runtime Distribution - delete this, we are not a distro (no bugs now)

Additionally, there is other discussion about Poky Test components for
the standards tests such as LSB, LTP, Posix.
These test suites can help find bugs from the user level down to the kernel, so it's hard to place them as components under just one product. A reporter can pick the product and note in the bug title that the bug is for LSB or LTP.
If we want to make it easier to report bugs for such test suites, how about adding a Standard Test Suite product, with LSB, LTP, POSIX and Others as components?

We will need to add Product Categories for other Yocto Projects that
do not have bugzilla yet.

Finally we need to update the Bugzilla Interface to be Yocto Project,
changing naming as appropriate.

Please take a few minutes to review this and give some feedback.


Thanks

Sau!
Saul Wold
Yocto Component Wrangler @ Intel
Yocto Project / Poky Build System

_______________________________________________
yocto mailing list
yocto@...
https://lists.pokylinux.org/listinfo/yocto
Best Regards,
Jiajun


Re: nightly-release takes more than 24 hours to build.

Xu, Jiajun <jiajun.xu@...>
 

Hello,

Leading up to our 0.9 release, our autobuilder has been building an
increasing number of targets for our nightly-release buildset. We've
now reached the point that the nightly build takes more than 24 hours
to run (> 26 hours, in fact)
- which is clearly a problem on a build that we'd like to generate on
a daily basis.

The following is a list of everything which is built within nightly-release:

The following targets are built for qemux86, qemux86-64, qemuarm,
qemumips, and qemuppc:

* poky-image-minimal
* poky-image-sato
* poky-image-lsb
* poky-image-sdk
* meta-toolchain-sdk (SDKMACHINE=i586 and also x86_64)

For emenlow and atom-pc, we build:

* poky-image-minimal-live
* poky-image-sato-live
* poky-image-sdk-live
* meta-toolchain-sdk (SDKMACHINE=i586 and also x86_64)

Finally, we also build the Eclipse plugin, and copy the shared state
prebuilds and RPM output at the end of the build.

I was going to post build times for some of these targets for
reference, but it would be misleading as we build the targets in
succession (e.g., we start with poky-image-sdk, which takes the bulk of
the time, and then the other targets can largely rely on the shared state builds).

Ideally I think our nightly build should take much less than 24 hours
to build. The question is what we can move out of the nightly build
and do on perhaps a weekly basis instead?
How about customizing the nightly build to build a subset of the targets each day? For example, x86/x86_64 on Monday, atom-pc/emenlow on Tuesday, and so on.
We could also schedule a separate weekly build, triggered only on Wednesday (or maybe Tuesday night), for QA's weekly testing, with all targets built.
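Jiajun's rotation idea could be sketched roughly as follows (the day-to-machine groupings and the `machines_for` helper here are hypothetical, purely for illustration; the real autobuilder configuration would differ):

```python
import datetime

# Hypothetical day-of-week rotation for the nightly buildset; the
# groupings below are illustrative, not the real autobuilder config.
ROTATION = {
    0: ["qemux86", "qemux86-64"],   # Monday
    1: ["atom-pc", "emenlow"],      # Tuesday
    2: ["qemuarm"],                 # Wednesday
    3: ["qemumips"],                # Thursday
    4: ["qemuppc"],                 # Friday
}
ALL_MACHINES = sorted({m for ms in ROTATION.values() for m in ms})

def machines_for(date, weekly_day=2):
    """Return the machines to build on a given date: the full set on
    the weekly QA day (Wednesday here), otherwise that weekday's
    slice, and nothing on weekends."""
    wd = date.weekday()
    if wd == weekly_day:
        return ALL_MACHINES
    return ROTATION.get(wd, [])
```

On the Wednesday full build, QA gets every target; the other nights finish well within 24 hours because only a slice is built.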

Our buildserver hardware is a dual quad-core Xeon server with 12 GB of RAM.
Throwing hardware at the problem is another solution, but not an
inexpensive one (we'd be looking at a 4-socket machine filled with
quad-cores and 32 GB of RAM).

I'm open to ideas on how to address this issue. QA will be driving a
lot of the requirements and I'm especially interested to hear your thoughts.

Scott


Best Regards,
Jiajun


Re: zypper and poky architectures

Mark Hatle <mark.hatle@...>
 

(Sorry for the late response, today's my first day back from CELF)

On 10/21/10 8:47 PM, Qing He wrote:
On Thu, 2010-10-21 at 23:18 +0800, Mark Hatle wrote:
On 10/21/10 3:33 AM, Qing He wrote:
1. what RPM uses for architecture-independent packages is called "noarch";
"all" is not recognized, so anything depending on update-rc.d won't be
installed because of the missing dependency
We can certainly look into translating "all" to "noarch" post 0.9. That might
make it easier for people coming from the RPM world to understand what is in
the package.

1. rename *.all.rpm to *.noarch.rpm
We can certainly do this easily.
If noarch is universally used in the RPM world, I think we should use it.
My preference is staying with the Poky 'arch' naming... but renaming to noarch is fine, and unless Richard or someone else sees an issue it could be used as a temporary workaround. (There are a few places like rootfs generation that we'll have to translate "all" to "noarch".. if we decide to do this.)
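If we do go the noarch route, the package-file rename itself is trivial; a minimal sketch (the deploy-directory layout and this helper are assumptions, and the real change would also have to touch rootfs generation as noted above):

```python
import os
import re

def rename_all_to_noarch(deploy_dir):
    """Rename Poky's *.all.rpm packages to *.noarch.rpm so stock RPM
    tooling recognises the architecture. Illustrative only: the real
    fix would live in the packaging classes, not a post-hoc rename."""
    renamed = []
    for name in os.listdir(deploy_dir):
        m = re.match(r"(.+)\.all\.rpm$", name)
        if m:
            new = m.group(1) + ".noarch.rpm"
            os.rename(os.path.join(deploy_dir, name),
                      os.path.join(deploy_dir, new))
            renamed.append((name, new))
    return renamed
```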


2. the arch automatic detection system uses "uname -m", thus producing
armv5tejl, which can only be resolved as armv5tel, conflicting with
"armv5te" in rpm
This is a bug in Zypper. The machine names should come from somewhere other
than uname -m. (The value of uname -m is very much ia32-specific for the most
part; other architectures have far too many possible namings for it to be
useful.) There is a line in "/etc/rpm/platform" that contains the name of the
Poky architecture. This file should be referenced (instead of uname -m) in all cases.
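The suggested detection order could look something like this (the file format shown, 'arch-vendor-os' on the first line, is an assumption based on the discussion, as is the helper name):

```python
import platform

def detect_arch(platform_file="/etc/rpm/platform"):
    """Prefer the Poky-written platform file over uname -m. The
    'armv5te-poky-linux' first-line format is assumed here; only if
    the file is missing or empty do we fall back to the machine name."""
    try:
        with open(platform_file) as f:
            first = f.readline().strip()
        if first:
            return first.split("-")[0]  # e.g. 'armv5te-poky-linux' -> 'armv5te'
    except OSError:
        pass
    return platform.machine()  # equivalent of uname -m, as a last resort
```

This would hand Zypper "armv5te" directly instead of the unresolvable "armv5tejl" from uname -m.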

3. enhance zypper arch module, make the addition more flexible,
allowing arch alias (e.g. armv5te = armv5tel = armel = arm)
Zypper should read the rpm platform file.
Sounds reasonable. After all, zypper is only intended to be a frontend
utility to the lower-level package tool. Then we won't need to worry about
aliases and differing names, and this detaches zypper from the hardware.


3. many archs are missing in zypper, like mips, armeb, etc.
4. there is no concept of machine-dependent packages (task-base) in
zypper, although we can work around.
Generally speaking, this is true of most RPM installations. However, within RPM
itself there really isn't any concept of "arch" anymore; arches are really only
used for grouping and ordering. So Zypper may need to be updated to query the
arch of a package and use it for its various operations.

2. removing the concept of machine-dependent packages, change all
*.qemuarm.rpm to *.armv5te.rpm
I'm a bit worried about doing this, as we'll end up with (potentially)
incompatible packages with exactly the same name and versions... Perhaps we
need to think about embedding the machine type into the name of the packages
instead?
Thanks for the info. If we are going for dynamic platform specs, it
doesn't really matter whether we have things like qemuarm or not, does it?
Ya, if we are able to do things dynamically, then the naming is no longer important. That's really my hope as to how we implement the RPM components.



That would be some work to do; maybe 1.0 is a good time to get zypper
and package upgrade truly working.
Yes, we also need to get multi-arch (i.e. 32-bit and 64-bit at the
same time) working. I'm guessing there will be some Zypper interactions there
as well.
I don't really have ideas about how this is done. I think on Debian this is
actually avoided: i386 packages are repackaged as lib32xxx for the x86_64
platform.
Since Poky does not yet have the ability to deal with Multiarch builds, this is something we will have to work on designing as we get closer to Yocto 1.0.

Within RPM, the rpm package manager understands all of the "types" of each file in the system. When you ask to install (note, not upgrade) two packages of the same name the system checks the files.

When a conflict is identified, if the contents of the files are the same, nothing is done -- no error is generated.

If the contents of the file are different, and the file is tagged as a configuration file, then either first or last in wins (I don't remember which) -- no error is generated.

If the contents of the file are different, and the file type is NOT ELF (and none of the above cases has already applied), then an error is generated and installation stops.

If the contents of the file are different, and the file type is ELF, then there is a weighting algorithm that is used. Depending on the configuration, the following could happen:

multiarch is not allowed -- an error is generated

multiarch is allowed -- one of the components though is not an allowed multiarch -- an error is generated (this could be the mips case of o32, n32 and n64 on the same system. You could prevent someone from installing say o32 binaries.)

multiarch is allowed -- a 'winner' is chosen based on the system configuration. The winner is installed, and the loser is not installed -- no error is generated.
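The decision tree above could be sketched like so (the function, its dict-based file descriptions, and the weighting callback are all hypothetical, just to make the ordering of the checks explicit):

```python
def resolve_conflict(old, new, multiarch_allowed, allowed_archs, weight):
    """Sketch of the RPM file-conflict rules described above.
    old/new describe the two files contending for one path; returns
    'keep' or 'replace', or raises RuntimeError on a hard conflict."""
    if old["contents"] == new["contents"]:
        return "keep"                        # identical contents: nothing to do
    if old.get("config"):
        return "keep"                        # config file: one side wins silently
    if old["type"] != "ELF" or new["type"] != "ELF":
        raise RuntimeError("file conflict")  # non-ELF difference: hard error
    if not multiarch_allowed:
        raise RuntimeError("multiarch not allowed")
    if old["arch"] not in allowed_archs or new["arch"] not in allowed_archs:
        raise RuntimeError("arch not permitted for multiarch")
    # both ELF, both permitted: the configured weighting picks the winner
    return "replace" if weight(new["arch"]) > weight(old["arch"]) else "keep"
```

(For the config-file case I arbitrarily made the first package win, since the text above leaves open whether first or last wins.)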

---

If DEB and IPK don't support this (which I wouldn't be surprised if they don't), then we'll need to: extend them, modify the package naming to include some type of architecture keying to avoid conflicts, or simply state multiarch support isn't available if the rootfs type is DEB or IPK. (I suspect the middle case is what we'll end up with.)

--Mark

Thanks,
Qing


Re: Some photos of the four architecture demo

Richard Purdie <rpurdie@...>
 

On Mon, 2010-11-01 at 08:07 +0100, Frans Meulenbroeks wrote:
Dave, the demo looked indeed quite impressive, and actually I was
trying to fetch the sources, but ....
git clone does not work for me. I've tried a few times and from
different systems/locations.
the poky master clones fine.

frans@frans-desktop:~/yocto$ git clone git://git.pokylinux.org/meta-demo
Initialized empty Git repository in /home/frans/yocto/meta-demo/.git/
fatal: The remote end hung up unexpectedly
There was a problem with anonymous clones of this repository but it
should be fixed now, sorry about that.

Cheers,

Richard


Re: nightly-release takes more than 24 hours to build.

David Stewart
 

From: yocto-bounces@... [mailto:yocto-
bounces@...] On Behalf Of Richard Purdie
Sent: Monday, November 01, 2010 4:03 AM

On Mon, 2010-11-01 at 02:38 -0700, Scott Garman wrote:

Our buildserver hardware is a dual quad-core Xeon server with 12 GB of
RAM. Throwing hardware at the problem is another solution, but not an
inexpensive one (we'd be looking at a 4-socket machine filled with
quad-cores and 32 GB of RAM).
There doesn't just have to be one build machine, we are going to end up
needing multiple machines and we can split the load between them quite
easily. I think there is going to be a second machine needed alongside
the existing one regardless of what other optimisations we make.
I can buy another server and contribute it to the build effort. I had intended to buy one this quarter to begin hosting yoctoproject.org and a source mirror, but we could offload part of the builds to it as well.

As you know, I want us to get to the root of the problem, which is efficiency of the build process. Since RP has already committed to addressing this, I'm fine with a short-term fix if it will help. :-)

Scott - please put in an order; I'll tell you the max amount today when we're f2f.

Dave


Re: nightly-release takes more than 24 hours to build.

Richard Purdie <rpurdie@...>
 

On Mon, 2010-11-01 at 02:38 -0700, Scott Garman wrote:
Leading up to our 0.9 release, our autobuilder has been building an
increasing number of targets for our nightly-release buildset. We've now
reached the point that the nightly build takes more than 24 hours to run
(> 26 hours, in fact) - which is clearly a problem on a build that we'd
like to generate on a daily basis.

The following is a list of everything which is built within nightly-release:

The following targets are built for qemux86, qemux86-64, qemuarm,
qemumips, and qemuppc:

* poky-image-minimal
* poky-image-sato
* poky-image-lsb
* poky-image-sdk
* meta-toolchain-sdk (SDKMACHINE=i586 and also x86_64)

For emenlow and atom-pc, we build:

* poky-image-minimal-live
* poky-image-sato-live
* poky-image-sdk-live
* meta-toolchain-sdk (SDKMACHINE=i586 and also x86_64)

Finally, we also build the Eclipse plugin, and copy the shared state
prebuilds and RPM output at the end of the build.

I was going to post build times for some of these targets for reference,
but it would be misleading as we build the targets in succession (e.g.,
we start with poky-image-sdk, which takes the bulk of the time, and then
the other targets can largely rely on the shared state builds).

Ideally I think our nightly build should take much less than 24 hours to
build. The question is what we can move out of the nightly build and do
on perhaps a weekly basis instead?

Our buildserver hardware is a dual quad-core Xeon server with 12 GB of
RAM. Throwing hardware at the problem is another solution, but not an
inexpensive one (we'd be looking at a 4-socket machine filled with
quad-cores and 32 GB of RAM).
There doesn't just have to be one build machine, we are going to end up
needing multiple machines and we can split the load between them quite
easily. I think there is going to be a second machine needed alongside
the existing one regardless of what other optimisations we make.

The changes in development at the moment will have a mixed effect on the
autobuilder's load. On the plus side we know there are performance
regressions with 0.9 which we're about to investigate. I can think of
one change I have in mind which on its own should get the builds back
under the 24 hour time.

On the down side there are going to be changes that need increased
horsepower from the build machines such as the runtime testing we're
close to enabling or enabling builds of world.

Cheers,

Richard


nightly-release takes more than 24 hours to build.

Scott Garman <scott.a.garman@...>
 

Hello,

Leading up to our 0.9 release, our autobuilder has been building an increasing number of targets for our nightly-release buildset. We've now reached the point that the nightly build takes more than 24 hours to run (> 26 hours, in fact) - which is clearly a problem on a build that we'd like to generate on a daily basis.

The following is a list of everything which is built within nightly-release:

The following targets are built for qemux86, qemux86-64, qemuarm, qemumips, and qemuppc:

* poky-image-minimal
* poky-image-sato
* poky-image-lsb
* poky-image-sdk
* meta-toolchain-sdk (SDKMACHINE=i586 and also x86_64)

For emenlow and atom-pc, we build:

* poky-image-minimal-live
* poky-image-sato-live
* poky-image-sdk-live
* meta-toolchain-sdk (SDKMACHINE=i586 and also x86_64)

Finally, we also build the Eclipse plugin, and copy the shared state prebuilds and RPM output at the end of the build.

I was going to post build times for some of these targets for reference, but it would be misleading as we build the targets in succession (e.g., we start with poky-image-sdk, which takes the bulk of the time, and then the other targets can largely rely on the shared state builds).

Ideally I think our nightly build should take much less than 24 hours to build. The question is what we can move out of the nightly build and do on perhaps a weekly basis instead?

Our buildserver hardware is a dual quad-core Xeon server with 12 GB of RAM. Throwing hardware at the problem is another solution, but not an inexpensive one (we'd be looking at a 4-socket machine filled with quad-cores and 32 GB of RAM).

I'm open to ideas on how to address this issue. QA will be driving a lot of the requirements and I'm especially interested to hear your thoughts.

Scott


Re: Some photos of the four architecture demo

Scott Garman <scott.a.garman@...>
 

On 11/01/2010 12:07 AM, Frans Meulenbroeks wrote:
Dave, the demo looked indeed quite impressive, and actually I was
trying to fetch the sources, but ....
git clone does not work for me. I've tried a few times and from
different systems/locations.
the poky master clones fine.

frans@frans-desktop:~/yocto$ git clone git://git.pokylinux.org/meta-demo
Initialized empty Git repository in /home/frans/yocto/meta-demo/.git/
fatal: The remote end hung up unexpectedly

Best regards, Frans
Hi Frans,

It looks like anonymous access to the git repository isn't enabled. I can clone it using ssh:// but I get the same error as you when using git://.

We'll have someone resolve this later today. Thanks for reporting it.

Scott


Re: Some photos of the four architecture demo

Frans Meulenbroeks <fransmeulenbroeks@...>
 

2010/11/1 Stewart, David C <david.c.stewart@...>:
Check out
http://www.yoctoproject.org/blogs/davest/2010/10/7/four-architectures-grooving-together,
my latest post to the Yocto Project blog. Some photos there of the UPnP
demo.



I also plan to post one up showing a little of the sausage-making… just a
little…



Let me know what you think.
Dave, the demo looked indeed quite impressive, and actually I was
trying to fetch the sources, but ....
git clone does not work for me. I've tried a few times and from
different systems/locations.
the poky master clones fine.

frans@frans-desktop:~/yocto$ git clone git://git.pokylinux.org/meta-demo
Initialized empty Git repository in /home/frans/yocto/meta-demo/.git/
fatal: The remote end hung up unexpectedly

Best regards, Frans


Re: yocto tools on a nokia n800

Bruce Ashfield <bruce.ashfield@...>
 

On 10-10-30 12:20 PM, Tom Zanussi wrote:
On Fri, 2010-10-29 at 04:50 -0700, tiagoprn wrote:
Hello everyone!

I have a question. I have just read about the yocto/poky projects and
here I am with a hope. :)

I have a nokia n800 (an internet tablet abandoned by Nokia just about
2 years ago).

With poky/yocto project tools, is it possible to install a linux
toolchain with X and the touchscreen/wireless components working, for
an example, in a SD Card and boot my device into it?

I'm anxious for that.
Hi,

It looks like yocto already has support for the n800 - there's this
entry in local.conf:

# Other supported machines
.
.

#MACHINE ?= "nokia800"


and see this section in README.hardware:

'Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)'

which says:

"The nokia800 images and kernel will run on both the N800 and N810"

Sounds like what you're looking for...
The rest of the config has been moved to meta-extras, and
we'd need to do a bit of kernel work, but it has worked in the past, and could work again in the future.

In other words 'I agree'.

Cheers,

Bruce


Tom






Some photos of the four architecture demo

David Stewart
 

Check out http://www.yoctoproject.org/blogs/davest/2010/10/7/four-architectures-grooving-together, my latest post to the Yocto Project blog. Some photos there of the UPnP demo.

 

I also plan to post one up showing a little of the sausage-making… just a little…

 

Let me know what you think.

 

Dave


Re: yocto tools on a nokia n800

Tom Zanussi <tom.zanussi@...>
 

On Fri, 2010-10-29 at 04:50 -0700, tiagoprn wrote:
Hello everyone!

I have a question. I have just read about the yocto/poky projects and
here I am with a hope. :)

I have a nokia n800 (an internet tablet abandoned by Nokia just about
2 years ago).

With poky/yocto project tools, is it possible to install a linux
toolchain with X and the touchscreen/wireless components working, for
an example, in a SD Card and boot my device into it?

I'm anxious for that.
Hi,

It looks like yocto already has support for the n800 - there's this
entry in local.conf:

# Other supported machines
.
.

#MACHINE ?= "nokia800"


and see this section in README.hardware:

'Nokia 770/N800/N810 Internet Tablets (nokia770 and nokia800)'

which says:

"The nokia800 images and kernel will run on both the N800 and N810"

Sounds like what you're looking for...

Tom


World build state

Saul Wold <sgw@...>
 

Folks,

I just completed a world build and the situation is currently not that bad; a summary of the failures is listed below.


libsndfile1_1.0.17
- Failed in compilation of flac.c
loudmouth_1.4.0
- Failed in compilation of asyncns.c
- asyncns.c:159:14: error: static declaration of 'strndup' follows non-static declaration
plugin-evolution2_0.36
- Failed in do_configure
CMake Error at cmake/modules/FindEPackage.cmake:166 (MESSAGE):
Evolution Data Server not found.
Call Stack (most recent call first):
CMakeLists.txt:9 (FIND_PACKAGE)
clutter-gtk-0.10_git
- Failed during linking
ohm_git
- Failed during compilation of ohm-plugin-dpms.c
- missing X11/extensions/dpmsstr.h
git_1.7.2.1
- Failed during do_configure
- Need a configure update for checking of "fopen'ed directory"
openswan_2.4.7
- Failed during compilation of optionsfrom.c
- optionsfrom.c:34:14: error: conflicting types for 'getline'
clutter-gtk-1.0_git
- Failed during linking
gnome-terminal_2.26.3
- Failed during linking
matchbox-themes-extra_svn
- Failed during install with an overwrite error
- will not overwrite just-created `/intel/poky/demo/build/tmp/work/core2-poky-linux/matchbox-themes-extra-0.3+svnr1524-r0/image/usr/share/themes/mbcrystal/matchbox/theme.desktop' with `theme.desktop'
libapplewm_1.0.0
- Failed during compilation of applewm.c
- missing X11/extensions/applewmstr.h
telepathy-idle_0.1.2
- Failed during compilation of idle-connection.c


Bugs will be filed soon for these items.

Sau!

Saul Wold
Yocto Component Wrangler @ Intel
Yocto Project / Poky Build System


Bugzilla Changes

Saul Wold <sgw@...>
 

We need to review the existing Bugzilla and update the Products and Categories to reflect the projects correctly. Please review this email and make comments, suggestions for moving forward with a better Bugzilla categorization.

Currently we have "Core OS" with the following Components:
General
Graphics Driver
Kernel
Tool Chain

Along with "Poky" which contains:
General
SDK Tools

There are also product categories for "Runtime Distribution", "Sato" and "SDK Plugins", along with other infrastructure items.

I would propose that we clearly define some new products and move bugs as appropriate:

Poky Build System - for Poky class and configuration issues
User Space - for user space, patching and runtime failures
Tool Chain - break it down to compiler, tools, libraries, and general
Kernel - Break it down to Arch / Config components
SDK - For all SDK related issues, have components for plugin, tools, ...
Sato - as it exists today
Runtime Distribution - Delete this, we are not a distro (no bugs now)

Additionally, there is other discussion about Poky Test components for the standards tests such as LSB, LTP, Posix.

We will need to add Product Categories for other Yocto Projects that do not have bugzilla yet.

Finally we need to update the Bugzilla Interface to be Yocto Project, changing naming as appropriate.

Please take a few minutes to review this and give some feedback.


Thanks

Sau!

Saul Wold
Yocto Component Wrangler @ Intel
Yocto Project / Poky Build System


World Package List

Saul G. Wold <sgw@...>
 

Folks,

Please find attached a list of packages that are in the world build but not currently contained in any tasks or images. This is an early warning that we are putting some of these up for consideration for relocation.

As we work through this determination process, we will document it for future use in package relocation. There may be recipes in this list that will move into specific tasks or be moved to an external layer.

Please review this spreadsheet and provide any input for these packages.

Thanks
Sau!

Saul Wold
Yocto Component Wrangler @ Intel
Yocto Project / Poky Build System


Re: yocto tools on a nokia n800

Bruce Ashfield <bruce.ashfield@...>
 

On 10-10-29 07:50 AM, tiagoprn wrote:
Hello everyone!

I have a question. I have just read about the yocto/poky projects and here I am with a hope. :)

I have a nokia n800 (an internet tablet abandoned by Nokia just about 2 years ago).

With poky/yocto project tools, is it possible to install a linux toolchain with X and the touchscreen/wireless components working, for an example, in a SD Card and boot my device into it?
I can comment on some of this from the kernel point of view,
I'll leave others to comment on platform/project/userspace
issues.

It is definitely possible to enable basic (or more) capabilities
for the N800 in the yocto kernel (or another), in particular since
most of the required support is mainline. We could extend
the common set of features and sanity to the n800 and then
enable peripherals or boot methods .. it all depends on the
capabilities and interest.

It's maintainable/doable in the kernel, and userspace has the required
capabilities (from my point of view), so this is something
that could come back (others can comment more on the previous
support), with a little bit of assistance.

Cheers,

Bruce


I'm anxious for that.

Thanks for your support.

Regard,


yocto tools on a nokia n800

tiagoprn <tiagoprn@...>
 

Hello everyone!
 
 I have a question. I have just read about the yocto/poky projects and here I am with a hope. :)
 
 I have a nokia n800 (an internet tablet abandoned by Nokia just about 2 years ago).
 
 With poky/yocto project tools, is it possible to install a linux toolchain with X and the touchscreen/wireless components working, for an example, in a SD Card and boot my device into it?
 
 I'm anxious for that.
 
 Thanks for your support.
 
 Regard,
 

--
***
TIAGOPRN
Desenvolvedor Web, Linux e Windows
(Django, Python, Delphi, Lazarus, MySQL, SQLite)
E-mail: tiagoprn@...
Blog: http://tgplima.net84.net
LinkedIn (profile público): http://br.linkedin.com/in/tiagoparanhoslima
twitter: https://twitter.com/tiagoprn


Re: Some data collection and analysis on poky performance

Tian, Kevin <kevin.tian@...>
 

From: Qing He
Sent: Wednesday, October 27, 2010 5:23 PM

As we know, many of us have experienced slow builds of recent poky,
and it also takes more disk space. This affects user experience and is
thus one of our focus areas for 1.0.
thanks Qing, that's a great start.


To find the problems leading to performance issues, I tried some
profiling on poky builds, below is a very brief summary of the data.
I profiled poky-image-minimal of both the current master branch and
green release, with similar parameters (4 CPUs) on NHM. Note that
both rpm and ipk packages are built for the current master branch, while
only ipk packages are built for the green release.


I. some stats
1. recipes (including -native)
green release:
recipes built: 76
tasks run: 998
master:
recipes built: 133
tasks run: 1600
could you get a comparison of which recipes have been newly added? I thought
the minimal image seldom changed...


2. time
green release:
real: 28m7s
user: 57m45s
sys: 9m41s
user+sys: 67m26s
master:
real: 66m39s
user: 152m17s
sys: 27m37s
user+sys: 179m54s

3. space (haven't analyzed it yet, though)
green release: 7.8G
master: 26.6G
I recall that RP mentioned debug symbols are enabled in current master, but I'm not sure
whether green did the same. You may want to double-check that part first.



II. profiling
I tried a brief profile by collecting the time used for every task, so
we can scrutinize the result from a microscopic point of view. I'm still
looking into the full result, but there's something of immediate
attention.

In master, hardly any task consumes less than 1.3s. This is quite
surprising, since many tasks like do_patch virtually do nothing, while
in the green release these tasks may consume only 0.1~0.2s. A deeper
investigation shows that this large overhead goes to bitbake-runtask:
the bitbake config and cache mechanism is executed for every task,
considerably increasing the time. The overhead introduced solely by
this is around 1600*1.3=2080s, approximately 35 minutes (user+sys).
It's said this change comes from the pseudo integration, so now it's time for us to
revisit that implementation. RP may have more insight here.


Also, we should count the additional rpm packaging system:
do_package_write_rpm costs around 1400s (excluding the 1.3s per task;
btw, this is about 50% slower than do_package_write_ipk, on average),
which is around 23 minutes.

Roughly assuming that build time is proportional to recipe count, we
can try to estimate the master build time from the green release:
67 * (133 / 76) + 35 + 23 ~= 175
very close to the actual time (although perhaps suspiciously close...),
so the above two are possibly the most significant time consumers
in the slowness of current poky.
above are all good findings!

Thanks
Kevin


Some data collection and analysis on poky performance

Qing He <qing.he@...>
 

As we know, many of us have experienced slow builds of recent poky,
and it also takes more disk space. This affects user experience and is
thus one of our focus areas for 1.0.

To find the problems leading to performance issues, I tried some
profiling on poky builds, below is a very brief summary of the data.
I profiled poky-image-minimal of both the current master branch and
green release, with similar parameters (4 CPUs) on NHM. Note that
both rpm and ipk packages are built for the current master branch, while
only ipk packages are built for the green release.


I. some stats
1. recipes (including -native)
green release:
recipes built: 76
tasks run: 998
master:
recipes built: 133
tasks run: 1600

2. time
green release:
real: 28m7s
user: 57m45s
sys: 9m41s
user+sys: 67m26s
master:
real: 66m39s
user: 152m17s
sys: 27m37s
user+sys: 179m54s

3. space (haven't analyze though)
green release: 7.8G
master: 26.6G


II. profiling
I tried a brief profile by collecting the time used for every task, so
we can scrutinize the result from a microscopic point of view. I'm still
looking into the full result, but there's something of immediate
attention.

In master, hardly any task consumes less than 1.3s. This is quite
surprising, since many tasks like do_patch virtually do nothing, while
in the green release these tasks may consume only 0.1~0.2s. A deeper
investigation shows that this large overhead goes to bitbake-runtask:
the bitbake config and cache mechanism is executed for every task,
considerably increasing the time. The overhead introduced solely by
this is around 1600*1.3=2080s, approximately 35 minutes (user+sys).

Also, we should count the additional rpm packaging system:
do_package_write_rpm costs around 1400s (excluding the 1.3s per task;
btw, this is about 50% slower than do_package_write_ipk, on average),
which is around 23 minutes.

Roughly assuming that build time is proportional to recipe count, we
can try to estimate the master build time from the green release:
67 * (133 / 76) + 35 + 23 ~= 175
very close to the actual time (although perhaps suspiciously close...),
so the above two are possibly the most significant time consumers
in the slowness of current poky.
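The arithmetic can be double-checked in a few lines; the figures are the ones quoted above (user+sys times in minutes), and the scaling step assumes, as above, that build time grows linearly with recipe count:

```python
# Figures from the profiling data above (user+sys, minutes).
green_cpu_min = 67        # green release: 67m26s, rounded down
green_recipes = 76
master_recipes = 133
runtask_overhead = 35     # 1600 tasks * ~1.3s extra each ~= 2080s
rpm_packaging = 23        # do_package_write_rpm ~= 1400s

# Scale green's time up by the recipe ratio, then add the two new costs.
estimate = green_cpu_min * (master_recipes / green_recipes) \
           + runtask_overhead + rpm_packaging
print(round(estimate))    # ~175, vs. the measured ~180 minutes for master
```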

Thanks,
Qing


[PATCH] package.bbclass: make sure 'sysroots' created before lockfile

Tian, Kevin <kevin.tian@...>
 

meta/classes/package.bbclass | 1 +
1 file changed, 1 insertion(+)

commit 2cd6d6d7957cf46114c8b25ed13e6f8030cd9c06
Author: Kevin Tian <kevin.tian@...>
Date: Tue Oct 26 15:54:43 2010 +0800

package.bbclass: make sure 'sysroots' created before lockfile

The package sstate code takes a lock under sysroots/, which may not yet
have been created when the sstate_setscene functions are executed, causing failures.
Here we make sure 'sysroots' is created before do_package_setscene is executed.

Signed-off-by: Kevin Tian <kevin.tian@...>
Pull URL: http://git.pokylinux.org/cgit.cgi/poky-contrib/log/?h=tk/master

Thanks
Kevin