Re: Recommended Hardware for building


Bryan Evenson
 

Oliver,

-----Original Message-----
From: yocto-bounces@... [mailto:yocto-
bounces@...] On Behalf Of Martin Jansa
Sent: Thursday, October 02, 2014 3:09 PM
To: Chris Tapp
Cc: yocto@...
Subject: Re: [yocto] Recommended Hardware for building

On Thu, Oct 02, 2014 at 05:51:29PM +0100, Chris Tapp wrote:

On 2 Oct 2014, at 11:04, Burton, Ross <ross.burton@...> wrote:

On 2 October 2014 10:36, Oliver Novakovic <Oliver.Novakovic@...>
wrote:
Can anyone recommend a reasonably performant hardware setup to use?

What should be considered? Are there any pitfalls? What about
bottlenecks in the build system?
You should start by saying what you're going to build; my experience is quite
different when building "small" images like console-image or even x11-image
versus "big" images/feeds which contain the whole Qt5 stack, 3 WebKits and
2 Chromium builds.
Agreed, what you are building and what your goals are make a difference in what you need. I have a build machine that is mainly used to verify everything builds correctly after committing changes. It's an Intel i3-3220 with 8 GB RAM. The autobuilder is set up on a Linux VM which is given 4 GB RAM and does not recognize the extra Hyper-Threaded cores, meaning it acts as a dual-core machine. A rebuild of my console image typically takes under 20 minutes, and most of that time is packaging/install. After the initial build, there really isn't much on my system that needs to be rebuilt between commits.

So if you are looking for a build machine that is outside your normal workflow, a $400 PC may be enough for you. If this machine is for your development builds and you have a lot of graphical applications to build, you may want something more in line with what other people are suggesting.

Regards,
Bryan


In general: bitbake will make better use of all available performance with bigger
images (e.g. build time for a console image won't change much if you go from
8 cores to 24, but building e.g. just webkit alone will be more than twice
as fast on 24 cores).

Specifically:

How many cores are recommended? And how much cache is necessary?
How much of the main memory does Yocto really use? Is 32 GB
sufficient or should I go for 64?

Does it make sense to use two SSDs in RAID 0 to get faster builds?
As much of everything as you can afford. :) The build isn't heavy
in any particular metric, so don't sacrifice RAM for SSDs, for example.

RAID 0 over SSDs would be nice and fast, but I prefer having a good
amount of RAM and a tuned ext4 (no journal, long commit delay) so
data doesn't actually hit the disk as frequently. Keeping the actual
build directories on a separate disk is good for performance and
means you don't lose other data when a disk fails.
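As a concrete sketch of the tuning described above (the device name and mount point here are illustrative; adjust for your own setup):

```shell
# Create ext4 without a journal -- losing the build tree on a power
# failure is acceptable because it is fully regenerable.
mkfs.ext4 -O ^has_journal /dev/sdb1

# Mount with no access-time updates and a long commit interval so
# writes are batched in RAM rather than flushed every 5 seconds.
mount -o noatime,commit=6000 /dev/sdb1 /mnt/build
```

The long `commit=` interval is what keeps data from "hitting the disk as frequently"; the trade-off is that up to that many seconds of build output can be lost on a crash.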

There are people who have 64 GB in their machines and then set TMPDIR to a
tmpfs. Surprisingly this isn't that much faster (5% or so), but
it's a lot easier on the hardware and power consumption.
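For reference, the tmpfs approach might look like this (the size and paths are illustrative; TMPDIR for a large image can easily exceed the size shown, so this only makes sense with plenty of RAM):

```shell
# Mount a large tmpfs to hold the build output (assumes ~64 GB RAM).
mkdir -p /mnt/tmpfs-build
mount -t tmpfs -o size=48G tmpfs /mnt/tmpfs-build

# Then point the build at it in conf/local.conf:
#   TMPDIR = "/mnt/tmpfs-build/tmp"
```

Note that everything under a tmpfs TMPDIR disappears on reboot, so sstate-cache and downloads should live on persistent storage.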
My experience:

I've got a quad core with hyper-threading (so 8 usable cores) running at
about 3.8 GHz, 16 GB of RAM, and multiple SSDs: one holds the metadata,
downloads and top-level build areas (local.conf, etc.), and TMPDIR is on
a second SSD (so, as Ross says, I don't get a surprise when it
wears out!).
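A split like that might be expressed in conf/local.conf as follows (the mount points are hypothetical):

```conf
# Downloads on the first SSD, alongside the metadata -- these
# survive wiping the build output.
DL_DIR = "/mnt/ssd1/downloads"

# Build output on the second, more expendable SSD.
TMPDIR = "/mnt/ssd2/build/tmp"
```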

I can build my images (basically an x11 image) in just under 60 minutes
(once all the files have been fetched). I run with BB_NUMBER_THREADS and
PARALLEL_MAKE both set to 16 to keep the cores fully loaded as much as
possible (others say those should be 8 and 8 to reduce scheduling
overhead).
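In conf/local.conf those settings would look like this (16/16 as above; use 8 and 8 if you follow the alternative advice):

```conf
# Number of BitBake tasks to run in parallel
BB_NUMBER_THREADS = "16"
# Passed to make as the -j option within each compile task
PARALLEL_MAKE = "-j 16"
```

Note the two multiply in the worst case (16 tasks each running make -j 16), which is why some people prefer lower values to reduce scheduling overhead.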

During the build the system is CPU-bound quite a bit of the time (so more
cores should help), but there are significant periods where the build
dependency chain means this isn't the case and only two or three cores are
active. I recall previously comparing results with someone else and finding
that having many more cores (24, I think) didn't give a significant improvement
in build time (certainly not one worth the roughly 3x system cost).

I've never seen peak memory usage go much above 9 GB during a build,
and the peaks generally coincide with linking activities for "big" items (gcc,
eglibc). This is likely to go higher with more active threads.

I started out with a RAID 0 SSD build array, but I didn't really see any
difference over a single high-spec (consumer) SSD. As Ross said, running a
fast file system on the disk is a good idea.

--

Chris Tapp
opensource@...
www.keylevel.com

----
You can tell you're getting older when your car insurance gets real cheap!

--
_______________________________________________
yocto mailing list
yocto@...
https://lists.yoctoproject.org/listinfo/yocto
--
Martin 'JaMa' Jansa jabber: Martin.Jansa@...
