Re: Server specs for a continuous integration system

Christian Gagneraud <chgans@...>

On 03/09/13 10:16, Chris Tapp wrote:
On 2 Sep 2013, at 22:45, Christian Gagneraud wrote:

On 03/09/13 00:35, Burton, Ross wrote:

Hi Ross,

On 2 September 2013 06:05, Christian Gagneraud <chgans@...> wrote:
So right now, I'm thinking about:
- CPU: Xeon E5, maybe 2 x E5-2670/90, for a total of 16 cores (32 threads)
- Hard drives: 500GB, 1 TB or 2 TB (ideally with RAID if it can speed up the
RAID-5 seems to be what I am after.
Hi Chris,

Isn't RAID-5 going to be slower, especially if it's software? RAID 1
is probably better as you'll potentially double the read speed from disk.
I use a couple of Vertex SSDs in RAID 1 giving a theoretical write speed
near to 1GB/s. Write endurance is possibly a concern, but I've not had
any issues using them on a local build machine. I would probably look at
some higher end models if I was going to run a lot of builds. A lot less
noise than hard drives ;-)
Thanks for the info, I will have a look at RAID-1; as you can see, I know absolutely nothing about RAID! ;)
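For reference, a two-disk software RAID-1 array on Linux is usually created with mdadm; a minimal sketch, assuming two spare partitions and root access (the device names and mount point below are examples, not from this thread, and the commands obviously need real disks to run):

```shell
# Hypothetical software RAID-1 (mirror) across two partitions.
# /dev/sdb1, /dev/sdc1 and /srv/build are illustrative names.
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /srv/build
cat /proc/mdstat   # shows the array and its initial resync progress
```

With RAID 1 every block is written to both disks, so either disk can fail without data loss; reads can be served from both members, which is where the speedup comes from.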

Does an SSD really help with disk throughput? And then what's the point of using a ramdisk for TMPDIR/WORKDIR? If you "fully" work in RAM, the disk bottleneck shouldn't be such a problem anymore (basically, the disk should only hold your yocto source tree and your download directory?).

- RAM: i don't really know, maybe 8 or 16 GB or more?
At least 16GB of RAM for the vast amount of disk cache that will give
you. 32GB or more will mean you can easily put the TMPDIR or WORKDIR
into a tmpfs (there was discussion about this a few weeks ago).
Yes, I remember that one now, well spotted!
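For context, putting TMPDIR on a tmpfs boils down to mounting a tmpfs and pointing bitbake at it in conf/local.conf; a minimal sketch, where the mount point, size and paths are illustrative rather than taken from this thread:

```
# /etc/fstab — hypothetical 16GB tmpfs for the build output area
tmpfs  /mnt/bb-tmpfs  tmpfs  size=16G  0  0
```

```
# conf/local.conf — put the build output on the tmpfs, and delete each
# recipe's work files as soon as they are no longer needed
TMPDIR = "/mnt/bb-tmpfs/build"
INHERIT += "rm_work"
```

TMPDIR and INHERIT += "rm_work" are standard bitbake/Yocto settings; since a tmpfs is lost on reboot, DL_DIR and SSTATE_DIR should stay on persistent disk so downloads and shared state survive.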

I've 16GB of RAM, and an 8GB tmpfs with rm_work was sufficient for
WORKDIR, which gave a 10% speedup (and a massive reduction in disk wear).
I'm a bit surprised to see only a 10% speedup.
I looked at this a while back on a quad core + hyper-threading
system (so 8 logical cores). Depending on what you're building, there are significant
periods of the build where even 8 cores aren't maxed out as there's not
enough on the ready list to feed to them - basically there are times
when you're not CPU, memory or I/O bound. I've estimated that being able
to max out the CPUs would cut 20-25% of the build time, but the
build-time dependencies mean this isn't easy/possible. At one point I
inverted the priority scheme used by the bitbake scheduler and it (very
surprisingly) made no difference to the overall build time!
I have the same configuration here (4 cores, 8 threads). I didn't try to tweak bitbake, but I've noticed the same phenomenon as you: even with "aggressive" parallelism settings, the machine wasn't optimally loaded over time.
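For reference, the "aggressive" parallelism settings referred to here are the standard bitbake knobs in conf/local.conf; the values below are just an example for a 4-core/8-thread machine, not a recommendation from the thread:

```
# conf/local.conf — example parallelism settings (values are illustrative)
BB_NUMBER_THREADS = "8"   # number of bitbake tasks run concurrently
PARALLEL_MAKE = "-j 8"    # make-level parallelism within each compile task
```

Even with both set high, the task dependency graph limits how many tasks are ready at any instant, which matches the under-loading both posters observed.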

I ran builds with 16 threads and 16 parallel makes and the peak
memory usage I see is something like 8GB during intensive
compile/link phases, so 16GB for RAM and tmpfs sounds like a
reasonable minimum. The tmpfs would reduce SSD wear quite a bit ;-)
The quantity of RAM boils down to the budget; after a (very) quick search, I estimated the cost of 64GB of RAM at 1500 to 2000 US$.


Others have machines with 64GB RAM and use it for all of TMPDIR, at
which point you'll be almost entirely CPU-bound.
OK, so 16GB sounds like a minimum, with 32GB or 64GB being even better; at that size, this is not that cheap...



yocto mailing list
Chris Tapp

