Christian Gagneraud <chgans@...>
Hi all,
I'm currently looking at server specs for a "good" continuous integration server to be used for a project using Yocto and other things as well.
The definition of "good" in my context is something that allows me to: - Build 2 images for 2 different products that will interact with each other (kind of client/server architecture), so likely based on the same custom Yocto distro, but likely running on 2 different SoCs (and different vendors). - Each image will need to have two build flavours: production and engineering - Very likely other demo images as well - The client is a "lightweight" measurement system, so i need a small base (connman, systemd, wifi, Qt, ntp client), the application layer (Qt based but no GUI), and a couple of firmwares to be run on auxiliary devices - The server is still kind of lightweight, same base as above, plus sqlite, light http server, ntp server and of course it's own application layer (Qt based again, with a GUI for a wide "session screen").
On top of that:

- The server will be part of a continuous integration system with nightly builds and test suite runners/controllers.
- The CI will have to build a couple of "engineering tools" as well (Qt based again) that need to be compiled for GNU/Linux and cross-compiled for Windows.
- The CI will have to build a couple of firmwares that run on an embedded RTOS.

(A rough sketch of driving such a build matrix follows below.)
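As an illustration only, here is a minimal sketch of how such a nightly matrix (2 machines x 2 flavours) could be driven from one server; the machine and image names (acme-*) and paths are made up, and the engineering flavour is assumed to be a separate image recipe:

#!/bin/sh
# Hypothetical nightly build matrix: 2 machines x 2 flavours.
# Machine and image names are placeholders, not real recipes.
set -e
cd /srv/ci/poky
. ./oe-init-build-env build
for machine in acme-client-soc acme-server-soc; do
    for image in acme-image-production acme-image-engineering; do
        MACHINE="$machine" bitbake "$image"
    done
done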
The last thing is that I would like to run all build/test activities on a single Linux server.
I myself have a quad-core i7/3GHz workstation and I still find Yocto builds very (very) long, and this server will have far more work to do than my machine does when I'm playing around with Yocto.
So right now, I'm thinking about:

- CPU: Xeon E5, maybe 2 x E5-2670/90, for a total of 16 cores (32 threads)
- Hard drives: 500GB, 1TB or 2TB (ideally with RAID if it can speed up the builds)
- RAM: I don't really know, maybe 8 or 16 GB, or more?
Budget-wise, my feeling is that 10k US$ should be enough...
I'm coming here to see if anyone would have feedback on choosing the right "good enough" specs for a continuous integration server.
Best regards, Chris
Burton, Ross <ross.burton@...>
On 2 September 2013 06:05, Christian Gagneraud <chgans@...> wrote:
> So right now, I'm thinking about:
> - CPU: Xeon E5, maybe 2 x E5-2670/90, for a total of 16 cores (32 threads)
> - Hard drives: 500GB, 1TB or 2TB (ideally with RAID if it can speed up the builds)
> - RAM: I don't really know, maybe 8 or 16 GB, or more?

At least 16GB of RAM for the vast amount of disk cache that will give you. 32GB or more will mean you can easily put the TMPDIR or WORKDIR into a tmpfs (there was discussion about this a few weeks ago).

I have 16GB of RAM, and an 8GB tmpfs with rm_work was sufficient for WORKDIR, which gave a 10% speedup (and a massive reduction in disk wear). Others have machines with 64GB RAM and use it for all of TMPDIR, at which point you'll be almost entirely CPU-bound.

Ross
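A concrete sketch of that setup, assuming an 8GB tmpfs and the stock rm_work class; the mount point is illustrative, and BASE_WORKDIR is the bitbake.conf variable that WORKDIR lives under:

# Mount an 8GB tmpfs for the work directory (mount point is illustrative):
$ sudo mkdir -p /mnt/yocto-work
$ sudo mount -t tmpfs -o size=8G tmpfs /mnt/yocto-work

# In conf/local.conf: point WORKDIR's parent at the tmpfs, and delete
# each recipe's work files as soon as it finishes, so 8GB is enough:
BASE_WORKDIR = "/mnt/yocto-work"
INHERIT += "rm_work"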
Christian Gagneraud <chgans@...>
On 03/09/13 00:35, Burton, Ross wrote:

Hi Ross,

>> So right now, I'm thinking about:
>> - CPU: Xeon E5, maybe 2 x E5-2670/90, for a total of 16 cores (32 threads)
>> - Hard drives: 500GB, 1TB or 2TB (ideally with RAID if it can speed up the builds)

RAID-5 seems to be what I am after.

>> - RAM: I don't really know, maybe 8 or 16 GB, or more?
>
> At least 16GB of RAM for the vast amount of disk cache that will give you. 32GB or more will mean you can easily put the TMPDIR or WORKDIR into a tmpfs (there was discussion about this a few weeks ago).

Yes, I remember that one now, well spotted!

> I have 16GB of RAM, and an 8GB tmpfs with rm_work was sufficient for WORKDIR, which gave a 10% speedup (and a massive reduction in disk wear).

I'm a bit surprised to see only a 10% speedup.

> Others have machines with 64GB RAM and use it for all of TMPDIR, at which point you'll be almost entirely CPU-bound.

OK, so 16GB sounds like a minimum, with 32GB or 64GB being even better. At that size, this is not that cheap...

Thanks,
Chris
On 2 Sep 2013, at 22:45, Christian Gagneraud wrote:
> On 03/09/13 00:35, Burton, Ross wrote:
>> On 2 September 2013 06:05, Christian Gagneraud <chgans@...> wrote:
>>> So right now, I'm thinking about:
>>> - CPU: Xeon E5, maybe 2 x E5-2670/90, for a total of 16 cores (32 threads)
>>> - Hard drives: 500GB, 1TB or 2TB (ideally with RAID if it can speed up the builds)
>
> RAID-5 seems to be what I am after.

Isn't RAID-5 going to be slower, especially if it's software? RAID 1 is probably better as you'll potentially double the write speed to disk. I use a couple of Vertex SSDs in RAID 1 giving a theoretical write speed near to 1GB/s. Write endurance is possibly a concern, but I've not had any issues using them on a local build machine. I would probably look at some higher-end models if I was going to run a lot of builds. A lot less noise than hard drives ;-)

>> At least 16GB of RAM for the vast amount of disk cache that will give you. 32GB or more will mean you can easily put the TMPDIR or WORKDIR into a tmpfs (there was discussion about this a few weeks ago).
>
> Yes, I remember that one now, well spotted!
>
>> I have 16GB of RAM, and an 8GB tmpfs with rm_work was sufficient for WORKDIR, which gave a 10% speedup (and a massive reduction in disk wear).
>
> I'm a bit surprised to see only a 10% speedup.
I looked at this a while back on a quad-core + hyper-threading system (so 8 logical cores). Depending on what you're building, there are significant periods of the build where even 8 cores aren't maxed out, as there's not enough on the ready list to feed them - basically, there are times when you're not CPU, memory or I/O bound. I've estimated that being able to max out the CPUs would cut 20-25% off the build time, but the build-time dependencies mean this isn't easy/possible. At one point I inverted the priority scheme used by the bitbake scheduler and it (very surprisingly) made no difference to the overall build time!

I ran builds with 16 threads and 16 parallel makes, and the peak memory usage I see is something like 8GB during intensive compile/link phases, so 16GB for RAM plus tmpfs sounds like a reasonable minimum. The tmpfs would reduce SSD wear quite a bit ;-)
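For reference, the knobs being discussed here are the standard local.conf parallelism settings; a minimal sketch using the 16/16 values from the test above:

# conf/local.conf parallelism knobs (16/16 matches the test above;
# a common starting point is the machine's logical core count):
BB_NUMBER_THREADS = "16"
PARALLEL_MAKE = "-j 16"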
>> Others have machines with 64GB RAM and use it for all of TMPDIR, at which point you'll be almost entirely CPU-bound.
>
> OK, so 16GB sounds like a minimum, with 32GB or 64GB being even better. At that size, this is not that cheap...
> Thanks,
> Chris
Chris Tapp opensource@... www.keylevel.com
Christian Gagneraud <chgans@...>
On 03/09/13 10:16, Chris Tapp wrote:
>> RAID-5 seems to be what I am after.
>
> Isn't RAID-5 going to be slower, especially if it's software? RAID 1 is probably better as you'll potentially double the write speed to disk. I use a couple of Vertex SSDs in RAID 1 giving a theoretical write speed near to 1GB/s. Write endurance is possibly a concern, but I've not had any issues using them on a local build machine. I would probably look at some higher-end models if I was going to run a lot of builds. A lot less noise than hard drives ;-)

Hi Chris,

Thanks for the info, I will have a look at RAID-1; as you can see, I know absolutely nothing about RAID! ;)

Does SSD really help with disk throughput? Then what's the point of using a ramdisk for TMPDIR/WORKDIR? If you "fully" work in RAM, the disk bottleneck shouldn't be such a problem any more (basically, on disk, you should only have your Yocto source tree and your download directory?).

> I looked at this a while back on a quad-core + hyper-threading system (so 8 logical cores). Depending on what you're building, there are significant periods of the build where even 8 cores aren't maxed out, as there's not enough on the ready list to feed them - basically, there are times when you're not CPU, memory or I/O bound. I've estimated that being able to max out the CPUs would cut 20-25% off the build time, but the build-time dependencies mean this isn't easy/possible. At one point I inverted the priority scheme used by the bitbake scheduler and it (very surprisingly) made no difference to the overall build time!
I have the same configuration here (4 cores, 8 threads). Although I didn't try to tweak bitbake, I've noticed the same phenomenon as you: even with "aggressive" parallelism settings, the machine wasn't optimally loaded over time.

> I ran builds with 16 threads and 16 parallel makes, and the peak memory usage I see is something like 8GB during intensive compile/link phases, so 16GB for RAM plus tmpfs sounds like a reasonable minimum. The tmpfs would reduce SSD wear quite a bit ;-)

The quantity of RAM boils down to the budget; after a (very) quick search, I have estimated the cost of 64GB of RAM at 1500 to 2000 US$.

Thanks,
Chris
Elvis Dowson <elvis.dowson@...>
Hi,
On Sep 3, 2013, at 3:29 AM, Christian Gagneraud <chgans@...> wrote:
> Thanks for the info, I will have a look at RAID-1; as you can see, I know absolutely nothing about RAID! ;)
>
> Does SSD really help with disk throughput? Then what's the point of using a ramdisk for TMPDIR/WORKDIR? If you "fully" work in RAM, the disk bottleneck shouldn't be such a problem any more (basically, on disk, you should only have your Yocto source tree and your download directory?).
I use a Gigabyte Z77X-UP5TH motherboard (http://www.gigabyte.us/press-center/news-page.aspx?nid=1166), which has support for RAID in the BIOS at boot up, and Thunderbolt connected to an Apple 27" Thunderbolt display. I've got two SSDs in a RAID1 configuration (striped).
If you can wait some more time, they'll be releasing a version of the motherboard for the new Haswell chips as well, but it's probably not going to increase performance.
I use a 3770K i7 quad-core processor, 16GB RAM, and a liquid cooling solution, running at 3.8GHz. I've overclocked the CPU to 4.5GHz, but I ended up shaving only 2 minutes off build times, so I just run it at 3.8GHz.
A core-image-minimal build takes around 22 minutes for me, for a Xilinx ZC702 machine configuration (dual ARM Cortex-A9 processor + FPGA).
Here are the modifications that I've made to my system to tweak SSD performance, on Ubuntu 12.10, for a RAID1 array.
SSD performance tweaks (for non-RAID0 arrays)
Step 01.01: Modify /etc/fstab.
$ sudo gedit /etc/fstab
Increase the life of the SSD by reducing how much the OS writes to the disk. If you don't need to know when each file or directory was last accessed, add the following two options to the /etc/fstab file:
noatime,nodiratime
To enable TRIM support to help manage disk performance over the long term, add the following option to the /etc/fstab file:
discard
The /etc/fstab file should look like this:
# / was on /dev/sda1 during installation
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 discard,noatime,nodiratime,errors=remount-ro 0 1
Move /tmp to RAM
# Move /tmp to RAM
none /tmp tmpfs defaults,noatime,nodiratime,noexec,nodev,nosuid 0 0

See: software RAID/LVM TRIM support on Linux <http://www.ocztechnologyforum.com/forum/showthread.php?82648-software-RAID-LVM-TRIM-support-on-Linux> for more details.
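As a quick sanity check (not part of the original steps), one way to verify that the new mount options took effect and that the drive supports TRIM:

$ sudo mount -o remount /
$ mount | grep ' / '          # should now list discard,noatime,nodiratime
$ sudo hdparm -I /dev/sda | grep -i trim   # drive should advertise TRIM support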
Step 01.02: Move the browser's cache to a tmpfs in RAM
Launch firefox and type the following in the location bar:

about:config
Right click and enter a new preference configuration by selecting the New->String option.
Preference name: browser.cache.disk.parent_directory
String value: /tmp/firefox-cache

See: Running Ubuntu and other Linux flavors on an SSD <http://brizoma.wordpress.com/2012/08/04/running-ubuntu-and-other-linux-flavors-on-an-ssd/> for more details.
Best regards,
Elvis Dowson
Christian Gagneraud <chgans@...>
On 03/09/13 13:04, Elvis Dowson wrote:
> I use a Gigabyte Z77X-UP5TH motherboard, which has support for RAID in the BIOS at boot up, and Thunderbolt connected to an Apple 27" Thunderbolt display. I've got two SSDs in a RAID1 configuration (striped).
>
> If you can wait some more time, they'll be releasing a version of the motherboard for the new Haswell chips as well, but it's probably not going to increase performance.

Right now, I'm just proposing an infrastructure solution; I'm not even sure it will be accepted, and the final hardware choice (if accepted) might not even be in my hands...

> I use a 3770K i7 quad-core processor, 16GB RAM, and a liquid cooling solution, running at 3.8GHz. I've overclocked the CPU to 4.5GHz, but I ended up shaving only 2 minutes off build times, so I just run it at 3.8GHz.
>
> A core-image-minimal build takes around 22 minutes for me, for a Xilinx ZC702 machine configuration (dual ARM Cortex-A9 processor + FPGA).

Is it a full build from scratch (cross-toolchain, native stuff, etc.)? If so, it's quite impressive to me!

Chris
>> I use a 3770K i7 quad-core processor, 16GB RAM, and a liquid cooling solution, running at 3.8GHz. I've overclocked the CPU to 4.5GHz, but I ended up shaving only 2 minutes off build times, so I just run it at 3.8GHz.
>>
>> A core-image-minimal build takes around 22 minutes for me, for a Xilinx ZC702 machine configuration (dual ARM Cortex-A9 processor + FPGA).
>
> Is it a full build from scratch (cross-toolchain, native stuff, etc.)? If so, it's quite impressive to me!

Yes, it is a full build from scratch: core-image-minimal builds the cross-toolchain, kernel and root file system. This represents approximately 1600 tasks. A full kernel build from scratch completes in under 1 or 2 minutes - I can't remember exactly, I will let you know in a while. A full meta-toolchain-sdk task takes about 40 minutes and executes around 3600 tasks. I'll send some precise figures later on.

Best regards,
Elvis Dowson
On 2013-09-03 00:16, Chris Tapp wrote:
> Isn't RAID-5 going to be slower, especially if it's software? RAID 1 is probably better as you'll potentially double the write speed to disk. I use a couple of Vertex SSDs in RAID 1 giving a theoretical write speed near to 1GB/s. Write endurance is possibly a concern, but I've not had any issues using them on a local build machine. I would probably look at some higher-end models if I was going to run a lot of builds. A lot less noise than hard drives ;-)

Hi, this sounds interesting to me. A brief look at Wikipedia ( http://en.wikipedia.org/wiki/RAID ) tells me that, in theory, RAID-1 gives no increase in write speed at all (for sequential operations): 1x. A RAID-5, at least in theory, should give you (n-1)x, assuming the hardware is fast enough to support it (RAID controller!).

After a short conversation here, I was told that we have even seen a HW RAID slowed down by the RAID controller and its limit on the number of I/O operations it could handle. So it seems to me that one motivation for using SW RAID instead of HW RAID is exactly to overcome the bottleneck of a RAID controller.

Question: when running RAID-5, have you tried to adjust the chunk size? In our case most I/O operations are on relatively small text files, which may require adjusting the RAID chunk size accordingly.

BR, Lothar Rubusch
On 3 Sep 2013, at 12:57, lothar@... wrote:
> A brief look at Wikipedia ( http://en.wikipedia.org/wiki/RAID ) tells me that, in theory, RAID-1 gives no increase in write speed at all (for sequential operations): 1x. A RAID-5, at least in theory, should give you (n-1)x, assuming the hardware is fast enough to support it (RAID controller!).

Quite right. That'll teach me to hack out an e-mail late at night whilst packing for a road-trip :-( I meant to say RAID 0.

> Question: when running RAID-5, have you tried to adjust the chunk size? In our case most I/O operations are on relatively small text files, which may require adjusting the RAID chunk size accordingly.

No, mainly because two SSDs for RAID 0 was enough cash to spend ;-)
Chris Tapp opensource@... www.keylevel.com
On 3 Sep 2013, at 00:29, Christian Gagneraud wrote:
> Thanks for the info, I will have a look at RAID-1; as you can see, I know absolutely nothing about RAID! ;)

Did you see my correction to this? I meant to say RAID 0. Sorry for the confusion.

> Does SSD really help with disk throughput? Then what's the point of using a ramdisk for TMPDIR/WORKDIR? If you "fully" work in RAM, the disk bottleneck shouldn't be such a problem any more (basically, on disk, you should only have your Yocto source tree and your download directory?).

Running from RAM would be fastest - it really comes down to how much you have and how much you want to keep.

> I have the same configuration here (4 cores, 8 threads). Although I didn't try to tweak bitbake, I've noticed the same phenomenon as you: even with "aggressive" parallelism settings, the machine wasn't optimally loaded over time.
>
> The quantity of RAM boils down to the budget; after a (very) quick search, I have estimated the cost of 64GB of RAM at 1500 to 2000 US$.

That sounds high - I generally get 16GB of DDR 1600 for less than 150 US$ - and it was quite a bit lower a year back!

Chris Tapp
opensource@... www.keylevel.com
On 3 Sep 2013, at 02:04, Elvis Dowson wrote:
> I use a 3770K i7 quad-core processor, 16GB RAM, and a liquid cooling solution, running at 3.8GHz. I've overclocked the CPU to 4.5GHz, but I ended up shaving only 2 minutes off build times, so I just run it at 3.8GHz.
>
> A core-image-minimal build takes around 22 minutes for me, for a Xilinx ZC702 machine configuration (dual ARM Cortex-A9 processor + FPGA).

That's basically the spec I run (water cooling also keeps the noise down!). I generally get build times of just over 50 minutes for my system, which has 'X' / GLES / Boost, with something like 5500 tasks. Much better than the 10+ hours on a VM on the 4-year-old MacBook Pro!
Christian Gagneraud <chgans@...>
On 04/09/13 07:22, Chris Tapp wrote:
> Did you see my correction to this? I meant to say RAID 0. Sorry for the confusion.

No problem; at least it forced me to look at RAID-5, RAID-1 and now RAID-0, thanks! ;)

> Running from RAM would be fastest - it really comes down to how much you have and how much you want to keep.
>
> That sounds high - I generally get 16GB of DDR 1600 for less than 150 US$ - and it was quite a bit lower a year back!

Thanks,
Chris
Christian Gagneraud <chgans@...>
On 03/09/13 20:55, Elvis Dowson wrote:
> Yes, it is a full build from scratch: core-image-minimal builds the cross-toolchain, kernel and root file system. This represents approximately 1600 tasks. A full kernel build from scratch completes in under 1 or 2 minutes. A full meta-toolchain-sdk task takes about 40 minutes and executes around 3600 tasks. I'll send some precise figures later on.

Very interesting figures indeed. I'm not sure about the water cooling stuff: does that come as "standard", or did you build and tweak your server yourself? Up to now, I was thinking of an off-the-shelf server; maybe the best approach is to build it myself (actually, to get some people here to build it themselves!).

I have another question concerning the CPU: does anyone know how the Xeon E5 compares to the i7 in this context (build server and Yocto)?

Regards,
Chris
On Sep 4, 2013, at 6:45 AM, Christian Gagneraud <chgans@...> wrote:
> No problem; at least it forced me to look at RAID-5, RAID-1 and now RAID-0, thanks! ;)

Sorry, my setup is a RAID0 striped SSDx2 configuration as well, not RAID1. I have a 3TB standard drive for performing backups, since I can lose data at any time if any one of the drives fails.

The cooling solution is from Corsair, and it's easy to install. I think the CPU, motherboard, SSD, RAM, case, power supply, etc. came well within USD 1500. The Apple display was the most expensive component.

The Mac Pro, when it comes out, would be a perfect build server, though, with PCIe SSD, Xeon CPUs, etc.

Elvis Dowson
Khem Raj
On Sep 3, 2013, at 8:47 PM, Elvis Dowson <elvis.dowson@...> wrote:
> On Sep 4, 2013, at 6:45 AM, Christian Gagneraud <chgans@...> wrote:
>> Thanks for the info, I will have a look at RAID-1; as you can see, I know absolutely nothing about RAID! ;)

You want RAID-1 if you want full redundancy, so that if one of the disks dies you are still OK. It is generally slower, since data is written to two locations. For faster builds you would want RAID-0 (striped), where writes happen in parallel across both disks - but if either disk goes away, the whole data set is lost.

> Sorry, my setup is a RAID0 striped SSDx2 configuration as well, not RAID1. I have a 3TB standard drive for performing backups, since I can lose data at any time if any one of the drives fails.
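To make the trade-off concrete, a hedged mdadm sketch of the two configurations being contrasted; the device names are placeholders:

# RAID-1 (mirrored): survives a disk failure, roughly single-disk write speed.
$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# RAID-0 (striped): parallel writes for speed, but no redundancy at all.
$ sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Watch the array state (and RAID-1 resync progress):
$ cat /proc/mdstat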