On 10/19/2010 05:30 PM, Tian, Kevin wrote:
From: Darren Hart
Sent: Wednesday, October 20, 2010 8:24 AM

I guess Josh may go to bed now. :-)
On 10/19/2010 05:16 PM, Joshua Lock wrote:
On Wed, 2010-10-20 at 01:04 +0100, Joshua Lock wrote:

It is my understanding that we plan to run both the renderer and the
On Tue, 2010-10-19 at 14:31 -0700, Saul Wold wrote:

The image I just built with my latest changes in josh/demo worked!?!
I did a sloppy job pushing my changes when leaving the office, but I think I have replicated most/all of them in the josh/demo branch.

Can you take a look at this, since you have worked with gstreamer?
I also took a look through the rygel code to see which gstreamer elements are explicitly being used. I've created a list and tried to ensure that as many of them as possible are in the IMAGE_INSTALL list for poky-image-rygel; adding them to the RDEPENDS for rygel doesn't appear to have included them in the image I just created...
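For reference, this is roughly the shape the image additions could take. The per-plugin package names below are assumptions based on how OE typically splits the GStreamer 0.10 plugin sets, so they should be checked against the packages the build actually produces; it may also be worth checking whether the RDEPENDS entries match those runtime package names exactly.

```
# Hypothetical IMAGE_INSTALL additions for poky-image-rygel; the
# package names are assumed examples, not verified against the build.
IMAGE_INSTALL += " \
    gst-plugins-base-audioconvert \
    gst-plugins-base-audioresample \
    gst-plugins-base-videoscale \
    gst-ffmpeg \
    "
```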
Full list of the pipeline element names I found in the rygel code:
decodebin2, videorate, videoscale, ffmpegcolorspace, ffenc_wmv1,
twolame, lame, mp3parse, ffenc_wmav2, convert-sink-pad,
ffenc_mpeg2video, audio-src-pad, audio-sink-pad, audio-enc-sink-pad,
sink, mpegtsmux, audioconvert, audioresample, audiorate, capsfilter,
audiotestsrc, videotestsrc, ffmux_asf
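If it helps with double-checking the image, the presence of these elements on the target can be verified with gst-inspect. A minimal sketch, assuming the GStreamer 0.10 tool name gst-inspect-0.10 and showing only a few of the elements above (extend the list as needed):

```shell
# Report which of the pipeline elements Rygel references are actually
# available on the target. Each gst-inspect-0.10 call succeeds only if
# the element can be found, so missing plugins show up as MISSING.
for e in decodebin2 videorate videoscale ffmpegcolorspace mpegtsmux; do
    if gst-inspect-0.10 "$e" >/dev/null 2>&1; then
        echo "present: $e"
    else
        echo "MISSING: $e"
    fi
done
```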
Rygel did not segfault :-)
If you have time, Dongxiao (or anyone else), I'd appreciate it if you could double-check my changes (my install_append in rygel to create a .config isn't working, so you'll need to do that manually and run rygel-preferences to disable the tracker plugin).
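For anyone doing that manual step, the rygel-preferences change should end up as something like the following in the user's rygel.conf. The file location and the section/key names are assumptions from Rygel's configuration scheme, so verify them against what rygel-preferences actually writes:

```
# Hypothetical ~/.config/rygel.conf fragment to disable the Tracker
# plugin; section and key names assumed, not verified.
[Tracker]
enabled=false
```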
If you could test using Rygel as a renderer for some content served by the mediatomb image, that would be much appreciated. You'll want gupnp-av-cp, as provided by gupnp-tools, to control the renderer and gupnp-av-cp on each "player".

That is an ARM BeagleBoard and a PowerPC *unknown* board. Each will have a suitable display. Is this your
btw, Darren, do we have to use real boards for this exercise? We don't have those boards in PRC. Are there any limitations to using Qemu environments as an alternative on our side?
For testing the images you can use x86 hardware throughout. I'd prefer we not use QEMU, as it is likely to complicate the networking, which is rather integral to this demo.
Also, it'd be great if we had a detailed list of demo steps. Is it included in the
No documented steps yet; just build the live images in the demo layers and boot on hardware. If you run into issues, please share them here and we'll help work through them.
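Until steps are written up, the build side can be sketched roughly as below. The environment script name poky-init-build-env and the image target poky-image-rygel are assumptions from the Poky layout discussed in this thread; the guard just keeps the sketch harmless outside a Poky checkout.

```shell
# Hypothetical build steps for the demo image; names are assumed.
if [ -e ./poky-init-build-env ]; then
    # Set up the build environment, then build the Rygel live image.
    . ./poky-init-build-env
    bitbake poky-image-rygel
else
    echo "poky-init-build-env not found; run from a Poky checkout"
fi
```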