
Yocto weekly bug trend charts -- WW46

You, Yongkang <yongkang.you@...>
 

This is the latest weekly Yocto bug trend. The open bug count was 148.

Thanks,
Yongkang


Re: Tracing/profiling tools for Yocto v1.0

Tom Zanussi <tom.zanussi@...>
 

Comments below...

On Fri, 2010-11-12 at 17:02 -0800, Bruce Ashfield wrote:
On 10-11-12 5:25 PM, Tom Zanussi wrote:
Hi,

For the 1.0 Yocto release, we'd like to have as complete a set of
tracing and profiling tools as possible, enough so that most users will
be satisfied with what's available, but not so many as to produce a
maintenance burden.

The current set is pretty decent:

latencytop
powertop
lttng
lttng-ust
oprofile(ui)
trace-cmd
perf

but there seems to be an omission or two with respect to the current set
as packaged in Yocto, and there are a few other tools that I think would
make sense to add, either to address a gap in the current set, or
because they're popular enough to be missed by more than a couple of
users:

KernelShark
perf trace scripting support
SystemTap
blktrace
sysprof
These match my lists that I've been adding to various
kernels (and roadmaps) for a while, so no arguments here.

See below for some comments and ideas.


These are just my own opinions regarding what I think is missing - see
below for more details on each tool, and some reasons I think it would
make sense to include them. If you disagree, or even better, have
suggestions for other tools that you think are essential and missing,
please let me know. Otherwise, I plan on adding support for them to
Yocto in the very near future (e.g. starting next week).

Just one note - I know that some of these may not be appropriate for all
platforms; in those cases, I'd expect they just wouldn't be included in
the images for those machines. Actually, except for sysprof and
KernelShark, they all have modes that should allow them to be used with
minimal footprints on the target system, and even then I think both
KernelShark and sysprof could relatively easily be retrofitted with
a remote layer like OprofileUI's that would make them lightweight on the
target.

Anyway, on to some descriptions of the tools themselves, followed by a
short summary at the end...

----

Tool: KernelShark
URL: http://rostedt.homelinux.com/kernelshark/
Architectures supported: all, nothing arch-specific

KernelShark is a front-end GUI interface to trace-cmd, a tracing tool
that's already included in the Yocto SDK (trace-cmd basically provides
an easier-to-use text-based interface to the raw debugfs tracing files
contained in /sys/kernel/debug/tracing).

Tracing can be started and stopped from the GUI; when the trace session
ends, the results are displayed in a couple of sub-windows: a graphical
area that displays events for each CPU but that can also display
per-task graphs, and a listbox that displays a detailed list of events
in the trace. In addition to display of raw events, it also supports
display of the output of the kernel's ftrace plugins
(/sys/kernel/debug/tracing/available_tracers) such as the function and
function_graph tracers, which are very useful on their own for figuring
out exactly what the kernel does in particular codepaths.

One very nice KernelShark feature is the ability to easily toggle the
individual events or event subsystems of interest; specifying these
manually is usually one of the most unpleasant parts of command-line
tracing; for this reason alone KernelShark is worth looking at, as it
makes the whole tracing experience much more manageable and enjoyable
(and therefore more likely to be used). Additionally, the extensive
support of filtering and searching is very useful. The GUI itself is
also extensible via Python plug-ins. All in all a great tool for
running and viewing traces.

Support for remote targets: The event subsystem and ftrace plugins that
provide the data for trace-cmd/KernelShark are completely implemented
within the kernel; both control and trace stream data retrieval are
accessed via debugfs files. The files that provide the data retrieval
function are accessible via splice, which means that the trace streams
could be easily sent over the network and processed on the host. The
current KernelShark code doesn't do that - currently the UI needs to run
on the target - but that would be an area where Yocto could add some
value - it shouldn't be a huge amount of effort to add that capability.
In the worst case, something along the lines of what OprofileUI does
(start/stop the trace on the target, and send the results back when
done) could also be acceptable as a local stopgap solution.
Agreed, adding off-target viewing/control would be a nice
addition here. Phase (b) perhaps ?
Yeah, I agree - we probably don't have time to do it now...


----

Tool: perf trace scripting support
URL: none, included in the kernel sources
Architectures supported: all, nothing arch-specific

Yocto already includes the 'perf' tool, which is a userspace tool that's
actually bundled as part of the mainline linux kernel source. 'perf
trace' is a subtool of perf that performs system-wide (or per-task)
event tracing and displays the raw trace event data using format strings
associated with each trace event. In fact, the events and event
descriptions used by perf are the same as those used by
trace-cmd/KernelShark to generate its traces (the kernel event
subsystem, see /sys/kernel/debug/tracing/events).

As is the case with KernelShark, the reams of raw trace data provided by
perf trace provide a lot of useful detail, but the question becomes how
to realistically extract useful high-level information from it. You
could sit down and pore through it for trends or specific conditions (no
fun, and it's not really humanly possible with large data sets).
Filtering can be used, but that only goes so far. Realistically, to
make sense of it, it needs to be 'boiled down' somehow into a more
manageable form. The fancy word for that is 'aggregation', which
basically just means 'sticking the important stuff in a hash table'.
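That 'aggregation' idea can be sketched in a few lines of Python (purely illustrative - the event tuples below are made-up sample data, not perf's actual trace format):

```python
from collections import defaultdict

# Simulated raw trace stream: (pid, event, bytes) tuples standing in
# for thousands of raw trace events.
events = [
    (101, "sys_exit_read", 4096),
    (102, "sys_exit_write", 512),
    (101, "sys_exit_read", 8192),
    (103, "sys_exit_write", 1024),
]

# 'Aggregation': stick the important stuff in a hash table -
# here, total bytes per (pid, event) pair.
totals = defaultdict(int)
for pid, name, nbytes in events:
    totals[(pid, name)] += nbytes

for key, nbytes in sorted(totals.items()):
    print(key, nbytes)
```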

The perf trace scripting support embeds scripting language interpreters
into perf to allow perf's internal event dispatch mechanism to call
script handlers directly (script handlers can also call back into perf).
The scripting_ops interface formalizes this interaction and allows any
scripting engine that implements the API to be used as a full-fledged
event-processing language - currently Python and Perl are implemented.

Events are exposed in the scripting interpreter as function calls, where
each param is an event field (in the event description pseudo-file for
the event in the kernel event subsystem). During processing, every
event in the trace stream is converted into a corresponding function
call in the scripting language. At that point, the handler can do
anything it wants to using the available facilities of the scripting
language, such as, for example, aggregating the event data in a hash table.
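A rough sketch of that dispatch model follows; note that real perf trace Python handlers take a longer signature (context, cpu, timestamps, etc.), so the handler shape and field names here are simplified assumptions for illustration:

```python
# Each event type maps to a handler function named after the event;
# the dispatcher calls the matching handler once per event.
read_bytes_by_comm = {}

def syscalls__sys_exit_read(comm, ret):
    # Aggregate successfully read bytes per process name,
    # skipping failed reads (negative return values).
    if ret > 0:
        read_bytes_by_comm[comm] = read_bytes_by_comm.get(comm, 0) + ret

def dispatch(event_name, **fields):
    # Stand-in for perf's internal event dispatch mechanism.
    handler = globals().get(event_name.replace(":", "__"))
    if handler:
        handler(**fields)

# Simulated trace stream.
for name, fields in [
    ("syscalls:sys_exit_read", {"comm": "sshd", "ret": 4096}),
    ("syscalls:sys_exit_read", {"comm": "bash", "ret": -11}),
    ("syscalls:sys_exit_read", {"comm": "sshd", "ret": 1024}),
]:
    dispatch(name, **fields)

print(read_bytes_by_comm)
```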

A starter script with handlers for each event type can be automatically
generated from existing trace data using the 'perf trace -g' command.
This allows for one-off, quick turnaround trace experiments. But
scripts can be 'promoted' to full-fledged 'perf trace' scripts that
essentially become part of perf and can be listed using 'perf trace -l'.
This involves simply writing a couple wrapper shell scripts and putting
them in the right places.

In general, perf trace scripting is a useful tool to have when the
standard set of off-the-shelf tools isn't really enough to analyze a
problem. To take a simple example, using tools like iostat you can get
a general statistical idea of the read/write activity on the system, but
those tools won't tell you which processes are actually responsible for
most of the I/O activity. The 'perf trace rw-by-pid' canned script in
perf trace uses the system-call read/write tracepoints
(sys_enter/exit_read/write) to capture all the reads and writes (and
failed reads/writes) of every process on the system and at the end
displays a detailed per-process summary of the results. That
information can be used to determine which processes are responsible for
the most I/O activity on the system, and in turn to target and drill
down into the detailed read/write activity caused by a specific process,
using for example the rw-by-file canned script, which displays the
per-file read/write activity for a given process.

To give a couple more concrete examples of how this capability can be
useful, here are some things that can only be done with scripting, such
as detecting complex or 'compound' events.

Simple hard-coded filters and triggers can scan data for simple
conditions e.g. someone tried to read /etc/passwd. This kind of thing
should be possible with the current event filtering capabilities even
without scripting support e.g. scan the event stream for events that
satisfy the condition:

event == vfs_open && filename == "/etc/passwd"

(This would tell you that someone tried to open /etc/passwd, but that
in itself isn't very useful - you'd really like to at least know who,
which of course could be accomplished by scripting.)
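A sketch of what the scripted version might look like - the field names (uid, comm, filename) are invented for illustration, not perf's exact field names:

```python
# Scan an event stream for opens of /etc/passwd and record who did it -
# the 'who' part is what a plain filter expression can't give you.
def find_passwd_readers(events):
    culprits = []
    for ev in events:
        if ev["event"] == "vfs_open" and ev["filename"] == "/etc/passwd":
            culprits.append((ev["uid"], ev["comm"]))
    return culprits

# Simulated event stream.
events = [
    {"event": "vfs_open", "filename": "/etc/hosts", "uid": 0, "comm": "nscd"},
    {"event": "vfs_open", "filename": "/etc/passwd", "uid": 1000, "comm": "cat"},
]
print(find_passwd_readers(events))
```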

But a lot of other problems involve pattern matching over multiple
events. One example from a recent lkml posting:

The poster had noticed a certain inefficiency in block I/O data,
where multiple readahead requests resulted in an unnecessarily
wasteful pattern:

- queue first request
- plug queue
- queue second adjacent request
- merge
- unplug, issue, complete

In the case of readahead, latency is extremely important for throughput:
explicitly unplugging after each readahead increased throughput by 68%.
It's interesting to note that older kernels didn't have this problem,
but some unknown commit(s) introduced it.

This is the type of pattern that you would really need scripting support
in order to detect. A simple script to check for this condition and
detect a regression such as this could be quickly written and made
available, and possibly avoid the situation where a problem like this
could go undetected for a couple kernel revisions.
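The kind of multi-event state tracking needed here is straightforward in a script; a toy sketch (with simplified, assumed block-layer event names, not the exact tracepoint names) might look like:

```python
# Flag a multi-event pattern: a request merged while the queue is
# plugged, i.e. the merge completes before the explicit unplug -
# the inefficient readahead sequence described above.
def find_merge_while_plugged(events):
    plugged = False
    hits = []
    for i, ev in enumerate(events):
        if ev == "block_plug":
            plugged = True
        elif ev == "block_unplug":
            plugged = False
        elif ev == "block_rq_merge" and plugged:
            hits.append(i)  # merge happened under a plugged queue
    return hits

# Simulated trace matching the pattern in the lkml posting.
trace = ["block_rq_insert", "block_plug", "block_rq_insert",
         "block_rq_merge", "block_unplug", "block_rq_issue"]
print(find_merge_while_plugged(trace))
```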

Perf and perf trace scripting also support 'live mode' (over the network
if desired), where the trace stream is processed as soon as it's
generated. Getting back to the "/etc/passwd" example - as mentioned,
something an administrator might want would be to monitor accesses to
"/etc/passwd" and see who's trying to access it. With live mode, a
continuously running script could monitor sys_open calls, compare the
opened filename against "/etc/passwd", get the uid and look up username
to find out who's trying to read it, and have the Python script e-mail
the culprit's name to the admin when detected.

Live mode is important for both the small and large targets,
so this is a good addition.


Basically, live mode allows for long-running trace sessions that can
continuously scan for rare conditions. Referring back to the readahead
example, one assumption the poster made was that "merging of a readahead
window with anything other than its own sibling" would be extremely
rare. A long-running script could easily be written to detect this
exact condition and either confirm or refute that assumption, which
would be hard to do without some kind of scripting support.
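In spirit, a live-mode script is just a loop over an unbounded event stream; here is a minimal sketch, with a simulated stream and a stub alert action standing in for the e-mail step described above:

```python
# Process each event as it arrives from a (here simulated) live trace
# pipe, instead of waiting for the session to end.
def live_monitor(stream, alerts):
    for ev in stream:  # in real live mode this loop need never end
        if ev["event"] == "sys_enter_open" and ev["filename"] == "/etc/passwd":
            alerts.append("uid %d opened /etc/passwd" % ev["uid"])

def fake_live_stream():
    # Generator standing in for a live trace stream over the network.
    yield {"event": "sys_enter_open", "filename": "/var/log/syslog", "uid": 0}
    yield {"event": "sys_enter_open", "filename": "/etc/passwd", "uid": 1000}

alerts = []
live_monitor(fake_live_stream(), alerts)
print(alerts)
```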

Perf trace scripting is relatively new, so there aren't yet a lot of
real-world examples - currently there are about 15 canned scripts
available (see 'perf trace -l') including the rw-by-pid and rw-by-file
examples described above.

The main data sources for perf trace scripting are the statically
defined trace events in /sys/kernel/debug/tracing/events. It's also
possible to use the dynamic event sources available from the 'perf
probe' tool, but this is still an area of active integration at the
moment.

Support for remote targets: perf and perf trace scripting 'live-mode'
support allows the trace stream to be piped over the network using e.g.
netcat. Using that mode, the target does nothing but generate the trace
stream and send it over the network to the host, where a live-mode
script can be applied to it. Even so, this is probably not the most
efficient way to transfer trace data - one hope would be that perf would
add support for splice, but that's uncertain at this point.
I'd also suggest that doing a canned power-management script would
be good here. Using the existing tracepoints (and adding our own)
to get a detailed view of C and P states would be a nice demo.
Makes sense, and shouldn't be too much work, but still - phase (b) too?


----

Tool: SystemTap
URL: http://sourceware.org/systemtap/
Architectures supported: x86, x86_64, ppc, ppc64, ia64, s390, arm

SystemTap is also a system-wide tracing tool that allows users to write
scripts that attach handlers to events and perform complex aggregation
and filtering of the event stream. It's been around for a long time and
thus has a lot of canned scripts available, which make use of a set of
general-purpose script-support libraries called 'tapsets' (see the
SystemTap wiki, off of the above link).

The language used to write SystemTap scripts isn't, however, a
general-purpose language like Perl or Python, but rather a C-like
language defined specifically for SystemTap. The reason for that has to
do with the way SystemTap works - SystemTap scripts are executed in the
kernel, which makes general-purpose language runtimes off-limits.
Basically what SystemTap does is translate a user script into an
equivalent C version, which is then compiled into a kernel module.
Inserting the kernel module attaches the C code to specific event
sources in the kernel - whenever an event is hit, the corresponding
event handler is invoked and does whatever it's told to do - usually
this is updating a counter in a hash table or something similar. When
the tracing session exits, the script typically calculates and displays
a summary of the aggregation(s), or whatever the user wants it to do.

In addition to the standard set of event sources (the static kernel
tracepoint events, and dynamic events via kprobes) SystemTap also
supports user space probing if the kernel is built with utrace support.
User space probing can be done either dynamically, or statically if the
application contains static tracepoints. A very interesting aspect of
this is that via dtrace-compatible markers, the existing static dtrace
tracepoints contained in, for example, the Java or Python runtimes can
also be used as event sources (e.g. if they're compiled with
--enable-dtrace). This should allow any Python or Java application to
be much more meaningfully traced and profiled using SystemTap - for
example, with complete userspace support theoretically every detail of
say an http request to a Java web application could be followed, from
the network device driver to the web server through a Java servlet and
back out through the kernel again. Supporting this however, in addition
to having utrace support in the kernel, might also require some
SystemTap-specific patches to the affected applications. Users can also
instrument their own applications using static tracepoints
(http://sourceware.org/systemtap/wiki/AddingUserSpaceProbingToApps).

As mentioned, there are a whole host of scripts available. Examples
include everything from per-process network traffic monitoring,
packet-drop monitoring, per-process disk I/O times, to the same types of
applications described above for 'perf trace scripting'. There are too
many to usefully cover here; see
http://sourceware.org/systemtap/examples/keyword-index.html for a
complete list of the available scripts. Everything in SystemTap is also
very well documented - there are tutorials, handbooks, and a bunch of
useful information on the wiki such as 'War Stories' and deep-dives into
other use cases i.e. there's no shortage of useful info for new (and
old) users. I won't cover any specific examples here - basically all of
the motivations and capabilities described above for 'perf trace
scripting' apply equally well to SystemTap.

Support for remote targets: SystemTap supports a cross-instrumentation
mode, where only the SystemTap run-time needs to be available on the
target. The instrumentation kernel module derived from a myscript.stp,
generated on the host (stap -r kernel_version myscript.stp -m module_name),
is copied over to the target and executed via 'staprun myscript.ko'.

However, apparently host and target must still be the same architecture
for this to work.
Systemtap is the lowest on my list of items to add. Nothing
against systemtap, but the in-kernel and architecture bindings
have always been problematic in an embedded scenario and I've
rarely (never) gotten a strong request for it.
Yeah, I'm kind of afraid of what could turn up once we get to the nuts
and bolts of integrating this. Still, I think it would be worth the
effort.


----

Tool: blktrace
URL: http://linux.die.net/man/8/blktrace
Architectures supported: all, nothing arch-specific

Still the best way to get detailed disk I/O traces, and you can do some
really cool things with it:

http://feedblog.org/2010/04/27/2009/

Support for remote targets: Uses splice/sendfile, so the target can, if
it wants, do nothing but generate the trace data and send it over the
network. blktrace, the data-collection half of the toolset (blkparse is
the parsing half), fully supports this mode and in fact encourages it in
order to avoid perturbing the results by writing trace data on the target.

----

Tool: sysprof
URL: http://www.daimi.au.dk/~sandmann/sysprof/
Architectures supported: all, nothing arch-specific

A nice, simple system-wide profiling UI - it profiles the kernel and all
running userspace applications. It displays functions in one window, and
an expandable tree of callees for the selected function in the other
window, all with hit stats. Clicking on a callee in the callee window
shows callers of that function in a third window.

I don't know if this provides much more than OprofileUI, but the
interface is nice and it's popular in some quarters...
I think it is worth adding.


----

In summary, each of these tools provides a unique set of useful
capabilities that I think would be very nice to have in Yocto. There
are of course overlaps e.g. both SystemTap and trace-cmd provide
function-callgraph tracing, both trace-cmd and perf trace provide
event-subsystem-based tracing, SystemTap and perf trace scripting both
provide different ways of achieving the same kinds of high-level
aggregation goals, while blktrace, SystemTap, and perf trace scripting
all provide different ways of looking at block I/O. But each also has
its own strengths, and does much more than just the overlapping
functionality.
That's ok. perf collides with oprofile, and everything else, so
overlap is no big issue, as long as we control the options and
can make them all co-exist in the kernel.


At some point some of these tools will completely overlap each
other - for example SystemTap and/or perf trace scripting eventually
will probably do everything blktrace does, and will additionally have
the potential to show that information in a larger context e.g. along
with VFS and/or mm data sources. Making things like that happen -
adding value to those tools or providing larger contexts could be a
focus for future Yocto contributions. On the other hand, it may make
sense in v1.0 to spend a small amount of development time to actually
help provide some coherent integration to all these tools and maybe
contribute to something like perfkit (http://audidude.com/?p=504).
There may not be time to do that, but at least the minimum set of tools
for a great user experience should be available, which I think the above
list goes a long way to providing. Comments welcome...
I've also had pings in the past about:

tuna and oscilloscope:
http://www.osadl.org/Single-View.111+M52212cb1379.0.html, but they are
more 'tuning' tools, and I haven't checked activity on them for a while.
Those look like great tools too - they should go in.

Although not tracing/profiling toolkits as such, having either
a nice how-to or a lightweight way to use dynamic tracepoints
with kprobes is a good idea. Plenty of things that we can
do to contribute here as well.
I agree - and I think it would be nice to have a section in the wiki
dedicated to using all the tools we bundle...

As for raw kprobes/jprobes, there seem to be a few nice articles on
kprobes/jprobes from the IBM and Redhat guys, but they may be a little
outdated. There are also the examples in the kernel source /samples and
a detailed doc in /Documentation. But yeah, we should probably have
our own up-to-date and Yocto-specific docs covering this (and other)
topics.

Ensuring that all these work with KGDB/KDB is also key,
since regressions sneak in pretty easily. Debug and trace
are getting closer and should be considered together. In
that same spirit, better kexec/kdump/ftrace_dump_on_oops
testing helps debug/tracing/profiling in the degenerate case.

And finally, having a good story around boottime tracing
and optimization is a key usecase for any of these tools.

We should do a ranking of the complete list (once compiled)
and see what can or can't be done .. since there IS quite a
bit of it here :)
Definitely, we need to do that, regardless of how much of it we can get
in initially - it's unlikely a lot of it will, since there's only a week
in the schedule for it, but if there's extra time at the end...

Thanks,

Tom


Cheers,

Bruce






Tom



_______________________________________________
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: bitbake problems when testing pre-built images

Pedro I. Sanchez <psanchez@...>
 

Thank you Jessica, it worked.

May I suggest updating the wiki? As written, the instructions there do not work, at least for the section "Using Pre-Built Binaries and QEMU".

Thanks again,

--
Pedro

On 10-11-14 04:20 PM, Zhang, Jessica wrote:
Pedro I. Sanchez wrote:
Hello,

I'm starting to play with Yocto and the first thing I want to do is to
run pre-built images to get a feeling for the system. Unfortunately
I'm getting errors when running the poky-qemu command. My host
machine is Ubuntu 10.04.

I'm following the Wiki instructions at
http://www.yoctoproject.org/docs/yocto-quick-start/yocto-project-qs.html.

The section "Using Pre-Built Binaries and QEMU" lists three steps to
follow in order to test pre-built images, steps which I'm executing as
follows after downloading the following arm-target files:

yocto-eglibc-i586-arm-toolchain-sdk-0.9.tar.bz2
zImage-2.6.34-qemuarm-0.9.bin
yocto-image-minimal-qemuarm-0.9.rootfs.tar.bz2
Please download yocto-image-minimal-qemuarm-0.9.rootfs.ext3


and installing the toolchain with:

$ sudo tar xjf yocto-eglibc-i586-arm-toolchain-sdk-0.9.tar.bz2 -C /
$ source /opt/poky/environment-setup-armv5te-poky-linux-gnueabi

The final step is supposed to be to run the poky-qemu command, but I get
the following error:

$ poky-qemu zImage-2.6.34-qemuarm-0.9.bin
yocto-image-minimal-qemuarm-0.9.rootfs.tar
The command should be:
$ poky-qemu qemuarm zImage-2.6.34-qemuarm-0.9.bin yocto-image-minimal-qemuarm-0.9.rootfs.ext3 ext3

Hope this gets you going...

- Jessica


Re: bitbake problems when testing pre-built images

Zhang, Jessica
 

Pedro I. Sanchez wrote:
Hello,

I'm starting to play with Yocto and the first thing I want to do is to
run pre-built images to get a feeling for the system. Unfortunately
I'm getting errors when running the poky-qemu command. My host
machine is Ubuntu 10.04.

I'm following the Wiki instructions at
http://www.yoctoproject.org/docs/yocto-quick-start/yocto-project-qs.html.

The section "Using Pre-Built Binaries and QEMU" lists three steps to
follow in order to test pre-built images, steps which I'm executing as
follows after downloading the following arm-target files:

yocto-eglibc-i586-arm-toolchain-sdk-0.9.tar.bz2
zImage-2.6.34-qemuarm-0.9.bin
yocto-image-minimal-qemuarm-0.9.rootfs.tar.bz2
Please download yocto-image-minimal-qemuarm-0.9.rootfs.ext3


and installing the toolchain with:

$ sudo tar xjf yocto-eglibc-i586-arm-toolchain-sdk-0.9.tar.bz2 -C /
$ source /opt/poky/environment-setup-armv5te-poky-linux-gnueabi

The final step is supposed to be to run the poky-qemu command, but I get
the following error:

$ poky-qemu zImage-2.6.34-qemuarm-0.9.bin
yocto-image-minimal-qemuarm-0.9.rootfs.tar
The command should be:
$ poky-qemu qemuarm zImage-2.6.34-qemuarm-0.9.bin yocto-image-minimal-qemuarm-0.9.rootfs.ext3 ext3

Hope this gets you going...

- Jessica


bitbake problems when testing pre-built images

Pedro I. Sanchez <psanchez@...>
 

Hello,

I'm starting to play with Yocto and the first thing I want to do is to run pre-built images to get a feeling for the system. Unfortunately I'm getting errors when running the poky-qemu command. My host machine is Ubuntu 10.04.

I'm following the Wiki instructions at http://www.yoctoproject.org/docs/yocto-quick-start/yocto-project-qs.html.

The section "Using Pre-Built Binaries and QEMU" lists three steps to follow in order to test pre-built images, steps which I'm executing as follows after downloading the following arm-target files:

yocto-eglibc-i586-arm-toolchain-sdk-0.9.tar.bz2
zImage-2.6.34-qemuarm-0.9.bin
yocto-image-minimal-qemuarm-0.9.rootfs.tar.bz2

and installing the toolchain with:

$ sudo tar xjf yocto-eglibc-i586-arm-toolchain-sdk-0.9.tar.bz2 -C /
$ source /opt/poky/environment-setup-armv5te-poky-linux-gnueabi

The final step is supposed to be to run the poky-qemu command, but I get the following error:

$ poky-qemu zImage-2.6.34-qemuarm-0.9.bin yocto-image-minimal-qemuarm-0.9.rootfs.tar
Set MACHINE to [qemuarm-0] based on kernel [zImage-2.6.34-qemuarm-0.9.bin]
In order for this script to dynamically infer paths
to kernels or filesystem images, you either need
bitbake in your PATH or to source poky-init-build-env
before running this script

Up to this point, either I am missing something or the wiki docs are.

I then tried downloading poky and installing it as follows:

$ wget http://www.yoctoproject.org/downloads/poky/poky-laverne-4.0.tar.bz2
$ tar xjf poky-laverne-4.0.tar.bz2
$ source poky-laverne-4.0/poky-init-build-env poky-4.0-build

But running poky-qemu gives me this:

$ poky-qemu zImage-2.6.34-qemuarm-0.9.bin yocto-image-minimal-qemuarm-0.9.rootfs.tar
Set MACHINE to [qemuarm-0] based on kernel [zImage-2.6.34-qemuarm-0.9.bin]
Note: Unable to determine filesystem extension for yocto-image-minimal-qemuarm-0.9.rootfs.tar
We will use the default FSTYPE for qemuarm-0
Error: Unable to determine default fstype for MACHINE [qemuarm-0]


Any suggestions would be appreciated.


Thanks,

--
Pedro


Re: [PULL] devel/toolchain Recipes upgrades

Kamble, Nitin A <nitin.a.kamble@...>
 

Richard, Saul,
This is the updated pull request. I have changed the commit messages to include more specific information regarding the dropping of patches. I have also verified that these changes work well on all architectures: built the SDK image from scratch, booted the qemu images, and confirmed that basic commands work inside the image.

I have also found issues with the autoconf upgrade commit, which I have dropped from this set.

Pull URL: http://git.pokylinux.org/cgit.cgi/poky-contrib/log/?h=nitin/upgrades

Thanks & Regards,
Nitin

meta/conf/distro/include/distro_tracking_fields.inc | 240
meta/conf/distro/include/poky-default-revisions.inc | 2
meta/conf/distro/include/poky-default.inc | 6
meta/conf/distro/include/poky-fixed-revisions.inc | 4
meta/recipes-core/eglibc/cross-localedef-native_2.12.bb | 4
meta/recipes-core/eglibc/eglibc_2.12.bb | 2
meta/recipes-core/tasks/task-poky-sdk.bb | 5
meta/recipes-devtools/bison/bison_2.4.2.bb | 22
meta/recipes-devtools/bison/bison_2.4.3.bb | 22
meta/recipes-devtools/diffstat/diffstat_1.47.bb | 26
meta/recipes-devtools/diffstat/diffstat_1.54.bb | 22
meta/recipes-devtools/gcc/gcc-4.5.0.inc | 84
meta/recipes-devtools/gcc/gcc-4.5.0/100-uclibc-conf.patch | 37
meta/recipes-devtools/gcc/gcc-4.5.0/103-uclibc-conf-noupstream.patch | 15
meta/recipes-devtools/gcc/gcc-4.5.0/200-uclibc-locale.patch | 2840 ----------
meta/recipes-devtools/gcc/gcc-4.5.0/203-uclibc-locale-no__x.patch | 233
meta/recipes-devtools/gcc/gcc-4.5.0/204-uclibc-locale-wchar_fix.patch | 48
meta/recipes-devtools/gcc/gcc-4.5.0/205-uclibc-locale-update.patch | 519 -
meta/recipes-devtools/gcc/gcc-4.5.0/301-missing-execinfo_h.patch | 13
meta/recipes-devtools/gcc/gcc-4.5.0/302-c99-snprintf.patch | 13
meta/recipes-devtools/gcc/gcc-4.5.0/303-c99-complex-ugly-hack.patch | 14
meta/recipes-devtools/gcc/gcc-4.5.0/304-index_macro.patch | 28
meta/recipes-devtools/gcc/gcc-4.5.0/305-libmudflap-susv3-legacy.patch | 49
meta/recipes-devtools/gcc/gcc-4.5.0/306-libstdc++-namespace.patch | 38
meta/recipes-devtools/gcc/gcc-4.5.0/307-locale_facets.patch | 19
meta/recipes-devtools/gcc/gcc-4.5.0/602-sdk-libstdc++-includes.patch | 20
meta/recipes-devtools/gcc/gcc-4.5.0/64bithack.patch | 33
meta/recipes-devtools/gcc/gcc-4.5.0/740-sh-pr24836.patch | 29
meta/recipes-devtools/gcc/gcc-4.5.0/800-arm-bigendian.patch | 34
meta/recipes-devtools/gcc/gcc-4.5.0/904-flatten-switch-stmt-00.patch | 74
meta/recipes-devtools/gcc/gcc-4.5.0/arm-bswapsi2.patch | 13
meta/recipes-devtools/gcc/gcc-4.5.0/arm-nolibfloat.patch | 24
meta/recipes-devtools/gcc/gcc-4.5.0/arm-softfloat.patch | 16
meta/recipes-devtools/gcc/gcc-4.5.0/arm-unbreak-eabi-armv4t.dpatch | 36
meta/recipes-devtools/gcc/gcc-4.5.0/cache-amnesia.patch | 31
meta/recipes-devtools/gcc/gcc-4.5.0/disable_relax_pic_calls_flag.patch | 44
meta/recipes-devtools/gcc/gcc-4.5.0/fedora/gcc43-c++-builtin-redecl.patch | 114
meta/recipes-devtools/gcc/gcc-4.5.0/fedora/gcc43-cpp-pragma.patch | 284 -
meta/recipes-devtools/gcc/gcc-4.5.0/fedora/gcc43-i386-libgomp.patch | 65
meta/recipes-devtools/gcc/gcc-4.5.0/fedora/gcc43-ia64-libunwind.patch | 550 -
meta/recipes-devtools/gcc/gcc-4.5.0/fedora/gcc43-java-debug-iface-type.patch | 19
meta/recipes-devtools/gcc/gcc-4.5.0/fedora/gcc43-java-nomulti.patch | 48
meta/recipes-devtools/gcc/gcc-4.5.0/fedora/gcc43-libgomp-speedup.patch | 2797 ---------
meta/recipes-devtools/gcc/gcc-4.5.0/fedora/gcc43-ppc32-retaddr.patch | 90
meta/recipes-devtools/gcc/gcc-4.5.0/fedora/gcc43-pr27898.patch | 16
meta/recipes-devtools/gcc/gcc-4.5.0/fedora/gcc43-pr32139.patch | 19
meta/recipes-devtools/gcc/gcc-4.5.0/fedora/gcc43-pr33763.patch | 159
meta/recipes-devtools/gcc/gcc-4.5.0/fedora/gcc43-rh251682.patch | 89
meta/recipes-devtools/gcc/gcc-4.5.0/fedora/gcc43-rh330771.patch | 31
meta/recipes-devtools/gcc/gcc-4.5.0/fedora/gcc43-rh341221.patch | 32
meta/recipes-devtools/gcc/gcc-4.5.0/fortran-cross-compile-hack.patch | 30
meta/recipes-devtools/gcc/gcc-4.5.0/gcc-4.0.2-e300c2c3.patch | 319 -
meta/recipes-devtools/gcc/gcc-4.5.0/gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch | 31
meta/recipes-devtools/gcc/gcc-4.5.0/gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch | 114
meta/recipes-devtools/gcc/gcc-4.5.0/gcc-arm-frename-registers.patch | 25
meta/recipes-devtools/gcc/gcc-4.5.0/gcc-flags-for-build.patch | 178
meta/recipes-devtools/gcc/gcc-4.5.0/gcc-ice-hack.dpatch | 331 -
meta/recipes-devtools/gcc/gcc-4.5.0/gcc-poison-dir-extend.patch | 24
meta/recipes-devtools/gcc/gcc-4.5.0/gcc-poison-parameters.patch | 83
meta/recipes-devtools/gcc/gcc-4.5.0/gcc-poison-system-directories.patch | 201
meta/recipes-devtools/gcc/gcc-4.5.0/gcc-pr43698-arm-rev-instr.patch | 117
meta/recipes-devtools/gcc/gcc-4.5.0/gcc-uclibc-locale-ctype_touplow_t.patch | 67
meta/recipes-devtools/gcc/gcc-4.5.0/gcc_revert_base_version_to_4.5.0.patch | 9
meta/recipes-devtools/gcc/gcc-4.5.0/libstdc++-emit-__cxa_end_cleanup-in-text.patch | 40
meta/recipes-devtools/gcc/gcc-4.5.0/libstdc++-pic.dpatch | 71
meta/recipes-devtools/gcc/gcc-4.5.0/optional_libstdc.patch | 23
meta/recipes-devtools/gcc/gcc-4.5.0/pr30961.dpatch | 179
meta/recipes-devtools/gcc/gcc-4.5.0/pr35942.patch | 38
meta/recipes-devtools/gcc/gcc-4.5.0/zecke-xgcc-cpp.patch | 28
meta/recipes-devtools/gcc/gcc-4.5.1.inc | 81
meta/recipes-devtools/gcc/gcc-4.5.1/100-uclibc-conf.patch | 37
meta/recipes-devtools/gcc/gcc-4.5.1/103-uclibc-conf-noupstream.patch | 15
meta/recipes-devtools/gcc/gcc-4.5.1/200-uclibc-locale.patch | 2840 ++++++++++
meta/recipes-devtools/gcc/gcc-4.5.1/203-uclibc-locale-no__x.patch | 233
meta/recipes-devtools/gcc/gcc-4.5.1/204-uclibc-locale-wchar_fix.patch | 48
meta/recipes-devtools/gcc/gcc-4.5.1/205-uclibc-locale-update.patch | 519 +
meta/recipes-devtools/gcc/gcc-4.5.1/301-missing-execinfo_h.patch | 13
meta/recipes-devtools/gcc/gcc-4.5.1/302-c99-snprintf.patch | 13
meta/recipes-devtools/gcc/gcc-4.5.1/303-c99-complex-ugly-hack.patch | 14
meta/recipes-devtools/gcc/gcc-4.5.1/304-index_macro.patch | 28
meta/recipes-devtools/gcc/gcc-4.5.1/305-libmudflap-susv3-legacy.patch | 49
meta/recipes-devtools/gcc/gcc-4.5.1/306-libstdc++-namespace.patch | 38
meta/recipes-devtools/gcc/gcc-4.5.1/307-locale_facets.patch | 19
meta/recipes-devtools/gcc/gcc-4.5.1/602-sdk-libstdc++-includes.patch | 20
meta/recipes-devtools/gcc/gcc-4.5.1/64bithack.patch | 33
meta/recipes-devtools/gcc/gcc-4.5.1/740-sh-pr24836.patch | 29
meta/recipes-devtools/gcc/gcc-4.5.1/800-arm-bigendian.patch | 34
meta/recipes-devtools/gcc/gcc-4.5.1/904-flatten-switch-stmt-00.patch | 74
meta/recipes-devtools/gcc/gcc-4.5.1/arm-bswapsi2.patch | 13
meta/recipes-devtools/gcc/gcc-4.5.1/arm-nolibfloat.patch | 24
meta/recipes-devtools/gcc/gcc-4.5.1/arm-softfloat.patch | 16
meta/recipes-devtools/gcc/gcc-4.5.1/arm-unbreak-eabi-armv4t.dpatch | 36
meta/recipes-devtools/gcc/gcc-4.5.1/cache-amnesia.patch | 31
meta/recipes-devtools/gcc/gcc-4.5.1/disable_relax_pic_calls_flag.patch | 44
meta/recipes-devtools/gcc/gcc-4.5.1/fedora/gcc43-c++-builtin-redecl.patch | 114
meta/recipes-devtools/gcc/gcc-4.5.1/fedora/gcc43-cpp-pragma.patch | 284 +
meta/recipes-devtools/gcc/gcc-4.5.1/fedora/gcc43-i386-libgomp.patch | 65
meta/recipes-devtools/gcc/gcc-4.5.1/fedora/gcc43-ia64-libunwind.patch | 550 +
meta/recipes-devtools/gcc/gcc-4.5.1/fedora/gcc43-java-debug-iface-type.patch | 19
meta/recipes-devtools/gcc/gcc-4.5.1/fedora/gcc43-java-nomulti.patch | 48
meta/recipes-devtools/gcc/gcc-4.5.1/fedora/gcc43-libgomp-speedup.patch | 2797 +++++++++
meta/recipes-devtools/gcc/gcc-4.5.1/fedora/gcc43-ppc32-retaddr.patch | 90
meta/recipes-devtools/gcc/gcc-4.5.1/fedora/gcc43-pr27898.patch | 16
meta/recipes-devtools/gcc/gcc-4.5.1/fedora/gcc43-pr32139.patch | 19
meta/recipes-devtools/gcc/gcc-4.5.1/fedora/gcc43-pr33763.patch | 159
meta/recipes-devtools/gcc/gcc-4.5.1/fedora/gcc43-rh251682.patch | 89
meta/recipes-devtools/gcc/gcc-4.5.1/fedora/gcc43-rh330771.patch | 31
meta/recipes-devtools/gcc/gcc-4.5.1/fedora/gcc43-rh341221.patch | 32
meta/recipes-devtools/gcc/gcc-4.5.1/fortran-cross-compile-hack.patch | 30
meta/recipes-devtools/gcc/gcc-4.5.1/gcc-4.0.2-e300c2c3.patch | 319 +
meta/recipes-devtools/gcc/gcc-4.5.1/gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch | 31
meta/recipes-devtools/gcc/gcc-4.5.1/gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch | 114
meta/recipes-devtools/gcc/gcc-4.5.1/gcc-arm-frename-registers.patch | 25
meta/recipes-devtools/gcc/gcc-4.5.1/gcc-flags-for-build.patch | 178
meta/recipes-devtools/gcc/gcc-4.5.1/gcc-ice-hack.dpatch | 331 +
meta/recipes-devtools/gcc/gcc-4.5.1/gcc-poison-dir-extend.patch | 24
meta/recipes-devtools/gcc/gcc-4.5.1/gcc-poison-parameters.patch | 83
meta/recipes-devtools/gcc/gcc-4.5.1/gcc-poison-system-directories.patch | 201
meta/recipes-devtools/gcc/gcc-4.5.1/gcc-uclibc-locale-ctype_touplow_t.patch | 67
meta/recipes-devtools/gcc/gcc-4.5.1/libstdc++-emit-__cxa_end_cleanup-in-text.patch | 40
meta/recipes-devtools/gcc/gcc-4.5.1/libstdc++-pic.dpatch | 71
meta/recipes-devtools/gcc/gcc-4.5.1/optional_libstdc.patch | 23
meta/recipes-devtools/gcc/gcc-4.5.1/pr30961.dpatch | 179
meta/recipes-devtools/gcc/gcc-4.5.1/pr35942.patch | 38
meta/recipes-devtools/gcc/gcc-4.5.1/zecke-xgcc-cpp.patch | 28
meta/recipes-devtools/gcc/gcc-cross-canadian_4.5.0.bb | 25
meta/recipes-devtools/gcc/gcc-cross-canadian_4.5.1.bb | 25
meta/recipes-devtools/gcc/gcc-cross-initial_4.5.0.bb | 5
meta/recipes-devtools/gcc/gcc-cross-initial_4.5.1.bb | 5
meta/recipes-devtools/gcc/gcc-cross-intermediate_4.5.0.bb | 4
meta/recipes-devtools/gcc/gcc-cross-intermediate_4.5.1.bb | 4
meta/recipes-devtools/gcc/gcc-cross_4.5.0.bb | 10
meta/recipes-devtools/gcc/gcc-cross_4.5.1.bb | 10
meta/recipes-devtools/gcc/gcc-crosssdk-initial_4.5.0.bb | 4
meta/recipes-devtools/gcc/gcc-crosssdk-initial_4.5.1.bb | 4
meta/recipes-devtools/gcc/gcc-crosssdk-intermediate_4.5.0.bb | 4
meta/recipes-devtools/gcc/gcc-crosssdk-intermediate_4.5.1.bb | 4
meta/recipes-devtools/gcc/gcc-crosssdk_4.5.0.bb | 4
meta/recipes-devtools/gcc/gcc-crosssdk_4.5.1.bb | 4
meta/recipes-devtools/gcc/gcc-runtime_4.5.0.bb | 11
meta/recipes-devtools/gcc/gcc-runtime_4.5.1.bb | 11
meta/recipes-devtools/gcc/gcc_4.5.0.bb | 10
meta/recipes-devtools/gcc/gcc_4.5.1.bb | 10
meta/recipes-devtools/gdb/gdb-cross-canadian_7.1.bb | 10
meta/recipes-devtools/gdb/gdb-cross-canadian_7.2.bb | 10
meta/recipes-devtools/gdb/gdb-cross_7.1.bb | 6
meta/recipes-devtools/gdb/gdb-cross_7.2.bb | 6
meta/recipes-devtools/gdb/gdb.inc | 3
meta/recipes-devtools/gdb/gdb/fix_for_build_error_internal_error_call.patch | 18
meta/recipes-devtools/gdb/gdb_7.1.bb | 3
meta/recipes-devtools/gdb/gdb_7.2.bb | 3
meta/recipes-devtools/libtool/libtool-cross_2.2.10.bb | 34
meta/recipes-devtools/libtool/libtool-cross_2.4.bb | 34
meta/recipes-devtools/libtool/libtool-native_2.2.10.bb | 22
meta/recipes-devtools/libtool/libtool-native_2.4.bb | 22
meta/recipes-devtools/libtool/libtool-nativesdk_2.2.10.bb | 27
meta/recipes-devtools/libtool/libtool-nativesdk_2.4.bb | 27
meta/recipes-devtools/libtool/libtool/cross_compile.patch | 27
meta/recipes-devtools/libtool/libtool/prefix.patch | 31
meta/recipes-devtools/libtool/libtool_2.2.10.bb | 33
meta/recipes-devtools/libtool/libtool_2.4.bb | 33
meta/recipes-devtools/make/make_3.81.bb | 3
meta/recipes-devtools/make/make_3.82.bb | 3
meta/recipes-devtools/python/python-gst_0.10.18.bb | 18
meta/recipes-devtools/python/python-gst_0.10.19.bb | 18
meta/recipes-devtools/python/python-native-2.6.5/00-fix-bindir-libdir-for-cross.patch | 20
meta/recipes-devtools/python/python-native-2.6.5/04-default-is-optimized.patch | 18
meta/recipes-devtools/python/python-native-2.6.5/10-distutils-fix-swig-parameter.patch | 16
meta/recipes-devtools/python/python-native-2.6.5/11-distutils-never-modify-shebang-line.patch | 18
meta/recipes-devtools/python/python-native-2.6.5/12-distutils-prefix-is-inside-staging-area.patch | 60
meta/recipes-devtools/python/python-native-2.6.5/debug.patch | 27
meta/recipes-devtools/python/python-native-2.6.5/nohostlibs.patch | 53
meta/recipes-devtools/python/python-native-2.6.5/sitecustomize.py | 45
meta/recipes-devtools/python/python-native/04-default-is-optimized.patch | 18
meta/recipes-devtools/python/python-native/10-distutils-fix-swig-parameter.patch | 16
meta/recipes-devtools/python/python-native/11-distutils-never-modify-shebang-line.patch | 18
meta/recipes-devtools/python/python-native/12-distutils-prefix-is-inside-staging-area.patch | 60
meta/recipes-devtools/python/python-native/debug.patch | 27
meta/recipes-devtools/python/python-native/nohostlibs.patch | 53
meta/recipes-devtools/python/python-native/sitecustomize.py | 45
meta/recipes-devtools/python/python-native_2.6.5.bb | 30
meta/recipes-devtools/python/python-native_2.6.6.bb | 30
meta/recipes-devtools/python/python-scons-native_1.3.0.bb | 6
meta/recipes-devtools/python/python-scons-native_2.0.1.bb | 6
meta/recipes-devtools/python/python-scons_1.3.0.bb | 12
meta/recipes-devtools/python/python-scons_2.0.1.bb | 12
meta/recipes-devtools/python/python/00-fix-bindir-libdir-for-cross.patch | 20
meta/recipes-devtools/python/python/01-use-proper-tools-for-cross-build.patch | 32
meta/recipes-devtools/python/python/04-default-is-optimized.patch | 28
meta/recipes-devtools/python/python/06-avoid_usr_lib_termcap_path_in_linking.patch | 14
meta/recipes-devtools/python/python/99-ignore-optimization-flag.patch | 20
meta/recipes-devtools/python/python_2.6.5.bb | 124
meta/recipes-devtools/python/python_2.6.6.bb | 122
meta/recipes-extended/lsof/lsof_4.83.bb | 41
meta/recipes-extended/lsof/lsof_4.84.bb | 41
meta/recipes-kernel/linux-libc-headers/linux-libc-headers/hayes-gone.patch | 46
meta/recipes-kernel/linux-libc-headers/linux-libc-headers/ppc_glibc_build_fix.patch | 25
meta/recipes-kernel/linux-libc-headers/linux-libc-headers_2.6.34.bb | 49
meta/recipes-kernel/linux-libc-headers/linux-libc-headers_2.6.36.bb | 47
200 files changed, 11323 insertions(+), 11697 deletions(-)

Nitin A Kamble (15):
bison upgrade from 2.4.2. to 2.4.3
Make upgrade from 3.81 to 3.82
diffstat: upgrade from 1.47 to 1.54
gdb upgrade from 7.1 to 7.2
python-gst: upgrade from 0.10.18 to 0.10.19
python, python-native upgrade from 2.6.5 to 2.6.6
lsof: upgrade from 4.83 to 4.84
linux-libc-headers: upgrade from 2.6.34 to 2.6.36
eglibc: update svn checkout commit
task-poky-sdk: add tcl package in the sdk image
distro_tracking: update as per current state of devel/toolchain recipes
poky-fixed-versions: update version for python recipe
gcc: upgrade from 4.5.0 to 4.5.1
poky-default.inc: update gcc & linux-libc-headers versions
libtool upgrade from 2.2.10 to 2.4

Pull URL: http://git.pokylinux.org/cgit.cgi/poky-contrib/log/?h=nitin/upgrades
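For context on the last two commits above: bumping default versions in poky-default.inc typically amounts to updating preferred-version settings so BitBake selects the newly added recipes. A minimal sketch of the kind of lines involved (the exact variable names used in poky-default.inc at the time may differ):

```
# Hedged sketch: pin the recipe versions BitBake should prefer.
# Variable names follow the PREFERRED_VERSION_<recipe> convention
# and are illustrative, not copied from the actual commit.
PREFERRED_VERSION_gcc ?= "4.5.1"
PREFERRED_VERSION_gcc-cross ?= "4.5.1"
PREFERRED_VERSION_linux-libc-headers ?= "2.6.36"
```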


Re: World Package List

Frans Meulenbroeks <fransmeulenbroeks@...>
 

2010/11/6 Saul Wold <sgw@linux.intel.com>:

Please find enclosed the annotated list of recipes that are not currently
built as part of any task or image. There are some basic options for each
of these recipes: do nothing, move the recipe to a layer, or add the
recipe to an existing image. The notes indicate which option we chose for
each recipe.

The major change for most recipes is moving them to meta-extras,
meta-demoapps or meta-m2. meta-extras is a location that will ultimately
be deprecated; meta-demoapps is a staging area for further review and for
recipes that are more vertically oriented.

Notes such as LSB or gmae SDK indicate recipes that would move into those
tasks or images as appropriate. There are further notes which will be
used for future discussion.

Please review the attached list and provide feedback.

I suggest keeping at least libexif, libsndfile1, libsamplerate0,
taglib and the gupnp* packages (and probably also alsa-tools).
The lib packages are used by mythtv, a recipe that I maintain in OE,
and eventually I would like to contribute that recipe to yocto.
gupnp* is probably required for my media server project (which could
be an additional overlay).
As for the location of the files, I have no opinion.

Frans


[PATCH 0/1] remove some invalid information from distro_tracking_fields.inc

Mei Lei <lei.mei@...>
 

Some packages were removed from the world build, but their information still exists in distro_tracking_fields.inc.
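The stale entries are straightforward to spot mechanically: every tracking variable ends in _pn-<package>, so any tracked name with no corresponding recipe is a leftover. A minimal sketch of such a check, assuming a plain-text copy of distro_tracking_fields.inc and a list of known recipe names (the function and sample data below are hypothetical illustrations, not part of this series):

```python
import re

def stale_tracking_entries(tracking_text, known_recipes):
    """Return tracked pn- names that no longer match any known recipe.

    Hypothetical helper for illustration only; the actual cleanup in
    this series was done against the real world-build package list.
    """
    tracked = set(re.findall(r'^RECIPE_\w+_pn-([\w.+-]+)\s*=',
                             tracking_text, re.MULTILINE))
    return tracked - set(known_recipes)

sample = '''RECIPE_STATUS_pn-monit = "red"
RECIPE_STATUS_pn-procps = "red"
'''
# 'monit' was dropped from world, so it shows up as stale here.
print(sorted(stale_tracking_entries(sample, ["procps"])))  # ['monit']
```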

Pull URL: git://git.pokylinux.org/poky-contrib.git
Branch: lmei3/distrotracking
Browse: http://git.pokylinux.org/cgit.cgi/poky-contrib/log/?h=lmei3/distrotracking

Thanks,
Mei Lei <lei.mei@intel.com>
---


Mei Lei (1):
distro_tracking_fields.inc: remove some packages information from it

.../conf/distro/include/distro_tracking_fields.inc | 591 --------------------
1 files changed, 0 insertions(+), 591 deletions(-)


[PATCH 0/3] upgrade recipes cracklib,sqlite3

Yu Ke <ke.yu@...>
 

This series also includes a distro tracking field update.

Pull URL: git://git.pokylinux.org/poky-contrib.git
Branch: kyu3/update-11-13
Browse: http://git.pokylinux.org/cgit.cgi/poky-contrib/log/?h=kyu3/update-11-13

Thanks,
Yu Ke <ke.yu@intel.com>
---


Yu Ke (3):
cracklib: upgrade from 2.8.16 to 2.8.18
sqlite: upgrade from 3.6.23 to 3.7.3
distro tracking: update the info for upgraded recipes

.../conf/distro/include/distro_tracking_fields.inc | 42 ++++++++++----------
.../{cracklib_2.8.16.bb => cracklib_2.8.18.bb} | 0
.../{sqlite3_3.6.23.1.bb => sqlite3_3.7.3.bb} | 2 +-
3 files changed, 22 insertions(+), 22 deletions(-)
rename meta/recipes-extended/cracklib/{cracklib_2.8.16.bb => cracklib_2.8.18.bb} (100%)
rename meta/recipes-support/sqlite/{sqlite3_3.6.23.1.bb => sqlite3_3.7.3.bb} (67%)


[PATCH 1/1] distro_tracking_fields.inc: remove some packages information from it

Mei Lei <lei.mei@...>
 

This commit fixes [BUGID #514].

Some packages were removed from the world build, but their information still exists in distro_tracking_fields.inc.

Signed-off-by: Mei Lei <lei.mei@intel.com>
---
.../conf/distro/include/distro_tracking_fields.inc | 591 --------------------
1 files changed, 0 insertions(+), 591 deletions(-)

diff --git a/meta/conf/distro/include/distro_tracking_fields.inc b/meta/conf/distro/include/distro_tracking_fields.inc
index 4a00ce7..b84d85c 100644
--- a/meta/conf/distro/include/distro_tracking_fields.inc
+++ b/meta/conf/distro/include/distro_tracking_fields.inc
@@ -72,10 +72,6 @@ RECIPE_STATUS_pn-minicom = "red"
RECIPE_LATEST_VERIONS_pn-minicom = "check"
RECIPE_MAINTAINER_pn-minicom = "Dongxiao Xu <dongxiao.xu@intel.com"

-RECIPE_STATUS_pn-moblin-proto = "red"
-RECIPE_LATEST_VERIONS_pn-moblin-proto = "check"
-RECIPE_MAINTAINER_pn-moblin-proto = "Dexuan Cui <dexuan.cui@intel.com>"
-
RECIPE_STATUS_pn-patch = "red"
RECIPE_LATEST_VERIONS_pn-patch = "check"
RECIPE_MAINTAINER_pn-patch = "Nitin A Kamble <nitin.a.kamble@intel.com>"
@@ -864,16 +860,6 @@ RECIPE_LATEST_RELEASE_DATE_pn-fakeroot = "11/2009"
RECIPE_COMMENTS_pn-fakeroot = ""
RECIPE_MAINTAINER_pn-fakeroot = "Kevin Tian <kevin.tian@intel.com>"

-RECIPE_STATUS_pn-prism-firmware = "green"
-DEPENDENCY_CHECK_pn-prism-firmware = "not done"
-RECIPE_LATEST_VERSION_pn-prism-firmware = "1.8.4"
-RECIPE_NO_UPDATE_REASON_pn-prism-firmware = "1.7.4 and 1.8.4 are both released at same time, while 1.8.4 is known with connection issue"
-RECIPE_INTEL_SECTION_pn-prism-firmware = "base utils"
-RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-prism-firmware = "same time"
-RECIPE_LATEST_RELEASE_DATE_pn-prism-firmware = "07/2005"
-RECIPE_COMMENTS_pn-prism-firmware = "1.7.4 and 1.8.4 are both released at same time. perhaps then we don't need upgrade this one?"
-RECIPE_MAINTAINER_pn-prism-firmware = "Kevin Tian <kevin.tian@intel.com>"
-
RECIPE_STATUS_pn-base-files = "green"
DEPENDENCY_CHECK_pn-base-files = "not done"
RECIPE_LATEST_VERSION_pn-base-files = "3.0.14"
@@ -1023,9 +1009,6 @@ RECIPE_STATUS_pn-coreutils = "red"
RECIPE_MAINTAINER_pn-coreutils = "Kevin Tian <kevin.tian@intel.com>"
RECIPE_LATEST_VERSION_pn-coreutils = "8.5"

-
-
-
RECIPE_STATUS_pn-libuser = "red"
RECIPE_MAINTAINER_pn-libuser = "Edwin Zhai <edwin.zhai@intel.com>"
DEPENDENCY_CHECK_pn-libuser = "not done"
@@ -1076,7 +1059,6 @@ RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-pax = "n/a"
RECIPE_LATEST_RELEASE_DATE_pn-pax = "08/2005"
RECIPE_COMMENTS_pn-pax = ""

-
RECIPE_STATUS_pn-watchdog = "green"
RECIPE_MAINTAINER_pn-watchdog = "Dexuan Cui <dexuan.cui@intel.com>"
DEPENDENCY_CHECK_pn-watchdog = "not done"
@@ -1088,10 +1070,6 @@ RECIPE_LATEST_RELEASE_DATE_pn-watchdog = "06/2010"
RECIPE_COMMENTS_pn-watchdog = ""
DISTRO_PN_ALIAS_pn-watchdog = "Debian=watchdog Ubuntu=watchdog Mandriva=watchdog"

-RECIPE_STATUS_pn-json-glib = "red"
-RECIPE_LATEST_VERSION_pn-json-glib = "0.7.6+d5922b42604c09ba7ebcb0adc1566d0a33a99808"
-RECIPE_MAINTAINER_pn-json-glib = "Kevin Tian <kevin.tian@intel.com>"
-
RECIPE_STATUS_pn-libatomics-ops = "red"
DISTRO_PN_ALIAS_pn-libatomics-ops = "Meego=libatomic-ops Debian=libatomic-ops Ubuntu=libatomic-ops OpenSuSE=libatomic-ops Mandriva=libatomic-ops"
RECIPE_LATEST_VERSION_pn-libatomics-ops = "7.2alpha4"
@@ -1105,11 +1083,6 @@ RECIPE_STATUS_pn-libgdbus = "red"
RECIPE_LATEST_VERSION_pn-libgdbus = "Release-0.2"
RECIPE_MAINTAINER_pn-libgdbus = "Kevin Tian <kevin.tian@intel.com>"

-RECIPE_STATUS_pn-libjana = "red"
-DISTRO_PN_ALIAS_pn-libjana = "Fedora=jana OpenSuSE=jana"
-RECIPE_LATEST_VERSION_pn-libjana = "acd72f232c72f8692dcacdd885eea756778042c2"
-RECIPE_MAINTAINER_pn-libjana = "Kevin Tian <kevin.tian@intel.com>"
-
RECIPE_STATUS_pn-apr = "red"
RECIPE_LATEST_VERSION_pn-apr = "1.4.2"
RECIPE_MAINTAINER_pn-apr = "Kevin Tian <kevin.tian@intel.com>"
@@ -1143,22 +1116,10 @@ RECIPE_STATUS_pn-insserv = "red"
RECIPE_LATEST_VERSION_pn-insserv = "1.14.0"
RECIPE_MAINTAINER_pn-insserv = "Kevin Tian <kevin.tian@intel.com>"

-RECIPE_STATUS_pn-moblin-app-installer = "red"
-RECIPE_LATEST_VERSION_pn-moblin-app-installer = "n/a"
-RECIPE_MAINTAINER_pn-moblin-app-installer = "Kevin Tian <kevin.tian@intel.com>"
-
RECIPE_STATUS_pn-modutils-collateral = "red"
RECIPE_LATEST_VERSION_pn-modutils-collateral = "1.0"
RECIPE_MAINTAINER_pn-modutils-collateral = "Kevin Tian <kevin.tian@intel.com>"

-RECIPE_STATUS_pn-monit = "red"
-RECIPE_LATEST_VERSION_pn-monit = "5.1.1"
-RECIPE_MAINTAINER_pn-monit = "Kevin Tian <kevin.tian@intel.com>"
-
-RECIPE_STATUS_pn-mozilla-headless-services = "red"
-RECIPE_LATEST_VERSION_pn-mozilla-headless-services = "0.1+git0+c679b95dd8c2807b70186d233b3833861a499315"
-RECIPE_MAINTAINER_pn-mozilla-headless-services = "Kevin Tian <kevin.tian@intel.com>"
-
RECIPE_STATUS_pn-mtd-utils = "red"
RECIPE_LATEST_VERSION_pn-mtd-utils = "1.3.1+a67747b7a314e685085b62e8239442ea54959dbc"
RECIPE_MAINTAINER_pn-mtd-utils = "Kevin Tian <kevin.tian@intel.com>"
@@ -1167,18 +1128,10 @@ RECIPE_STATUS_pn-ohm = "red"
RECIPE_LATEST_VERSION_pn-ohm = "3cb3496846508929b9f2d05683ec93523de7947c"
RECIPE_MAINTAINER_pn-ohm = "Kevin Tian <kevin.tian@intel.com>"

-RECIPE_STATUS_pn-pam = "red"
-RECIPE_LATEST_VERSION_pn-pam = "1.1.1"
-RECIPE_MAINTAINER_pn-pam = "Kevin Tian <kevin.tian@intel.com>"
-
RECIPE_STATUS_pn-procps = "red"
RECIPE_LATEST_VERSION_pn-procps = "3.2.8"
RECIPE_MAINTAINER_pn-procps = "Kevin Tian <kevin.tian@intel.com>"

-RECIPE_STATUS_pn-gmime = "red"
-RECIPE_LATEST_VERSION_pn-gmime = "2.5.3"
-RECIPE_MAINTAINER_pn-gmime = "Kevin Tian <kevin.tian@intel.com>"
-
RECIPE_STATUS_pn-iso-codes = "red"
RECIPE_LATEST_VERSION_pn-iso-codes = "3.16"
RECIPE_MAINTAINER_pn-iso-codes = "Kevin Tian <kevin.tian@intel.com>"
@@ -1187,10 +1140,6 @@ RECIPE_STATUS_pn-libgsf = "red"
RECIPE_LATEST_VERSION_pn-libgsf = "1.14.18"
RECIPE_MAINTAINER_pn-libgsf = "Kevin Tian <kevin.tian@intel.com>"

-RECIPE_STATUS_pn-libnotify = "red"
-RECIPE_LATEST_VERSION_pn-libnotify = "0.4.5"
-RECIPE_MAINTAINER_pn-libnotify = "Kevin Tian <kevin.tian@intel.com>"
-
RECIPE_STATUS_pn-beecrypt = "red"
RECIPE_LATEST_VERSION_pn-beecrypt = "4.2.1"
RECIPE_MAINTAINER_pn-beecrypt = "Kevin Tian <kevin.tian@intel.com>"
@@ -1213,10 +1162,6 @@ RECIPE_LATEST_VERSION_pn-console-tools = "0.3.2"
RECIPE_MAINTAINER_pn-console-tools = "Kevin Tian <kevin.tian@intel.com>"
DISTRO_PN_ALIAS_pn-console-tools = "Debian=console-tools Ubuntu=console-tools"

-RECIPE_STATUS_pn-dalston = "red"
-RECIPE_LATEST_VERSION_pn-dalston = "?"
-RECIPE_MAINTAINER_pn-dalston = "Kevin Tian <kevin.tian@intel.com>"
-
RECIPE_STATUS_pn-devicekit = "red"
DISTRO_PN_ALIAS_pn-devicekit = "Fedora=DeviceKit Mandriva=devicekit"
RECIPE_LATEST_VERSION_pn-devicekit = "3"
@@ -1261,21 +1206,11 @@ DISTRO_PN_ALIAS_pn-libfribidi = "OpenSuSE=fribidi Ubuntu=fribidi Mandriva=fribid
RECIPE_LATEST_VERSION_pn-libfribidi = "0.19.2"
RECIPE_MAINTAINER_pn-libfribidi = "Kevin Tian <kevin.tian@intel.com>"

-RECIPE_STATUS_pn-cdrtools = "red"
-RECIPE_LATEST_VERSION_pn-cdrtools = "2.01"
-RECIPE_MAINTAINER_pn-libfribidi = "Kevin Tian <kevin.tian@intel.com>"
-DISTRO_PN_ALIAS_pn-cdrtools-native = "upstream=http://cdrecord.berlios.de/private/cdrecord.html"
-
RECIPE_STATUS_pn-shasum-native = "red"
RECIPE_LATEST_VERSION_pn-shasum-native = "1.0"
DISTRO_PN_ALIAS_pn-cdrtools-native = "Poky"
RECIPE_MAINTAINER_pn-shasum-native = "Kevin Tian <kevin.tian@intel.com>"

-RECIPE_STATUS_pn-tzcode = "red"
-RECIPE_LATEST_VERSION_pn-tzcode = "2009r"
-RECIPE_MAINTAINER_pn-tzcode = "Kevin Tian <kevin.tian@intel.com>"
-DISTRO_PN_ALIAS_pn-tzcode-native = "Ubuntu=libc-bin Debian=libc-bin Fedora=glibc-common"
-
RECIPE_STATUS_pn-rpcbind = "green"
DEPENDENCY_CHECK_pn-rpcbind = "not done"
RECIPE_LATEST_VERSION_pn-rpcbind = "0.2.0"
@@ -1329,20 +1264,6 @@ RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-grub = "n/a"
RECIPE_LATEST_RELEASE_DATE_pn-grub = "12/2005"
RECIPE_COMMENTS_pn-grub = ""

-RECIPE_STATUS_pn-yum = "yellow" # unused patch files
-RECIPE_MAINTAINER_pn-yum = "Qing He <qing.he@intel.com>"
-DEPENDENCY_CHECK_pn-yum = "not done"
-RECIPE_LATEST_VERSION_pn-yum = "3.2.27"
-RECIPE_PATCH_pn-yum+paths = "allow custom lib path"
-RECIPE_PATCH_pn-yum+paths2 = "fix python lib path"
-RECIPE_PATCH_pn-yum+yum-install-recommends.py = "poky package system"
-RECIPE_PATCH_pn-yum+extract-postinst.awk = "poky package system"
-RECIPE_PATCH_pn-yum+98_yum = "used by populate-volatile"
-RECIPE_INTEL_SECTION_pn-yum = "base utils"
-RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-yum = "1 month"
-RECIPE_LATEST_RELEASE_DATE_pn-yum = "03/2010"
-RECIPE_COMMENTS_pn-yum = "most files are under GPLv2+, but yum/sqlutils.py is under GPLv2"
-
RECIPE_STATUS_pn-update-rc.d = "green"
RECIPE_MAINTAINER_pn-update-rc.d = "Qing He <qing.he@intel.com>"
DEPENDENCY_CHECK_pn-update-rc.d = "not done"
@@ -1656,16 +1577,6 @@ RECIPE_LATEST_RELEASE_DATE_pn-tinylogin = "n/a"
RECIPE_COMMENTS_pn-tinylogin = "merged into busybox"
DISTRO_PN_ALIAS_pn-tinylogin = "Debian=busybox Ubuntu=busybox Mandriva=busybox"

-RECIPE_STATUS_pn-spectrum-fw = "green" # need upgrade
-DISTRO_PN_ALIAS_pn-spectrum-fw = ""
-RECIPE_MAINTAINER_pn-spectrum-fw = "Ke Yu <ke.yu@intel.com>"
-DEPENDENCY_CHECK_pn-spectrum-fw = "not done"
-RECIPE_LATEST_VERSION_pn-spectrum-fw = "1.0"
-RECIPE_INTEL_SECTION_pn-spectrum-fw = "base utils"
-RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-spectrum-fw = "n/a"
-RECIPE_LATEST_RELEASE_DATE_pn-spectrum-fw = "n/a"
-RECIPE_COMMENTS_pn-spectrum-fw = "difficult to identify official site and current status"
-
RECIPE_STATUS_pn-rpm = "red" # need upgrade
RECIPE_MAINTAINER_pn-rpm = "Joshua Lock <joshua.lock@intel.com>"
DEPENDENCY_CHECK_pn-rpm = "not done"
@@ -1764,14 +1675,6 @@ RECIPE_STATUS_pn-openobex = "red"
RECIPE_LATEST_VERSION_pn-openobex = "1.5"
RECIPE_MAINTAINER_pn-openobex = "Qing He <qing.he@intel.com>"

-RECIPE_STATUS_pn-sessreg = "red"
-RECIPE_LATEST_VERSION_pn-sessreg = "1.0.5"
-RECIPE_MAINTAINER_pn-sessreg = "Qing He <qing.he@intel.com>"
-
-RECIPE_STATUS_pn-packagekit = "red"
-RECIPE_LATEST_VERSION_pn-packagekit = "0.6.3"
-RECIPE_MAINTAINER_pn-packagekit = "Qing He <qing.he@intel.com>"
-
RECIPE_STATUS_pn-sed = "green"
RECIPE_MAINTAINER_pn-sed = "Qing He <qing.he@intel.com>"
DEPENDENCY_CHECK_pn-sed = "not done"
@@ -1822,18 +1725,10 @@ RECIPE_LATEST_VERSION_pn-ubootchart = "0.0+r12"
RECIPE_MAINTAINER_pn-ubootchart = "Qing He <qing.he@intel.com>"
DISTRO_PN_ALIAS_pn-ubootchart = "OSPDT upstream=http://code.google.com/p/ubootchart"

-RECIPE_STATUS_pn-tracker = "red"
-RECIPE_LATEST_VERSION_pn-tracker = "0.9.6"
-RECIPE_MAINTAINER_pn-tracker = "Qing He <qing.he@intel.com>"
-
RECIPE_STATUS_pn-tiff = "red"
RECIPE_LATEST_VERSION_pn-tiff = "3.9.2"
RECIPE_MAINTAINER_pn-tiff = "Qing He <qing.he@intel.com>"

-RECIPE_STATUS_pn-syncevolution = "red"
-RECIPE_LATEST_VERSION_pn-syncevolution = "0.9.1"
-RECIPE_MAINTAINER_pn-syncevolution = "Qing He <qing.he@intel.com>"
-
RECIPE_STATUS_pn-libexif = "red"
RECIPE_LATEST_VERSION_pn-libexif = "0.6.19"
RECIPE_MAINTAINER_pn-libexif = "Qing He <qing.he@intel.com>"
@@ -1850,14 +1745,6 @@ RECIPE_STATUS_pn-wbxml2 = "red"
RECIPE_LATEST_VERSION_pn-wbxml2 = "0.10.8"
RECIPE_MAINTAINER_pn-wbxml2 = "Qing He <qing.he@intel.com>"

-RECIPE_STATUS_pn-sreadahead = "red"
-RECIPE_LATEST_VERSION_pn-sreadahead = "1.0"
-RECIPE_MAINTAINER_pn-sreadahead = "Qing He <qing.he@intel.com>"
-
-RECIPE_STATUS_pn-smart = "red"
-RECIPE_LATEST_VERSION_pn-smart = "1.3"
-RECIPE_MAINTAINER_pn-smart = "Qing He <qing.he@intel.com>"
-
RECIPE_STATUS_pn-qemu-config = "red"
RECIPE_LATEST_VERSION_pn-qemu-config = "1.0"
DISTRO_PN_ALIAS_pn-qemu-config = "OpenedHand"
@@ -1922,15 +1809,6 @@ RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-cronie = "3 months"
RECIPE_LATEST_RELEASE_DATE_pn-cronie = "02/2010"
RECIPE_MAINTAINER_pn-cronie = "Dexuan Cui <dexuan.cui@intel.com>"

-RECIPE_STATUS_pn-crontabs = "red"
-DEPENDENCY_CHECK_pn-crontabs = "not done"
-RECIPE_LATEST_VERSION_pn-crontabs = "n/a"
-RECIPE_INTEL_SECTION_pn-crontabs = "base"
-RECIPE_NO_OF_PATCHES_pn-crontabs = "0"
-RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-crontabs = "n/a"
-RECIPE_LATEST_RELEASE_DATE_pn-crontabs = "n/a"
-RECIPE_MAINTAINER_pn-crontabs = "Scott Garman <scott.a.garman@intel.com>"
-
RECIPE_STATUS_pn-grep = "green"
DEPENDENCY_CHECK_pn-grep = "not done"
RECIPE_LATEST_VERSION_pn-grep = "2.7"
@@ -2025,23 +1903,6 @@ RECIPE_LATEST_RELEASE_DATE_pn-libproxy="2010/05/19"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-libproxy="3 months"
RECIPE_MAINTAINER_pn-libproxy = "Dongxiao Xu <dongxiao.xu@intel.com>"

-RECIPE_STATUS_pn-networkmanager="red"
-DISTRO_PN_ALIAS_pn-networkmanager = "Fedora=NetworkManager OpenSuSE=NetworkManager Ubuntu=network-manager Debian=network-manager"
-RECIPE_LATEST_VERSION_pn-networkmanager="0.8"
-RECIPE_NO_UPDATE_REASON_pn-networkmanager="poky will use connman instead"
-RECIPE_NO_OF_PATCHES_pn-networkmanager="4"
-RECIPE_LATEST_RELEASE_DATE_pn-networkmanager="2010/04/18"
-RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-networkmanager="3 months"
-RECIPE_MAINTAINER_pn-networkmanager = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
-RECIPE_STATUS_pn-networkmanager-applet="red"
-RECIPE_LATEST_VERSION_pn-networkmanager-applet="n/a"
-RECIPE_NO_UPDATE_REASON_pn-networkmanager-applet="poky will use connman instead"
-RECIPE_NO_OF_PATCHES_pn-networkmanager-applet="4"
-RECIPE_LATEST_RELEASE_DATE_pn-networkmanager-applet="n/a"
-RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-networkmanager-applet="n/a"
-RECIPE_MAINTAINER_pn-networkmanager-applet = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
RECIPE_STATUS_pn-connman = "green"
DISTRO_PN_ALIAS_pn-connman = "Meego=connman"
RECIPE_LATEST_VERSION_pn-connman = "0.56"
@@ -2058,7 +1919,6 @@ RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-connman-gnome="n/a"
RECIPE_MAINTAINER_pn-connman-gnome = "Dongxiao Xu <dongxiao.xu@intel.com>"
DISTRO_PN_ALIAS_pn-connman-gnome = "Intel"

-
RECIPE_STATUS_pn-empathy = "red"
RECIPE_LATEST_VERSION_pn-empathy = "2.31.2"
RECIPE_MAINTAINER_pn-empathy = "Dongxiao Xu <dongxiao.xu@intel.com>"
@@ -2087,10 +1947,6 @@ RECIPE_STATUS_pn-libopensync-plugin-syncml = "red"
RECIPE_LATEST_VERSION_pn-libopensync-plugin-syncml = "0.39"
RECIPE_MAINTAINER_pn-libopensync-plugin-syncml = "Dongxiao Xu <dongxiao.xu@intel.com>"

-RECIPE_STATUS_pn-mobile-broadband-provider-info = "red"
-RECIPE_LATEST_VERSION_pn-mobile-broadband-provider-info = "084f664306398abf0df88cb82add5a577fc4ce72"
-RECIPE_MAINTAINER_pn-mobile-broadband-provider-info = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
RECIPE_STATUS_pn-ofono = "red"
RECIPE_LATEST_VERSION_pn-ofono = "0.21"
RECIPE_MAINTAINER_pn-ofono = "Dongxiao Xu <dongxiao.xu@intel.com>"
@@ -2127,19 +1983,11 @@ RECIPE_STATUS_pn-ppp-dialin = "red"
RECIPE_LATEST_VERSION_pn-ppp-dialin = "check"
RECIPE_MAINTAINER_pn-ppp-dialin = "Dongxiao Xu <dongxiao.xu@intel.com>"

-RECIPE_STATUS_pn-samba = "red"
-RECIPE_LATEST_VERSION_pn-samba = "3.5.3"
-RECIPE_MAINTAINER_pn-samba = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
RECIPE_STATUS_pn-libgsmd = "red"
RECIPE_LATEST_VERSION_pn-libgsmd = "check"
DISTRO_PN_ALIAS_pn-libgsmd = "Fedora=gsm Ubuntu=libgsm Debian=libgsm Opensuse=libgsm"
RECIPE_MAINTAINER_pn-libgsmd = "Dongxiao Xu <dongxiao.xu@intel.com>"

-RECIPE_STATUS_pn-gnet = "red"
-RECIPE_LATEST_VERSION_pn-gnet = "2.0.8"
-RECIPE_MAINTAINER_pn-gnet = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
RECIPE_STATUS_pn-zeroconf = "red"
RECIPE_LATEST_VERSION_pn-zeroconf = "0.9"
DISTRO_PN_ALIAS_pn-zeroconf = "OSPDT upstream=http://www.progsoc.org/~wildfire/zeroconf/"
@@ -2149,10 +1997,6 @@ RECIPE_STATUS_pn-gypsy = "red"
RECIPE_LATEST_VERSION_pn-gypsy = "0.7"
RECIPE_MAINTAINER_pn-gypsy = "Dongxiao Xu <dongxiao.xu@intel.com>"

-RECIPE_STATUS_pn-librest = "red"
-RECIPE_LATEST_VERSION_pn-librest = "0.6"
-RECIPE_MAINTAINER_pn-librest = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
RECIPE_STATUS_pn-libopensync-plugin-file = "red"
RECIPE_LATEST_VERSION_pn-libopensync-plugin-file = "0.39"
RECIPE_MAINTAINER_pn-libopensync-plugin-file = "Dongxiao Xu <dongxiao.xu@intel.com>"
@@ -2161,27 +2005,6 @@ RECIPE_STATUS_pn-libopensync-plugin-vformat = "red"
RECIPE_LATEST_VERSION_pn-libopensync-plugin-vformat = "0.39"
RECIPE_MAINTAINER_pn-libopensync-plugin-vformat = "Dongxiao Xu <dongxiao.xu@intel.com>"

-RECIPE_STATUS_pn-libsynthesis = "red"
-RECIPE_LATEST_VERSION_pn-libsynthesis = "b682e6be01f2a0ee97e034d03320059f779978a3"
-RECIPE_MAINTAINER_pn-libsynthesis = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
-RECIPE_STATUS_pn-twitter-glib = "red"
-RECIPE_LATEST_VERSION_pn-twitter-glib = "0.9.8"
-RECIPE_MAINTAINER_pn-twitter-glib = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
-RECIPE_STATUS_pn-libsocialweb = "red"
-RECIPE_LATEST_VERSION_pn-libsocialweb = "n/a"
-RECIPE_MAINTAINER_pn-libsocialweb = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
-RECIPE_STATUS_pn-libccss = "red"
-DISTRO_PN_ALIAS_pn-libccss = "OpenSuSE=ccss"
-RECIPE_LATEST_VERSION_pn-libccss = "aaed6b2b1a206b7a0c55a786b980bdec21004a93"
-RECIPE_MAINTAINER_pn-libccss = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
-RECIPE_STATUS_pn-bisho = "red"
-RECIPE_LATEST_VERSION_pn-bisho = "n/a"
-RECIPE_MAINTAINER_pn-bisho = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
RECIPE_STATUS_pn-libsyncml = "red"
RECIPE_LATEST_VERSION_pn-libsyncml = "0.5.4"
RECIPE_MAINTAINER_pn-libsyncml = "Dongxiao Xu <dongxiao.xu@intel.com>"
@@ -2194,13 +2017,6 @@ RECIPE_STATUS_pn-libetpan = "red"
RECIPE_LATEST_VERSION_pn-libetpan = "1.0"
RECIPE_MAINTAINER_pn-libetpan = "Dongxiao Xu <dongxiao.xu@intel.com>"

-RECIPE_STATUS_pn-bickley = "red"
-RECIPE_LATEST_VERSION_pn-bickley = "0.4.4"
-RECIPE_MAINTAINER_pn-bickley = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
-RECIPE_STATUS_pn-mojito = "red"
-RECIPE_LATEST_VERSION_pn-mojito = "n/a"
-RECIPE_MAINTAINER_pn-mojito = "Dongxiao Xu <dongxiao.xu@intel.com>"
RECIPE_STATUS_pn-hostap-utils="yellow" # patch investigation needed; hostap-utils.inc is only included by one bb file.
RECIPE_LATEST_VERSION_pn-hostap-utils="0.4.7"
RECIPE_NO_OF_PATCHES_pn-hostap-utils="1"
@@ -2294,34 +2110,6 @@ RECIPE_LATEST_RELEASE_DATE_pn-enchant="2010/04/01"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-enchant="10 months"
RECIPE_MAINTAINER_pn-enchant = "Dongxiao Xu <dongxiao.xu@intel.com>"

-RECIPE_STATUS_pn-task-moblin = "red"
-RECIPE_LATEST_VERSION_pn-task-moblin = "1.0"
-RECIPE_MAINTAINER_pn-task-moblin = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
-RECIPE_STATUS_pn-task-moblin-apps-x11-pimlico = "red"
-RECIPE_LATEST_VERSION_pn-task-moblin-apps-x11-pimlico = "1.0"
-RECIPE_MAINTAINER_pn-task-moblin-apps-x11-pimlico = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
-RECIPE_STATUS_pn-task-moblin-boot = "red"
-RECIPE_LATEST_VERSION_pn-task-moblin-boot = "1.0"
-RECIPE_MAINTAINER_pn-task-moblin-boot = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
-RECIPE_STATUS_pn-task-moblin-sdk = "red"
-RECIPE_LATEST_VERSION_pn-task-moblin-sdk = "1.0"
-RECIPE_MAINTAINER_pn-task-moblin-sdk = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
-RECIPE_STATUS_pn-task-moblin-standalone-sdk-target = "red"
-RECIPE_LATEST_VERSION_pn-task-moblin-standalone-sdk-target = "1.0"
-RECIPE_MAINTAINER_pn-task-moblin-standalone-sdk-target = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
-RECIPE_STATUS_pn-task-moblin-tools = "red"
-RECIPE_LATEST_VERSION_pn-task-moblin-tools = "1.0"
-RECIPE_MAINTAINER_pn-task-moblin-tools = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
-RECIPE_STATUS_pn-task-moblin-x11-netbook = "red"
-RECIPE_LATEST_VERSION_pn-task-moblin-x11-netbook = "1.0"
-RECIPE_MAINTAINER_pn-task-moblin-x11-netbook = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
RECIPE_STATUS_pn-task-poky-sdk = "red"
RECIPE_LATEST_VERSION_pn-task-poky-sdk = "1.0"
RECIPE_MAINTAINER_pn-task-poky-sdk = "Dongxiao Xu <dongxiao.xu@intel.com>"
@@ -2338,22 +2126,10 @@ RECIPE_STATUS_pn-task-poky-standalone-sdk-target = "red"
RECIPE_LATEST_VERSION_pn-task-poky-standalone-sdk-target = "1.0"
RECIPE_MAINTAINER_pn-task-poky-standalone-sdk-target = "Dongxiao Xu <dongxiao.xu@intel.com>"

-RECIPE_STATUS_pn-task-poky-x11-netbook = "red"
-RECIPE_LATEST_VERSION_pn-task-poky-x11-netbook = "1.0"
-RECIPE_MAINTAINER_pn-task-poky-x11-netbook = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
-RECIPE_STATUS_pn-xerces-c = "red"
-RECIPE_LATEST_VERSION_pn-xerces-c = "3.1.1"
-RECIPE_MAINTAINER_pn-xerces-c = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
RECIPE_STATUS_pn-xournal = "red"
RECIPE_LATEST_VERSION_pn-xournal = "0.4.5"
RECIPE_MAINTAINER_pn-xournal = "Dongxiao Xu <dongxiao.xu@intel.com>"

-RECIPE_STATUS_pn-carrick = "red"
-RECIPE_LATEST_VERSION_pn-carrick = "1.0"
-RECIPE_MAINTAINER_pn-carrick = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
RECIPE_STATUS_pn-wv = "red"
RECIPE_LATEST_VERSION_pn-wv = "1.2.1"
RECIPE_MAINTAINER_pn-wv = "Dongxiao Xu <dongxiao.xu@intel.com>"
@@ -2406,18 +2182,10 @@ RECIPE_STATUS_pn-task-poky = "green"
RECIPE_LATEST_VERSION_pn-task-poky = "1.0"
RECIPE_MAINTAINER_pn-task-poky = "Dongxiao Xu <dongxiao.xu@intel.com>"

-RECIPE_STATUS_pn-task-poky-minimal = "green"
-RECIPE_LATEST_VERSION_pn-task-poky-minimal = "1.0"
-RECIPE_MAINTAINER_pn-task-poky-minimal = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
RECIPE_STATUS_pn-task-poky-basic = "green"
RECIPE_LATEST_VERSION_pn-task-poky-basic = "1.0"
RECIPE_MAINTAINER_pn-task-poky-basic = "Dongxiao Xu <dongxiao.xu@intel.com>"

-RECIPE_STATUS_pn-task-poky-sato = "green"
-RECIPE_LATEST_VERSION_pn-task-poky-sato = "1.0"
-RECIPE_MAINTAINER_pn-task-poky-sato = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
RECIPE_STATUS_pn-task-poky-lsb = "green"
RECIPE_LATEST_VERSION_pn-task-poky-lsb = "1.0"
RECIPE_MAINTAINER_pn-task-poky-lsb = "Dongxiao Xu <dongxiao.xu@intel.com>"
@@ -2442,10 +2210,6 @@ RECIPE_STATUS_pn-poky-image-sdk-live = "green"
RECIPE_LATEST_VERSION_pn-poky-image-sdk-live = "1.0"
RECIPE_MAINTAINER_pn-poky-image-sdk-live = "Dongxiao Xu <dongxiao.xu@intel.com>"

-RECIPE_STATUS_pn-poky-moblin-proto = "red"
-RECIPE_LATEST_VERSION_pn-poky-moblin-proto = "check"
-RECIPE_MAINTAINER_pn-poky-moblin-proto = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
RECIPE_STATUS_pn-poky-image-sdk-directdisk = "green"
RECIPE_LATEST_VERSION_pn-poky-image-sdk-directdisk = "1.0"
RECIPE_MAINTAINER_pn-poky-image-sdk-directdisk = "Dongxiao Xu <dongxiao.xu@intel.com>"
@@ -2631,14 +2395,6 @@ DISTRO_PN_ALIAS_pn-gst-ffmpeg = "Mandriva=gstreamer0.10-ffmpeg Debian=gstreamer0
RECIPE_LATEST_VERSION_pn-gst-ffmpeg = "0.10.10"
RECIPE_MAINTAINER_pn-gst-ffmpeg = "Dongxiao Xu <dongxiao.xu@intel.com>"

-RECIPE_STATUS_pn-moblin-sound-theme = "red"
-RECIPE_LATEST_VERSION_pn-moblin-sound-theme = "0.3"
-RECIPE_MAINTAINER_pn-moblin-sound-theme = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
-RECIPE_STATUS_pn-librds = "red"
-RECIPE_LATEST_VERSION_pn-librds = "0.0.1"
-RECIPE_MAINTAINER_pn-librds = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
RECIPE_STATUS_pn-alsa-oss = "red"
RECIPE_LATEST_VERSION_pn-alsa-oss = "1.0.17"
RECIPE_MAINTAINER_pn-alsa-oss = "Dongxiao Xu <dongxiao.xu@intel.com>"
@@ -2660,10 +2416,6 @@ RECIPE_STATUS_pn-libnice = "red"
RECIPE_LATEST_VERSION_pn-libnice = "0.0.12"
RECIPE_MAINTAINER_pn-libnice = "Dongxiao Xu <dongxiao.xu@intel.com>"

-RECIPE_STATUS_pn-bognor-regis = "red"
-RECIPE_LATEST_VERSION_pn-bognor-regis = "0.5.2-2"
-RECIPE_MAINTAINER_pn-bognor-regis = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
RECIPE_STATUS_pn-gst-fluendo-mp3 = "red"
RECIPE_LATEST_VERSION_pn-gst-fluendo-mp3 = "0.10.10"
RECIPE_MAINTAINER_pn-gst-fluendo-mp3 = "Dongxiao Xu <dongxiao.xu@intel.com>"
@@ -2674,10 +2426,6 @@ RECIPE_LATEST_VERSION_pn-gst-fluendo-mpegdemux = "0.10.23"
RECIPE_MAINTAINER_pn-gst-fluendo-mpegdemux = "Dongxiao Xu <dongxiao.xu@intel.com>"
DISTRO_PN_ALIAS_pn-gst-fluendo-mpegdemux = "Ubuntu=gstreamer0.10-fluendo-mpegdemux Debian=gstreamer0.10-fluendo-mpegdemux"

-RECIPE_STATUS_pn-hornsey = "red"
-RECIPE_LATEST_VERSION_pn-hornsey = "1.5.1"
-RECIPE_MAINTAINER_pn-hornsey = "Dongxiao Xu <dongxiao.xu@intel.com>"
-
RECIPE_STATUS_pn-libomxil = "red"
RECIPE_LATEST_VERSION_pn-libomxil = "0.9.2.1"
RECIPE_MAINTAINER_pn-libomxil = "Dongxiao Xu <dongxiao.xu@intel.com>"
@@ -2798,8 +2546,6 @@ RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-binutils="1 months"
RECIPE_LATEST_RELEASE_DATE_pn-binutils="2010/05/28"
RECIPE_MAINTAINER_pn-binutils = "Nitin A Kamble <nitin.a.kamble@intel.com>"

-DISTRO_PN_ALIAS_pn-ldconfig-native = "Ubuntu=libc-bin Fedora=glibc"
-
RECIPE_STATUS_pn-gcc="red" # recipe building is failing
RECIPE_LATEST_VERSION_pn-gcc="4.5.0"
RECIPE_NO_OF_PATCHES_pn-gcc="8"
@@ -2920,13 +2666,6 @@ RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-postinsts="6 months"
RECIPE_LATEST_RELEASE_DATE_pn-postinsts="2008/05/20"
RECIPE_MAINTAINER_pn-postinsts = "Nitin A Kamble <nitin.a.kamble@intel.com>"

-RECIPE_STATUS_pn-staging-linkage="green" # no code
-RECIPE_LATEST_VERSION_pn-staging-linkage="1.0"
-RECIPE_NO_OF_PATCHES_pn-staging-linkage="0"
-RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-staging-linkage="2 months"
-RECIPE_LATEST_RELEASE_DATE_pn-staging-linkage="2009/11/19"
-RECIPE_MAINTAINER_pn-staging-linkage = "Nitin A Kamble <nitin.a.kamble@intel.com>"
-
RECIPE_STATUS_pn-nasm="green" # upgraded
RECIPE_LATEST_VERSION_pn-nasm="2.07"
RECIPE_NO_OF_PATCHES_pn-nasm="0"
@@ -2969,13 +2708,6 @@ RECIPE_LATEST_RELEASE_DATE_pn-python-imaging="2009/11/15"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-python-imaging="37 months"
RECIPE_MAINTAINER_pn-python-imaging = "Nitin A Kamble <nitin.a.kamble@intel.com>"

-RECIPE_STATUS_pn-python-iniparse="green" # upgraded
-RECIPE_LATEST_VERSION_pn-python-iniparse="0.3.2"
-RECIPE_NO_OF_PATCHES_pn-python-iniparse="0"
-RECIPE_LATEST_RELEASE_DATE_pn-python-iniparse="2010/04/17"
-RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-python-iniparse="13 months"
-RECIPE_MAINTAINER_pn-python-iniparse = "Nitin A Kamble <nitin.a.kamble@intel.com>"
-
RECIPE_STATUS_pn-python-pycurl="green" # already at the latest release
RECIPE_LATEST_VERSION_pn-python-pycurl="7.19.0"
RECIPE_NO_OF_PATCHES_pn-python-pycurl="1"
@@ -3015,13 +2747,6 @@ RECIPE_LATEST_RELEASE_DATE_pn-python-scons="2010/06/06"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-python-scons="3 months"
RECIPE_MAINTAINER_pn-python-scons = "Nitin A Kamble <nitin.a.kamble@intel.com>"

-RECIPE_STATUS_pn-python-urlgrabber="green" # already @ the latest version
-RECIPE_LATEST_VERSION_pn-python-urlgrabber="3.9.1"
-RECIPE_NO_OF_PATCHES_pn-python-urlgrabber="2"
-RECIPE_LATEST_RELEASE_DATE_pn-python-urlgrabber="2009/09/25"
-RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-python-urlgrabber="2 months"
-RECIPE_MAINTAINER_pn-python-urlgrabber = "Nitin A Kamble <nitin.a.kamble@intel.com>"
-
RECIPE_STATUS_pn-python="green" # upgraded
DISTRO_PN_ALIAS_pn-python-gst = "OpenSuSE=python-gstreamer Ubuntu=gst0.10-python Debian=gst0.10-python"
RECIPE_LATEST_VERSION_pn-python="2.6.5"
@@ -3030,13 +2755,6 @@ RECIPE_LATEST_RELEASE_DATE_pn-python="2010/03/18"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-python="5 months"
RECIPE_MAINTAINER_pn-python = "Nitin A Kamble <nitin.a.kamble@intel.com>"

-RECIPE_STATUS_pn-yum-metadata-parser="green" # upgraded
-RECIPE_LATEST_VERSION_pn-yum-metadata-parser="1.1.4"
-RECIPE_NO_OF_PATCHES_pn-yum-metadata-parser="0"
-RECIPE_LATEST_RELEASE_DATE_pn-yum-metadata-parser="2010/01/07"
-RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-yum-metadata-parser="5 months"
-RECIPE_MAINTAINER_pn-yum-metadata-parser = "Nitin A Kamble <nitin.a.kamble@intel.com>"
-
RECIPE_STATUS_pn-quilt="green" # upgraded
RECIPE_LATEST_VERSION_pn-quilt="0.48"
RECIPE_NO_OF_PATCHES_pn-quilt="3"
@@ -3051,20 +2769,6 @@ RECIPE_LATEST_RELEASE_DATE_pn-tcl="2008/11/29"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-tcl="3 months"
RECIPE_MAINTAINER_pn-tcl = "Nitin A Kamble <nitin.a.kamble@intel.com>"

-RECIPE_STATUS_pn-unifdef="green" # poky local source files
-RECIPE_LATEST_VERSION_pn-unifdef="2.6.18+git"
-RECIPE_NO_OF_PATCHES_pn-unifdef="1"
-RECIPE_LATEST_RELEASE_DATE_pn-unifdef="2009/06/03"
-RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-unifdef="27 months"
-RECIPE_MAINTAINER_pn-unifdef = "Nitin A Kamble <nitin.a.kamble@intel.com>"
-
-RECIPE_STATUS_pn-qmake2-cross="green" # upgraded
-RECIPE_LATEST_VERSION_pn-qmake2-cross="2.10a"
-RECIPE_NO_OF_PATCHES_pn-qmake2-cross="1"
-RECIPE_LATEST_RELEASE_DATE_pn-qmake2-cross="2010/06/02"
-RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-qmake2-cross="4 months"
-RECIPE_MAINTAINER_pn-qmake2-cross = "Nitin A Kamble <nitin.a.kamble@intel.com>"
-
RECIPE_STATUS_pn-gnu-config="green" # upgraded
RECIPE_LATEST_VERSION_pn-gnu-config="0.1+cvs20080123"
RECIPE_NO_OF_PATCHES_pn-gnu-config="4"
@@ -3119,10 +2823,6 @@ RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-m4="6 month"
RECIPE_MAINTAINER_pn-m4="Nitin A Kamble <nitin.a.kamble@intel.com>"
RECIPE_COMMENTS_pn-m4= "Dongxiao Xu <dongxiao.xu@intel.com> will own GPLv2 m4"

-#
-# distro tracking fields for X11 apps(high level)
-#
-
RECIPE_STATUS_pn-owl-video = "green" # no update needed
DEPENDENCY_CHECK_pn-owl-video = "not done"
RECIPE_LATEST_VERSION_pn-owl-video = "0.0+svnr394"
@@ -3174,7 +2874,6 @@ RECIPE_LATEST_RELEASE_DATE_pn-oh-puzzles = "04/2008"
RECIPE_COMMENTS_pn-oh-puzzles = ""
RECIPE_MAINTAINER_pn-oh-puzzles = "Zhai Edwin <edwin.zhai@intel.com>"

-
RECIPE_STATUS_pn-gnome-terminal = "red"
RECIPE_LATEST_VERSION_pn-gnome-terminal = "2.31.3"
RECIPE_MAINTAINER_pn-gnome-terminal = "Edwin Zhai <edwin.zhai@intel.com>"
@@ -3185,30 +2884,6 @@ RECIPE_LATEST_VERSION_pn-puzzles = "svn_8956"
RECIPE_MAINTAINER_pn-puzzles = "Edwin Zhai <edwin.zhai@intel.com>"
DISTRO_PN_ALIAS_pn-puzzles = "Debian=sgt-puzzles Fedora=puzzles"

-RECIPE_STATUS_pn-xdg-user-dirs = "red"
-RECIPE_LATEST_VERSION_pn-xdg-user-dirs = "0.12"
-RECIPE_MAINTAINER_pn-xdg-user-dirs = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xterm = "red"
-RECIPE_LATEST_VERSION_pn-xterm = "n/a"
-RECIPE_MAINTAINER_pn-xterm = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-iceauth = "red"
-RECIPE_LATEST_VERSION_pn-iceauth = "1.0.3"
-RECIPE_MAINTAINER_pn-iceauth = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-luit = "red"
-RECIPE_LATEST_VERSION_pn-luit = "1.0.5"
-RECIPE_MAINTAINER_pn-luit = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-oclock = "red"
-RECIPE_LATEST_VERSION_pn-oclock = "n/a"
-RECIPE_MAINTAINER_pn-oclock = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-rgb = "red"
-RECIPE_LATEST_VERSION_pn-rgb = "1.0.3"
-RECIPE_MAINTAINER_pn-rgb = "Edwin Zhai <edwin.zhai@intel.com>"
-
RECIPE_STATUS_pn-x11perf = "red"
RECIPE_LATEST_VERSION_pn-x11perf = "1.5.1"
RECIPE_MAINTAINER_pn-x11perf = "Edwin Zhai <edwin.zhai@intel.com>"
@@ -3218,34 +2893,6 @@ RECIPE_STATUS_pn-xbacklight = "red"
RECIPE_LATEST_VERSION_pn-xbacklight = "1.1.1"
RECIPE_MAINTAINER_pn-xbacklight = "Edwin Zhai <edwin.zhai@intel.com>"

-RECIPE_STATUS_pn-xbiff = "red"
-RECIPE_LATEST_VERSION_pn-xbiff = "1.0.2"
-RECIPE_MAINTAINER_pn-xbiff = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xclipboard = "red"
-RECIPE_LATEST_VERSION_pn-xclipboard = "1.1.0"
-RECIPE_MAINTAINER_pn-xclipboard = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xclock = "red"
-RECIPE_LATEST_VERSION_pn-xclock = "n/a"
-RECIPE_MAINTAINER_pn-xclock = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xcmsdb = "red"
-RECIPE_LATEST_VERSION_pn-xcmsdb = "1.0.2"
-RECIPE_MAINTAINER_pn-xcmsdb = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xconsole = "red"
-RECIPE_LATEST_VERSION_pn-xconsole = "1.0.3"
-RECIPE_MAINTAINER_pn-xconsole = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xcursorgen = "red"
-RECIPE_LATEST_VERSION_pn-xcursorgen = "1.0.3"
-RECIPE_MAINTAINER_pn-xcursorgen = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xdriinfo = "red"
-RECIPE_LATEST_VERSION_pn-xdriinfo = "1.0.3"
-RECIPE_MAINTAINER_pn-xdriinfo = "Edwin Zhai <edwin.zhai@intel.com>"
-
RECIPE_STATUS_pn-xev = "red"
RECIPE_LATEST_VERSION_pn-xev = "1.0.4"
RECIPE_MAINTAINER_pn-xev = "Edwin Zhai <edwin.zhai@intel.com>"
@@ -3256,96 +2903,16 @@ RECIPE_LATEST_VERSION_pn-xeyes = "1.1.0"
RECIPE_MAINTAINER_pn-xeyes = "Edwin Zhai <edwin.zhai@intel.com>"
DISTRO_PN_ALIAS_pn-xeyes = "Ubuntu=x11-apps Fedora=xorg-x11-apps"

-RECIPE_STATUS_pn-xfd = "red"
-RECIPE_LATEST_VERSION_pn-xfd = "1.0.1"
-RECIPE_MAINTAINER_pn-xfd = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xfontsel = "red"
-RECIPE_LATEST_VERSION_pn-xfontsel = "1.0.2"
-RECIPE_MAINTAINER_pn-xfontsel = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xgamma = "red"
-RECIPE_LATEST_VERSION_pn-xgamma = "1.0.3"
-RECIPE_MAINTAINER_pn-xgamma = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xkbevd = "red"
-RECIPE_LATEST_VERSION_pn-xkbevd = "1.1.0"
-RECIPE_MAINTAINER_pn-xkbevd = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xkbprint = "red"
-RECIPE_LATEST_VERSION_pn-xkbprint = "1.0.2"
-RECIPE_MAINTAINER_pn-xkbprint = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xkbutils = "red"
-RECIPE_LATEST_VERSION_pn-xkbutils = "1.0.2"
-RECIPE_MAINTAINER_pn-xkbutils = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xkill = "red"
-RECIPE_LATEST_VERSION_pn-xkill = "1.0.2"
-RECIPE_MAINTAINER_pn-xkill = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xload = "red"
-RECIPE_LATEST_VERSION_pn-xload = "1.0.2"
-RECIPE_MAINTAINER_pn-xload = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xlsatoms = "red"
-RECIPE_LATEST_VERSION_pn-xlsatoms = "1.1.0"
-RECIPE_MAINTAINER_pn-xlsatoms = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xlsclients = "red"
-RECIPE_LATEST_VERSION_pn-xlsclients = "1.1.0"
-RECIPE_MAINTAINER_pn-xlsclients = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xlsfonts = "red"
-RECIPE_LATEST_VERSION_pn-xlsfonts = "1.0.2"
-RECIPE_MAINTAINER_pn-xlsfonts = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xmag = "red"
-RECIPE_LATEST_VERSION_pn-xmag = "1.0.3"
-RECIPE_MAINTAINER_pn-xmag = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xmessage = "red"
-RECIPE_LATEST_VERSION_pn-xmessage = "1.0.3"
-RECIPE_MAINTAINER_pn-xmessage = "Edwin Zhai <edwin.zhai@intel.com>"
-
RECIPE_STATUS_pn-xrdb = "red"
RECIPE_LATEST_VERSION_pn-xrdb = "1.0.6"
RECIPE_MAINTAINER_pn-xrdb = "Edwin Zhai <edwin.zhai@intel.com>"
DISTRO_PN_ALIAS_pn-xrdb = "Ubuntu=x11-xserver-utils Fedora=xorg-x11-server-utils"

-RECIPE_STATUS_pn-xrefresh = "red"
-RECIPE_LATEST_VERSION_pn-xrefresh = "1.0.3"
-RECIPE_MAINTAINER_pn-xrefresh = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xsetroot = "red"
-RECIPE_LATEST_VERSION_pn-xsetroot = "1.0.3"
-RECIPE_MAINTAINER_pn-xsetroot = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xstdcmap = "red"
-RECIPE_LATEST_VERSION_pn-xstdcmap = "1.0.1"
-RECIPE_MAINTAINER_pn-xstdcmap = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xtrap = "red"
-RECIPE_LATEST_VERSION_pn-xtrap = "1.0.2"
-RECIPE_MAINTAINER_pn-xtrap = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xvidtune = "red"
-RECIPE_LATEST_VERSION_pn-xvidtune = "1.0.2"
-RECIPE_MAINTAINER_pn-xvidtune = "Edwin Zhai <edwin.zhai@intel.com>"
-
RECIPE_STATUS_pn-xvinfo = "red"
RECIPE_LATEST_VERSION_pn-xvinfo = "1.1.0"
RECIPE_MAINTAINER_pn-xvinfo = "Edwin Zhai <edwin.zhai@intel.com>"
DISTRO_PN_ALIAS_pn-xvinfo = "Fedora=xorg-x11-utils Ubuntu=x11-utils"

-RECIPE_STATUS_pn-xwd = "red"
-RECIPE_LATEST_VERSION_pn-xwd = "1.0.3"
-RECIPE_MAINTAINER_pn-xwd = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-xwud = "red"
-RECIPE_LATEST_VERSION_pn-xwud = "1.0.2"
-RECIPE_MAINTAINER_pn-xwud = "Edwin Zhai <edwin.zhai@intel.com>"
-
RECIPE_STATUS_pn-claws-plugin-gtkhtml2-viewer = "red"
DISTRO_PN_ALIAS_pn-claws-plugin-gtkhtml2-viewer = "Fedora=claws-mail-plugins OpenSuSE=claws-mail-extra-plugins Debian=claws-mail-extra-plugins"
RECIPE_LATEST_VERSION_pn-claws-plugin-gtkhtml2-viewer = "0.27"
@@ -3371,11 +2938,6 @@ RECIPE_LATEST_VERSION_pn-lndir = "n/a"
RECIPE_MAINTAINER_pn-lndir = "Edwin Zhai <edwin.zhai@intel.com>"
DISTRO_PN_ALIAS_pn-lndir = "Mandriva=lndir Ubuntu=xutils-dev Fedora=imake"

-RECIPE_STATUS_pn-mozilla-headless = "red"
-DISTRO_PN_ALIAS_pn-mozilla-headless = ""
-RECIPE_LATEST_VERSION_pn-mozilla-headless = "tip_hg_555cbc23"
-RECIPE_MAINTAINER_pn-mozilla-headless = "Edwin Zhai <edwin.zhai@intel.com>"
-
RECIPE_STATUS_pn-kf = "red"
RECIPE_LATEST_VERSION_pn-kf = "n/a"
RECIPE_MAINTAINER_pn-kf = "Edwin Zhai <edwin.zhai@intel.com>"
@@ -3390,10 +2952,6 @@ RECIPE_LATEST_VERSION_pn-x11vnc = "0.9.11_dev"
RECIPE_MAINTAINER_pn-x11vnc = "Edwin Zhai <edwin.zhai@intel.com>"
DISTRO_PN_ALIAS_pn-x11vnc = "Fedora=x11vnc Ubuntu=x11vnc"

-#
-# distro tracking fields for X11 apps packages
-#
-
RECIPE_STATUS_pn-mkfontdir="green" # no update needed
RECIPE_LATEST_VERSION_pn-mkfontdir="1.0.5"
RECIPE_NO_OF_PATCHES_pn-mkfontdir="0"
@@ -3543,18 +3101,6 @@ RECIPE_INTEL_SECTION_pn-qt4-tools-native = "graphic app"
RECIPE_MAINTAINER_pn-qt4-tools-native = "Yu Ke <ke.yu@intel.com>"
DISTRO_PN_ALIAS_pn-qt4-tools-native = "Mandriva=libqt4-devel Ubuntu=libqt4-dev"

-ECIPE_STATUS_pn-quicky = "green" # no update needed
-RECIPE_LATEST_VERSION_pn-quicky = "0.4"
-RECIPE_NO_OF_PATCHES_pn-quicky = "0"
-RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-quicky = "n/a"
-RECIPE_LATEST_RELEASE_DATE_pn-quicky = "2008/06/18"
-RECIPE_INTEL_SECTION_pn-quicky = "graphic app"
-RECIPE_MAINTAINER_pn-quicky = "Yu Ke <ke.yu@intel.com>"
-DISTRO_PN_ALIAS_pn-quicky = "OSPDT"
-#
-# distro tracking filed for X core packages
-#
-
RECIPE_STATUS_pn-bigreqsproto="green" # no update needed
DISTRO_PN_ALIAS_pn-bigreqsproto = "Meego=xorg-x11-proto-bigreqsproto"
RECIPE_LATEST_VERSION_pn-bigreqsproto="1.1.0"
@@ -3734,10 +3280,6 @@ RECIPE_LATEST_RELEASE_DATE_pn-xserver-xf86-dri-lite="2010/07/01"
RECIPE_INTEL_SECTION_pn-xserver-xf86-dri-lite="graphic core"
RECIPE_MAINTAINER_pn-xserver-xf86-dri-lite="Yu Ke <ke.yu@intel.com>"

-RECIPE_STATUS_pn-xbitmaps = "red"
-RECIPE_LATEST_VERSION_pn-xbitmaps = "1.1.0"
-RECIPE_MAINTAINER_pn-xbitmaps = "Ke Yu <ke.yu@intel.com>"
-
RECIPE_STATUS_pn-xf86-input-synaptics = "red"
DISTRO_PN_ALIAS_pn-xf86-input-synaptics = "Meego=xorg-x11-drv-synaptics Fedora=xorg-x11-drv-synaptics Ubuntu=xserver-xorg-input-synaptics Mandriva=x11-driver-input-synaptics Debian=xfree86-driver-synaptics"
RECIPE_LATEST_VERSION_pn-xf86-input-synaptics = "1.2.2"
@@ -3998,19 +3540,6 @@ RECIPE_LATEST_RELEASE_DATE_pn-libxtst = "10/2009"
RECIPE_COMMENTS_pn-libxtst = ""
RECIPE_MAINTAINER_pn-libxtst = "Dexuan Cui <dexuan.cui@intel.com>"

-RECIPE_STATUS_pn-libsdl = "red" # update needed
-DEPENDENCY_CHECK_pn-libsdl = "not done"
-RECIPE_LATEST_VERSION_pn-libsdl = "1.2.14"
-RECIPE_NO_OF_PATCHES_pn-libsdl = "2"
-RECIPE_PATCH_pn-libsdl+configure_tweak = "fix configure.in"
-RECIPE_PATCH_pn-libsdl+kernel-asm-page = "a small fix: include the proper header file"
-RECIPE_INTEL_SECTION_pn-libsdl = "x11/libs"
-RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-libsdl = "2 years"
-RECIPE_LATEST_RELEASE_DATE_pn-libsdl = "Oct 20, 2009"
-RECIPE_COMMENTS_pn-libsdl = "Version 1.3 is under construction and is likely unstable."
-RECIPE_MAINTAINER_pn-libsdl = "Dexuan Cui <dexuan.cui@intel.com>"
-DISTRO_PN_ALIAS_pn-libsdl = "Fedora=SDL Ubuntu=libsdl"
-
RECIPE_STATUS_pn-printproto = "green"
DISTRO_PN_ALIAS_pn-printproto = "Debian=x11proto-print-dev Ubuntu=x11proto-print-dev Mandriva=x11-proto-devel"
DEPENDENCY_CHECK_pn-printproto = "not done"
@@ -4626,7 +4155,6 @@ RECIPE_LATEST_RELEASE_DATE_pn-startup-notification = "04/13/2009"
RECIPE_COMMENTS_pn-startup-notification = "in recipe, SECTION is libs but to be more accurate it should be x11/libs"
RECIPE_MAINTAINER_pn-startup-notification = "Dexuan Cui <dexuan.cui@intel.com>"

-
RECIPE_STATUS_pn-galago-daemon = "red"
RECIPE_LATEST_VERSION_pn-galago-daemon = "0.5.1"
RECIPE_MAINTAINER_pn-galago-daemon = "Dexuan Cui <dexuan.cui@intel.com>"
@@ -4648,10 +4176,6 @@ RECIPE_LATEST_VERSION_pn-ttf-bitstream-vera = "1.10"
RECIPE_MAINTAINER_pn-ttf-bitstream-vera = "Dexuan Cui <dexuan.cui@intel.com>"
DISTRO_PN_ALIAS_pn-ttf-bitstream-vera = "Debian=ttf-bitstream-vera Ubuntu=ttf-bitstream-vera"

-RECIPE_STATUS_pn-gvfs = "red"
-RECIPE_LATEST_VERSION_pn-gvfs = "n/a"
-RECIPE_MAINTAINER_pn-gvfs = "Dexuan Cui <dexuan.cui@intel.com>"
-
RECIPE_STATUS_pn-libart-lgpl = "red"
RECIPE_LATEST_VERSION_pn-libart-lgpl = "2.3.21"
RECIPE_MAINTAINER_pn-libart-lgpl = "Dexuan Cui <dexuan.cui@intel.com>"
@@ -4733,23 +4257,11 @@ RECIPE_LATEST_VERSION_pn-windowswmproto = "1.0.4"
RECIPE_MAINTAINER_pn-windowswmproto = "Dexuan Cui <dexuan.cui@intel.com>"
DISTRO_PN_ALIAS_pn-windowswmproto = "Mandriva=x11-proto-devel OpenSuSE=xorg-x11-proto-devel"

-RECIPE_STATUS_pn-xpext = "red"
-RECIPE_LATEST_VERSION_pn-xpext = "1.0-5"
-RECIPE_MAINTAINER_pn-xpext = "Dexuan Cui <dexuan.cui@intel.com>"
-
RECIPE_STATUS_pn-xproxymanagementprotocol = "red"
DISTRO_PN_ALIAS_pn-xproxymanagementprotocol = "Meego=xorg-x11-proto-xproxymanagementprotocol"
RECIPE_LATEST_VERSION_pn-xproxymanagementprotocol = "1.0.3"
RECIPE_MAINTAINER_pn-xproxymanagementprotocol = "Dexuan Cui <dexuan.cui@intel.com>"

-RECIPE_STATUS_pn-xsp = "red"
-RECIPE_LATEST_VERSION_pn-xsp = "1.0.0-8"
-RECIPE_MAINTAINER_pn-xsp = "Dexuan Cui <dexuan.cui@intel.com>"
-
-RECIPE_STATUS_pn-droid-fonts = "red"
-RECIPE_LATEST_VERSION_pn-droid-fonts = "git:e3ea6e3d4c8a8c2dc71f608a74ed9f6137afe63d;latest_tag:android-2.1_r2.1s???"
-RECIPE_MAINTAINER_pn-droid-fonts = "Dexuan Cui <dexuan.cui@intel.com>"
-
RECIPE_STATUS_pn-libxklavier = "red"
RECIPE_LATEST_VERSION_pn-libxklavier = "5.0"
RECIPE_MAINTAINER_pn-libxklavier = "Dexuan Cui <dexuan.cui@intel.com>"
@@ -5212,16 +4724,11 @@ RECIPE_COMMENTS_pn-menu-cache = "Add as required when adding libfm"
DISTRO_PN_ALIAS_pn-menu-cache = "OSPDT"
RECIPE_MAINTAINER_pn-menu-cache = "Zhai Edwin <edwin.zhai@intel.com>"

-
RECIPE_STATUS_pn-gtk-engines = "red"
DISTRO_PN_ALIAS_pn-gtk-engines = "Fedora=gtk2-engines OpenSuSE=gtk2-engines Ubuntu=gtk2-engines Mandriva=gtk-engines2 Debian=gtk2-engines"
RECIPE_LATEST_VERSION_pn-gtk-engines = "2.18"
RECIPE_MAINTAINER_pn-gtk-engines = "Edwin Zhai <edwin.zhai@intel.com>"

-RECIPE_STATUS_pn-clutter-imcontext = "red"
-RECIPE_LATEST_VERSION_pn-clutter-imcontext = "0.1.6"
-RECIPE_MAINTAINER_pn-clutter-imcontext = "Edwin Zhai <edwin.zhai@intel.com>"
-
RECIPE_STATUS_pn-clutter-box2d = "red"
RECIPE_LATEST_VERSION_pn-clutter-box2d = "0.10.0"
RECIPE_MAINTAINER_pn-clutter-box2d = "Edwin Zhai <edwin.zhai@intel.com>"
@@ -5236,10 +4743,6 @@ DISTRO_PN_ALIAS_pn-clutter-gtk-1.0 = "Fedora=clutter-gtk OpenSuSE=clutter-gtk Ub
RECIPE_LATEST_VERSION_pn-clutter-gtk-1.0 = "0.90.0+git0+e8d828ba1d87937baa571e68fdff22f3e2d79ca8"
RECIPE_MAINTAINER_pn-clutter-gtk-1.0 = "Edwin Zhai <edwin.zhai@intel.com>"

-RECIPE_STATUS_pn-clutter-mozembed = "red"
-RECIPE_LATEST_VERSION_pn-clutter-mozembed = "0.10.5"
-RECIPE_MAINTAINER_pn-clutter-mozembed = "Edwin Zhai <edwin.zhai@intel.com>"
-
RECIPE_STATUS_pn-tidy = "red"
RECIPE_LATEST_VERSION_pn-tidy = "0.1.0+git0+e25416e1293e1074bfa6727c80527dcff5b1f3cb"
RECIPE_MAINTAINER_pn-tidy = "Edwin Zhai <edwin.zhai@intel.com>"
@@ -5292,22 +4795,10 @@ DISTRO_PN_ALIAS_pn-clutter-gtk-1.0 = "Fedora=clutter-gtk OpenSuSE=clutter-gtk Ub
RECIPE_LATEST_VERSION_pn-clutter = "1.2.8"
RECIPE_MAINTAINER_pn-clutter = "Edwin Zhai <edwin.zhai@intel.com>"

-RECIPE_STATUS_pn-gnome-menus = "red"
-RECIPE_LATEST_VERSION_pn-gnome-menus = "2.30.0"
-RECIPE_MAINTAINER_pn-gnome-menus = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-gnome-packagekit = "red"
-RECIPE_LATEST_VERSION_pn-gnome-packagekit = "2.30.2"
-RECIPE_MAINTAINER_pn-gnome-packagekit = "Edwin Zhai <edwin.zhai@intel.com>"
-
RECIPE_STATUS_pn-libgnomekbd = "red"
RECIPE_LATEST_VERSION_pn-libgnomekbd = "2.31.1"
RECIPE_MAINTAINER_pn-libgnomekbd = "Edwin Zhai <edwin.zhai@intel.com>"

-RECIPE_STATUS_pn-mx = "red"
-RECIPE_LATEST_VERSION_pn-mx = "1.0.2"
-RECIPE_MAINTAINER_pn-mx = "Edwin Zhai <edwin.zhai@intel.com>"
-
RECIPE_STATUS_pn-clipboard-manager = "red"
DISTRO_PN_ALIAS_pn-clipboard-manager = "OpenedHand"
RECIPE_LATEST_VERSION_pn-clipboard-manager = "0.6.8"
@@ -5343,18 +4834,6 @@ RECIPE_LATEST_VERSION_pn-libgtkstylus = "0.5"
RECIPE_MAINTAINER_pn-libgtkstylus = "Edwin Zhai <edwin.zhai@intel.com>"
DISTRO_PN_ALIAS_pn-libgtkstylus = "Debian=libgtkstylus Ubuntu=libgtkstylus"

-RECIPE_STATUS_pn-libidl = "red"
-RECIPE_LATEST_VERSION_pn-libidl = "0.8.14"
-RECIPE_MAINTAINER_pn-libidl = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-libsexy = "red"
-RECIPE_LATEST_VERSION_pn-libsexy = "0.1.11"
-RECIPE_MAINTAINER_pn-libsexy = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-matchbox-session-netbook = "red"
-RECIPE_LATEST_VERSION_pn-matchbox-session-netbook = "0.1"
-RECIPE_MAINTAINER_pn-matchbox-session-netbook = "Edwin Zhai <edwin.zhai@intel.com>"
-
RECIPE_STATUS_pn-matchbox-theme-sato-2 = "red"
RECIPE_LATEST_VERSION_pn-matchbox-theme-sato-2 = "0.1+svnr164"
RECIPE_MAINTAINER_pn-matchbox-theme-sato-2 = "Edwin Zhai <edwin.zhai@intel.com>"
@@ -5377,34 +4856,6 @@ RECIPE_STATUS_pn-metacity = "red"
RECIPE_LATEST_VERSION_pn-metacity = "2.30.1"
RECIPE_MAINTAINER_pn-metacity = "Edwin Zhai <edwin.zhai@intel.com>"

-RECIPE_STATUS_pn-moblin-cursor-theme = "red"
-RECIPE_LATEST_VERSION_pn-moblin-cursor-theme = "0.3"
-RECIPE_MAINTAINER_pn-moblin-cursor-theme = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-moblin-gtk-engine = "red"
-RECIPE_LATEST_VERSION_pn-moblin-gtk-engine = "1.2.3"
-RECIPE_MAINTAINER_pn-moblin-gtk-engine = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-moblin-icon-theme = "red"
-RECIPE_LATEST_VERSION_pn-moblin-icon-theme = "0.10"
-RECIPE_MAINTAINER_pn-moblin-icon-theme = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-moblin-menus = "red"
-RECIPE_LATEST_VERSION_pn-moblin-menus = "0.1.6"
-RECIPE_MAINTAINER_pn-moblin-menus = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-moblin-user-skel = "red"
-RECIPE_LATEST_VERSION_pn-moblin-user-skel = "0.18"
-RECIPE_MAINTAINER_pn-moblin-user-skel = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-moblin-web-browser = "red"
-RECIPE_LATEST_VERSION_pn-moblin-web-browser = "2.1.2"
-RECIPE_MAINTAINER_pn-moblin-web-browser = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-nautilus = "red"
-RECIPE_LATEST_VERSION_pn-nautilus = "2.31.2"
-RECIPE_MAINTAINER_pn-nautilus = "Edwin Zhai <edwin.zhai@intel.com>"
-
RECIPE_STATUS_pn-polkit-gnome = "red"
RECIPE_LATEST_VERSION_pn-polkit-gnome = "0.96"
RECIPE_MAINTAINER_pn-polkit-gnome = "Edwin Zhai <edwin.zhai@intel.com>"
@@ -5414,50 +4865,10 @@ DISTRO_PN_ALIAS_pn-pong-clock = "OpenedHand"
RECIPE_LATEST_VERSION_pn-pong-clock = "1.0"
RECIPE_MAINTAINER_pn-pong-clock = "Edwin Zhai <edwin.zhai@intel.com>"

-RECIPE_STATUS_pn-twm = "red"
-RECIPE_LATEST_VERSION_pn-twm = "1.0.4"
-RECIPE_MAINTAINER_pn-twm = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-anerley = "red"
-RECIPE_LATEST_VERSION_pn-anerley = "0.2.13"
-RECIPE_MAINTAINER_pn-anerley = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-nbtk = "red"
-RECIPE_LATEST_VERSION_pn-nbtk = "n/a"
-RECIPE_MAINTAINER_pn-nbtk = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-moblin-panel-applications = "red"
-RECIPE_LATEST_VERSION_pn-moblin-panel-applications = "0.1.28"
-RECIPE_MAINTAINER_pn-moblin-panel-applications = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-moblin-panel-media = "red"
-RECIPE_LATEST_VERSION_pn-moblin-panel-media = "0.0.18"
-RECIPE_MAINTAINER_pn-moblin-panel-media = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-moblin-panel-myzone = "red"
-RECIPE_LATEST_VERSION_pn-moblin-panel-myzone = "0.1.32"
-RECIPE_MAINTAINER_pn-moblin-panel-myzone = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-moblin-panel-pasteboard = "red"
-RECIPE_LATEST_VERSION_pn-moblin-panel-pasteboard = "0.0.6"
-RECIPE_MAINTAINER_pn-moblin-panel-pasteboard = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-moblin-panel-people = "red"
-RECIPE_LATEST_VERSION_pn-moblin-panel-people = "0.1.12"
-RECIPE_MAINTAINER_pn-moblin-panel-people = "Edwin Zhai <edwin.zhai@intel.com>"
-
-RECIPE_STATUS_pn-moblin-panel-status = "red"
-RECIPE_LATEST_VERSION_pn-moblin-panel-status = "0.1.21"
-RECIPE_MAINTAINER_pn-moblin-panel-status = "Edwin Zhai <edwin.zhai@intel.com>"
-
RECIPE_STATUS_pn-mutter = "red"
RECIPE_LATEST_VERSION_pn-mutter = "2.29.1_0.4"
RECIPE_MAINTAINER_pn-mutter = "Edwin Zhai <edwin.zhai@intel.com>"

-RECIPE_STATUS_pn-mutter-moblin = "red"
-RECIPE_LATEST_VERSION_pn-mutter-moblin = "0.75.13"
-RECIPE_MAINTAINER_pn-mutter-moblin = "Edwin Zhai <edwin.zhai@intel.com>"
-
RECIPE_STATUS_pn-qt4-x11-free = "green"
DISTRO_PN_ALIAS_pn-qt4-x11-free = "Ubuntu=qt-x11-free Debian=qt-x11-free"
RECIPE_LATEST_VERSION_pn-qt4-x11-free = "4.6.3"
@@ -5594,5 +5005,3 @@ RECIPE_INTEL_SECTION_pn-lighttpd = "base utils"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-lighttpd = "10 days"
RECIPE_LATEST_RELEASE_DATE_pn-lighttpd = "08/2010"
RECIPE_COMMENTS_pn-lighttpd = ""
-
-
--
1.6.3.3


[PATCH 3/3] distro tracking: update the info for upgraded recipes

Yu Ke <ke.yu@...>
 

they are:
xmodmap
cracklib
xf86-input-evdev
xf86-input-mouse
xf86-video-vmware
sqlite3
xf86-input-vmmouse
xwininfo
xset
xauth

Signed-off-by: Yu Ke <ke.yu@intel.com>
---
.../conf/distro/include/distro_tracking_fields.inc | 42 ++++++++++----------
1 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/meta/conf/distro/include/distro_tracking_fields.inc b/meta/conf/distro/include/distro_tracking_fields.inc
index 4a00ce7..e0091d4 100644
--- a/meta/conf/distro/include/distro_tracking_fields.inc
+++ b/meta/conf/distro/include/distro_tracking_fields.inc
@@ -492,10 +492,10 @@ RECIPE_COMMENTS_pn-popt = ""
RECIPE_STATUS_pn-sqlite3 = "green" # need upgrade
RECIPE_MAINTAINER_pn-sqlite3 = "Ke Yu <ke.yu@intel.com>"
DEPENDENCY_CHECK_pn-sqlite3 = "not done"
-RECIPE_LATEST_VERSION_pn-sqlite3 = "3.6.23.1"
+RECIPE_LATEST_VERSION_pn-sqlite3 = "3.7.3"
RECIPE_INTEL_SECTION_pn-sqlite3 = "base libs"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-sqlite3 = "1 month"
-RECIPE_LATEST_RELEASE_DATE_pn-sqlite3 = "03/2010"
+RECIPE_LATEST_RELEASE_DATE_pn-sqlite3 = "10/2010"
RECIPE_COMMENTS_pn-sqlite3 = ""

RECIPE_STATUS_pn-libpthread-stubs = "green"
@@ -1756,9 +1756,9 @@ RECIPE_STATUS_pn-xinetd="red"
RECIPE_MAINTAINER_pn-xinetd="Yu Ke <ke.yu@intel.com>"
RECIPE_LATEST_VERSION_pn-xinetd="2.3.14"

-RECIPE_STATUS_pn-cracklib="red"
+RECIPE_STATUS_pn-cracklib="green"
RECIPE_MAINTAINER_pn-cracklib="Yu Ke <ke.yu@intel.com>"
-RECIPE_LATEST_VERSION_pn-cracklib="2.8.16"
+RECIPE_LATEST_VERSION_pn-cracklib="2.8.18"

RECIPE_STATUS_pn-openobex = "red"
RECIPE_LATEST_VERSION_pn-openobex = "1.5"
@@ -3413,10 +3413,10 @@ RECIPE_MAINTAINER_pn-mkfontscale="Yu Ke <ke.yu@intel.com>"
DISTRO_PN_ALIAS_pn-mkfontscale = "Mandriva=mkfontscale Ubuntu=xfonts-utils Fedora=xorg-x11-font-utils"

RECIPE_STATUS_pn-xauth="green" # no update needed
-RECIPE_LATEST_VERSION_pn-xauth="1.0.4"
+RECIPE_LATEST_VERSION_pn-xauth="1.0.5"
RECIPE_NO_OF_PATCHES_pn-xauth="0"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-xauth="1 year"
-RECIPE_LATEST_RELEASE_DATE_pn-xauth="2009/09/21"
+RECIPE_LATEST_RELEASE_DATE_pn-xauth="2010/09/24"
RECIPE_INTEL_SECTION_pn-xauth="graphic app"
RECIPE_MAINTAINER_pn-xauth="Yu Ke <ke.yu@intel.com>"

@@ -3456,10 +3456,10 @@ RECIPE_MAINTAINER_pn-xkbcomp="Yu Ke <ke.yu@intel.com>"
DISTRO_PN_ALIAS_pn-xkbcomp = "Ubuntu=x11-xkb-utils Fedora=xorg-x11-xkb-utils"

RECIPE_STATUS_pn-xmodmap="green" # no update needed
-RECIPE_LATEST_VERSION_pn-xmodmap="1.0.4"
+RECIPE_LATEST_VERSION_pn-xmodmap="1.0.5"
RECIPE_NO_OF_PATCHES_pn-xmodmap="0"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-xmodmap="2 years"
-RECIPE_LATEST_RELEASE_DATE_pn-xmodmap="2009/10/06"
+RECIPE_LATEST_RELEASE_DATE_pn-xmodmap="2010/09/24"
RECIPE_INTEL_SECTION_pn-xmodmap="graphic app"
RECIPE_MAINTAINER_pn-xmodmap="Yu Ke <ke.yu@intel.com>"
DISTRO_PN_ALIAS_pn-xmodmap = "Meego=xorg-x11-utils-xmodmap Fedora=xorg-x11-server-utils Ubuntu=x11-xserver-utils"
@@ -3475,18 +3475,18 @@ DISTRO_PN_ALIAS_pn-xprop = "Meego=xorg-x11-utils-xprop Fedora=xorg-x11-utils Ubu

RECIPE_STATUS_pn-xset="green" # no update needed
DISTRO_PN_ALIAS_pn-xset = "Fedora=xorg-x11-server-utils Ubuntu=x11-xserver-utils Debian=x11-xserver-utils Opensuse=xorg-x11"
-RECIPE_LATEST_VERSION_pn-xset="1.1.0"
+RECIPE_LATEST_VERSION_pn-xset="1.2.1"
RECIPE_NO_OF_PATCHES_pn-xset="1"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-xset="1 year"
-RECIPE_LATEST_RELEASE_DATE_pn-xset="2009/09/21"
+RECIPE_LATEST_RELEASE_DATE_pn-xset="2010/11/11"
RECIPE_INTEL_SECTION_pn-xset="graphic app"
RECIPE_MAINTAINER_pn-xset="Yu Ke <ke.yu@intel.com>"

RECIPE_STATUS_pn-xwininfo="green" # no update needed
-RECIPE_LATEST_VERSION_pn-xwininfo="1.0.5"
+RECIPE_LATEST_VERSION_pn-xwininfo="1.1.1"
RECIPE_NO_OF_PATCHES_pn-xwininfo="0"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-xwininfo="1 year"
-RECIPE_LATEST_RELEASE_DATE_pn-xwininfo="2009/10/12"
+RECIPE_LATEST_RELEASE_DATE_pn-xwininfo="2010/10/30"
RECIPE_INTEL_SECTION_pn-xwininfo="graphic app"
RECIPE_MAINTAINER_pn-xwininfo="Yu Ke <ke.yu@intel.com>"
DISTRO_PN_ALIAS_pn-xwininfo = "Fedora=xorg-x11-utils Ubuntu=x11-utils"
@@ -3600,19 +3600,19 @@ RECIPE_MAINTAINER_pn-xf86-input-keyboard="Yu Ke <ke.yu@intel.com>"

RECIPE_STATUS_pn-xf86-input-mouse="green" # no update needed
DISTRO_PN_ALIAS_pn-xf86-input-mouse = "Ubuntu=xserver-xorg-input-mouse Mandriva=x11-driver-input-mouse Debian=xserver-xorg-input-mouse"
-RECIPE_LATEST_VERSION_pn-xf86-input-mouse="1.5.0"
+RECIPE_LATEST_VERSION_pn-xf86-input-mouse="1.6.0"
RECIPE_NO_OF_PATCHES_pn-xf86-input-mouse="0"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-xf86-input-mouse="9 monthes"
-RECIPE_LATEST_RELEASE_DATE_pn-xf86-input-mouse="2009/10/06"
+RECIPE_LATEST_RELEASE_DATE_pn-xf86-input-mouse="2010/09/09"
RECIPE_INTEL_SECTION_pn-xf86-input-mouse="graphic core"
RECIPE_MAINTAINER_pn-xf86-input-mouse="Yu Ke <ke.yu@intel.com>"

-RECIPE_STATUS_pn-xf86-input-vmmouse="red" # update needed
+RECIPE_STATUS_pn-xf86-input-vmmouse="green" # update needed
DISTRO_PN_ALIAS_pn-xf86-input-vmmouse = "Fedora=xorg-x11-drv-vmmouse Ubuntu=xserver-xorg-input-vmmouse Mandriva=x11-driver-input-vmmouse Debian=xserver-xorg-input-vmmouse"
-RECIPE_LATEST_VERSION_pn-xf86-input-vmmouse="12.6.9"
+RECIPE_LATEST_VERSION_pn-xf86-input-vmmouse="12.6.10"
RECIPE_NO_OF_PATCHES_pn-xf86-input-vmmouse="0"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-xf86-input-vmmouse="1 month"
-RECIPE_LATEST_RELEASE_DATE_pn-xf86-input-vmmouse="2010/04/08"
+RECIPE_LATEST_RELEASE_DATE_pn-xf86-input-vmmouse="2010/08/10"
RECIPE_INTEL_SECTION_pn-xf86-input-vmmouse="graphic core"
RECIPE_MAINTAINER_pn-xf86-input-vmmouse="Yu Ke <ke.yu@intel.com>"

@@ -3643,19 +3643,19 @@ RECIPE_MAINTAINER_pn-libxfontcache="Yu Ke <ke.yu@intel.com>"

RECIPE_STATUS_pn-xf86-input-evdev="green" # no update needed
DISTRO_PN_ALIAS_pn-xf86-input-evdev = "Ubuntu=xserver-xorg-input-evdev Mandriva=x11-driver-input-evdev Debian=xserver-xorg-input-evdev Fedora=xorg-x11-drv-evdev Meego=xorg-x11-drv-evdev
-RECIPE_LATEST_VERSION_pn-xf86-input-evdev="2.4.0"
+RECIPE_LATEST_VERSION_pn-xf86-input-evdev="2.5.0"
RECIPE_NO_OF_PATCHES_pn-xf86-input-evdev="0"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-xf86-input-evdev="1 monthes"
-RECIPE_LATEST_RELEASE_DATE_pn-xf86-input-evdev="2010/04/06"
+RECIPE_LATEST_RELEASE_DATE_pn-xf86-input-evdev="2010/08/23"
RECIPE_INTEL_SECTION_pn-xf86-input-evdev="graphic core"
RECIPE_MAINTAINER_pn-xf86-input-evdev="Yu Ke <ke.yu@intel.com>"

RECIPE_STATUS_pn-xf86-video-vmware="green" # no update needed
DISTRO_PN_ALIAS_pn-xf86-video-vmware = "Debian=xserver-xorg-video-vmware Fedora=xorg-x11-drv-vmware Mandriva=x11-driver-video-vmware Ubuntu=xserver-xorg-video-vmware"
-RECIPE_LATEST_VERSION_pn-xf86-video-vmware="11.0.1"
+RECIPE_LATEST_VERSION_pn-xf86-video-vmware="11.0.3"
RECIPE_NO_OF_PATCHES_pn-xf86-video-vmware="0"
RECIPE_TIME_BETWEEN_LAST_TWO_RELEASES_pn-xf86-video-vmware="2 monthes"
-RECIPE_LATEST_RELEASE_DATE_pn-xf86-video-vmware="2010/03/18"
+RECIPE_LATEST_RELEASE_DATE_pn-xf86-video-vmware="2010/11/09"
RECIPE_INTEL_SECTION_pn-xf86-video-vmware="graphic core"
RECIPE_MAINTAINER_pn-xf86-video-vmware="Yu Ke <ke.yu@intel.com>"

--
1.7.0.4


[PATCH 2/3] sqlite: upgrade from 3.6.23 to 3.7.3

Yu Ke <ke.yu@...>
 

Signed-off-by: Yu Ke <ke.yu@intel.com>
---
.../{sqlite3_3.6.23.1.bb => sqlite3_3.7.3.bb} | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
rename meta/recipes-support/sqlite/{sqlite3_3.6.23.1.bb => sqlite3_3.7.3.bb} (67%)

diff --git a/meta/recipes-support/sqlite/sqlite3_3.6.23.1.bb b/meta/recipes-support/sqlite/sqlite3_3.7.3.bb
similarity index 67%
rename from meta/recipes-support/sqlite/sqlite3_3.6.23.1.bb
rename to meta/recipes-support/sqlite/sqlite3_3.7.3.bb
index 5075dd3..479516d 100644
--- a/meta/recipes-support/sqlite/sqlite3_3.6.23.1.bb
+++ b/meta/recipes-support/sqlite/sqlite3_3.7.3.bb
@@ -1,3 +1,3 @@
require sqlite3.inc

-PR = "r1"
+PR = "r0"
--
1.7.0.4


[PATCH 1/3] cracklib: upgrade from 2.8.16 to 2.8.18

Yu Ke <ke.yu@...>
 

Signed-off-by: Yu Ke <ke.yu@intel.com>
---
.../{cracklib_2.8.16.bb => cracklib_2.8.18.bb} | 0
1 files changed, 0 insertions(+), 0 deletions(-)
rename meta/recipes-extended/cracklib/{cracklib_2.8.16.bb => cracklib_2.8.18.bb} (100%)

diff --git a/meta/recipes-extended/cracklib/cracklib_2.8.16.bb b/meta/recipes-extended/cracklib/cracklib_2.8.18.bb
similarity index 100%
rename from meta/recipes-extended/cracklib/cracklib_2.8.16.bb
rename to meta/recipes-extended/cracklib/cracklib_2.8.18.bb
--
1.7.0.4


[PULL] poky-qemu fixes

Scott Garman <scott.a.garman@...>
 

This fixes two bugs with poky-qemu when it is run from a standalone meta-toolchain setup. Thanks to Sachin Kumar for reporting them to the list.

Pull URL: git://git.pokylinux.org/poky-contrib.git
Branch: sgarman/poky-qemu-fixes
Browse: http://git.pokylinux.org/cgit.cgi/poky-contrib/log/?h=sgarman/poky-qemu-fixes

Thanks,
Scott Garman <scott.a.garman@intel.com>
---


Scott Garman (1):
poky-qemu: Fix issues when running Yocto 0.9 release images

scripts/poky-qemu | 64 +++++++++++++++++++++++++++++-----------------------
1 files changed, 36 insertions(+), 28 deletions(-)

--
Scott Garman
Embedded Linux Distro Engineer - Yocto Project


Re: Tracing/profiling tools for Yocto v1.0

Bruce Ashfield <bruce.ashfield@...>
 

On 10-11-12 5:25 PM, Tom Zanussi wrote:
Hi,

For the 1.0 Yocto release, we'd like to have as complete a set of
tracing and profiling tools as possible, enough so that most users will
be satisfied with what's available, but not so many as to produce a
maintenance burden.

The current set is pretty decent:

latencytop
powertop
lttng
lttng-ust
oprofile(ui)
trace-cmd
perf

but there seems to be an omission or two with respect to the current set
as packaged in Yocto, and there are a few other tools that I think would
make sense to add, either to address a gap in the current set, or
because they're popular enough to be missed by more than a couple
users:

KernelShark
perf trace scripting support
SystemTap
blktrace
sysprof
These match my lists that I've been adding to various
kernels (and roadmaps) for a while, so no arguments here.

See below for some comments and ideas.


These are just my own opinions regarding what I think is missing - see
below for more details on each tool, and some reasons I think it would
make sense to include them. If you disagree, or even better, have
suggestions for other tools that you think are essential and missing,
please let me know. Otherwise, I plan on adding support for them to
Yocto in the very near future (e.g. starting next week).

Just one note - I know that some of these may not be appropriate for all
platforms; in those cases, I'd expect they just wouldn't be included in
the images for those machines. Actually, except for sysprof and
KernelShark, they all have modes that should allow them to be used with
minimal footprints on the target system, and even then I think both
KernelShark and sysprof could both be relatively easily retrofitted with
a remote layer like OprofileUI's that would make them lightweight on the
target.

Anyway, on to some descriptions of the tools themselves, followed by a
short summary at the end...

----

Tool: KernelShark
URL: http://rostedt.homelinux.com/kernelshark/
Architectures supported: all, nothing arch-specific

KernelShark is a front-end GUI interface to trace-cmd, a tracing tool
that's already included in the Yocto SDK (trace-cmd basically provides
an easier-to-use text-based interface to the raw debugfs tracing files
contained in /sys/kernel/debug/tracing).

Tracing can be started and stopped from the GUI; when the trace session
ends, the results are displayed in a couple of sub-windows: a graphical
area that displays events for each CPU but that can also display
per-task graphs, and a listbox that displays a detailed list of events
in the trace. In addition to display of raw events, it also supports
display of the output of the kernel's ftrace plugins
(/sys/kernel/debug/tracing/available_tracers) such as the function and
function_graph tracers, which are very useful on their own for figuring
out exactly what the kernel does in particular codepaths.

One very nice KernelShark feature is the ability to easily toggle the
individual events or event subsystems of interest; specifying these
manually is usually one of the most unpleasant parts of command-line
tracing; for this reason alone KernelShark is worth looking at, as it
makes the whole tracing experience much more manageable and enjoyable
(and therefore more likely to be used). Additionally, the extensive
support of filtering and searching is very useful. The GUI itself is
also extensible via Python plug-ins. All in all a great tool for
running and viewing traces.

Support for remote targets: The event subsystem and ftrace plugins that
provide the data for trace-cmd/KernelShark are completely implemented
within the kernel; both control and trace stream data retrieval are
accessed via debugfs files. The files that provide the data retrieval
function are accessible via splice, which means that the trace streams
could be easily sent over the network and processed on the host. The
current KernelShark code doesn't do that - currently the UI needs to run
on the target - but that would be an area where Yocto could add some
value - it shouldn't be a huge amount of effort to add that capability.
In the worst case, something along the lines of what OprofileUI does
(start/stop the trace on the target, and send the results back when
done) could also be acceptable as a local stopgap solution.
Agreed, adding off-target viewing/control would be a nice
addition here. Phase (b) perhaps?


----

Tool: perf trace scripting support
URL: none, included in the kernel sources
Architectures supported: all, nothing arch-specific

Yocto already includes the 'perf' tool, which is a userspace tool that's
actually bundled as part of the mainline linux kernel source. 'perf
trace' is a subtool of perf that performs system-wide (or per-task)
event tracing and displays the raw trace event data using format strings
associated with each trace event. In fact, the events and event
descriptions used by perf are the same as those used by
trace-cmd/KernelShark to generate its traces (the kernel event
subsystem, see /sys/kernel/debug/tracing/events).

As is the case with KernelShark, the reams of raw trace data provided by
perf trace provide a lot of useful detail, but the question becomes how
to realistically extract useful high-level information from it. You
could sit down and pore through it for trends or specific conditions (no
fun, and it's not really humanly possible with large data sets).
Filtering can be used, but that only goes so far. Realistically, to
make sense of it, it needs to be 'boiled down' somehow into a more
manageable form. The fancy word for that is 'aggregation', which
basically just means 'sticking the important stuff in a hash table'.

The perf trace scripting support embeds scripting language interpreters
into perf to allow perf's internal event dispatch mechanism to call
script handlers directly (script handlers can also call back into perf).
The scripting_ops interface formalizes this interaction and allows any
scripting engine that implements the API to be used as a full-fledged
event-processing language - currently Python and Perl are implemented.

Events are exposed in the scripting interpreter as function calls, where
each param is an event field (in the event description pseudo-file for
the event in the kernel event subsystem). During processing, every
event in the trace stream is converted into a corresponding function
call in the scripting language. At that point, the handler can do
anything it wants to using the available facilities of the scripting
language such as, for example, aggregate the event data in a hash table.
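To make that concrete, here is a small self-contained Python sketch of the dispatch-and-aggregate idea. This is not the actual perf embedding; the event names, field names, and dispatcher are invented for illustration:

```python
# Sketch of perf-trace-style event dispatch: each event in the stream
# becomes a function call, and handlers aggregate into a hash table.
# Event names and fields here are invented for illustration.
from collections import defaultdict

read_bytes = defaultdict(int)  # pid -> total bytes read

def sys_exit_read(pid, comm, ret):
    """Handler invoked once per (hypothetical) sys_exit_read event."""
    if ret > 0:                       # ignore failed reads
        read_bytes[pid] += ret

HANDLERS = {"sys_exit_read": sys_exit_read}

def dispatch(event):
    """Convert one trace event into the corresponding handler call."""
    handler = HANDLERS.get(event["name"])
    if handler:
        handler(event["pid"], event["comm"], event["ret"])

trace = [
    {"name": "sys_exit_read", "pid": 42, "comm": "cat",  "ret": 4096},
    {"name": "sys_exit_read", "pid": 42, "comm": "cat",  "ret": 512},
    {"name": "sys_exit_read", "pid": 7,  "comm": "sshd", "ret": -11},
]
for ev in trace:
    dispatch(ev)

print(dict(read_bytes))  # {42: 4608}
```

A real 'perf trace' script works the same way, except that perf itself generates the function calls, one per event in the trace stream.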

A starter script with handlers for each event type can be automatically
generated from existing trace data using the 'perf trace -g' command.
This allows for one-off, quick turnaround trace experiments. But
scripts can be 'promoted' to full-fledged 'perf trace' scripts that
essentially become part of perf and can be listed using 'perf trace -l'.
This involves simply writing a couple of wrapper shell scripts and putting
them in the right places.

In general, perf trace scripting is a useful tool to have when the
standard set of off-the-shelf tools aren't really enough to analyze a
problem. To take a simple example, using tools like iostat you can get
a general statistical idea of the read/write activity on the system, but
those tools won't tell you which processes are actually responsible for
most of the I/O activity. The 'perf trace rw-by-pid' canned script in
perf trace uses the system-call read/write tracepoints
(sys_enter/exit_read/write) to capture all the reads and writes (and
failed reads/writes) of every process on the system and at the end
displays a detailed per-process summary of the results. That
information can be used to determine which processes are responsible for
the most I/O activity on the system, which can in turn be used to target
and drill down into the detailed read/write activity caused by a
specific process using for example the rw-by-file canned script which
displays the per-file read/write activity for a specific process.

To give a couple more concrete examples of how this capability can be
useful, here are some other examples of things that can only be done
with scripting, such as detecting complex or 'compound' events.

Simple hard-coded filters and triggers can scan data for simple
conditions e.g. someone tried to read /etc/passwd. This kind of thing
should be possible with the current event filtering capabilities even
without scripting support e.g. scan the event stream for events that
satisfy the condition:

event == vfs_open && filename == "/etc/passwd"

(This would tell you that someone tried to open /etc/passwd, but that
in itself isn't very useful - you'd really like to at least know who,
which of course could be accomplished by scripting.)

But a lot of other problems involve pattern matching over multiple
events. One example from a recent lkml posting:

The poster had noticed a certain inefficient pattern in block I/O data,
where multiple readahead requests resulted in an unnecessarily
inefficient pattern:

- queue first request
- plug queue
- queue second adjacent request
- merge
- unplug, issue, complete

In the case of readahead, latency is extremely important for throughput:
explicitly unplugging after each readahead increased throughput by 68%.
It's interesting to note that older kernels didn't have this problem,
but some unknown commit(s) introduced it.

This is the type of pattern that you would really need scripting support
in order to detect. A simple script to check for this condition and
detect a regression such as this could be quickly written and made
available, and possibly avoid the situation where a problem like this
could go undetected for a couple kernel revisions.
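A detector for that kind of multi-event pattern is essentially a small state machine over the trace stream. A hedged Python sketch (the event encoding is invented; real block-layer events carry device, sector, and size fields, and a real script would track state per queue):

```python
# Sketch of compound-event detection: flag occurrences of the
# inefficient queue/plug/queue/merge/unplug sequence described above.
# The event encoding is invented for illustration.
INEFFICIENT = ["queue", "plug", "queue", "merge", "unplug"]

def find_pattern(events, pattern=INEFFICIENT):
    """Return start indices where the full pattern occurs in the stream."""
    hits, pos, start = [], 0, None
    for i, ev in enumerate(events):
        if ev == pattern[pos]:
            if pos == 0:
                start = i            # remember where a candidate began
            pos += 1
            if pos == len(pattern):  # whole pattern matched
                hits.append(start)
                pos = 0
        else:                        # mismatch: restart, possibly on this event
            pos = 1 if ev == pattern[0] else 0
            start = i if pos else None
    return hits

stream = ["queue", "plug", "queue", "merge", "unplug",
          "queue", "unplug"]        # second request unplugged promptly
print(find_pattern(stream))         # [0]
```

A script like this could run over a trace once and immediately tell you whether the regression is present in a given kernel.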

Perf and perf trace scripting also support 'live mode' (over the network
if desired), where the trace stream is processed as soon as it's
generated. Getting back to the "/etc/passwd" example - as mentioned,
something an administrator might want would be to monitor accesses to
"/etc/passwd" and see who's trying to access it. With live mode, a
continuously running script could monitor sys_open calls, compare the
opened filename against "/etc/passwd", get the uid and look up username
to find out who's trying to read it, and have the Python script e-mail
the culprit's name to the admin when detected.
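The logic of such a monitor is simple; here is a hedged Python sketch in which a plain list stands in for the live trace pipe, the event fields are invented, and alerting is stubbed out (pwd.getpwuid does the uid-to-username lookup):

```python
# Sketch of a live-mode monitor: scan an event stream for opens of
# /etc/passwd and report who did it. The list below stands in for the
# live trace pipe; event field names are invented for illustration.
import pwd

def monitor(events, watched="/etc/passwd", alert=print):
    for ev in events:
        if ev["name"] == "sys_enter_open" and ev["filename"] == watched:
            try:
                user = pwd.getpwuid(ev["uid"]).pw_name   # uid -> username
            except KeyError:
                user = str(ev["uid"])                    # unknown uid
            alert(f"{user} (pid {ev['pid']}) opened {watched}")

live_stream = [
    {"name": "sys_enter_open", "filename": "/etc/passwd",  "uid": 0, "pid": 99},
    {"name": "sys_enter_open", "filename": "/tmp/scratch", "uid": 0, "pid": 100},
]
monitor(live_stream)   # reports only the /etc/passwd open
```

In a real deployment the alert callback would send mail rather than print, and the events would arrive continuously from the live trace stream.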

Live mode is important for both the small and large targets,
so this is a good addition.


Basically, live mode allows for long-running trace sessions that can
continuously scan for rare conditions. Referring back to the readahead
example, one assumption the poster made was that "merging of a readahead
window with anything other than its own sibling" would be extremely
rare. A long-running script could easily be written to detect this
exact condition and either confirm or refute that assumption, which
would be hard to do without some kind of scripting support.

Perf trace scripting is relatively new, so there aren't yet a lot of
real-world examples - currently there are about 15 canned scripts
available (see 'perf trace -l') including the rw-by-pid and rw-by-file
examples described above.

The main data source for perf trace scripting are the statically defined
trace events defined in /sys/kernel/debug/tracing/events. It's also
possible to use the dynamic event sources available from the 'perf
probe' tool, but this is still an area of active integration at the
moment.

Support for remote targets: perf and perf trace scripting 'live-mode'
support allows the trace stream to be piped over the network using e.g.
netcat. Using that mode, the target does nothing but generate the trace
stream and send it over the network to the host, where a live-mode
script can be applied to it. Even so, this is probably not the most
efficient way to transfer trace data - one hope would be that perf would
add support for splice, but that's uncertain at this point.
I'd also suggest that doing a canned power management script would
be good here. Using the existing tracepoints (and adding our own)
to get a detailed view of C and P states would be a nice demo.


----

Tool: SystemTap
URL: http://sourceware.org/systemtap/
Architectures supported: x86, x86_64, ppc, ppc64, ia64, s390, arm

SystemTap is also a system-wide tracing tool that allows users to write
scripts that attach handlers to events and perform complex aggregation
and filtering of the event stream. It's been around for a long time and
thus has a lot of canned scripts available, which make use of a set of
general-purpose script-support libraries called 'tapsets' (see the
SystemTap wiki, off of the above link).

The language used to write SystemTap scripts isn't however a
general-purpose language like Perl or Python, but rather a C-like
language defined specifically for SystemTap. The reason for that has to
do with the way SystemTap works - SystemTap scripts are executed in the
kernel, which makes general-purpose language runtimes off-limits.
Basically what SystemTap does is translate a user script into an
equivalent C version, which is then compiled into a kernel module.
Inserting the kernel module attaches the C code to specific event
sources in the kernel - whenever an event is hit, the corresponding
event handler is invoked and does whatever it's told to do - usually
this is updating a counter in a hash table or something similar. When
the tracing session exits, the script typically calculates and displays
a summary of the aggregation(s), or whatever the user wants it to do.

In addition to the standard set of event sources (the static kernel
tracepoint events, and dynamic events via kprobes) SystemTap also
supports user space probing if the kernel is built with utrace support.
User space probing can be done either dynamically, or statically if the
application contains static tracepoints. A very interesting aspect of
this is that via dtrace-compatible markers, the existing static dtrace
tracepoints contained in, for example, the Java or Python runtimes can
also be used as event sources (e.g. if they're compiled with
--enable-dtrace). This should allow any Python or Java application to
be much more meaningfully traced and profiled using SystemTap - for
example, with complete userspace support theoretically every detail of
say an http request to a Java web application could be followed, from
the network device driver to the web server through a Java servlet and
back out through the kernel again. Supporting this however, in addition
to having utrace support in the kernel, might also require some
SystemTap-specific patches to the affected applications. Users can also
instrument their own applications using static tracepoints
(http://sourceware.org/systemtap/wiki/AddingUserSpaceProbingToApps).

As mentioned, there are a whole host of scripts available. Examples
include everything from per-process network traffic monitoring,
packet-drop monitoring, per-process disk I/O times, to the same types of
applications described above for 'perf trace scripting'. There are too
many to usefully cover here, see
http://sourceware.org/systemtap/examples/keyword-index.html for a
complete list of the available scripts. Everything in SystemTap is also
very well documented - there are tutorials, handbooks, and a bunch of
useful information on the wiki such as 'War Stories' and deep-dives into
other use cases, i.e. there's no shortage of useful info for new (and
old) users. I won't cover any specific examples here - basically all of
the motivations and capabilities described above for 'perf trace
scripting' should apply equally well to SystemTap, and won't be repeated
here.

Support for remote targets: SystemTap supports a cross-instrumentation
mode, where only the SystemTap run-time needs to be available on the
target. The instrumentation kernel module derived from a myscript.stp
generated on host (stap -r kernel_version myscript.stp -m module_name)
is copied over to target and executed via staprun 'myscript.ko'.

However, apparently host and target must still be the same architecture
for this to work.
Systemtap is the lowest on my list of items to add. Nothing
against SystemTap, but the in-kernel and architecture bindings
have always been problematic in an embedded scenario and I've
rarely (never) gotten a strong request for it.


----

Tool: blktrace
URL: http://linux.die.net/man/8/blktrace
Architectures supported: all, nothing arch-specific

Still the best way to get detailed disk I/O traces, and you can do some
really cool things with it:

http://feedblog.org/2010/04/27/2009/

Support for remote targets: Uses splice/sendfile, so the target can if
it wants do nothing but generate the trace data and send it over the
network. blkparse, the trace-parsing portion of the blktrace suite, fully
supports this mode and in fact encourages it in order to avoid
perturbing the results that occur when writing trace data on the target.

----

Tool: sysprof
URL: http://www.daimi.au.dk/~sandmann/sysprof/
Architectures supported: all, nothing arch-specific

A nice simple system-wide profiling UI - it profiles the kernel and all
running userspace applications. It displays functions in one window, and
an expandable tree of callees for the selected function in the other
window, all with hit stats. Clicking on a callee in the callee window
shows callers of that function in a third window.

I don't know if this provides much more than OprofileUI, but the
interface is nice and it's popular in some quarters...
I think it is worth adding.


----

In summary, each of these tools provides a unique set of useful
capabilities that I think would be very nice to have in Yocto. There
are of course overlaps e.g. both SystemTap and trace-cmd provide
function-callgraph tracing, both trace-cmd and perf trace provide
event-subsystem-based tracing, SystemTap and perf trace scripting both
provide different ways of achieving the same kinds of high-level
aggregation goals, while blktrace, SystemTap, and perf trace scripting
all provide different ways of looking at block I/O. But they each
have their own strengths as well, and do much more than what they do in
overlap.
That's ok. perf collides with oprofile, and everything else, so
overlap is no big issue, as long as we control the options and
can make them all co-exist in the kernel.


At some point some of these tools will completely overlap each
other - for example SystemTap and/or perf trace scripting eventually
will probably do everything blktrace does, and will additionally have
the potential to show that information in a larger context e.g. along
with VFS and/or mm data sources. Making things like that happen -
adding value to those tools or providing larger contexts could be a
focus for future Yocto contributions. On the other hand, it may make
sense in v1.0 to spend a small amount of development time to actually
help provide some coherent integration to all these tools and maybe
contribute to something like perfkit (http://audidude.com/?p=504).
There may not be time to do that, but at least the minimum set of tools
for a great user experience should be available, which I think the above
list goes a long way to providing. Comments welcome...
I've also had pings in the past about:

tuna and oscilloscope: http://www.osadl.org/Single-View.111+M52212cb1379.0.html, but they are more 'tuning',
and I haven't checked activity on them for a while.

Although not a toolkit/tracing/profiling, having either
a nice how to, or light way to use dynamic tracepoints
with kprobes is a good idea. Plenty of things that we can
do to contribute here as well.

Ensuring that all these work with KGDB/KDB is also key,
since regressions sneak in pretty easily. Debug and trace
are getting closer and should be considered together. In
that same spirit better kexec/kdump/ftrace_dump_on_oops
testing helps debug/tracing/profiling in the degenerate case.

And finally, having a good story around boottime tracing
and optimization is a key usecase for any of these tools.

We should do a ranking of the complete list (once compiled)
and see what can or can't be done .. since there IS quite a
bit of it here :)

Cheers,

Bruce






Tom



_______________________________________________
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: Tracing/profiling tools for Yocto v1.0

Bruce Ashfield <bruce.ashfield@...>
 

On 10-11-12 5:29 PM, Zhang, Jessica wrote:
Here's another thread about sysprof, my question is should we support both
oprofile and sysprof, or should we just use sysprof, which seems a better
tool...
Both. There's still no one tracer to rule them all (*cough*
perf *cough*), and until there is some real unification it
is best to support the various tracers.

In particular oprofile is easy enough to enable, is known
to work on many boards (in particular semi vendor boards)
and works with the -rt kernels.

We have the ability to dynamically enable and disable the
various tracers at build (and of course boot) time with some
easy selection of kernel profiles, so I recommend going for
broad support at the moment.

Bruce


Hi Rob,

Could you please tell us how to contribute to oprofileUI? The
yoctoproject.org is using oprofileUI as a profiling tool, and during the
development we found some bugs of oprofileUI and want to contribute our
patches to fix it.
Okay. Interesting... :-) I'll expedite the move to the gnome
infrastructure. These days most of what you can do with OprofileUI you
shouldn't and instead should look at using the sysprof daemon and then
sysprof GUI. (Given that sysprof builds on perf counters and so isn't
x86 specific any longer.)

Cheerio,

Rob

Zanussi, Tom wrote:
Hi,

For the 1.0 Yocto release, we'd like to have as complete a set of
tracing and profiling tools as possible, enough so that most users
will be satisfied with what's available, but not so many as to
produce a maintenance burden.

The current set is pretty decent:

latencytop
powertop
lttng
lttng-ust
oprofile(ui)
trace-cmd
perf

but there seems to be an omission or two with respect to the current
set as packaged in Yocto, and there are a few other tools that I
think would make sense to add, either to address a gap in the current
set, or because they're popular enough to be missed by more than a
couple users:

KernelShark
perf trace scripting support
SystemTap
blktrace
sysprof

These are just my own opinions regarding what I think is missing - see
below for more details on each tool, and some reasons I think it would
make sense to include them. If you disagree, or even better, have
suggestions for other tools that you think are essential and missing,
please let me know. Otherwise, I plan on adding support for them to
Yocto in the very near future (e.g. starting next week).

Just one note - I know that some of these may not be appropriate for
all platforms; in those cases, I'd expect they just wouldn't be
included in the images for those machines. Actually, except for
sysprof and KernelShark, they all have modes that should allow them
to be used with minimal footprints on the target system, and even
then I think both KernelShark and sysprof could both be relatively
easily retrofitted with a remote layer like OprofileUI's that would
make them lightweight on the target.

Anyway, on to some descriptions of the tools themselves, followed by a
short summary at the end...

----

Tool: KernelShark
URL: http://rostedt.homelinux.com/kernelshark/
Architectures supported: all, nothing arch-specific

KernelShark is a front-end GUI interface to trace-cmd, a tracing tool
that's already included in the Yocto SDK (trace-cmd basically provides
an easier-to-use text-based interface to the raw debugfs tracing files
contained in /sys/kernel/debug/tracing).

Tracing can be started and stopped from the GUI; when the trace
session ends, the results are displayed in a couple of sub-windows: a
graphical area that displays events for each CPU but that can also
display per-task graphs, and a listbox that displays a detailed list
of events in the trace. In addition to display of raw events, it
also supports display of the output of the kernel's ftrace plugins
(/sys/kernel/debug/tracing/available_tracers) such as the function and
function_graph tracers, which are very useful on their own for
figuring out exactly what the kernel does in particular codepaths.

One very nice KernelShark feature is the ability to easily toggle the
individual events or event subsystems of interest; specifying these
manually is usually one of the most unpleasant parts of command-line
tracing; for this reason alone KernelShark is worth looking at, as it
makes the whole tracing experience much more manageable and enjoyable
(and therefore more likely to be used). Additionally, the extensive
support of filtering and searching is very useful. The GUI itself is
also extensible via Python plug-ins. All in all a great tool for
running and viewing traces.

Support for remote targets: The event subsystem and ftrace plugins
that provide the data for trace-cmd/KernelShark are completely
implemented within the kernel; both control and trace stream data
retrieval are accessed via debugfs files. The files that provide the
data retrieval function are accessible via splice, which means that
the trace streams could be easily sent over the network and processed
on the host. The current KernelShark code doesn't do that -
currently the UI needs to run on the target - but that would be an
area where Yocto could add some value - it shouldn't be a huge amount
of effort to add that capability. In the worst case, something along
the lines of what OprofileUI does (start/stop the trace on the
target, and send the results back when done) could also be acceptable
as a local stopgap solution.

----

Tool: perf trace scripting support
URL: none, included in the kernel sources
Architectures supported: all, nothing arch-specific

Yocto already includes the 'perf' tool, which is a userspace tool
that's actually bundled as part of the mainline linux kernel source.
'perf trace' is a subtool of perf that performs system-wide (or
per-task) event tracing and displays the raw trace event data using
format strings associated with each trace event. In fact, the events
and event descriptions used by perf are the same as those used by
trace-cmd/KernelShark to generate its traces (the kernel event
subsystem, see /sys/kernel/debug/tracing/events).

As is the case with KernelShark, the reams of raw trace data provided
by perf trace provide a lot of useful detail, but the question
becomes how to realistically extract useful high-level information
from it. You could sit down and pore through it for trends or
specific conditions (no fun, and it's not really humanly possible
with large data sets). Filtering can be used, but that only goes so
far. Realistically, to make sense of it, it needs to be 'boiled
down' somehow into a more manageable form. The fancy word for that
is 'aggregation', which basically just means 'sticking the important
stuff in a hash table'.
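As a toy illustration of what that boiling-down looks like, aggregation really is just counting events in a hash table keyed on the fields you care about (the event records below are hypothetical, not real perf output):

```python
from collections import Counter

def aggregate(events):
    """Boil a raw event stream down to per-(task, event) counts."""
    counts = Counter()
    for ev in events:
        # Key on whichever fields matter - here, task name and event type.
        counts[(ev["comm"], ev["name"])] += 1
    return counts

# A hypothetical raw stream: thousands of lines reduce to a small summary.
stream = [
    {"comm": "firefox", "name": "sys_enter_read"},
    {"comm": "firefox", "name": "sys_enter_read"},
    {"comm": "sshd",    "name": "sys_enter_write"},
]
summary = aggregate(stream)
print(summary[("firefox", "sys_enter_read")])  # → 2
```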

The perf trace scripting support embeds scripting language
interpreters into perf to allow perf's internal event dispatch
mechanism to call script handlers directly (script handlers can also
call back into perf). The scripting_ops interface formalizes this
interaction and allows any scripting engine that implements the API
to be used as a full-fledged event-processing language - currently
Python and Perl are implemented.

Events are exposed in the scripting interpreter as function calls,
where each param is an event field (in the event description
pseudo-file for the event in the kernel event subsystem). During
processing, every event in the trace stream is converted into a
corresponding function call in the scripting language. At that
point, the handler can do anything it wants to using the available
facilities of the scripting language such as, for example, aggregate
the event data in a hash table.
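The shape of that event-to-function-call mapping can be sketched in plain Python. The dispatcher loop below is a stand-in for perf's embedded interpreter, and the field list is abbreviated from the real sched_switch event description:

```python
# One handler per event type; perf converts each trace record into a
# call to the same-named function, with event fields as parameters.
switches = {}

def sched__sched_switch(common_cpu, prev_comm, next_comm):
    """Called once per sched_switch event in the trace stream."""
    switches[next_comm] = switches.get(next_comm, 0) + 1

# Stand-in dispatch: in perf, this loop lives inside the tool itself.
for rec in [(0, "swapper", "bash"), (0, "bash", "swapper"), (1, "swapper", "bash")]:
    sched__sched_switch(*rec)

print(switches)  # → {'bash': 2, 'swapper': 1}
```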

A starter script with handlers for each event type can be
automatically generated from existing trace data using the 'perf
trace -g' command. This allows for one-off, quick turnaround trace
experiments. But scripts can be 'promoted' to full-fledged 'perf
trace' scripts that essentially become part of perf and can be listed
using 'perf trace -l'. This involves simply writing a couple wrapper
shell scripts and putting them in the right places.

In general, perf trace scripting is a useful tool to have when the
standard set of off-the-shelf tools aren't really enough to analyze a
problem. To take a simple example, using tools like iostat you can
get a general statistical idea of the read/write activity on the
system, but those tools won't tell you which processes are actually
responsible for most of the I/O activity. The 'perf trace rw-by-pid'
canned script in perf trace uses the system-call read/write
tracepoints (sys_enter/exit_read/write) to capture all the reads and
writes (and failed reads/writes) of every process on the system and
at the end displays a detailed per-process summary of the results.
That information can be used to determine which processes are
responsible for the most I/O activity on the system, which can in
turn be used to target and drill down into the detailed read/write
activity caused by a specific process using for example the
rw-by-file canned script which displays the per-file read/write
activity for a specific process.
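A stripped-down version of that per-process bookkeeping might look like the following (a simplified field set fed with fake events; the real rw-by-pid script also tracks writes and tallies the failed calls separately):

```python
from collections import defaultdict

reads = defaultdict(lambda: {"calls": 0, "bytes": 0})

def sys_exit_read(pid, comm, ret):
    """Tally successful reads per process; a negative ret is a failed read."""
    if ret >= 0:
        entry = reads[(pid, comm)]
        entry["calls"] += 1
        entry["bytes"] += ret

# Fake sys_exit_read events standing in for a real trace stream.
for ev in [(100, "cat", 4096), (100, "cat", 512), (200, "sh", -11)]:
    sys_exit_read(*ev)

# Per-process summary, biggest reader first.
for (pid, comm), st in sorted(reads.items(), key=lambda kv: -kv[1]["bytes"]):
    print(f"{comm} ({pid}): {st['calls']} reads, {st['bytes']} bytes")
```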

To give a couple of more concrete examples of how this capability can
be useful, here are some things that can only be done with scripting,
such as detecting complex or 'compound' events.

Simple hard-coded filters and triggers can scan data for simple
conditions e.g. someone tried to read /etc/passwd. This kind of thing
should be possible with the current event filtering capabilities even
without scripting support e.g. scan the event stream for events that
satisfy the condition:

event == vfs_open && filename == "/etc/passwd"

(This would tell you that someone tried to open /etc/passwd, but
that in itself isn't very useful - you'd really like to at least know
who, which of course could be accomplished by scripting.)

But a lot of other problems involve pattern matching over multiple
events. One example from a recent lkml posting:

The poster had noticed that in block I/O data, multiple readahead
requests resulted in an unnecessarily inefficient pattern:

- queue first request
- plug queue
- queue second adjacent request
- merge
- unplug, issue, complete

In the case of readahead, latency is extremely important for
throughput: explicitly unplugging after each readahead increased
throughput by 68%. It's interesting to note that older kernels didn't
have this problem, but some unknown commit(s) introduced it.

This is the type of pattern that you would really need scripting
support in order to detect. A simple script to check for this
condition and detect a regression such as this could be quickly
written and made available, and possibly avoid the situation where a
problem like this could go undetected for a couple kernel revisions.
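A compound-event check like that readahead pattern boils down to a small state machine fed by the event stream. A hypothetical sketch (the event names are invented for illustration, not the real block-layer tracepoints, and real matching would also key on device and sector):

```python
def count_pattern(events, pattern):
    """Count occurrences of an event sequence within a trace stream."""
    state, hits = 0, 0
    for ev in events:
        if ev == pattern[state]:
            state += 1
            if state == len(pattern):
                hits += 1
                state = 0
        elif ev == pattern[0]:
            state = 1  # a fresh 'queue' restarts the match
        else:
            state = 0
    return hits

# The inefficient sequence from the lkml posting, as a pattern to flag.
BAD = ["queue", "plug", "queue", "merge", "unplug"]
trace = ["queue", "plug", "queue", "merge", "unplug", "complete"]
print(count_pattern(trace, BAD))  # → 1
```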

Perf and perf trace scripting also support 'live mode' (over the
network if desired), where the trace stream is processed as soon as
it's generated. Getting back to the "/etc/passwd" example - as
mentioned, something an administrator might want would be to monitor
accesses to "/etc/passwd" and see who's trying to access it. With
live mode, a continuously running script could monitor sys_open
calls, compare the opened filename against "/etc/passwd", get the uid
and look up username to find out who's trying to read it, and have
the Python script e-mail the culprit's name to the admin when
detected.
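The core of such a monitor, minus the perf plumbing and the e-mail step, is just a per-event check. A sketch against fake events (the handler name and field list are hypothetical, and the uid-to-name table stands in for a real pwd lookup):

```python
culprits = []
USERS = {0: "root", 1000: "alice"}  # stand-in for a real pwd lookup

def syscalls__sys_enter_open(uid, comm, filename):
    """Live-mode handler: fires as each open event arrives over the network."""
    if filename == "/etc/passwd":
        who = USERS.get(uid, str(uid))
        culprits.append((who, comm))  # a real script would e-mail the admin here

# Fake event stream standing in for a live trace session.
for ev in [(1000, "vim", "/etc/passwd"), (0, "cron", "/var/log/syslog")]:
    syscalls__sys_enter_open(*ev)

print(culprits)  # → [('alice', 'vim')]
```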

Basically, live mode allows for long-running trace sessions that can
continuously scan for rare conditions. Referring back to the
readahead example, one assumption the poster made was that "merging
of a readahead window with anything other than its own sibling" would
be extremely rare. A long-running script could easily be written to
detect this exact condition and either confirm or refute that
assumption, which would be hard to do without some kind of scripting
support.

Perf trace scripting is relatively new, so there aren't yet a lot of
real-world examples - currently there are about 15 canned scripts
available (see 'perf trace -l') including the rw-by-pid and rw-by-file
examples described above.

The main data sources for perf trace scripting are the statically
defined trace events in /sys/kernel/debug/tracing/events.
It's also possible to use the dynamic event sources available from
the 'perf probe' tool, but this is still an area of active
integration at the moment.

Support for remote targets: perf and perf trace scripting 'live-mode'
support allows the trace stream to be piped over the network using
e.g. netcat. Using that mode, the target does nothing but generate
the trace stream and send it over the network to the host, where a
live-mode script can be applied to it. Even so, this is probably not
the most efficient way to transfer trace data - one hope would be
that perf would add support for splice, but that's uncertain at this
point.

----

Tool: SystemTap
URL: http://sourceware.org/systemtap/
Architectures supported: x86, x86_64, ppc, ppc64, ia64, s390, arm

SystemTap is also a system-wide tracing tool that allows users to
write scripts that attach handlers to events and perform complex
aggregation and filtering of the event stream. It's been around for
a long time and thus has a lot of canned scripts available, which
make use of a set of general-purpose script-support libraries called
'tapsets' (see the SystemTap wiki, off of the above link).

The language used to write SystemTap scripts isn't however a
general-purpose language like Perl or Python, but rather a C-like
language defined specifically for SystemTap. The reason for that has
to do with the way SystemTap works - SystemTap scripts are executed
in the kernel, which makes general-purpose language runtimes
off-limits. Basically what SystemTap does is translate a user script
into an equivalent C version, which is then compiled into a kernel
module. Inserting the kernel module attaches the C code to specific
event sources in the kernel - whenever an event is hit, the
corresponding event handler is invoked and does whatever it's told to
do - usually this is updating a counter in a hash table or something
similar. When the tracing session exits, the script typically
calculates and displays a summary of the aggregation(s), or whatever
the user wants it to do.

In addition to the standard set of event sources (the static kernel
tracepoint events, and dynamic events via kprobes) SystemTap also
supports user space probing if the kernel is built with utrace
support. User space probing can be done either dynamically, or
statically if the application contains static tracepoints. A very
interesting aspect of this is that via dtrace-compatible markers, the
existing static dtrace tracepoints contained in, for example, the
Java or Python runtimes can also be used as event sources (e.g. if
they're compiled with --enable-dtrace). This should allow any Python
or Java application to be much more meaningfully traced and profiled
using SystemTap - for example, with complete userspace support,
theoretically every detail of, say, an HTTP request to a Java web
application could be followed, from the network device driver to the
web server, through a Java servlet, and back out through the kernel
again. Supporting this, however, in addition to having utrace support
in the kernel, might also require some SystemTap-specific patches to
the affected applications. Users can also instrument their own
applications using static tracepoints
(http://sourceware.org/systemtap/wiki/AddingUserSpaceProbingToApps).

As mentioned, there are a whole host of scripts available. Examples
include everything from per-process network traffic monitoring,
packet-drop monitoring, per-process disk I/O times, to the same types
of applications described above for 'perf trace scripting'. There
are too many to usefully cover here, see
http://sourceware.org/systemtap/examples/keyword-index.html for a
complete list of the available scripts. Everything in SystemTap is
also very well documented - there are tutorials, handbooks, and a
bunch of useful information on the wiki, such as 'War Stories' and
deep-dives into other use cases - i.e., there's no shortage of useful
info for new (and old) users. I won't cover any specific examples
here - basically all of the motivations and capabilities described
above for 'perf trace scripting' should apply equally well to
SystemTap, and won't be repeated here.

Support for remote targets: SystemTap supports a cross-instrumentation
mode, where only the SystemTap run-time needs to be available on the
target. The instrumentation kernel module is built from a myscript.stp
on the host (stap -r kernel_version myscript.stp -m module_name),
copied over to the target, and executed via 'staprun myscript.ko'.

However, apparently host and target must still be the same
architecture for this to work.

----

Tool: blktrace
URL: http://linux.die.net/man/8/blktrace
Architectures supported: all, nothing arch-specific

Still the best way to get detailed disk I/O traces, and you can do
some really cool things with it:

http://feedblog.org/2010/04/27/2009/

Support for remote targets: Uses splice/sendfile, so the target can,
if it wants, do nothing but generate the trace data and send it over
the network. The blktrace suite fully supports this mode and in fact
encourages it, in order to avoid perturbing the results by writing
trace data on the target.

----

Tool: sysprof
URL: http://www.daimi.au.dk/~sandmann/sysprof/
Architectures supported: all, nothing arch-specific

A nice simple system-wide profiling UI - it profiles the kernel and
all running userspace applications. It displays functions in one
window, and an expandable tree of callees for the selected function
in the other window, all with hit stats. Clicking on a callee in
the callee window shows callers of that function in a third window.

I don't know if this provides much more than OprofileUI, but the
interface is nice and it's popular in some quarters...

----

In summary, each of these tools provides a unique set of useful
capabilities that I think would be very nice to have in Yocto. There
are of course overlaps e.g. both SystemTap and trace-cmd provide
function-callgraph tracing, both trace-cmd and perf trace provide
event-subsystem-based tracing, SystemTap and perf trace scripting both
provide different ways of achieving the same kinds of high-level
aggregation goals, while blktrace, SystemTap, and perf trace scripting
all provide different ways of looking at block I/O. But each tool
also has its own strengths, and does much more than the areas where
they overlap.

At some point some of these tools will completely overlap each
other - for example SystemTap and/or perf trace scripting eventually
will probably do everything blktrace does, and will additionally have
the potential to show that information in a larger context e.g. along
with VFS and/or mm data sources. Making things like that happen -
adding value to those tools or providing larger contexts could be a
focus for future Yocto contributions. On the other hand, it may make
sense in v1.0 to spend a small amount of development time to actually
help provide some coherent integration to all these tools and maybe
contribute to something like perfkit (http://audidude.com/?p=504).
There may not be time to do that, but at least the minimum set of
tools for a great user experience should be available, which I think
the above list goes a long way to providing. Comments welcome...

Tom


Re: [PULL] multimedia upgrades and disk optmization, Dongxiao Xu, 2010/11/12

Xu, Dongxiao <dongxiao.xu@...>
 

Saul Wold wrote:
On 11/12/2010 03:24 AM, Xu, Dongxiao wrote:
Hi Saul,

This pull request contains some gstreamer recipe upgrades and disk
space optimization, please help to review and pull.
Dongxiao,

Will you have a distro tracking pull request also?
Hi Saul,

Here is the pull request for distro tracking fields.

meta/conf/distro/include/distro_tracking_fields.inc | 26 ++++++++++----------
1 file changed, 13 insertions(+), 13 deletions(-)

Dongxiao Xu (1):
distro_tracking: Update distro tracking for gstreamer and gst-* recipes

Pull URL: http://git.pokylinux.org/cgit.cgi/poky-contrib/log/?h=dxu4/distro


Thanks,
Dongxiao

meta/classes/sstate.bbclass
Minor nit: traditionally the options go between the command and the
file list, so I fixed this to be "rm -rf ${SSTATE_BUILDDIR}".
Yes, you are right - thanks for the fix.

Thanks,
Dongxiao


Sau!

 meta/classes/sstate.bbclass                                      |   3
 meta/recipes-multimedia/gstreamer/gst-plugins-bad_0.10.19.bb     |  24
 meta/recipes-multimedia/gstreamer/gst-plugins-bad_0.10.20.bb     |  24
 meta/recipes-multimedia/gstreamer/gst-plugins-base_0.10.29.bb    |  22
 meta/recipes-multimedia/gstreamer/gst-plugins-base_0.10.30.bb    |  22
 meta/recipes-multimedia/gstreamer/gst-plugins-good_0.10.23.bb    |  19
 meta/recipes-multimedia/gstreamer/gst-plugins-good_0.10.25.bb    |  19
 meta/recipes-multimedia/gstreamer/gst-plugins-ugly_0.10.15.bb    |  19
 meta/recipes-multimedia/gstreamer/gst-plugins-ugly_0.10.16.bb    |  19
 meta/recipes-multimedia/gstreamer/gstreamer-0.10.29/check_fix.patch | 17
 meta/recipes-multimedia/gstreamer/gstreamer-0.10.29/gst-inspect-check-error.patch | 14
 meta/recipes-multimedia/gstreamer/gstreamer-0.10.29/gstregistrybinary.c | 487 ----------
 meta/recipes-multimedia/gstreamer/gstreamer-0.10.29/gstregistrybinary.h | 194 ---
 meta/recipes-multimedia/gstreamer/gstreamer-0.10.30/check_fix.patch | 17
 meta/recipes-multimedia/gstreamer/gstreamer-0.10.30/gst-inspect-check-error.patch | 14
 meta/recipes-multimedia/gstreamer/gstreamer-0.10.30/gstregistrybinary.c | 487 ++++++++++
 meta/recipes-multimedia/gstreamer/gstreamer-0.10.30/gstregistrybinary.h | 194 +++
 meta/recipes-multimedia/gstreamer/gstreamer_0.10.29.bb           |  30
 meta/recipes-multimedia/gstreamer/gstreamer_0.10.30.bb           |  30
 19 files changed, 829 insertions(+), 826 deletions(-)

Dongxiao Xu (6):
  sstate.bbclass: Remove the temp sstate-build-* directories in WORKDIR
  gstreamer: Upgrade to version 0.10.30
  gst-plugins-base: Upgraded to version 0.10.30
  gst-plugins-good: Upgraded to version 0.10.25
  gst-plugins-bad: Upgraded to version 0.10.20
  gst-plugins-ugly: Upgraded to version 0.10.16

Pull URL:
http://git.pokylinux.org/cgit.cgi/poky-contrib/log/?h=dxu4/distro
_______________________________________________
yocto mailing list
yocto@yoctoproject.org
https://lists.yoctoproject.org/listinfo/yocto


Re: Tracing/profiling tools for Yocto v1.0

Zhang, Jessica
 

Here's another thread about sysprof, my question is should we support both
oprofile and sysprof or we should be using sysprof which seems a better
tool...

Hi Rob,

Could you please tell us how to contribute to oprofileUI? The
yoctoproject.org is using oprofileUI as a profiling tool, and during the
development we found some bugs of oprofileUI and want to contribute our
patches to fix it.
Okay. Interesting... :-) I'll expedite the move to the gnome
infrastructure. These days most of what you can do with OprofileUI you
shouldn't and instead should look at using the sysprof daemon and then
sysprof GUI. (Given that sysprof builds on perf counters and so isn't
x86 specific any longer.)

Cheerio,

Rob

Zanussi, Tom wrote:
[...]
capabilities that I think would be very nice to have in Yocto. There
are of course overlaps e.g. both SystemTap and trace-cmd provide
function-callgraph tracing, both trace-cmd and perf trace provide
event-subsystem-based tracing, SystemTap and perf trace scripting both
provide different ways of achieving the same kinds of high-level
aggregation goals, while blktrace, SystemTap, and perf trace scripting
all provide different ways of looking at block I/O. But they also
each have their own strengths as well, and do much more than what
they do in overlap.

At some point some of the these tools will be completely overlap each
other - for example SystemTap and/or perf trace scripting eventually
will probably do everything blktrace does, and will additionally have
the potential to show that information in a larger context e.g. along
with VFS and/or mm data sources. Making things like that happen -
adding value to those tools or providing larger contexts could be a
focus for future Yocto contributions. On the other hand, it may make
sense in v1.0 to spend a small amount of development time to actually
help provide some coherent integration to all these tools and maybe
contribute to something like perfkit (http://audidude.com/?p=504).
There may not be time to do that, but at least the minimum set of
tools for a great user experience should be available, which I think
the above list goes a long way to providing. Comments welcome...

Tom


Tracing/profiling tools for Yocto v1.0

Tom Zanussi <tom.zanussi@...>
 

Hi,

For the 1.0 Yocto release, we'd like to have as complete a set of
tracing and profiling tools as possible, enough so that most users will
be satisfied with what's available, but not so many as to produce a
maintenance burden.

The current set is pretty decent:

latencytop
powertop
lttng
lttng-ust
oprofile(ui)
trace-cmd
perf

but there seems to be an omission or two with respect to the current set
as packaged in Yocto, and there are a few other tools that I think would
make sense to add, either to address a gap in the current set, or
because they're popular enough to be missed by more than a couple
users:

KernelShark
perf trace scripting support
SystemTap
blktrace
sysprof

These are just my own opinions regarding what I think is missing - see
below for more details on each tool, and some reasons I think it would
make sense to include them. If you disagree, or even better, have
suggestions for other tools that you think are essential and missing,
please let me know. Otherwise, I plan on adding support for them to
Yocto in the very near future (e.g. starting next week).

Just one note - I know that some of these may not be appropriate for all
platforms; in those cases, I'd expect they just wouldn't be included in
the images for those machines. Actually, except for sysprof and
KernelShark, they all have modes that should allow them to be used with
minimal footprints on the target system, and even then I think
KernelShark and sysprof could both be relatively easily retrofitted with
a remote layer like OprofileUI's that would make them lightweight on the
target.

Anyway, on to some descriptions of the tools themselves, followed by a
short summary at the end...

----

Tool: KernelShark
URL: http://rostedt.homelinux.com/kernelshark/
Architectures supported: all, nothing arch-specific

KernelShark is a front-end GUI interface to trace-cmd, a tracing tool
that's already included in the Yocto SDK (trace-cmd basically provides
an easier-to-use text-based interface to the raw debugfs tracing files
contained in /sys/kernel/debug/tracing).

Tracing can be started and stopped from the GUI; when the trace session
ends, the results are displayed in a couple of sub-windows: a graphical
area that displays events for each CPU but that can also display
per-task graphs, and a listbox that displays a detailed list of events
in the trace. In addition to display of raw events, it also supports
display of the output of the kernel's ftrace plugins
(/sys/kernel/debug/tracing/available_tracers) such as the function and
function_graph tracers, which are very useful on their own for figuring
out exactly what the kernel does in particular codepaths.
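For reference, the raw text these debugfs files emit (and that trace-cmd records in binary form) is line-oriented and easy to pick apart. A minimal Python sketch of parsing one such line — the field layout follows the standard ftrace text format, but the sample line and the regex field names here are illustrative, not taken from any particular kernel version:

```python
# Sketch: parse one line of ftrace's human-readable output
# (cat /sys/kernel/debug/tracing/trace). The sample line below is
# invented; the general layout is TASK-PID [CPU] FLAGS TIMESTAMP: EVENT: detail
import re

FTRACE_LINE = re.compile(
    r"^\s*(?P<comm>.+)-(?P<pid>\d+)\s+"      # task name and pid
    r"\[(?P<cpu>\d+)\]\s+"                   # cpu number
    r"(?P<flags>\S+)\s+"                     # irqs-off/need-resched flags
    r"(?P<ts>[\d.]+):\s+"                    # timestamp in seconds
    r"(?P<event>\w+):\s*(?P<detail>.*)$"     # event name and payload
)

def parse_ftrace_line(line):
    m = FTRACE_LINE.match(line)
    if not m:
        return None
    d = m.groupdict()
    d["pid"] = int(d["pid"])
    d["cpu"] = int(d["cpu"])
    d["ts"] = float(d["ts"])
    return d

sample = "            bash-2264  [001] d...  1078.875219: sched_switch: prev_comm=bash prev_pid=2264"
evt = parse_ftrace_line(sample)
```

Tools like trace-cmd and KernelShark do essentially this (against the binary format, with much more care) before any display or filtering happens.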

One very nice KernelShark feature is the ability to easily toggle the
individual events or event subsystems of interest; specifying these
manually is usually one of the most unpleasant parts of command-line
tracing; for this reason alone KernelShark is worth looking at, as it
makes the whole tracing experience much more manageable and enjoyable
(and therefore more likely to be used). Additionally, the extensive
support for filtering and searching is very useful. The GUI itself is
also extensible via Python plug-ins. All in all a great tool for
running and viewing traces.

Support for remote targets: The event subsystem and ftrace plugins that
provide the data for trace-cmd/KernelShark are completely implemented
within the kernel; both control and trace stream data retrieval are
accessed via debugfs files. The files that provide the data retrieval
function are accessible via splice, which means that the trace streams
could be easily sent over the network and processed on the host. The
current KernelShark code doesn't do that - currently the UI needs to run
on the target - but that would be an area where Yocto could add some
value - it shouldn't be a huge amount of effort to add that capability.
In the worst case, something along the lines of what OprofileUI does
(start/stop the trace on the target, and send the results back when
done) could also be acceptable as a local stopgap solution.

----

Tool: perf trace scripting support
URL: none, included in the kernel sources
Architectures supported: all, nothing arch-specific

Yocto already includes the 'perf' tool, which is a userspace tool that's
actually bundled as part of the mainline linux kernel source. 'perf
trace' is a subtool of perf that performs system-wide (or per-task)
event tracing and displays the raw trace event data using format strings
associated with each trace event. In fact, the events and event
descriptions used by perf are the same as those used by
trace-cmd/KernelShark to generate its traces (the kernel event
subsystem, see /sys/kernel/debug/tracing/events).

As is the case with KernelShark, the reams of raw trace data provided by
perf trace provide a lot of useful detail, but the question becomes how
to realistically extract useful high-level information from it. You
could sit down and pore through it for trends or specific conditions (no
fun, and it's not really humanly possible with large data sets).
Filtering can be used, but that only goes so far. Realistically, to
make sense of it, it needs to be 'boiled down' somehow into a more
manageable form. The fancy word for that is 'aggregation', which
basically just means 'sticking the important stuff in a hash table'.
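That kind of aggregation is trivially easy to sketch. Assuming a stream of (event_name, pid) pairs already parsed out of a trace (the events below are invented sample data):

```python
# "Sticking the important stuff in a hash table": boil a raw event
# stream down to per-pid and per-event counts. Sample data is invented.
from collections import Counter

events = [
    ("sys_enter_read", 1042),
    ("sys_enter_write", 1042),
    ("sys_enter_read", 77),
    ("sys_enter_read", 1042),
]

per_pid = Counter(pid for _, pid in events)      # which tasks are busiest
per_event = Counter(name for name, _ in events)  # which events dominate
```

Everything past this point — per-task I/O summaries, latency histograms, and so on — is variations on this one idea.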

The perf trace scripting support embeds scripting language interpreters
into perf to allow perf's internal event dispatch mechanism to call
script handlers directly (script handlers can also call back into perf).
The scripting_ops interface formalizes this interaction and allows any
scripting engine that implements the API to be used as a full-fledged
event-processing language - currently Python and Perl are implemented.

Events are exposed in the scripting interpreter as function calls, where
each parameter is an event field (as listed in the event's format
pseudo-file in the kernel event subsystem). During processing, every
event in the trace stream is converted into a corresponding function
call in the scripting language. At that point, the handler can do
anything it wants using the available facilities of the scripting
language - for example, aggregate the event data in a hash table.

A starter script with handlers for each event type can be automatically
generated from existing trace data using the 'perf trace -g' command.
This allows for one-off, quick turnaround trace experiments. But
scripts can be 'promoted' to full-fledged 'perf trace' scripts that
essentially become part of perf and can be listed using 'perf trace -l'.
This involves simply writing a couple wrapper shell scripts and putting
them in the right places.
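The generated handlers are plain functions named subsystem__event, plus a few lifecycle hooks such as trace_end(). A rough sketch of that shape, driven by hand with invented events so the flow is visible — the parameter list here is abbreviated and illustrative, since real generated handlers receive every field of the event:

```python
# Sketch of the handler style used by perf trace Python scripts: one
# function per event plus trace_end() for the summary. The events fed
# in at the bottom are invented, standing in for perf's event loop.

reads_by_comm = {}

def syscalls__sys_enter_read(event_name, context, common_comm, common_pid, fd, count):
    # aggregate: total requested read bytes per task name
    reads_by_comm[common_comm] = reads_by_comm.get(common_comm, 0) + count

def trace_end():
    # called once when the trace stream ends; return the summary sorted
    # by descending byte count
    return sorted(reads_by_comm.items(), key=lambda kv: -kv[1])

# Hand-driven dispatch with invented events:
syscalls__sys_enter_read("sys_enter_read", None, "bash", 2264, 3, 4096)
syscalls__sys_enter_read("sys_enter_read", None, "sshd", 880, 5, 512)
syscalls__sys_enter_read("sys_enter_read", None, "bash", 2264, 3, 4096)
summary = trace_end()
```

In a real script, perf itself performs the dispatch; the script author only fills in the handler bodies.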

In general, perf trace scripting is a useful tool to have when the
standard set of off-the-shelf tools isn't really enough to analyze a
problem. To take a simple example, using tools like iostat you can get
a general statistical idea of the read/write activity on the system, but
those tools won't tell you which processes are actually responsible for
most of the I/O activity. The canned 'rw-by-pid' script in perf
trace uses the system-call read/write tracepoints
(sys_enter/exit_read/write) to capture all the reads and writes (and
failed reads/writes) of every process on the system and at the end
displays a detailed per-process summary of the results. That
information can be used to determine which processes are responsible for
the most I/O activity on the system, which can in turn be used to target
and drill down into the detailed read/write activity caused by a
specific process using, for example, the rw-by-file canned script, which
displays the per-file read/write activity for a specific process.
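The core of a script like rw-by-pid is just bookkeeping over the enter/exit events, separating successes from failures. A condensed, self-contained sketch of that idea with invented events — not the actual canned script:

```python
# Condensed sketch of rw-by-pid-style bookkeeping: sys_exit_read carries
# the syscall return value, so bytes actually read and failed reads can
# be tallied per pid. The (pid, retval) samples below are invented.
exit_read_events = [
    (1042, 4096),   # read returned 4096 bytes
    (1042, -11),    # read failed with -EAGAIN
    (77, 512),
    (1042, 1024),
]

bytes_read = {}
failed_reads = {}

for pid, ret in exit_read_events:
    if ret >= 0:
        bytes_read[pid] = bytes_read.get(pid, 0) + ret
    else:
        failed_reads[pid] = failed_reads.get(pid, 0) + 1
```

The per-process summary falls straight out of the two tables at the end of the trace.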

To give a couple more concrete examples of how this capability can be
useful, here are some things that can only be done with scripting, such
as detecting complex or 'compound' events.

Simple hard-coded filters and triggers can scan data for simple
conditions e.g. someone tried to read /etc/passwd. This kind of thing
should be possible with the current event filtering capabilities even
without scripting support e.g. scan the event stream for events that
satisfy the condition:

event == vfs_open && filename == "/etc/passwd"

(This would tell you that someone tried to open /etc/passwd, but that
in itself isn't very useful - you'd really like to at least know who,
which of course could be accomplished by scripting.)
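Scripted, the 'who' part is only a few lines. A sketch over invented, already-parsed open events — the tuple fields here are illustrative, not the exact tracepoint field names:

```python
# Sketch: scan an event stream for opens of a sensitive file and record
# who did it. The (event, comm, pid, filename) tuples are invented; a
# real script would read these fields from sys_enter_open/openat events.
events = [
    ("sys_enter_open", "bash", 2264, "/etc/hostname"),
    ("sys_enter_open", "cat",  3001, "/etc/passwd"),
    ("sys_enter_open", "perl", 3002, "/etc/passwd"),
]

culprits = [(comm, pid) for ev, comm, pid, fn in events
            if ev == "sys_enter_open" and fn == "/etc/passwd"]
```

The filter expression above is exactly the earlier condition, plus the extra context (comm, pid) that a bare filter can't report.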

But a lot of other problems involve pattern matching over multiple
events. One example from a recent lkml posting:

The poster had noticed that in block I/O data, multiple readahead
requests resulted in an unnecessarily inefficient pattern:

- queue first request
- plug queue
- queue second adjacent request
- merge
- unplug, issue, complete

In the case of readahead, latency is extremely important for throughput:
explicitly unplugging after each readahead increased throughput by 68%.
It's interesting to note that older kernels didn't have this problem,
but some unknown commit(s) introduced it.

This is the type of pattern that you would really need scripting support
in order to detect. A simple script to check for this condition and
detect a regression such as this could be quickly written and made
available, and possibly avoid the situation where a problem like this
could go undetected for a couple kernel revisions.
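Detecting that pattern is a small per-queue state machine over the block events. A sketch, with event names modeled loosely on the kernel's block: tracepoints and an invented event stream:

```python
# Sketch: flag the inefficient "queue, plug, queue adjacent, merge"
# sequence in a block I/O event stream. Event names are modeled loosely
# on the block: tracepoints; the stream below is invented sample data.
def find_plug_merge_pattern(stream):
    hits = 0
    state = "idle"
    for ev in stream:
        if state == "idle" and ev == "block_rq_insert":
            state = "queued"
        elif state == "queued" and ev == "block_plug":
            state = "plugged"
        elif state == "plugged" and ev == "block_rq_insert":
            state = "second_queued"
        elif state == "second_queued" and ev == "block_rq_merge":
            hits += 1          # the pattern the poster observed
            state = "idle"
        elif ev == "block_unplug":
            state = "idle"
    return hits

stream = ["block_rq_insert", "block_plug", "block_rq_insert",
          "block_rq_merge", "block_unplug", "block_rq_insert"]
hits = find_plug_merge_pattern(stream)
```

A real script would track this per request queue and per sector range, but the structure is the same.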

Perf and perf trace scripting also support 'live mode' (over the network
if desired), where the trace stream is processed as soon as it's
generated. Getting back to the "/etc/passwd" example - as mentioned,
something an administrator might want would be to monitor accesses to
"/etc/passwd" and see who's trying to access it. With live mode, a
continuously running script could monitor sys_open calls, compare the
opened filename against "/etc/passwd", get the uid and look up username
to find out who's trying to read it, and have the Python script e-mail
the culprit's name to the admin when detected.
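In live mode the script sees each event as it arrives rather than after the fact. The same check written as a streaming consumer — the events are invented again, and notify() is a stand-in for the actual e-mail step:

```python
# Sketch: live-mode style processing - consume events one at a time as
# they arrive and fire an action immediately on a match. notify() is a
# stand-in for "e-mail the admin"; the events are invented samples.
notifications = []

def notify(user):
    notifications.append(f"{user} read /etc/passwd")

def live_monitor(event_iter):
    for comm, uid, filename in event_iter:
        if filename == "/etc/passwd":
            notify(f"{comm}(uid={uid})")   # act as soon as the event is seen

live_monitor(iter([
    ("bash", 1000, "/etc/hostname"),
    ("cat",  1001, "/etc/passwd"),
]))
```

Because the consumer never accumulates the whole trace, a monitor like this can run indefinitely with a bounded footprint.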

Basically, live mode allows for long-running trace sessions that can
continuously scan for rare conditions. Referring back to the readahead
example, one assumption the poster made was that "merging of a readahead
window with anything other than its own sibling" would be extremely
rare. A long-running script could easily be written to detect this
exact condition and either confirm or refute that assumption, which
would be hard to do without some kind of scripting support.

Perf trace scripting is relatively new, so there aren't yet a lot of
real-world examples - currently there are about 15 canned scripts
available (see 'perf trace -l') including the rw-by-pid and rw-by-file
examples described above.

The main data source for perf trace scripting are the statically defined
trace events defined in /sys/kernel/debug/tracing/events. It's also
possible to use the dynamic event sources available from the 'perf
probe' tool, but this is still an area of active integration at the
moment.

Support for remote targets: perf and perf trace scripting 'live-mode'
support allows the trace stream to be piped over the network using e.g.
netcat. Using that mode, the target does nothing but generate the trace
stream and send it over the network to the host, where a live-mode
script can be applied to it. Even so, this is probably not the most
efficient way to transfer trace data - one hope would be that perf would
add support for splice, but that's uncertain at this point.

----

Tool: SystemTap
URL: http://sourceware.org/systemtap/
Architectures supported: x86, x86_64, ppc, ppc64, ia64, s390, arm

SystemTap is also a system-wide tracing tool that allows users to write
scripts that attach handlers to events and perform complex aggregation
and filtering of the event stream. It's been around for a long time and
thus has a lot of canned scripts available, which make use of a set of
general-purpose script-support libraries called 'tapsets' (see the
SystemTap wiki, off of the above link).

The language used to write SystemTap scripts isn't however a
general-purpose language like Perl or Python, but rather a C-like
language defined specifically for SystemTap. The reason for that has to
do with the way SystemTap works - SystemTap scripts are executed in the
kernel, which makes general-purpose language runtimes off-limits.
Basically what SystemTap does is translate a user script into an
equivalent C version, which is then compiled into a kernel module.
Inserting the kernel module attaches the C code to specific event
sources in the kernel - whenever an event is hit, the corresponding
event handler is invoked and does whatever it's told to do - usually
this is updating a counter in a hash table or something similar. When
the tracing session exits, the script typically calculates and displays
a summary of the aggregation(s), or whatever the user wants it to do.

In addition to the standard set of event sources (the static kernel
tracepoint events, and dynamic events via kprobes) SystemTap also
supports user space probing if the kernel is built with utrace support.
User space probing can be done either dynamically, or statically if the
application contains static tracepoints. A very interesting aspect of
this is that via dtrace-compatible markers, the existing static dtrace
tracepoints contained in, for example, the Java or Python runtimes can
also be used as event sources (e.g. if they're compiled with
--enable-dtrace). This should allow any Python or Java application to
be much more meaningfully traced and profiled using SystemTap - for
example, with complete userspace support theoretically every detail of
say an http request to a Java web application could be followed, from
the network device driver to the web server through a Java servlet and
back out through the kernel again. Supporting this however, in addition
to having utrace support in the kernel, might also require some
SystemTap-specific patches to the affected applications. Users can also
instrument their own applications using static tracepoints
(http://sourceware.org/systemtap/wiki/AddingUserSpaceProbingToApps).

As mentioned, there are a whole host of scripts available. Examples
include everything from per-process network traffic monitoring,
packet-drop monitoring, per-process disk I/O times, to the same types of
applications described above for 'perf trace scripting'. There are too
many to usefully cover here, see
http://sourceware.org/systemtap/examples/keyword-index.html for a
complete list of the available scripts. Everything in SystemTap is also
very well documented - there are tutorials, handbooks, and a bunch of
useful information on the wiki such as 'War Stories' and deep-dives into
other use cases, i.e. there's no shortage of useful info for new (and
old) users. I won't cover any specific examples here - basically all of
the motivations and capabilities described above for 'perf trace
scripting' should apply equally well to SystemTap, and won't be repeated
here.

Support for remote targets: SystemTap supports a cross-instrumentation
mode, where only the SystemTap run-time needs to be available on the
target. The instrumentation kernel module derived from a myscript.stp,
generated on the host (stap -r kernel_version myscript.stp -m module_name),
is copied over to the target and executed via 'staprun myscript.ko'.

However, apparently host and target must still be the same architecture
for this to work.

----

Tool: blktrace
URL: http://linux.die.net/man/8/blktrace
Architectures supported: all, nothing arch-specific

Still the best way to get detailed disk I/O traces, and you can do some
really cool things with it:

http://feedblog.org/2010/04/27/2009/

Support for remote targets: blktrace uses splice/sendfile, so the target
can, if desired, do nothing but generate the trace data and send it over
the network. The blktrace utility itself handles data collection and
fully supports this mode - in fact it's encouraged, to avoid the
perturbation caused by writing trace data on the target (blkparse then
formats the collected data on the host).

----

Tool: sysprof
URL: http://www.daimi.au.dk/~sandmann/sysprof/
Architectures supported: all, nothing arch-specific

A nice simple system-wide profiling UI - it profiles the kernel and all
running userspace applications. It displays functions in one window, and
an expandable tree of callees for the selected function in the other
window, all with hit stats. Clicking on a callee in the callee window
shows callers of that function in a third window.

I don't know if this provides much more than OprofileUI, but the
interface is nice and it's popular in some quarters...

----

In summary, each of these tools provides a unique set of useful
capabilities that I think would be very nice to have in Yocto. There
are of course overlaps e.g. both SystemTap and trace-cmd provide
function-callgraph tracing, both trace-cmd and perf trace provide
event-subsystem-based tracing, SystemTap and perf trace scripting both
provide different ways of achieving the same kinds of high-level
aggregation goals, while blktrace, SystemTap, and perf trace scripting
all provide different ways of looking at block I/O. But each also has
its own strengths, and does much more than the areas where they overlap.

At some point some of these tools will completely overlap each
other - for example, SystemTap and/or perf trace scripting will
probably eventually do everything blktrace does, and will additionally have
the potential to show that information in a larger context e.g. along
with VFS and/or mm data sources. Making things like that happen -
adding value to those tools or providing larger contexts could be a
focus for future Yocto contributions. On the other hand, it may make
sense in v1.0 to spend a small amount of development time to actually
help provide some coherent integration to all these tools and maybe
contribute to something like perfkit (http://audidude.com/?p=504).
There may not be time to do that, but at least the minimum set of tools
for a great user experience should be available, which I think the above
list goes a long way to providing. Comments welcome...

Tom


Re: Problem in running poky-qemu

Scott Garman <scott.a.garman@...>
 

On 11/12/2010 03:31 AM, sachin kumar wrote:
Hi:

i am sorry, again i am getting same message.
Sachin,

Please also see my reply to this thread. Try adding both "qemuppc" and "ext3" to the list of options you're passing to poky-qemu. I was able to reproduce your problem and work around it by adding those two options.

Scott

--
Scott Garman
Embedded Linux Distro Engineer - Yocto Project