
Small but valuable: automatically cleaning up the clutter

If you are working with a larger stack of recipes and update them frequently, you'll inevitably reach the point where a recipe becomes obsolete.

Nothing in your stack will use it anymore, so it is basically just a burden (and a potential security risk).

I had this situation with the insane amount of npm packages I maintain for my meta-sca layer. Those change nearly on a daily basis, with new dependencies coming in and old ones being replaced.

One could read all the changelogs, but let's be honest: nobody does that, except maybe for a chosen few recipes - so the question remains...

How do I identify obsolete recipes?

Simple: by looking up how the recipes in a layer depend on each other - kind of obvious, isn't it :-).
Lucky me, I don't have to do that manually. We are programmers, we automate stuff - so I did:

The result can be found in my meta-buildutils layer - a small script called unused.

It can be used without setting up bitbake at all - it just works with the power of plain Python.

What does it do?

It simply scans all the recipes in a layer for their DEPENDS and RDEPENDS values and puts them into relation with each other. A recipe that no other recipe depends on can be considered obsolete and therefore be removed.

...but what about images and feature xyz?

You're absolutely right - some recipes, images for instance, usually have no one depending on them. They are the end of the line, the stuff we are actually building... But fear not: you can easily configure exceptions in the script, so only the truly obsolete recipes will be identified.

Automate that process

And as I said, I like to automate stuff, so I introduced a per-layer configuration file called .unusedignore (an example can be found here), in which one can define all the ignores of a layer as part of the code.
So after every recipe update attempt, the unused script is run and puts the cruft into the bin, where it belongs.
And the charming thing is that the script will automatically use this configuration file if it is found in the path - zero config, but 100% convenience...

Happy cleaning, and be sure to check out the other pieces in meta-buildutils.
