
Best of both worlds

In one of my posts last year I mentioned that, with the right tooling, one can very easily post automated comments to GitLab - especially comments coming from linting tools.
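
Just to illustrate the idea (this is a minimal sketch, not the exact tooling from that post), a lint finding can be posted as a merge request comment via the GitLab notes API. The CI_* variables are the usual ones GitLab CI provides; GITLAB_TOKEN is assumed to be an access token you supply yourself:

# Sketch: post a linting finding as a comment on the current GitLab merge request.
# Assumes it runs inside a GitLab CI merge request pipeline.
import os
import requests

finding = "warning: foo.c:42: variable 'bar' set but not used"

url = "{}/projects/{}/merge_requests/{}/notes".format(
    os.environ["CI_API_V4_URL"],
    os.environ["CI_PROJECT_ID"],
    os.environ["CI_MERGE_REQUEST_IID"],
)
resp = requests.post(
    url,
    headers={"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]},
    data={"body": finding},
)
resp.raise_for_status()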

That way every author, reviewer and maintainer gets feedback on any proposed change as fast as possible.

This is super easy and very convenient as long as you always do a full build and every source in your project is actually checked.

Why do we need something new if that is working so well?

In the bitbake world things are different: we have powerful tools like the sstate cache, along with other mechanisms, to avoid building everything from scratch all the time.
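
As a reminder of how that typically looks (paths and URLs here are just placeholders), the sstate cache is usually wired up in local.conf along these lines:

# local.conf sketch: keep a local sstate cache and reuse a shared one over HTTP
SSTATE_DIR = "/srv/build/sstate-cache"
SSTATE_MIRRORS = "file://.* https://sstate.example.com/PATH;downloadfilename=PATH"

With that in place, tasks whose hashes are already known are simply restored from the cache instead of being rebuilt.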

This makes it tricky to map findings from the meta-sca layer (which fully supports sstate caching) to a pull or merge request, as we can never be sure we have the full picture.

Moving from the outside, right into it

So it was very clear that the commenting part of a CI pipeline needed to be done with the help of bitbake too... et voilà, scabot was born.

The tool comments on PRs (GitHub) and MRs (GitLab) with everything found that is directly or indirectly linked to the proposed changes - always having the full picture of the impact of any single change. A maintainer's dream come true.

Give it a try

To see what the bot can do for you, just give it a try and read the documentation.

Feedback is highly appreciated, so if you have found a bug, have a suggestion or are using a currently unsupported SCM system, please feel free to reach out.
