
Posts

Showing posts with the label "GitHub".

Best of both worlds

In one of my posts last year I mentioned that one can post automated comments to GitLab very easily with the right tooling - especially if they come from linting tools. That way every author, reviewer, and maintainer gets feedback on any proposed change as fast as possible. This is super easy and very convenient as long as you always do a full build and every possible source in your project is actually checked. So why would we need something new, if that works so well...? In the bitbake world things are different: we have powerful tools like the sstate cache, along with other mechanisms, to avoid exactly that - building everything from scratch all the time. This makes it tricky to map findings from the meta-sca layer (which fully supports sstate caching) to a pull or merge request, as we can never be sure to have the full picture.

Moving from the outside, right into it

So it was very clear that the commenting part of a CI pipeline needed to be done with the help of bitbake too... et voilà, scabot ...
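To illustrate the "commenting from the outside" approach the post starts from, here is a minimal sketch of posting a lint finding as a note on a GitLab merge request via the REST API. This is not the scabot implementation - the `FINDING` text and the `GITLAB_TOKEN` secret are assumptions; the `CI_*` variables are the ones GitLab CI predefines.

```sh
#!/bin/sh
# Sketch: post one lint finding as a comment on the current merge request.
# GITLAB_TOKEN is an assumed CI secret with 'api' scope.
FINDING="warning: unused variable 'foo' (src/main.c:42)"

curl --request POST \
     --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
     --data-urlencode "body=${FINDING}" \
     "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/merge_requests/${CI_MERGE_REQUEST_IID}/notes"
```

This works fine when a full build produced the finding; the post's point is that with sstate-cached builds a step like this, sitting outside of bitbake, never knows whether it saw all findings.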

The journey through time and (disk)space

In this post I'm telling the story of my journey, which started with my blog post about using GitHub Actions as my main CI provider - in case you missed it, see here - on how to do a full yocto/poky build with just ~14GB of disk space. Typically disk space is cheap nowadays and can be used without thinking too much about it - which leads to disk usage of 50GB and more for a yocto/poky build. So what are the options when disk space becomes precious?

Constraints, constraints, constraints

While GitHub Actions is free for open source projects (like mine), it is highly limited in terms of resources. Currently (2020/04/11) you get ( source ):

- a 2-core CPU
- 7 GB of RAM
- 14 GB of SSD disk space
- a maximum of 6 hours per pipeline

That's not very much when you think about doing a poky/yocto build on it, so every byte actually counts (see the configuration sketch after this excerpt).

Constant pain

As the involved layers are constantly growing, one has to care about every byte that could be saved, without giving up the overall a...
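As a taste of where the post is heading, here is a minimal sketch of the kind of local.conf tweaks that cut down disk usage. `rm_work` is the standard bitbake class for this; the package-class choice is my assumption, not necessarily what the post ends up using.

```sh
# Sketch: disk-saving settings appended to conf/local.conf
cat >> conf/local.conf <<'EOF'
# delete each recipe's work directory right after it has been built
INHERIT += "rm_work"
# build a single lightweight package format instead of the default rpm
# to trim TMPDIR a bit further (assumption, not from the post)
PACKAGE_CLASSES = "package_ipk"
EOF
```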

Keeping it fresh - the lazy way

When you're working with bitbake, especially when you're maintaining recipes, you might have asked yourself: how do I know which recipes I need to update? As we all read the YOCTO manual from beginning to the very end (as all good participants do ;-)), you might be aware that you can check the `upstream` status of a package/recipe by running

devtool check-upgrade-status myrecipe

which returns (if there is an update available)

INFO:myrecipe            0.12.8          0.12.9          None 297cb2458a96ea96d5e9d6ef38f1b7305c071f32

That means you're currently running version 0.12.8 and an update to 0.12.9 is available at the defined source you're pulling your sources from. So far, so easy - but wait: do I have to do all that manually, each and every time?

Automating things

No, of course not. I'm using GitHub for hosting my sources, so I thought it would be quite convenient to let some ...
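A minimal sketch of the automation idea (not the exact tooling the post ends up with): run the check for all recipes at once and keep only those with a newer upstream version. `devtool check-upgrade-status` without arguments checks every recipe; the awk parsing is an assumption based on the output format shown above.

```sh
#!/bin/sh
# Sketch: collect all recipes that have an upstream update available.
. ./oe-init-build-env build

devtool check-upgrade-status 2>&1 \
  | awk '/^INFO:/ { sub(/^INFO:/, "", $1); if ($2 != $3) print $1 ": " $2 " -> " $3 }' \
  > upgradable.txt

# upgradable.txt can then be fed to whatever files issues/PRs on GitHub
cat upgradable.txt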

Using a bitbake CI - For Free

This time I want to write about an issue everybody maintaining a git repo might have faced already - CI. In theory, every push and every pull request should be built against all supported layer versions... well, in theory.

The issue

If you have a local setup, it's sometimes hard to switch between layer versions - I agree the usage of repo is highly recommended here, as it simplifies such work heavily (see the sketch after this excerpt). Nevertheless you might need multiple workspaces, which all need a lot of disk space. Roughly calculated, you can expect ~50GB of data per architecture/distro combination without the "rm_work" bbclass, and around 15GB if you're using it. So if you decide to support more than 3 layer versions of YOCTO, that's a lot of space blocked by a lot of redundant data. Not to mention that you need to build everything now and then to find out whether your code still works or not.

Solution: CI

This is where you pick a CI provider - Jenkins immediately comes in...
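For reference, a sketch of the repo-based multi-workspace setup mentioned above - the manifest URL and the release branches are placeholders, not real repositories, and this is exactly the disk-hungry approach the post then replaces with a CI provider.

```sh
#!/bin/sh
# Sketch: one repo-managed workspace per supported YOCTO release.
for release in warrior zeus dunfell; do
    mkdir -p "workspace-${release}"
    (cd "workspace-${release}" \
      && repo init -u https://github.com/example/manifest -b "${release}" \
      && repo sync)
done
```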