Project Unity: Consolidating the Autobuilder into a Single Monolithic Build


Update April 2, 2026:

The launch of Project Unity has been postponed to April 1, 2027. The proof-of-concept completed its first full invocation on March 14. Every target built. Every test passed. The CI dashboard, for the first time in the recorded history of the project, is entirely green. No flaky tests. No intermittent failures. No unexplained segfaults. Nothing. The Fellowship has spent the past two weeks attempting to determine what has gone wrong. A build system that has never once produced a fully clean run across all targets does not simply begin doing so without cause. Purdie has recommended that no changes be merged until the anomaly is understood. Halstead has taken the autobuilder offline as a precaution. Knight has requested that nobody speak of it until further notice. The investigation continues.

________________________________

Background

For over a decade, the Yocto Project autobuilder has grown organically. What began as a manageable set of build targets has expanded into a sprawling matrix of architectures, BSPs, images, SDKs, and runtime test configurations — each with its own scheduling, resource allocation, and failure modes. The infrastructure has served us well, but maintaining it has become increasingly costly in both human effort and compute resources.

Over the past year, a pattern has emerged in TSC discussions: build failures in one configuration frequently mask failures in others. Flaky tests in one target cause cascading delays across the entire
matrix. Bisection is complicated by the sheer number of independent build jobs. The system, in short, has become fragmented — and that fragmentation is the root cause of many of our most persistent reliability issues.

Joshua Watt was the first to quantify the scope of the problem. His analysis of autobuilder logs from the past eighteen months showed that approximately 34% of developer time spent on build failures was attributable not to genuine regressions, but to infrastructure fragmentation — resource contention between parallel jobs, inconsistent environment state, and scheduling race conditions that would not exist in a unified build.

The Council

The matter was formally raised at the TSC meeting on February 19, 2026, chaired by Alexander Kanavin. The discussion — which Kanavin later described as “longer than any reasonable meeting should be” — ran for nearly four hours and involved every active TSC member and several invited contributors.

The core question was deceptively simple: could the autobuilder’s reliability problems be solved by eliminating the matrix entirely and building everything — every architecture, every BSP, every image, every SDK — in a single, sequential, monolithic build invocation?

Opinions were divided.

Richard Purdie, who has maintained the autobuilder infrastructure for longer than most contributors have been involved with the project, spoke at length about the historical context. “I’ve seen build systems come and go,” he noted. “Fragmentation always seems like the right approach until you’ve spent fifteen years watching the pieces drift apart. There’s something to be said for keeping everything in one place — where you can see it, where you can control it.” He cautioned, however, that the path would be long and that those who undertook it might not fully understand the burden until they were well along it.

Mark Hatle, who had been largely absent from recent TSC discussions due to other commitments, returned for this meeting and surprised several attendees by voicing strong support. Hatle pointed out that he had maintained unified build configurations for internal use at his previous employer for years, and that the approach was not as impractical as it might appear. “This is not a new idea,” he said. “It’s an old one whose time has come back around.”

Not everyone agreed. Philip Balister was openly skeptical, arguing that the compute requirements alone made the proposal impractical. “You are talking about a build that could run for days,” he warned. “The infrastructure costs would be enormous, and I am not convinced the reliability gains would materialize.”

Nicholas Dechesne took a different position. He agreed that consolidation was necessary, but argued that the resulting unified build should be directed toward his own multi-architecture validation efforts specifically. “If we are going to build everything, we should build it where it is needed most,” he said. He returned to this point several times throughout the discussion, each time with increasing conviction. Kanavin eventually asked him to table the proposal for a future meeting.

Andy Wafaa was initially dismissive, calling the idea “a solution in search of a mandate.” By the end of the meeting, however, after reviewing Watt’s data more closely, he reversed his position and agreed to support a trial run. “I don’t love it,” he said, “but I cannot argue with the numbers.”

Ross Burton, notably, did not attend the meeting. It was later learned that Burton had been independently developing his own build consolidation prototype — a highly optimized but architecturally incompatible approach that he had not shared with the TSC. When informed of the Council’s decision, Burton reportedly expressed displeasure, stating that his approach was “superior in every measurable dimension.” The TSC acknowledged his work but declined to adopt it, citing concerns about long-term maintainability and philosophical differences in design.

The discussion concluded with a unanimous decision — Balister abstaining — to proceed with a proof-of-concept. The question of who would lead the implementation proved more contentious than the technical decision itself.

The Fellowship

Several senior contributors immediately volunteered, but Purdie argued that the project lead should be someone who could approach the problem without the accumulated assumptions that come with years of autobuilder experience. The role ultimately fell to Megan Knight, a relatively recent contributor to the project’s build infrastructure. Knight was initially reluctant but accepted after Michael Halstead agreed to serve as co-lead, providing the operational continuity that the project would require.

“I wouldn’t have said yes without Michael,” Knight acknowledged. “He knows the autobuilder infrastructure better than anyone. Wherever this goes, it doesn’t go anywhere without him.”

The full implementation team — internally referred to as the “Fellowship,” a name that stuck after Halstead used it in an offhand IRC comment — consists of nine members:

  • Megan Knight — Project lead, build configuration design
  • Michael Halstead — Infrastructure and operations
  • Richard Purdie — Advisory role, BitBake integration
  • Mark Hatle — Legacy configuration migration
  • Paul Barker — Test framework integration and flaky test identification
  • Tim Orling — Hardware BSP consolidation
  • Nicholas Dechesne — Multi-architecture validation (scope limited to the unified build, despite ongoing requests to expand it)
  • Thomas Roos — Contributor tooling and developer experience
  • Marco Cavallini — Contributor tooling and developer experience

Roos and Cavallini, it should be noted, were not part of the original TSC decision. Both independently approached Knight after the meeting and asked to be included. Their enthusiasm, while not initially solicited, has proven valuable — particularly in areas of the build system that other team members had considered peripheral.

Barker and Orling have developed what can only be described as a productive rivalry over whose platform support layer causes more build failures in the unified configuration. At last count, Orling’s hardware BSPs lead by a narrow margin, a fact that Barker has mentioned in no fewer than three status emails.

Technical Approach

The unified build is implemented as a new BitBake multiconfig setup that chains every supported MACHINE, DISTRO, and image target into a single dependency graph. A new DISTRO_FEATURES flag enables the mode:

DISTRO_FEATURES:append = " unity"

When enabled, the build resolves the complete set of targets across all registered configurations. The resulting dependency graph is, to
our knowledge, the largest single BitBake invocation ever attempted.
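As a rough sketch of how such chaining can be expressed with stock BitBake multiconfig (the machine names below are illustrative, not the full Unity set):

```
# conf/local.conf -- register the build configurations to chain
BBMULTICONFIG = "qemux86-64 qemuarm64 beaglebone-yocto"

# conf/multiconfig/qemuarm64.conf -- one file per configuration
MACHINE = "qemuarm64"
```

A single invocation can then span every configuration, e.g. `bitbake mc:qemux86-64:core-image-sato mc:qemuarm64:core-image-minimal`, with all targets resolved into one task graph.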

Key configuration parameters are set in unity.conf:

# Project Unity - Unified Build Configuration
UNITY_MODE = "1"

# Build everything. In sequence. On purpose.
UNITY_TARGETS = "all"

# Estimated completion: see UNITY_ETA
# UNITY_ETA: Uncertain. Do not ask.

# Abort on failure: no. We see it through.
UNITY_ABORT_ON_FAILURE = "0"

# Resource allocation
UNITY_BB_NUMBER_THREADS = "1"
UNITY_PARALLEL_MAKE = "-j1"

# This is deliberate. Do not change this.
# We tried higher parallelism. It is not safe.

The single-threaded configuration is not a placeholder. Early testing with parallel execution produced nondeterministic results that the team was unable to fully diagnose. Knight made the decision to
serialize execution entirely, noting that “reliability is the goal, not speed. If we wanted speed, we would not be here.”

Current estimated build time for a full Unity invocation on the reference hardware is approximately nine days.
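For reference, the UNITY_* knobs above map onto the standard BitBake variables that control parallelism; a fully serialized stock build would set:

```
# conf/local.conf -- standard BitBake variables for a serialized build
BB_NUMBER_THREADS = "1"   # one BitBake task at a time
PARALLEL_MAKE = "-j 1"    # one compiler job per task
```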

Current Status

The Fellowship has been working on the proof-of-concept since early March. Progress has been uneven.

Purdie provided critical guidance during the initial BitBake integration phase but has been intermittently available since, resurfacing periodically with detailed technical recommendations that have proven consistently correct despite limited context on recent changes. Knight has described this pattern as “frustrating but ultimately indispensable.”

Hatle’s legacy configuration expertise has been essential in resolving compatibility issues with older BSP layers that the current team had limited experience with.

The most significant setback occurred in mid-March, when Dechesne submitted a series of patches that expanded the Unity build scope to include several additional multi-architecture validation targets. The
patches were technically sound but outside the agreed scope. After a lengthy discussion, the patches were reverted, and the scope was formally documented to prevent further expansion. Dechesne accepted the decision gracefully but noted that “the offer remains open.”

Next Steps

The proof-of-concept is expected to complete its first successful full build by mid-April. If the results validate Watt’s analysis — specifically, a measurable reduction in infrastructure-attributable failures — the TSC will consider adopting Project Unity as the default autobuilder configuration for the next release cycle.

A more detailed technical report will follow. In the meantime, the team welcomes questions and feedback on the yocto-dev mailing list.

The build continues.

________________________________

Project Unity is a collaborative effort of the Yocto Project Technical Steering Committee. The views expressed in this post represent the consensus of the TSC and the Fellowship team members, who have given their consent to be quoted.

Klepsydra AI – Cloud detection onboard from space

By Blog, Featured

Snapshot: Klepsydra OBPMark-ML Cloud Detection Demo in Progress

Klepsydra AI – Cloud detection onboard from space with a custom Linux distribution built by the Yocto Project

 

What is Cloud detection?

Cloud detection is a crucial process in Earth Observation used to identify and mask clouds in satellite imagery. This is necessary because clouds can obstruct the view of the Earth’s surface, making it difficult to accurately interpret and analyse the data.

Cloud detection algorithms typically use a combination of spectral and spatial information to differentiate between clouds and other features in the imagery. For instance, they may utilise information from various wavelengths of light to distinguish between clouds and land or water surfaces. They may also use contextual information, such as the size and shape of features in the image, to aid in cloud identification.

Once clouds are detected, they can be masked or removed from the image so that the underlying land or water surface can be analysed. This is important for a wide range of applications, including land use and land cover mapping, crop monitoring, and climate studies.

Cloud detection is also utilised in real-time applications such as weather forecasting and disaster management, where monitoring cloud cover and its changes over time is crucial. In these applications, cloud detection algorithms can track the movement and formation of clouds, providing valuable information for predicting weather patterns and identifying areas that may be impacted by natural disasters.
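To make the spectral idea concrete, here is a minimal sketch of a threshold-based cloud mask: clouds tend to be bright in most bands and spectrally flat ("white"), so a pixel is flagged when two bands are both bright and close in value. The thresholds and the two-band rule are illustrative assumptions only; production detectors combine many more tests.

```python
import numpy as np

def cloud_mask(blue, nir, bright_thresh=0.3, ratio_thresh=0.8):
    """Toy spectral cloud mask (illustrative thresholds, not a real algorithm).

    A pixel is flagged as cloud when both bands exceed a brightness
    threshold and their ratio is close to 1 (spectrally flat).
    """
    blue = np.asarray(blue, dtype=float)
    nir = np.asarray(nir, dtype=float)
    bright = np.minimum(blue, nir) > bright_thresh
    flat = np.minimum(blue, nir) / np.maximum(blue, nir) > ratio_thresh
    return bright & flat

def apply_mask(band, mask):
    """Mask cloudy pixels with NaN so downstream analysis ignores them."""
    out = np.array(band, dtype=float)
    out[mask] = np.nan
    return out
```

A bright, spectrally flat pixel (e.g. blue=0.5, nir=0.48) is flagged, while a dark or strongly coloured pixel is not; the masked band can then feed land or water analysis directly.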

Klepsydra AI Excels Beyond TensorFlow Lite in Performance

Cloud detection onboard

Performing cloud detection onboard an Earth Observation satellite offers several benefits over performing cloud detection on the ground:

  1. Faster response time: Cloud detection onboard the satellite enables near real-time detection and removal of clouds, which is particularly useful for time-critical applications such as weather forecasting and disaster response.
  2. Reduced data transmission: Transmitting large amounts of satellite imagery data to the ground can be expensive and time-consuming. By performing cloud detection onboard, only the useful data (i.e. data without clouds) needs to be transmitted to the ground, reducing data transmission costs.
  3. Improved data quality: Cloud detection onboard the satellite can result in improved data quality because the detection algorithms can take into account the unique characteristics of the satellite’s sensors and the viewing geometry. This can result in more accurate and reliable cloud detection.
  4. Increased availability of cloud-free data: By performing cloud detection onboard, the satellite can provide a higher percentage of cloud-free data, which is particularly important for applications such as land use and land cover mapping, crop monitoring, and climate studies.
  5. Improved efficiency of downstream processing: Cloud detection onboard the satellite can improve the efficiency of downstream processing by reducing the amount of data that needs to be processed on the ground. This can lead to faster and more accurate analysis of the data.
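The downlink saving in point 2 is easy to put in rough numbers. Every figure below is an illustrative assumption, not a mission value:

```python
# Back-of-the-envelope downlink savings from onboard cloud filtering.
# All numbers are illustrative assumptions, not mission figures.
def daily_downlink_mb(scene_mb, scenes_per_day, cloudy_fraction):
    """Data volume left to transmit after discarding fully cloudy scenes."""
    return scene_mb * scenes_per_day * (1.0 - cloudy_fraction)

# e.g. 500 MB scenes, 120 scenes per day, 60% discarded as cloudy:
# 500 * 120 * 0.4 = 24000 MB per day instead of 60000 MB
```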

In Collaboration with ESA, Barcelona Supercomputing Center developed OBPMark-ML’s Cloud Detection Algorithm

KATESU project

The current commercial version of Klepsydra AI has successfully passed validation in an ESA activity called KATESU for Teledyne e2v’s LS1046 and Xilinx ZedBoard onboard computers, achieving outstanding performance results. During this activity, two DNN algorithms provided by ESA, CME and OBPMark-ML, were tested.

Onboard cloud detection is important to filter images that are sent to ground.

Klepsydra on a custom Linux distribution built by the Yocto Project

The Yocto Project is an umbrella project for a number of open-source technologies that simplify the process of building and customizing Linux-based operating systems for embedded devices. It provides a flexible and scalable infrastructure, enabling developers to create highly optimized and tailored Linux distributions for their specific embedded systems.

In Klepsydra, the Yocto Project plays a crucial role in our workflow, particularly when it comes to generating Linux images for our LS1046-based computer. To begin, we set up the necessary Yocto Project build tools and configure the build system to target our hardware platform. This ensures that the resulting Linux image is optimized for, and compatible with, our LS1046 device.

To tailor the Linux kernel and other system components to our specific requirements, we create custom Yocto Project recipes. These recipes allow us to incorporate the necessary changes and optimizations, ensuring that the resulting Linux image is finely tuned to meet our needs. Additionally, we introduce the meta-virtualization layer to our Yocto Project setup, which enables us to include Docker in the final root filesystem of the generated image.

Once the build has been configured and our customizations have been applied, the next step is to generate the Linux image for our target platform. This involves compiling and packaging all the required components, including the kernel, device drivers, libraries, and applications, into a deployable image file. This image file can then be flashed onto an SD card, ready for booting Linux on our LS1046 device.
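A condensed sketch of this flow, assuming a stock Poky checkout (layer paths, the image and recipe names, and the SD-card device are illustrative and must be adjusted for the actual setup):

```shell
# Add the virtualization layer and enable Docker in the image
# (recipe name "docker-moby" is an assumption; check your layer version)
bitbake-layers add-layer ../meta-virtualization
echo 'DISTRO_FEATURES:append = " virtualization"' >> conf/local.conf
echo 'IMAGE_INSTALL:append = " docker-moby"'      >> conf/local.conf

# Build the image for the LS1046A reference machine
MACHINE=ls1046ardb bitbake core-image-base

# Write the resulting image to the SD card (replace /dev/sdX!)
sudo dd if=tmp/deploy/images/ls1046ardb/core-image-base-ls1046ardb.wic \
     of=/dev/sdX bs=4M status=progress && sync
```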

With the Linux image successfully booted on the target platform, we proceed to create the necessary Docker image directly on the LS1046 device. Leveraging the Docker capabilities provided by the meta-virtualization layer, we prepare a Docker image that encapsulates the specific software, dependencies, and configurations required for our testing purposes.

Once the Docker image is prepared, we launch a container from it on the target device. This container serves as a controlled environment where we can perform various tests and evaluations on the LS1046 device. By executing tests within the container, we can isolate and evaluate specific functionalities or scenarios, ensuring the reliability and performance of our software on the target platform.

Demo online

https://klepsydra.com/klepsydra-ai-esa-obmark-ml-online-demo-ii-the-cloud-detection-dnn/

The demo showcases the Cloud Detection DNN model executed on three identical computers, each with a different optimisation. The first computer runs Klepsydra optimised for latency (kpsr.lat), the second uses TensorFlow Lite, and the third uses Klepsydra optimised for CPU (kpsr.cpu).

Klepsydra AI demonstrates remarkable elasticity and high-performance capabilities. The kpsr.lat configuration can process up to two times more images per second than TensorFlow Lite, while kpsr.cpu processes the same number of images as TensorFlow Lite but with fewer CPU resources. These improvements are evident in both the Intel and ARM versions of the demo.

In summary, Klepsydra AI provides customers with a unique capability to adapt to their specific needs, whether it be latency, CPU, RAM, or throughput. This feature makes Klepsydra AI highly suitable for onboard AI applications such as Earth Observation onboard data processing and compression, vision-based navigation for in-orbit servicing, and lunar landing.

Acknowledgments

This demo was prepared as part of ESA’s KATESU project to evaluate Klepsydra AI for Space use. For further information on this project, please refer to https://klepsydra.com/klepsydra-ai-technology-evaluation-space-use/.

The OBPMark-ML DNN was provided to Klepsydra by courtesy of ESA. This algorithm is part of ESA’s OBPMark framework (https://obpmark.github.io/). For further information on this framework, please contact OBPMark@esa.int.

PR: Exein, 4Y LTS, YPDD 2023


 

Yocto Project Welcomes Exein as a Platinum Member, Announces Extended LTS Release Plan and One-Day Technical Summit

Yocto Project Invests in the Long Term, Continues Growth and Commits to Security with new Platinum Member.

SAN FRANCISCO – June 26, 2023 – The Yocto Project, an open source collaborative initiative helping developers create custom Linux-based systems, has evolved significantly over the last 12 years to meet the requirements of its community. The project continues to lead in build system technology, with advances in build reproducibility, software license management, SBOM compliance and binary artifact reuse. To better support the community, the Yocto Project announced its first Long Term Support (LTS) release in October 2020. Today, we are delighted to announce that we are expanding the LTS release plan and extending the lifecycle from 2 to 4 years as standard.

The continued growth of the Yocto Project coincides with the welcome addition of Exein as a Platinum Member, joining AMD/Xilinx, Arm, AWS, BMW Group, Cisco, Comcast, Intel, Meta and Wind River. As a member, Exein brings its embedded security expertise, proven across billions of devices, to the core of the Yocto Project.

“Long Term Support (LTS) is one of our most asked-about features; it is great that the project is now able to commit to 4 years as standard for all our LTS releases,” said Richard Purdie, Yocto Project Lead Architect and Linux Foundation Fellow. “New members like Exein bring both specialist knowledge and funding, enabling us to do this and more. Exein’s involvement will truly bolster the security capabilities of the Yocto Project. The ability to offer enhanced embedded security is a major advancement in our pursuit of safer, more resilient systems.”

“The Yocto Project has been at the forefront of OS technologies for over a decade,” said Andrew Wafaa, Yocto Project Chairperson. “The adaptability and variety of the tooling provided are clearly making a difference to the community. We are delighted to welcome Exein as a member; their knowledge and experience in providing secure Yocto Project based builds to customers will enable us to adapt to the modern landscape being set by the US Digital Strategy and the EU Cyber Resilience Act.”

“We’re extremely excited to become a Platinum Partner of the Yocto Project,” said Gianni Cuozzo, founder and CEO of Exein. “The Yocto Project is the most important project in the embedded Linux space, powering billions of devices every year. We take great pride in contributing our extensive knowledge and expertise in embedded security to foster a future that is both enhanced and secure for Yocto-powered devices. We are dedicated to supporting the growth of the Yocto Project as a whole, aiming to improve its support for modern languages like Rust, and assist developers and OEMs in aligning with the goals outlined in the EU Cyber Resilience Act.” 

The Yocto Project is also excited to be hosting Yocto Project Dev Day on June 26 alongside the Embedded Open Source Summit in Prague, Czech Republic. Back for the first time since 2019, Yocto Project Dev Day brings together developers from across the Yocto ecosystem to participate in a variety of community sessions, presentations, and tutorials. For more information about the Yocto Project, visit yoctoproject.org.

 

Media Contact

Noah Lehman

The Linux Foundation

nlehman@linuxfoundation.org