4tlas.io

Articles, stories and news

Discover a wealth of insightful materials meticulously crafted to provide you with a comprehensive understanding of the latest trends.


Test Automation Should Start with a Person in the Lab – Not a Script in the Repo

You’re automating your Hardware-in-the-Loop (HiL) testing, right?

Automation holds the promise of efficiency, scalability, and consistency, particularly for automated testing in embedded systems development. You’re gonna want to automate. But when’s the right time?

Rushing to automate without the right approach or at the right time often leads to wasted costs, unnecessary delays, and inflexible systems.

Let’s explore the pitfalls of premature automation, how to identify the right moment, and the tools that help ensure automation succeeds.

Why Automating Too Soon Can Hurt Your Progress

“Automation applied to an inefficient operation will magnify the inefficiency.” – Bill Gates

While automation offers undeniable benefits, jumping in too early can derail your development process. Automation magnifies the efficiency, or inefficiency, of your workflows, so automating too soon only amplifies existing problems.

Complexity of Hardware Integration

Embedded systems tightly couple with specific hardware components, which makes testing inherently complex. Often, teams don’t receive production hardware until later in development.

When you triage test failures, they originate from one of three areas: 1) a “real” bug, 2) a test (content) bug, or 3) an infrastructure bug. The infrastructure bugs are the killers. They waste your time needlessly and destroy trust in the work you’re doing.

Even with simulation or emulation, teams that automate too early frequently overlook the subtle realities of real hardware integration and end up debugging too many infrastructure failures.

Wasted Resources

Building and maintaining automation systems requires investment in time, tools, and expertise. If your testing processes, hardware, and software continue to evolve, you may find yourself constantly reworking automation scripts, or even frameworks, which wastes both time and budget.

Loss of Flexibility

Automating immature processes locks you into workflows that may not be optimized. When changes inevitably occur, you’ll face the burden of revising your automation frameworks to keep up.

Slowing Down Development

Ironically, automating too early will hinder progress. Instead of improving workflows, your team may spend valuable time troubleshooting automation tools rather than developing and testing the product itself. Troubleshooting automation is inevitable, but it slows down progress and wastes time when you’re troubleshooting automation that will eventually have to be replaced.

When Is the Right Time to Automate?

“[Automate] comes last. The big mistake was that I began by trying to automate every step.” – Elon Musk, in his 5-step process

Automated testing should be introduced as an accelerator, not as an early development step. To achieve this, you must first establish a stable, repeatable, and optimized manual testing process. Here are three ways to know it’s the right time to automate.

When You’ve Perfected a Process with People

“Yes, excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated.” – Elon Musk, response to a Wall Street Journal interviewer

Your people perfect the process better than any tool can. They are your most valuable resource. Let them run the workflow, refine it, and uncover what matters. Once they have, you can dive into automation.

When You Have Something to Automate

Maybe that sounds simplistic, but it’s also true. Processes that require frequent repetition, such as unit tests and hardware-in-the-loop (HiL) regression tests, make the best candidates for automation. If you must perform a task the same way dozens or hundreds of times, automation saves time and ensures consistency.

When You’re Ready to Ramp and Scale

As your team grows, ramps up production, or supports multiple hardware variants, complexity increases fast. To keep pace, you can rely on automation. Your team can handle manual regression tests and nightly builds for a small batch, but at scale, those repetitive, time-sensitive tasks will consume valuable engineering time and delay delivery. Automation absorbs that burden and keeps your momentum. You may be able to deliver a hundred of your products with your current process, but can you deliver a million?

People First: Automate the Process, Not the Problem

“Automation is not the enemy of jobs. It frees up human beings to do higher-value work.” – Andy Stern, SEIU President

Effective automation starts with people. The strength of your testing process lies in human insight: how your team identifies critical test scenarios, organizes workflows, and executes tests. Automation should amplify this work, not replace it.

The best automation enhances each person’s contribution to the team and the product. 

  1. Automate testing with the tools, equipment, and procedures that your people already use. 
  2. Automate in place.
  3. Optimize before you automate.

Best Practices for Successful Automated Testing

When you’re ready to automate, use these strategies to ensure your efforts deliver meaningful results.

The Right Tool: The Person in the Lab

We talked about it above: start with the person and automate what they do. That means the framework you use (whether you build it yourself or use an off-the-shelf tool like Fuze) must allow you to automate exactly what they do. It must take the position and perspective of the person in the lab.

Start Small and Build From There

Automate low-complexity tasks first, such as unit tests, HiL smoke/basic tests, and nightly regression runs. Early wins build momentum, trust in the process, and a solid foundation for more complexity over time.

Embrace Continuous Integration (CI)

Integrate automation into your CI pipeline. Every pull request and merge to main should trigger automated builds and tests, providing immediate feedback to developers, the QA team, and management.

Build a Hierarchy of Coverage vs Runtime

Move from basic CI on PRs and main merges into nightly, weekly, and release testing. Since you have a finite and inelastic DUT farm, design what to test, and when, based on a hierarchy of coverage balanced against how long the testing takes to run.

Aim for 10-15 minutes of testing on PRs. Up that to 30 minutes for a main merge. Aim for a couple of hours each night. On the weekends, build in soak tests and repetitions for many hours at a time.

Automate as many tests as possible for release testing. If you have more than 30 days between releases, think about building a once-a-month release test operation.
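To make that hierarchy concrete, here is a minimal sketch of a trigger-based suite selector. The suite names, time budgets, and the run_hil_suite helper are hypothetical placeholders for whatever your framework provides.

#!/bin/bash
# Hypothetical sketch: map a CI trigger to a HiL test tier and time budget.

run_hil_suite()    # placeholder stub for your real HiL test runner
{
    echo "would run suite '$1' with a budget of $2 minutes"
}

case "$1" in
  pr)       run_hil_suite smoke       15   ;;   # every pull request
  merge)    run_hil_suite regression  30   ;;   # merge to main
  nightly)  run_hil_suite regression  120  ;;   # a couple of hours each night
  weekend)  run_hil_suite soak        600  ;;   # long soak/repetition runs
  release)  run_hil_suite release     1440 ;;   # as much coverage as possible
  *)        echo "unknown trigger: $1" >&2; exit 1 ;;
esac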

Automate Results Publication

Make test results effortlessly visible to your team, or they’re useless. Design your test automation framework to automatically publish results and ensure they’re easy to find and understand in whatever communication tool/workspace that your company uses. Use pass/fail indicators, graphs, and simple visuals to help developers and stakeholders consume the information quickly and confidently.
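As one possible sketch, assuming your chat tool exposes a Slack-style incoming-webhook URL that accepts a JSON “text” field, a test run could end by posting a short summary. The URL, counts, and report link below are placeholders.

#!/bin/bash
# Hypothetical sketch: post a pass/fail summary to a chat webhook after a run.
WEBHOOK_URL="https://example.com/hooks/your-team-channel"     # placeholder
PASS=142; FAIL=3                                              # placeholder counts
REPORT_URL="https://ci.example.com/hil/nightly/1234"          # placeholder link

curl -sS -X POST -H "Content-Type: application/json" \
     -d "{\"text\": \"Nightly HiL run: ${PASS} passed, ${FAIL} failed. Report: ${REPORT_URL}\"}" \
     "$WEBHOOK_URL"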

Mindset Change

As you move toward automation and through the process, your team’s mindset moves from “I must execute testing” to “I must develop tests.” Encourage this because it’s an indicator that you’re on the right path.

Conclusion

Test automation only delivers value when it scales something that already works. For engineering leaders, that means building clarity and repeatability into the process before scaling it. For developers and testers, it means shaping the workflow by hand before encoding it into a framework.

Start with the person in the lab. Let the process mature. Then automate. Because if a process doesn’t work manually, it won’t work at scale—no matter how sophisticated your tooling.

 


Embedded firmware and software engineers are a special breed.

Many of us have our roots in the hardware side of the product. Therefore, right off the bat, we are comfortable wearing multiple hats. We bring the lunchpail and DIY mentality to the team. “Yeah, sure, I can do that.” After all, software is just typing.

And that exact mentality and skill set often help make the team successful. At least at the beginning of the project and the company’s startup phase.

We often praise the brilliance of the individual engineers. The wizard who can make a microcontroller sing. The one who remembers every register name by heart. The developer who writes both the UART driver and the YAML code for the CI pipeline.

But if you zoom out and look not at the engineer but at the system of engineers, you’ll find that sustained performance and scaling don’t come from individual genius.

They come from how the team works.

High-performance embedded engineering teams that successfully scale their product and company share a set of characteristics: habits, mindsets, tools, and workflows that enable them to repeatedly ship complex, quality software that works, on real hardware, in the real world.

Let’s unpack what those teams have in common.

They’ve Conquered the Chaos

High-performance teams don’t operate in chaos. They likely started there (and that’s OK and probably necessary), but they don’t stay there. Not past the initial stages, anyway. Chaos is the enemy of scale and velocity.

The embedded world is uniquely chaotic. Discontinuous delivery. Toolchain fragmentation. Scarce and messy hardware. Compliance landmines and 3rd party regulatory testing. Everyone’s developing on their own laptop with their own makefile quirks and debug scripts from five years ago.

But the best teams know when and how to confront that chaos and tame it. They commit to tools and people who can help them (even hiring specialists). They create consistency across build environments. They stop hacking together YAML, spreadsheets, and Jira to track releases. They build configuration management into their daily process, so every build is reproducible in seconds, every delivery is auditable, and every engineer can develop with confidence.

They Test All Day, Every Day, and Automatically on Real Hardware

Here’s the truth: if you’re testing on the hardware manually only after releasing, you’re just firefighting and paying a huge tax on each release.

The best teams push testing left into every stage of development. This is not news. This is what the web and mobile product teams do as a matter of course. They showed the way, and it’s the yellow-brick road that leads to Oz.

The farther to the right the bug is found, the more expensive and time-consuming it is to fix it. If it gets to the field and into a customer’s hand, it might be infinitely expensive to fix.

But for embedded teams, testing must include real hardware. Yes, simulation and emulation environments can be (and often should be) built and used for early-stage or hardware/software co-development. And newer sim technologies, such as Renode and QEMU, make simulation more accessible, but still, your custom hardware is what the real world gets.

Great teams figure out how to automate Hardware-in-the-Loop testing. That includes every pull request and merge to the main branch, as well as time-based triggers such as nightly and weekly runs.

Automating HiL testing is usually a custom, DIY exercise. Hardware is messy, finicky, finite, and lives in geographic space. How does one do it and make it reliable, fast, and trustworthy? You commit to tools and people who can help. They design and build frameworks, consistent environments, and infrastructure like they do their product. Then they reuse and scale it.

Testing is not a phase. It’s a culture.

They Treat Traceability and Configuration Management as a Feature

Most teams treat traceability and configuration management as a manual chore or a regulatory checkbox. The best teams know it’s actually a superpower.

Want to know what version of firmware with what tuning parameters was shipped to Customer B six months ago? Need to recreate that exact build to patch a certification-blocking bug without disrupting the rest of the system? What about knowing and being able to recreate the pre-sales demo firmware that an FAE shared with a potential customer three months ago?

If you can’t answer those questions in under a minute, you don’t have traceability. You have wasted time and effort.

High-performance teams implement automated systems that embed ephemeral configuration management into each and every build. Tools, team, and workflow.

Traceability isn’t just for compliance. It’s for sales. It’s for debugging. It’s for velocity.

They’ve Embraced Professional Grade DevOps

Web and mobile figured out DevOps a decade ago. A whole industry full of standardized tools to support CI/CD, test automation, cloud builds, and deployment has been instantiated almost out of thin air (for the web).

But embedded has lagged behind. The platforms and tools aren’t built for the specific needs of embedded development. Every underperforming embedded team has a graveyard of homegrown scripts. They built their own test runner. Their own packaging tool. Their own delivery tracker. And now they’re stuck maintaining it.

The best teams, however, are using DevOps. They’ve realized a few important things about how to make it successful:

  1. They hire the team and bring in the tools to build it right. “Hello, world!” is easy. Production-worthy is difficult. They keep the top product engineers working on the product.
  2. Pushing CI and the other DevOps implementations onto the development and test engineers themselves just slows them down, creates wasted costs, and puts releases in jeopardy.
  3. Configuration management is more than just the git commit ID.
  4. Test on the hardware. All day. Every day.

Professional DevOps for embedded isn’t a trend. It’s a survival strategy. It’s what will push your team over the hump.

They Build a Culture of Shared Responsibility

The best teams don’t throw code over the wall to QA or system test. They don’t say, “My stuff works, must be the hardware.” They know that delivering reliable embedded systems is a shared responsibility. It’s all hands on deck.

These teams blur the lines between development, testing, and release. Everyone builds. Everyone tests. Everyone can answer: “What did we ship? To who? And can we recreate it?”

They know that a successful embedded team and product are more than just software added to hardware. It’s not 1+1=2. When they work the right way, it’s 1+1=10.

It’s Not Just What You Build — It’s How

We glamorize the product. Of course, we do. It’s the purpose. The robot. The satellite. The device next to the hospital bed.

But what separates the great embedded teams from the rest isn’t just the usefulness of the end product. It’s the system they use to build, test, and deliver that product. Over and over, under pressure, on schedule.

If you’re stuck in the chaos — build inconsistencies, manual testing, delayed releases, config nightmares — you’re not alone. But the great teams have shown us the way forward. They build process like it’s a product. They automate what hurts. They trace everything. They embrace DevOps.

And they make the hard stuff repeatable.

Want to be a high-performance team?

Don’t just ship great code.

Build, test, and release it the smart way.


Revolutionizing Embedded DevOps: How Fuze™ Transforms Build, Test & Deliver

In a recent feature on the ipXchange YouTube channel, 4TLAS CEO and Co-founder John Macdonald delves into the persistent challenges of embedded systems development and how the Fuze™ suite offers transformative solutions.

The Embedded Development Bottleneck

Embedded systems are integral to countless devices, from household appliances to advanced medical equipment. However, the development processes for these systems often lag behind, hindered by outdated workflows, fragmented toolchains, and manual testing procedures. These inefficiencies not only slow down innovation but also increase the risk of errors and compliance issues.

Introducing Fuze™: A Paradigm Shift

Recognizing these challenges, 4TLAS developed Fuze™, a comprehensive suite designed to modernize embedded development through automation and traceability. The suite comprises three core components:

  • Fuze™ Build: Automates and standardizes build processes, ensuring consistency across development environments. By utilizing containerized Common Build Environments (CBEs), it eliminates the notorious “it works on my machine” problem.

  • Fuze™ Test: Transforms Hardware-in-the-Loop (HiL) testing into a scalable and automated process. This ensures that issues are detected early, enhancing software reliability and reducing time-to-market.

  • Fuze™ Deliver: Offers end-to-end traceability for software deliveries, ensuring that the right versions reach the appropriate destinations securely and efficiently.

Real-World Impact

The implementation of Fuze™ has led to significant improvements in development cycles. By leveraging automated workflows and cloud-to-lab testing, teams have reduced the build-to-release cycle from weeks to mere hours. This acceleration not only enhances productivity but also allows for more frequent and reliable releases.

Looking Ahead

4TLAS envisions a future where embedded development is as agile and efficient as its web and mobile counterparts. With Fuze™, they aim to bridge the gap, offering tools that not only streamline current processes but also lay the foundation for continuous improvement and innovation in the embedded systems landscape.

To explore how Fuze™ can revolutionize your embedded development processes, contact 4TLAS.


If you have been a developer or leader in the embedded space for any longer than 5 minutes, here are some stories you know very well:

The “It Won’t Build for Me” Scenario

Jack and Jill are feverishly developing modules, each on their own Windows PC and local development system. Each has been using a locally installed cross-compile toolchain all day without issue. When they are ready to test together, Jill commits her code to the branch and Jack pulls to integrate.

Ugh! Jill’s code won’t build on Jack’s machine.

The “It Doesn’t Build for Production” Scenario

Jack is working on a delivery for today with his Linux machine and development system on his desk. Finally, at 7 pm, he’s got it working and commits his code to the repository. The build system picks up the changes and starts the build.

Ugh! The production build system, a Windows machine, won’t compile the code.

The “Disgruntled Windows User” Scenario

New hire Jack requests a machine with Ubuntu for his development environment. Sorry, our IT department only supports Windows. Here’s your Dell, and here’s Visual Studio.

Ugh! But wait, isn’t our target environment Linux?

For a software developer, their PC is like a chef’s set of knives — it’s a very personal tool. Most developers are intimately familiar with their machines. Partially because we have to be, but more so because we want to be. We configure them just the way we like them. We are comfortable and fast when using our favorite editor and development environment.

We want the IT team to leave us alone, thank you very much.

The Best of the Old Way


Good organizations do their best to minimize these issues by using a combination of organizational and personal discipline along with good configuration management. The goal is always to keep each developer’s local build environment close to production and to each other’s.

Toolchain and Build System/Environment

Embedded developers always need a cross-compile toolchain. Sometimes free (e.g., ARM GCC), sometimes licensed (e.g., ARM Keil). Managing this toolchain, its version, configuration, and environment is one of the most critical coordination efforts across the development team because it is a source of many “it doesn’t build for me” issues.

There is also the matter of the build system or build scripts. Whether it’s some flavor of make, autotools, Yocto, an IDE project, or custom scripts in you-name-the-shell, this set of black magic can single-handedly derail any release on any day.

The best organizations figure out how to get both the toolchain and build system under configuration management. One such method is putting the entire toolchain and build utilities into a repository.

Even though the golden master build system exists, most developers’ local build environments deviate from it. There are many day-to-day reasons why developers modify or update these elements.

We try stuff. It’s what we do.

Installation Script

Good organizations create a utility that installs and configures the golden-master build environment. This utility provides a common process and known state for getting someone up and running.

But its usefulness fades over time after installation. Developers modify their environments. The build scripts can easily be updated locally via pulls from the repository, but the toolchain may not update in as streamlined a fashion.

And again, we try stuff.

Virtual Machines

There is also the not-so-little matter of the user’s host environment. In some cases, we’re all using different OSs. In others, we’re all using the same OS, but the inevitable update and configuration nuances create subtly different build environments between each developer and the production build machine.

Another reasonable practice is to use an officially supported Virtual Machine that includes the supported toolchain and build environment. That helps solve the host environment differences and even liberates the user from a particular host OS. But it opens up a new set of challenges because the VM is only a copy. As soon as the user copies it to their local system, they’ve forked it, and it can start to deviate. The VM goes on its own path the first time the developer starts whacking at the build environment to debug issues. It’s essentially just like giving a developer yet another machine that they have to manage.

The DevOps Solution — Common Build Environment


Regardless of how good the organization’s configuration management is and how disciplined a developer is, a developer’s local environment can only ever get similar to the production environment with the practices described above.

However, we’ve solved the problem in our organization for both build configuration management and local build versus production build. We did so by applying the modern DevOps concept of containers.

We create production build containers with the golden master toolchain and access to the necessary repositories and build scripts. All developers, testers, and managers have access to these containers and use them locally to build. We generically call these containers a Common Build Environment (CBE).

A CBE is not similar to the production environment; it is the production environment. Therein lies the magic. Since we first deployed the CBE to our firmware team several years ago, we’ve enjoyed 100% success between local development and the production build. We haven’t had a single build failure attributed to the build environment.

Here’s how a CBE is deployed both locally and to the production build environment.


Another benefit to CBE is that it can liberate the developers from a particular platform, at least for the build. Docker containers are supported on Windows, Linux, and macOS.

If your build is for an embedded target processor, there is a good chance that you can create some flavor of Linux container for that target. However, even if you require Windows (*cough*…ARM Keil…*cough*), you can still build a Windows CBE. The downside is that a Windows host system is required to execute Windows containers. However, you may work around this by using a Windows VM on your Host OS and then executing the CBE build from within that VM.

How to Create Your CBE and its Workflow

You’ll need some development and configuration management to make the CBE useful for the development team. A benefit is that everything required for a CBE can be put into a source repository and a container repository. Therefore, it is always traceable and reproducible.

What You Need:

  1. Container in a container repository with the toolchain and 3rd party utilities (Dockerfile and/or container image)
  2. User build scripts/project and utilities (i.e., make, autotools, custom, etc.)
  3. Docker installation on the host machine

The CBE Container

Create your container, and install the toolchain and any 3rd party, non-custom, build-related utilities (note that these are NOT the “build” scripts/utilities themselves). The build-related utilities needed in a CBE are applications such as lint, statistics gatherers, linker and image tools, 3rd party static analysis, etc.

We found that in some cases, it was easier to start from a prototype container image (rather than work from the Dockerfile) and use it interactively to install the toolchain and utilities. That provided an easy and quick method for testing the install. We created the Dockerfile after we were satisfied with the container we built interactively.
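For illustration, that interactive prototyping flow might look like the following; the image names, registry host, and install steps are assumptions, not our actual environment.

# 1. Start a throwaway container from a base image and work interactively
docker run -it --name cbe-proto ubuntu:20.04 /bin/bash
#    ...inside the container: install the cross-toolchain, lint, image
#    tools, and other build-related utilities, then exit.

# 2. Snapshot the hand-built container so it can be tested against a real build
docker commit cbe-proto registry.example.com/cbe/arm-gcc:prototype
docker push   registry.example.com/cbe/arm-gcc:prototype

# 3. Once satisfied, capture the same install steps in a Dockerfile so the
#    image is reproducible from source.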

You can use a single CBE that contains all the toolchains for the various targets you support, or separate CBEs each with a single toolchain. The choice is yours, with benefits and drawbacks to each. We have chosen to use many CBEs, each with a particular target toolchain. This allows us to keep all the CBEs very stable; once created, they rarely require updates. That helps with traceability and recreation of previous releases.

This diagram shows you the basics of how we configuration manage the CBE’s themselves.


We create a CBE per target processor toolchain and then version that CBE with a tag.
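As an illustrative naming scheme only (the image and registry names below are hypothetical), that might look like:

# One image per target toolchain, versioned with an explicit tag
docker tag  cbe/arm-gcc-6:latest   registry.internal/cbe/arm-gcc-6:1.0
docker push registry.internal/cbe/arm-gcc-6:1.0

docker tag  cbe/riscv-gcc:latest   registry.internal/cbe/riscv-gcc:1.0
docker push registry.internal/cbe/riscv-gcc:1.0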

You must be careful about placing IP or internal repository access credentials in your containers. If you do, you should use an internal container registry. If you don’t have any IP or credentials in your CBE, then you can use the public Docker Hub. We don’t have any IP in our containers, but we do have SSH keys for accessing our internal repositories. Therefore, we host our containers inside our network.

Build Project and Scripts

These are the Makefiles, autotools, Bazel, Yocto, or custom build scripts required to compile, link, and package your embedded image files.

Use good software design principles that include encapsulation and interface consideration. Encapsulation is essential since you will export the source workspace to the CBE to execute the build.

We have a relatively easy build and use a combination of Makefiles with some bash scripts on top.

User Level Scripts and Utilities: Working with the Container

Here is where you’ll find most of the new work required to effectively use a CBE in your build workflow.

You will need to create a set of utilities/scripts that allow your developers to use the CBE easily. Ideally, a developer doesn’t even know that the build occurs in a CBE container. Whether invoked from the CLI or an IDE, it appears that the build is native to the host system.

Your requirements are as follows:

  1. The speed of the build must be on par with a local host OS build.
  2. All build options (i.e., CLI options, targets, flags, etc.) must be accessible as if the build was local.
  3. All build stdout, stderr, and log files must be presented just as if the build was local.
  4. All build artifacts, including image files, debug symbols, linker maps, etc, must be deposited into the same location as if the build was local.

Here is the basic user level script workflow for using a CBE:


This workflow assumes the following:

  • The user either has internet connectivity or already has the CBE locally. However, one of the benefits is that the internet is not required once the CBE is available locally.
  • Docker is present and executing on the developer’s system.

When we initially rolled out the CBE, we did it for a single particular target — an ARM target using GCC. The build was make with some bash on top for usability (referred to below as build.sh).

To support the developers’ use of the CBE, we created another bash wrapper that implemented the workflow above. We’ll refer to it as cbe_build.sh from here on.

Here are the main guts of that initial wrapper. The CBE is a stripped-down Ubuntu 14 LTS image that contains the ARM GCC 4.9 and 6.3 series toolchains. The developer specifies the toolchain version as a command-line argument to this script.

Overall Structure of cbe_build.sh

######################################################################
# Main script
main()
{
    cmdline $ARGS
    get_container

    # Grab the ID of the most-recently-launched container
    DOCKER_PID=`docker ps -q -n 1`

    clean
    copy_source_code
    build
    get_artifacts

    log "=== Destroying container..."
    docker rm -f $DOCKER_PID
}

Get the Container

Since we host our containers internally, we use a service account for access and are not very strict with the credentials.

#########################################################################
# get_container()
#
# Pulls latest version of CBE container
get_container()
{
    log "=== Pulling latest version of container..."

    # No IP, can be loose with credentials
    docker login -u $UNAME -p $PWORD
    docker pull $CBE

    # Launch the container, give it a no-op command to run so it will stop
    # quickly and wait for us.
    echo "echo" | docker run -i $CBE
}

Copy the Source Code

Note: We copy the source into the container rather than share a mounted volume due to the unreliability of shared mounts on a Windows host OS. If your host OS is all Linux or macOS, then skip this step and share the volume (see the sketch below).
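A minimal sketch of that volume-mount alternative for Linux/macOS hosts (the image variable and paths mirror the script below and are otherwise placeholders):

# Mount the sandbox into the container instead of copying it in
docker run --rm -i -v "$(pwd)":/build -w /build/tools $CBE ./build.sh $ARGS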

When copying the source code, also copy in your build tools/scripts. In our case, our build scripts are part of the source tree because we use Makefiles with some bash on top.

Note that our source code base for this target is very small (< 100K lines of code); therefore, we copy the entire source tree into the CBE.

We do have to hack the line endings if the developer is on Windows. I’m sure there is a more elegant solution to that part.

#########################################################################
# copy_source_code()
#
# Copies the source code from Host OS into the container
# ensures proper line endings
#
copy_source_code()
{
    # Copy user's sandbox into container FS
    log "=== Copying source files..."
    docker cp ./ $DOCKER_PID:/build

    # if windows, change the EOL's in the container
    log "=== OS = ${OS}"
    if [[ ${OS} =~ .*MINGW.* ]] || [[ ${OS} =~ .*CYGWIN.* ]]
    then
        log "=== Changing EOL's to IX style"
        container_run $DOCKER_PID "find . -type f -exec dos2unix -q {} {} ';'"
    else
        log "=== EOL change NOT required"
    fi
}
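One arguably more elegant option than the dos2unix pass above is to let git normalize line endings at checkout, so the sandbox is LF-only before it ever reaches the container. This is a hedged suggestion, not what our script does; adjust the patterns to your tree.

# Force LF endings for text files in the working tree (committed once per repo)
printf '* text=auto eol=lf\n' > .gitattributes
git add .gitattributes

# Or, per developer machine: check out with LF, convert CRLF only on commit
git config core.autocrlf input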

Build

I’ve left out some $ARGS preprocessing for brevity. That preprocessing sets the toolchain path and massages the $ARGS variable to ensure the build options (i.e., target, etc.) are correct. As mentioned previously, a bash script, build.sh, sits on top of make to perform the build inside the CBE.

#########################################################################
# container_run (DOCKER_PID) (command)
#
# Runs (command) in the stopped foreground container with id (DOCKER_PID)
# by piping it into stdin of "docker start -i (DOCKER_PID)"
container_run()
{
    if [ -z "$2" ]
    then
        log_error "container_run(): missing parameter"
        log_error "Usage: container_run (DOCKER_PID) (command string)"
        return 1
    fi

    echo $2 | docker start -i $1
}

##########################################################################
# build()
#
# Runs the build command and captures time information
#
build()
{
    # Run script in container
    log "=== Building with CBE..."
    bdstart=$(date +%s)
    log " - build ARGS: $ARGS"

    # Note that toolchain path is set by preprocessing
    container_run $DOCKER_PID "export PATH=$toolchain:\"$PATH\" && cd /build/tools && ./build.sh $ARGS"

    bdstop=$(date +%s)
    BDCOUNT=$((bdstop-bdstart))
}

Get the Artifacts

This step requires precision to keep the build time with CBE on par with a local build. Copy ONLY what is absolutely necessary from the CBE back to the host OS. We use some logic to determine if the build was a success or failure and modify the behavior accordingly.

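A minimal sketch of what a get_artifacts() step could look like under the constraints above; the output paths, file names, and success check are assumptions, not the production script.

#########################################################################
# get_artifacts()  (illustrative sketch)
#
# Copies ONLY the essential outputs back to the Host OS and uses a simple
# check to decide whether the build succeeded.
get_artifacts()
{
    log "=== Retrieving artifacts..."
    mkdir -p ./output

    # Always bring the build log back for local debugging
    docker cp $DOCKER_PID:/build/tools/build.log ./output/build.log

    # Copy only the image and debug outputs, nothing else, to keep the
    # round trip fast (paths are hypothetical)
    docker cp $DOCKER_PID:/build/output/firmware.bin ./output/ 2>/dev/null
    docker cp $DOCKER_PID:/build/output/firmware.map ./output/ 2>/dev/null
    docker cp $DOCKER_PID:/build/output/firmware.elf ./output/ 2>/dev/null

    if [ -f ./output/firmware.bin ]
    then
        log "=== Build artifacts retrieved"
    else
        log_error "=== Build failed or image missing, see output/build.log"
        return 1
    fi
}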

Integration with an IDE

We integrated the CBE build with a few IDE’s, but it requires another custom-developed utility. Unfortunately, each IDE is a different animal, and therefore, we haven’t found a common method for all. But here are some concepts to understand:

  1. Calling the build and how the targets and options are specified
  2. Debug symbols and files/info required for the debugger

Benefits and Challenges

Here are what we’ve experienced as both benefits and challenges to the CBE approach.

Benefits

  • All users are building in the production environment.
  • Configuration management is on a single item, the CBE, rather than spread across the user base.
  • All users get any updates to the build environment automatically.
  • Works with the internet on or off (assuming the prerequisites are met).
  • Host OS independent.

Challenges

  1. Docker on all the developer machines. Not much of an issue once it’s installed.
  2. Enabling developers to monkey around with the tools and scripts. This is a legitimate challenge because as embedded developers, we all need to (or like to) monkey with the tools and options sometimes. We solve this by showing a developer how to work in a CBE interactively, or the developer installs the build environment locally and works locally until happy.
  3. Windows containers have portability limitations. Windows containers must execute on Windows Host OS. Therefore, the developer must use Windows natively or a Windows VM to execute the build.
  4. Licensed toolchains require floating licenses. You are roped into a floating license scenario with a licensed toolchain, which is typically more expensive. This is how many organizations work, so it may not be a problem.

Summary

The Common Build Environment has eliminated the “doesn’t build for me” and “doesn’t build in production” problems for embedded target builds in our organization. The CBE uses a Docker container with the golden-master embedded toolchain and build environment. Since all developers and the production build server use the same Docker container, the developers are always building in the production environment.

 


In modern web and mobile application development, the concepts, toolsets, and services of DevOps have been employed to speed up product development while simultaneously increasing quality. DevOps has been a major win for the software industry for the last ten years.

What about the software and firmware in embedded systems? What makes them any different? Surely they’re benefitting as well, right?

Embedded systems are often the hidden but critical piece of products across industries, from automotive to medical to defense to consumer electronics. Unfortunately, their development and testing present unique challenges that traditional DevOps toolsets and services don’t address. These challenges arise from two very distinct differences between web and embedded software.

Discontinuous Delivery

Traditional DevOps workflows for web and mobile applications employ Continuous Delivery (CD). The build, automated test, and deployment pipelines live and scale entirely in the cloud and typically pump out new releases daily or many times per day as the HEAD of the codebase moves forward relentlessly. Anytime a mistake makes it into the field, the CD backend quickly and effortlessly pushes an update or rolls back to a previous version. Users are none the wiser.

Not so with embedded systems. The embedded firmware image is usually a line item on the assembly parts list that gets programmed into the units at manufacture time. Almost like a resistor. Even with IoT and other field-upgradeable embedded systems, pushing firmware updates is complex, discontinuous, and usually requires some user intervention.

The firmware that makes it out the door has to work. You might not get a second shot at it.

Hardware-in-the-Loop

At the end of the day, the firmware requires the specific product hardware for which it was built, and that includes testing the system prior to deployment. Hardware, especially for testing, is inherently expensive, finite, and messy. Plus, it lives in a physical, geographical location on a lab bench or in some rack connected to a host or network switch.

Which version of the product hardware supports which version of the firmware? How do I know what version of the firmware is on that field unit? How do I perform continuous integration and automated test with hardware? How do I scale across my test farm? How do I get this build into that system in the lab?

HIL makes everything slower and more difficult.

The Five Unique Challenges of Embedded Systems

These two differences lead to several unique challenges with embedded systems firmware development, test, and deployment.

Complex Toolchains

Cross-compilation and the use of different build host environments add layers of complexity. Embedded developers typically work locally with the build toolchain installed on their laptop and a development system connected to their machine. They code, build, and test locally on their laptop and connected test system. When it’s time to push and integrate, they often face the “works on my machine” issue, where the build or testing fails for their colleague during integration or in the production build system due to differences in toolchains and build environment setups.

Complex Version Management

Embedded systems often run multiple firmware versions across different product lines and customer deployments that originate from the same code base. You’ve got different versions of the hardware out in the field. Likely different versions of firmware also. Knowing who has what, and who’s supposed to get what is non-trivial. Managing these versions, ensuring compatibility, and maintaining clear traceability of changes is a significant challenge.

Required Point-Fixes

The certification test house just kicked it back to you with a message that says, “Fix this bug only without changing anything else. If you change anything else, you will reset the certification testing process.”

Crap, you can’t give them your HEAD of the codebase, which is unfortunate, because you fixed that bug months ago. So now you have to recreate the source and build environment from that release, which happened four months ago. Does the release ticket contain all the commitIDs? What about the versions of the library dependencies? How about the specific version of the toolchain and build scripts?

Then, once you’ve fixed the bug, you’ve gotta prove that your new firmware contains only that fix.

Proof of Version and Forensic Analysis

Your customer is asking you to prove that the units rolling out of manufacturing actually have the firmware that was certified. Or, maybe, your support team has received a bunch of field returns. How do you know definitively what firmware they have, and how do you get back to the source and build environment that created it?

Hardware-in-the-Loop

Software for embedded systems must be thoroughly tested on the actual hardware platforms it will run on. Discontinuous delivery puts more pressure on testing embedded systems prior to release. This dependency introduces complexity in testing workflows, requiring a heterogeneous test environment that can handle various hardware configurations and the specific test content that matches. Your test farm has finite and limited resources, but you need to parallelize across it as best you can. You want to test PRs, master merges, releases, and then conduct nightly and weekly soak tests. How do you manage all of this?

Addressing the Challenges

Although the challenges are formidable, we’ve had great success at using the concepts of DevOps with the addition of purpose-specific tools. Here are some ways we’ve addressed, overcome, and thrived through these challenges.

Common Build Environment

We create a common build environment (CBE) by using the once-but-no-longer-magic-in-the-embedded-world technology of containers. A CBE is a container that contains the correct version of the toolchain, plus access to the codebase and utilities repositories. Now, whether a developer builds locally or the pipeline is building in the cloud for integration, the exact production build machine is used. We’ve eliminated 100% of build issues due to mismatched versions or build environment differences.

Fuze Build, Package, Release, and Delivery Tool

We developed a tool that automates and integrates universal configuration management from build all the way through delivery. The key to solving complex versioning across a heterogeneous product family, recreating previous builds no matter how far in the past, and forensically proving what you have in your hand is what you said it would be, is to build configuration management into the process as a fundamental pillar of the automation. Much like Jira does with communications and git does with source code, making configuration management a fundamental pillar of the tool eliminates not only human time and process, but also human error.

Fuze starts with the build. As a generic build executive, it allows you to use the tools that you already have. It simply wraps your current build procedure, utilizes a CBE, and stores every piece of metadata associated with that build and the CBE so that you, the QA team, the product manager, and everybody else knows exactly what’s in there and how to recreate it. If you build the FuzeID into the image itself (based on your requirements of security and obfuscation), now you have the perfect forensic tool to prove and know everything about this firmware image.

What goes into the package for the system delivery to manufacture? Images for multiple processors? Static config files? Documentation? No problem. Fuze also builds the release packages for you with 100% configuration management at its core.

When it’s time to release and deliver, even if it’s just a demo build through an FAE, Fuze is again the answer. Find the FuzeID of your intended release package, release it, and then deliver it, all without leaving the tool.

We create a single source of truth for configuration management — the FuzeID.

With a FuzeID you know the following:

  • The build – who, when, what CBE/tools, build command(s)
  • Source commitIDs
  • Dependencies and versions
  • Build package contents
  • Test results
  • Release status – stage, by who, when
  • Delivery status – to who, by who, when

Cloud2Lab Automated Test Framework

HIL testing gets little attention in the open-source and generic automated test framework world, so we built our own — C2L.

C2L, like Fuze, takes the perspective of the person-in-the-lab. We believe in, and have seen, the benefits of using the person as the focal point for automation. We ask, “What does a person do?” Then we build the automation tool to allow exactly that.

C2L uses the tools, test commands, and sequences that you already use when you’re testing manually. No new language, APIs, or scripting environment required. It gives you a straightforward and easily understood environment for fully automating what you already do.

C2L also bridges the cloud to the lab. The build pipeline ran, or you built it locally, and now you have to push that image into a particular device or farm of devices on a bench or in a rack in some lab. We built C2L so it can do that, all through configuration. Plus, it handles the different versions and products across which you must test. Do you have 3 of these and 5 of those? C2L can parallelize according to how you configure it.

Debuggers, scopes, robots, and other control and acquisition required in your test setup? No problem. C2L handles it all and orchestrates your heterogeneous test content across the entire farm, then retrieves and organizes the test results.

You Can Do It (We Can Help)

Although embedded systems present some unique challenges, DevOps is still the way to help your team go faster and pump out products with higher quality. You just gotta know how to make it work for you.

 

Conquering Chaos

Embedded firmware development isn’t the same animal as web and mobile application development. We can’t always treat it the same. Discontinuous delivery places an emphasis on the need for proper configuration management (CM) for firmware that starts with the build and continues all the way through delivery.

CM serves as the anchor that prevents chaos from derailing the firmware development, test, and delivery workflow. Embedded firmware must be meticulously managed from conception to deployment in order to know who has what, who gets what, what version supports which hardware, and to recreate previous releases for bug fixing and forking.

Without robust CM practices, the complexity of tracking code changes, build configurations, and deployment statuses can quickly spiral out of control, leading to costly mistakes and system failures.

Unfortunately, it’s easy to miss some crucial information.

This article will guide you through the essentials of configuration management in embedded firmware development. We’ll explore what needs to be tracked, how to implement these practices effectively, and the critical role that automation tools like Fuze play in maintaining order.

The Essentials of Configuration Management for Embedded Firmware

Configuration management in embedded firmware is the disciplined practice of ensuring that every aspect of the firmware build process—from source code, build environments, and dependencies to build artifacts and release statuses—is meticulously tracked and documented. The goal is to create a repeatable, fully traceable, and auditable process that ensures consistency and quality across all stages of firmware development and delivery.

Key Elements to Track for Each Firmware Build

CM should set you up to fully reproduce a build – bit-exact (except for any purposeful dynamic elements).

Let’s start with what information needs to be tracked to ensure proper CM; a short manifest-writing sketch after these lists shows one way to capture the basics automatically.

Tracking these details ensures that the build process is fully reproducible. If a bug is discovered in the firmware, knowing precisely who built it, when, and with what tools can be crucial for debugging and fixing the issue.

The Build

  • Who: Identify the person or automated system that initiated the build.
  • When: Document the exact date and time the build occurred.

Pro Level:

  • Build Command(s): Capture the exact commands and parameters used to initiate the build. Arguments, options, and input files all affect the build output.

Source

  • Source CommitIDs and Branch Names: Document the specific commitIDs from all source repositories. This includes any build tools and config files (memory map, etc) that are tightly coupled to your source code. If branch names are important in your vernacular, then record them as well.
  • Tags: Tag your repo(s) and record those values.

Pro Level:

  • Diff/Patchset: CommitIDs can disappear. Now what? Branches get deleted, and the associated commitIDs disappear with them. You can (and should) create a diff/patchset with each build against a regularly timed tag.

Tools and Environment

  • Toolchain: Record the versions of compilers, linkers, and any other tools used during the build process.
  • Build Environment: Machine and host OS details.
  • Script/Config CommitIDs: Document the commitIDs of any build scripts or configuration files loosely coupled to the source repositories. For example, you may have a “utilities” repository that isn’t part of the source code repo but contains build utilities.

Pro Level:

  • Dockerfiles and Containers: Use containers for the production build toolchain and then put the Dockerfile or the container itself under CM.

Dependencies:

  • Standard Libraries: Record version/config info for all third-party and standard libraries.
  • Linked Package Libraries: Record the version information for all custom and self-built libraries linked in the firmware image.

Pro Level:

  • Source for Linked Package Libraries: Extend your traceability back through the source of the linked binary library. Now you can build a dependency traceability graph all the way back to the source.

Build Package:

  • Package Contents: In addition to a particular firmware image, create a traceable package of all images and supporting files (configuration, documentation, etc) that comprise a particular release.
  • Package Structure: Document the package structure so that any deployment/programming tools know where to find the important files.

Pro Level:

  • Test Results: Include a reference to the test results or the test results themselves of the build package.

Release Status:

  • State: Document the state of the release.
  • Who: Identify the person who changed the release state.
  • When: Document when it happened.

Pro Level:

  • Delivery Status and Traceability: To whom, by whom, when, and by what method.
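To make the basics above concrete, here is a minimal, hypothetical wrapper that records a few of these elements into a plain-text manifest kept next to the build artifacts. The field names, the arm-none-eabi-gcc toolchain, and the manifest format are illustrative only.

#!/bin/bash
# Illustrative sketch: capture basic CM data for each build into a manifest.
BUILD_CMD="$*"                       # e.g., ./build.sh release board_a
MANIFEST="build-manifest.txt"

{
  echo "builder:    $(whoami)@$(hostname)"
  echo "date:       $(date -u +%Y-%m-%dT%H:%M:%SZ)"
  echo "command:    $BUILD_CMD"
  echo "commit:     $(git rev-parse HEAD)"
  echo "branch:     $(git rev-parse --abbrev-ref HEAD)"
  echo "dirty:      $(git status --porcelain | wc -l) uncommitted file(s)"
  echo "toolchain:  $(arm-none-eabi-gcc --version | head -n 1)"
} > "$MANIFEST"

# Run the real build; keep the manifest with the artifacts it describes
$BUILD_CMD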

Commonly Overlooked Aspects of Configuration Management

Even with the best intentions, certain aspects of CM are often overlooked:

  • Complete Environment Documentation: Teams often fail to fully document the build environment, including specific versions of the operating system, installed tools, and environmental variables. This can lead to inconsistencies and bugs that are difficult to trace back to their source.
  • Missing CommitIDs: CommitIDs can be deleted.
  • Comprehensive Artifact Management: It’s easy to focus solely on the final firmware binary, but all intermediate artifacts and logs should also be tracked and stored. These can be invaluable for debugging and future maintenance.
  • Single Source of Truth: CM data can become fragmented across different configuration management methods and teams, leading to inconsistencies and gaps. Establishing a centralized location where all CM data is stored and accessible is crucial for maintaining integrity.

Creating a Single Source of Truth

A Single Source of Truth (SSOT) is vital for ensuring that all team members have access to consistent and up-to-date configuration data. This can be achieved through centralized systems that integrate all aspects of CM, from source code and dependencies to build artifacts and test results.

Methods to Implement SSOT:

  1. Manifest Files: These files list all components, their versions, and configurations used in the build. Manifest files serve as a quick reference and can be stored in the same repository as the source code.
  2. Jira Tickets: Using issue tracking systems like Jira helps log every change and decision made throughout the development process. By linking Jira tickets to specific commits, builds, and releases, you can maintain a traceable history that spans the entire project.
  3. Databases: A dedicated CM database can store all configuration items, their versions, and relationships. This structured approach ensures that all relevant data is stored in a consistent, searchable format.

Pro Level:

Use a tool that automates the entire configuration management process from build through test and delivery and then allows easy, yet secure access to all of the CM information for each build.

Automating Configuration Management with Fuze

Automation plays a critical role in maintaining effective CM, especially as projects scale. Fuze is an automation tool designed specifically for embedded firmware development, helping teams to manage and streamline the CM process.

Key Features of Fuze for CM Automation:

  • FuzeID: Every build managed by Fuze is tagged with a unique identifier, the FuzeID. This ID encapsulates all relevant data about the build, including the source commit ID, toolchain versions, build commands, dependencies, test results, and more. The FuzeID provides a complete traceable history for each firmware version, ensuring that it can be reproduced and analyzed at any time.
  • Centralized Management with Distributed Operation: Fuze is a distributed build executive that can execute locally or in cloud infrastructure, but it centralizes all elements of the firmware build and configuration management process. This helps in maintaining a single source of truth and ensures that all team members have access to consistent data.
  • Environment Replication: Fuze supports the creation of reproducible build environments using containerization. This ensures that the same environment can be used across different systems, reducing the risk of environment-related bugs.
  • Automated Configuration Management: Fuze generates and tracks all of the information described above for each build. These reports are invaluable for maintaining accountability and ensuring that the build process meets all required standards.

Conclusion

Effective configuration management (CM) is crucial for the success of embedded firmware development. In an industry where the stakes are high and the environments are complex, CM serves as the backbone that keeps every aspect of the development process organized and traceable. By ensuring that each firmware build is meticulously documented—from who initiated the build to the exact tools and environments used—you safeguard against the chaos that can arise in embedded systems.

CM is not just about tracking versions; it’s about creating a system where every build is fully reproducible, where every issue can be traced back to its source, and where every deployment is precise and error-free. In this increasingly complex landscape, automation tools like Fuze elevate CM practices by centralizing and streamlining these processes, ensuring that your firmware is not only reliable but also fully auditable.

As embedded systems continue to evolve, robust configuration management is no longer optional—it’s essential. Whether you’re managing a small project or a large-scale deployment, the ability to control and replicate every aspect of your firmware builds is key to maintaining quality, reliability, and innovation.

 
