Docker for amd64

Brandon Fan

Quick Intro

This article will give you everything you need to build Docker containers across multiple platforms. With an ever-increasing need to support many different device types (especially in areas like IoT and robotics, which we touch on later), it’s important to use the tooling we already have to do that. Luckily, Docker has a built-in buildx command that helps us with this process.

This article shows us how to use Google Cloud Build to do this at scale.
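
For context, here is roughly what the same multi-platform build looks like when run locally with buildx. This is a minimal sketch; yourname/your-image is a placeholder tag, and it assumes Docker with buildx is installed and QEMU binfmt handlers are already registered:

# Create and select a builder, then build and push for two architectures
docker buildx create --name multiarch --use
docker buildx build --platform linux/amd64,linux/arm64 \
  -t yourname/your-image:latest --push .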

The Configuration

Our goal is to build the images on Google Cloud build, but then push these images to Docker Hub rather than Artifact Registry. This involves creating a cloudbuild.yaml to define the build steps.

This is the configuration we used (be sure to replace <your-project-id> with your GCP project ID and <your-tag> with your Docker Hub repository and tag):

steps:
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: ['-c', 'docker login --username=$$USERNAME --password=$$PASSWORD']
    secretEnv: ['USERNAME', 'PASSWORD']
  - name: gcr.io/cloud-builders/docker
    args:
      - run
      - '--privileged'
      - 'linuxkit/binfmt:v0.8'
    id: initialize-qemu
  - name: gcr.io/cloud-builders/docker
    args:
      - buildx
      - create
      - '--name'
      - buildxbuilder
    id: create-builder
  - name: gcr.io/cloud-builders/docker
    args:
      - buildx
      - use
      - buildxbuilder
    id: select-builder
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: ['-c', 'docker buildx build --platform $_DOCKER_BUILDX_PLATFORMS -t $$USERNAME/<your-tag> . --push']
    secretEnv: ['USERNAME']
options:
  env:
    - DOCKER_CLI_EXPERIMENTAL=enabled
substitutions:
  _DOCKER_BUILDX_PLATFORMS: 'linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v8'
availableSecrets:
  secretManager:
    - versionName: projects/<your-project-id>/secrets/DOCKER_PASSWORD_SECRET_NAME/versions/1
      env: 'PASSWORD'
    - versionName: projects/<your-project-id>/secrets/DOCKER_USERNAME_SECRET_NAME/versions/1
      env: 'USERNAME'

Run the config with the command gcloud builds submit --config cloudbuild.yaml . (the trailing . is the source directory to upload).

Explanation

  1. The first thing to note is that DOCKER_CLI_EXPERIMENTAL=enabled is set as an environment variable. This lets the Docker CLI inside Cloud Build use buildx.
  2. After logging in to Docker Hub in the first step, the workflow runs the linuxkit/binfmt:v0.8 image in privileged mode. This registers QEMU binfmt handlers, which are used to emulate the non-native architectures in the later build steps.
  3. We then create a buildx builder named buildxbuilder and select it with buildx use.
  4. The final step uses this builder to build for the platforms defined in _DOCKER_BUILDX_PLATFORMS, in this case linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v8, which covers a wide range of devices. The --push flag pushes the resulting multi-architecture image to Docker Hub.

Note: this YAML file assumes that you’re pushing to Docker Hub, which is why the username and password are read as secrets; you need to store those values in Google Secret Manager first.
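
If you haven’t stored those credentials yet, a rough sketch of creating the secrets and granting Cloud Build access to them looks like the following. The secret names mirror the placeholders in the YAML above, <project-number> is your project’s number, and it assumes the default Cloud Build service account:

# Store Docker Hub credentials in Secret Manager
echo -n 'your-dockerhub-username' | gcloud secrets create DOCKER_USERNAME_SECRET_NAME --data-file=-
echo -n 'your-dockerhub-password' | gcloud secrets create DOCKER_PASSWORD_SECRET_NAME --data-file=-

# Let the Cloud Build service account read them
gcloud secrets add-iam-policy-binding DOCKER_USERNAME_SECRET_NAME \
  --member='serviceAccount:<project-number>@cloudbuild.gserviceaccount.com' \
  --role='roles/secretmanager.secretAccessor'
gcloud secrets add-iam-policy-binding DOCKER_PASSWORD_SECRET_NAME \
  --member='serviceAccount:<project-number>@cloudbuild.gserviceaccount.com' \
  --role='roles/secretmanager.secretAccessor'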

You can optionally skip the Docker Hub login and push to Google Artifact Registry instead, as shown in the next section.

Deploying to Google Artifact Registry

Alternatively, if you want to deploy to Google Artifact Registry, you can simply change the image tag and tell Cloud Build which images were built.

steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - run
      - '--privileged'
      - 'linuxkit/binfmt:v0.8'
    id: initialize-qemu
  - name: gcr.io/cloud-builders/docker
    args:
      - buildx
      - create
      - '--name'
      - buildxbuilder
    id: create-builder
  - name: gcr.io/cloud-builders/docker
    args:
      - buildx
      - use
      - buildxbuilder
    id: select-builder
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: ['-c', 'docker buildx build --platform $_DOCKER_BUILDX_PLATFORMS -t ${_LOCATION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY}/${_IMAGE} . --push']
images:
  - '${_LOCATION}-docker.pkg.dev/$PROJECT_ID/${_REPOSITORY}/${_IMAGE}'
options:
  env:
    - DOCKER_CLI_EXPERIMENTAL=enabled
substitutions:
  _DOCKER_BUILDX_PLATFORMS: 'linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v8'

Then call it from the terminal like so:

gcloud builds submit --config=cloudbuild.yaml \
  --substitutions=_LOCATION="us-east1",_REPOSITORY="my-repo",_IMAGE="my-image" .
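
Whichever registry you push to, it’s worth verifying afterwards that the tag really is a multi-architecture image. docker buildx imagetools inspect prints the manifest list and the platforms it covers; the image reference below just reuses the Artifact Registry example from above:

docker buildx imagetools inspect \
  us-east1-docker.pkg.dev/<your-project-id>/my-repo/my-image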

The Importance of ARM in Robotics and Modern Software

There are two dominant CPU instruction set architectures in play: ARM and x86. The core difference lies in how they execute machine instructions. ARM is a RISC design whose arithmetic instructions operate only on registers (data must be loaded from memory first), while x86 is a CISC design whose instructions can operate directly on memory. Neither architecture is strictly better than the other, but binaries built for one will not run natively on the other, which creates major incompatibility issues when building modern software.

Recently, this problem has only grown as more and more flavors of ARM are built out with their own configurations (even the ARM-based Apple M1 Macs have their own flavor), forcing developers and maintainers to make sure their code truly runs on every platform.

In the robotics space, ARM is nothing new. NVIDIA’s Jetson family of boards all run on ARM-based processors, and many Arduino boards do as well. With that in mind, learning how to build your code, and more specifically your Docker containers, to support both ARM and x86 has become incredibly important if you want to support as many users as possible.

Docker’s official stance on ARM and x86 is that it will “try to do its best to emulate” the missing architecture if you only build for x86. Unfortunately, this rarely cuts it. For example, ROS Docker containers built only for x86 almost never run reliably on ARM (often you can’t even get two such containers working together on the same machine, let alone across a network).
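
A quick way to see what a pulled image was actually built for is to inspect its recorded OS and architecture, which is often the first clue when a container fails with an exec format error. The image name below is just an example:

docker image inspect --format '{{.Os}}/{{.Architecture}}' ros:humble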

Though ROS does offer its ros_cross_compile tool, this article focuses on creating a build system that can work in production using Google Cloud Build. Cloud Build lets you parallelize builds for multiple architectures and store the artifacts in Artifact Registry, Docker Hub, or a private registry. By using cloud resources, we can build our Docker containers faster and make sure they work on whatever device they land on.

This article goes into depth on how to set up Google Cloud Build to build for both x86 and ARM, and provides an example configuration that you can use for any of your ROS builds.

Why Use Docker for Robotics?

Avoiding Colcon Build Hell

Some may wonder why use Docker for robotics at all, specifically with ROS 2. Isn’t it better to install everything in the same workspace and follow the traditional paradigm of creating packages, running colcon build, and compiling everything directly from source? The answer is yes and no. Yes, if it were a perfect world. In reality, ROS 2 often fails to properly link up different dependencies, leading to dependency hell: one package relies on a specific version of a specific C++ library while another package relies on a different version. This leads to more and more errors that have nothing to do with your control code or your perception code. It’s simply a version of “it works on my computer but not yours”, which is exactly where Docker thrives.

Using Docker containers lets us follow a “separation of concerns” pattern in which different pieces of code don’t have to share the same dependencies. It removes dependency hell at the cost of a bit more storage (which can be optimized with things like multi-stage builds that compile only what you need). Docker also makes it much faster to share code and integrate it into an existing system: all we have to do is start a container and it’s integrated directly into the ROS 2 ecosystem, as shown in the sketch below.
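
As a rough illustration, assuming a Linux host and the public ros:humble image, sharing the host network is usually enough for DDS discovery to let the container join the rest of the ROS 2 graph:

# List topics visible on the host's ROS 2 network from inside a container
docker run --rm -it --network host ros:humble ros2 topic list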

Tagged: Deployment, Docker, Software

