A gentle introduction to Elixir IoT

· 11min · pxp9
IoT Elixir

Introduction

IoT (Internet of Things) has become a popular term, at least among software and hardware enthusiasts. Many people want to experience, at least once in their life, what it is like to control real-world devices with software. It is super interesting because it has many applications: cars, household appliances, the manufacturing industry and many more.

If you are reading this article, it is because you are either interested in learning embedded systems, or in programming your first firmware for a board in Elixir.

Basic IoT concepts

To do IoT you will need to know that there are 2 main types of processors:

  • Microprocessor unit (MPU) devices like the Raspberry Pi family, BeagleBone or Arduino UNO Q (the Arduino UNO Q is an exceptional board because it has an MPU and an MCU on the same board).
  • Microcontroller unit (MCU) devices like the ESP32 family, STM32 family or Raspberry Pi Pico family.

Peripherals or Sensors.

Peripherals and sensors are external to MPUs; therefore, you will need to implement a driver for them, or use one that already exists. Peripherals and sensors can be built into an MCU, but they are usually outside of it, and you will need a driver there as well.

Most driver implementations take one communication protocol as a standard, like I2C, SPI or UART, and give meaning to the messages sent over it. If none of these protocols is used, it means the device is controlled by raw signals (also called a parallel interface), and you will need to read the datasheet of the peripheral.
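As a concrete sketch, this is roughly what reading a register over I2C looks like with the Elixir Circuits library. The bus name, device address (0x48), register (0x00) and scaling factor are hypothetical placeholders for a typical temperature sensor; the real values come from your peripheral's datasheet.

```elixir
alias Circuits.I2C

# Open the I2C bus (bus name depends on your board).
{:ok, ref} = I2C.open("i2c-1")

# Write the register we want to read (0x00), then read 2 bytes back.
# 0x48 is a placeholder device address -- check the datasheet.
{:ok, <<raw::signed-16>>} = I2C.write_read(ref, 0x48, <<0x00>>, 2)

# How the raw bytes map to a physical value is also defined by the
# datasheet; this scaling is just an example.
temperature_c = raw / 256
IO.inspect(temperature_c)
```

The driver is essentially this: a thin layer that turns protocol messages into meaningful values.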

Every good MPU, MCU, sensor or peripheral has an open datasheet. I would say: do not buy a peripheral if you cannot easily find its datasheet online.

Main differences between MPUs and MCUs.

  • Performance. MPUs are much faster than MCUs. An MPU clock in an embedded device can run at 2.4 GHz, compared to MCUs, which range from 800 MHz (STM32V8) or 240 MHz (ESP32-S3) down to 16 MHz (the slowest Arduino).
  • OS vs no OS or RTOS. Conventional operating systems give no timing guarantees when they take control in a system call; they return execution control whenever the OS scheduler decides. For time-constrained applications this is really important, and that is why MCUs run either no OS at all or an RTOS.
  • Power consumption. MCUs consume way less energy than MPUs, which makes them a perfect fit for field applications.
  • Applications. MPUs can run AI models, complex web applications, and any kind of heavy processing task. MCUs can run applications with light processing tasks and real-time responses, for example controlling the servos of a piece of mechanical hardware (you do not want a conventional OS to block the execution of a robot for an unknown period of time).

Why Elixir for IoT?

Reduce the number of languages in your stack.

Consider that you want to build a full system, from the smallest MCU chip to the Cloud.

How would you do it without the Elixir way?

  • You will use C, C++ or MicroPython for the MCU.
  • You will use a small Linux distribution (Alpine, or Debian-based like Raspbian), which you will have to install, plus the language of your preference to code on the MPU, and you must make sure the Linux distribution has the proper drivers for the hardware you want to use. You must also make sure the OS starts your application, e.g. via a systemd unit.
  • You will make an application to run in the Cloud with any stack: Go, Python, or whatever language you prefer.

Why is it so hard?

In Elixir you will use:

  • AtomVM for MCUs; depending on the MCU chip, it ships with an RTOS or without one.

  • Nerves for MPUs.

  • Elixir for making cloud applications.

Features for embedded systems that other stacks do not even come close to.

  • Fault tolerant. Your application can recover from unexpected failures. This feature is built into any Erlang VM application.

  • Distributed computing by construction. The Erlang VM concurrency model allows concurrency and distribution in any application.
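As a tiny, board-agnostic illustration of the fault tolerance point: if a process crashes, its supervisor restarts it automatically. `SensorReader` here is a made-up name, not code from any real firmware.

```elixir
defmodule SensorReader do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts), do: {:ok, %{}}
end

# Supervise the process: if it dies, restart it.
{:ok, _sup} = Supervisor.start_link([SensorReader], strategy: :one_for_one)

pid = Process.whereis(SensorReader)
Process.exit(pid, :kill)   # simulate an unexpected failure
Process.sleep(100)         # give the supervisor a moment to restart it

new_pid = Process.whereis(SensorReader)
IO.puts(Process.alive?(new_pid))  # prints true, with a fresh pid
```

The exact same mechanism keeps your firmware processes alive on a board, with no extra code.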

Vibrant and active community.

If you are new to experimenting with this stuff, you might have skill issues like me, and you might want help from people smarter or more experienced than you.

Nerves and AtomVM core maintainers, and enthusiast developers like me, are active in the Elixir and Erlang forums. If you have a question, you can always ask in those channels or in the Elixir Discord server.

Nerves project. MPU IoT in Elixir.

Nerves is an Elixir project for building embedded Linux systems. What does this embedded Linux framework do for you?

  • Prebuilt Linux kernel images (built with Buildroot) for the most popular boards. Nerves provides a customized kernel for each board which boots the Linux system, starts your Erlang VM, and then the Erlang VM starts your application. The customized Linux kernel even sets up a watchdog that tracks the Erlang VM OS process, so if the Erlang VM crashes, the system is smart enough to restart it.

  • Ability to customize the kernel for your board. Nerves gives you the ability to extend the kernel you are going to run on the MPU, to support more hardware.

  • Support for the most common communication protocols in embedded systems: I2C, SPI and UART (via Elixir Circuits), popular device drivers, and so on.

  • Networking support via the Nerves networking libraries: WiFi, mDNS, Ethernet, etc.
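For instance, WiFi on Nerves is typically configured through VintageNet, the library behind the Nerves networking stack. A sketch of configuring a WPA-PSK network at runtime; the interface name, SSID and passphrase are placeholders:

```elixir
# Configure "wlan0" to join a WiFi network and use DHCP.
# Replace ssid/psk with your own network credentials.
VintageNet.configure("wlan0", %{
  type: VintageNetWiFi,
  vintage_net_wifi: %{
    networks: [
      %{key_mgmt: :wpa_psk, ssid: "my_network", psk: "my_passphrase"}
    ]
  },
  ipv4: %{method: :dhcp}
})
```

The same call shape works for Ethernet by swapping the technology module, which is what makes networking on Nerves feel uniform across interfaces.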

    How to start a Nerves project?

    You just need to use Nerves bootstrap. Once it is installed, you can run this command to create an empty Nerves project:

    mix nerves.new my_project
    

By default, Nerves creates a project that supports all officially supported boards, but it will not download a specific Linux build until the MIX_TARGET env var is set and mix deps.get is executed.

mix.exs file.

      {:nerves_system_rpi, "~> 1.24", runtime: false, targets: :rpi},
      {:nerves_system_rpi0, "~> 1.24", runtime: false, targets: :rpi0},
      {:nerves_system_rpi2, "~> 1.24", runtime: false, targets: :rpi2},
      {:nerves_system_rpi3, "~> 1.24", runtime: false, targets: :rpi3},
      {:nerves_system_rpi3a, "~> 1.24", runtime: false, targets: :rpi3a},
      {:nerves_system_rpi4, "~> 1.24", runtime: false, targets: :rpi4},
      {:nerves_system_rpi5, "~> 0.2", runtime: false, targets: :rpi5},
      {:nerves_system_bbb, "~> 2.19", runtime: false, targets: :bbb},

You will need to specify your board:

export MIX_TARGET=rpi3

Note: it could be any of the supported targets; this is just an example.

Then you will get the precompiled Linux build for your board:

mix deps.get

Once you have everything installed, you can start developing.

The development lifecycle looks something like this:

  flowchart LR;
    firmware((Initial Firmware))
    flashed_firmware((Flashed firmware))
    compiled_firmware((Compiled firmware))
    f_compiled_firmware((First Compiled firmware))
    firmware_disupdated((Code updated, but unflashed in the board))
    firmware-- "<a href=https://hexdocs.pm/nerves/Mix.Tasks.Firmware.html>mix firmware</a>" -->f_compiled_firmware
    f_compiled_firmware-- "<a href=https://hexdocs.pm/nerves/Mix.Tasks.Burn.html>mix burn</a>" -->flashed_firmware
    flashed_firmware-- "Edit firmware code"-->firmware_disupdated
    firmware_disupdated--"<a href=https://hexdocs.pm/nerves/Mix.Tasks.Firmware.html>mix firmware</a>"-->compiled_firmware
    compiled_firmware-- "<a href=https://hexdocs.pm/nerves/Mix.Tasks.Upload.html>mix upload</a>"-->flashed_firmware

The reason mix burn and mix upload are different is that one erases all the partitions and existing data on the board, while the other just updates the board via SSH (which is set up by default). Another reason to keep the two tasks split is that Nerves supports blue-green deployments: mix upload uploads the firmware to the partition not currently used to boot, then tries to boot from the partition where the new firmware was flashed; if the new firmware does not boot, it rolls back to the previous working firmware.

Now you should be able to develop your own stuff, with Nerves and its ecosystem.

AtomVM project. MCU IoT in Elixir.

AtomVM is an Erlang and C project that provides a different implementation of the Erlang VM, but it works with the Elixir and Erlang compilers since both compile to standard BEAM bytecode.

The reason AtomVM needs its own implementation of the Erlang VM is that these devices are more hardware-constrained and cannot run Linux, on top of the real-time needs these devices have.

What does AtomVM do for you?

  • Prebuilt AtomVM images for supported boards, available in the releases.

  • Support for the most common communication protocols: I2C, UART and SPI.

  • Networking support in the AtomVM standard libraries (estdlib): WiFi, mDNS, Ethernet, etc.

    CAVEATS: AtomVM is a quite new project, and some devices may have only partially supported features, but all of the features are available on the ESP32 family.

    How to start an AtomVM project?

    Unfortunately, as of the date I am writing this article, this PR has not been merged, which would allow an automatic setup similar to Nerves bootstrap.

    To illustrate what the setup looks like, these are the steps you will need to do.

    1. Create a normal mix new project.
    2. Edit your mix.exs file; it needs the exatomvm dep and the atomvm options, which may differ depending on your board. Here you also define a starting point, which is compulsory for AtomVM apps.
defmodule MyProject.MixProject do
  use Mix.Project

  def project do
    [
      app: :my_project,
      version: "0.1.0",
      elixir: "~> 1.13",
      start_permanent: Mix.env() == :prod,
      deps: deps(),
      atomvm: [
        start: MyProject,
        # change this offset depending on your board
        # just an example
        flash_offset: 0x250000
      ]
    ]
  end

  def application do
    [
      extra_applications: [:logger]
    ]
  end

  defp deps do
    [
      {:exatomvm, git: "https://github.com/atomvm/ExAtomVM/", runtime: false}
    ]
  end
end
  3. Define the start function in the given module.
  4. Execute mix deps.get.
  5. Search for a precompiled firmware for your board in the latest AtomVM release.
  6. Put the .avm files from the latest release in the avm_deps folder (create it if not present).
  7. Flash AtomVM itself onto the board.
  8. Compile your AtomVM firmware.
  9. Flash the .avm file, the result of compiling your AtomVM firmware.
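The starting point referenced by the start: option in mix.exs is just a module with a start/0 function that AtomVM calls as the program's entry point. A minimal example (the module name matches the mix.exs sketch above; the message is arbitrary):

```elixir
defmodule MyProject do
  # AtomVM invokes MyProject.start/0 on boot, as configured by
  # `start: MyProject` in the atomvm options of mix.exs.
  def start() do
    IO.puts("Hello from AtomVM!")
    :ok
  end
end
```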

Once you have everything installed, you can start developing.

The development lifecycle looks something like this:

  flowchart LR;
    empty_board((Empty Board))
    flashed_firmware((Board with AtomVM))
    avm_file((AVM file generated))
    firmware_disupdated((Code updated, but unflashed in the board))
    empty_board-- "Flash AtomVM"-->flashed_firmware
    flashed_firmware--"Edit firmware code"-->firmware_disupdated
    firmware_disupdated--"<a href=https://github.com/atomvm/exatomvm/blob/main/lib/mix/tasks/packbeam.ex>Compile AVM file</a>"-->avm_file
    avm_file--"Flash AVM file"-->flashed_firmware

As you can see, you only need to flash AtomVM itself once.

Note that each time you want to flash, the packbeam task is called, so the .avm file is always generated just before flashing.

ESP32

Fortunately for the ESP32, if we have the exatomvm dep installed, it provides:

  • the mix task esp32.install, which automates steps 5, 6 and 7;

  • the mix task esp32_flash, which automates steps 8 and 9.

For the ESP32, the development lifecycle looks like this:

  flowchart LR;
    empty_board((Empty Board))
    flashed_firmware((ESP32 with AtomVM))
    firmware_disupdated((Code updated, but unflashed in the board))
    empty_board-- "<a href=https://github.com/atomvm/exatomvm/blob/main/lib/mix/tasks/esp32.install.ex>mix esp32.install</a>"-->flashed_firmware
    flashed_firmware--"Edit firmware code"-->firmware_disupdated
    firmware_disupdated--"<a href=https://github.com/atomvm/exatomvm/blob/main/lib/mix/tasks/esp32_flash.ex>mix esp32_flash</a>"-->flashed_firmware

What kind of stuff have I built with this tech?

The reason I am writing this article is that I have built an LLM agent which controls hardware via prompts.

The project is open source and you can check it out here: github.com/pxp9/try_nerves

What this project does is the following:

  • You send an LLM prompt via a Telegram bot which is running in the Nerves firmware.
  • The LLM agent implemented in the Nerves firmware uses the available sensors, which are registered as tools.
  • Each peripheral/sensor registers itself with the LLM agent when it is available.
  • An Arduino board implements a blinking green LED and a photoresistor, connected via UART to the Raspberry Pi 3 where the Nerves firmware is running.
  • A Raspberry Pi Pico 2 W implements an RGB LED, connected via WiFi to the Raspberry Pi 3.
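To give a flavor of the tool-registration idea, here is a minimal sketch (not the project's actual code) of how a sensor could register itself so an agent can discover it, using Elixir's built-in Registry. The tool name, description and read function are made up for illustration.

```elixir
# A unique-key registry acting as the agent's tool catalog.
{:ok, _} = Registry.start_link(keys: :unique, name: ToolRegistry)

# A sensor process registers itself under a tool name when it comes online.
{:ok, _} = Registry.register(ToolRegistry, "photoresistor", %{
  description: "Reads the ambient light level",
  read: fn -> 512 end
})

# The agent looks up available tools by name and invokes them.
[{_pid, tool}] = Registry.lookup(ToolRegistry, "photoresistor")
IO.inspect(tool.read.())
```

Because Registry entries disappear when the owning process dies, a sensor that crashes or goes offline automatically drops out of the tool catalog.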

Then you can do stuff like this:


The prompt given to the system, and the result in the hardware.

[Image: my hardware setup]

I hope you liked the article.

Consider giving a star to the github.com/pxp9/try_nerves repo.

Thank you for your attention.