In 2021 I started playing around with the SX1303 based RAKwireless RAK5146 USB LoRaWAN concentrator in mini-PCIe form factor ( 1, 2 ). Back then I used the RAK2287 Pi Hat, which - even though the website states otherwise ("Note: RAK2287/RAK5146 Pi HAT is compatible only with the SPI version of the RAK2287 and RAK5146 LPWAN concentrators.") - is actually compatible with the USB version of the RAK5146. In the second post I even went so far as to hack the RAK2287 and bridge the PPS (Pulse Per Second) output of the included GPS onto the RPi GPIOs so that I could turn the RPi into a precise GPS NTP server - and I even added an I2C/"poor man's QWIIC" connector to it.
While all these hacks were successful and the balenaOS backed RPi 3 LoRaWAN concentrator was still working (with regular updates) in 2024, I finally wanted to streamline my overall IT setup: I had a WD My Cloud Mirror Gen2 NAS I had updated to Debian 11 in 2022, the LoRaWAN concentrator/NTP server/room sensor RPi 3 and a very old Medion laptop I was abusing as the main "server" - it was time to unify those systems and move on.
Luckily, I had gotten a TuringPi 2 and Turing RK1 SBCs, which are a powerful combination: My plan was to use one RK1 in the Node 3 position on the TuringPi 2, which gives the SBC an NVMe slot, a PCIe attached dual port SATA bridge and the additional USB 2.0 port that can be switched to any node on the board. With the RK1 as a way more powerful CPU (in contrast to the old AMD 1 GHz dual core in the Medion laptop...) and the dual port SATA bridge, I already had server and NAS functionality solved. But what about LoRaWAN and the other functionalities?
Well, as I had bought the RAK5146 USB (with LBT and GPS), I could just run a cable from the Micro USB port of the RAK2287 into the additional USB 2.0 port of the TuringPi 2 - and this was done. However, what about the GPS and PPS input to allow for an NTP server? And did I really want to install the RAK2287 into an ugly box - or leave it open for further projects?
That's when I realized I still had the GeeekPi RPi 4G Hat at hand, an RPi "Hat" I had bought back when I thought it might be the cheaper alternative, but then went out of my way to also buy the RAK2287 and never really tested it.
The GeeekPi is just an ordinary hat - well, let's say it does not even have any connection to the RPi, just a USB C port - not even some GPIO connection. It's basically just a "mini-PCIe to USB breakout for USB LTE modems" in the shape of an RPi Hat.
Looking at the SIM Card Slot I had some mixed feelings and hoped that it would not interfere with the RAK5146 - so I rechecked the pinout:
Luckily it turned out that the RAK engineers had thought about this and marked the SIM card slot connectors as NC/not connected. With this, the card can be inserted into any laptop's WWAN slot or an SBC like the balenaFin. Neat!
After the pinout situation was clear, there was just one thing left: Soldering the needed connections directly to the mini-PCIe connector:
The needed pins were GND, PPS, PI_UART_TX and PI_UART_RX, which I broke out onto a 2.54 mm female header and fixed into place with a bit of hot glue as strain relief:
Then I just needed to add an FTDI USB UART to the mix and connect these pins:
GND to GND
PPS to DCD
PI_UART_TX to TX
PI_UART_RX to RX
Afterwards I just needed to install the RAK5146 and the LoRaWAN and GPS antennas before powering up the unit via the USB C port of the Hat and connecting the FTDI USB UART as well.
In the end I also broke out the RESET_GPS and STANDBY_GPS pins for good measure, but as they are active low I did not need to pull them to any potential. Then again, they could come in handy in the future.
With those changes I can now use the RAK5146 directly over a USB connection and still get the GPS data and PPS signal for use in gpsd. I ended up printing an enclosure for the overall construction, added a 4 port USB hub and connected one of my RAK11300 breakouts as a Meshtastic node as well as a Waveshare RP2040 Zero as an environmental sensor, which hosts a BME280 and an Amphenol Telair T6713 CO2 sensor to measure the room climate. I also rewired the USB 2.0 hub to a USB C connector and now everything has its place - and I still have my empty RAK2287 board lying around in case I were to get another RAK5146 for other jobs ;).
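What comes in on PI_UART_TX is a plain NMEA stream, so a quick way to sanity-check the wiring before pointing gpsd at the port is to verify sentence checksums. A minimal sketch (the sample sentence is the canonical GGA example from the NMEA documentation, not actual output from my module):

```python
def nmea_checksum_ok(sentence: str) -> bool:
    """Validate an NMEA sentence of the form $<payload>*<hex checksum>."""
    if not sentence.startswith("$") or "*" not in sentence:
        return False
    payload, _, checksum = sentence[1:].partition("*")
    calc = 0
    for char in payload:
        calc ^= ord(char)  # NMEA checksum is the XOR over all payload bytes
    return f"{calc:02X}" == checksum.strip().upper()

# canonical GGA example sentence - checksum *47 matches the payload
print(nmea_checksum_ok(
    "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"))
# → True
```

If sentences arriving over the FTDI adapter fail this check, suspect baud rate or the TX/RX wiring before blaming gpsd.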
I just released my latest project, a CCSDS reassembler running on an NXP i.MX RT1060 on RT-Thread OS. You can find it on GitHub: https://github.com/nmaas87/CCSDS-TM-FSM
The folks over at RT-Thread decided to run an IoT contest using hardware from different vendors and their RT-Thread OS. I was lucky enough to get chosen for a networking project with the NXP i.MX RT1060 EVKB, so I decided to document my journey of first getting started with RT-Thread. Here we go! 🙂
Getting started
First of all, it's important to know that there are two versions of the MIMXRT1060 - the EVK and the EVKB. The EVK seems to be an older version which also includes a camera sensor module; the EVKB is the more recent one. RT-Thread's example on GitHub is for the EVK version, which means things like User LED blinking do not work out of the box, but we will take care of this later.
NXP i.MX RT1060 EVKB files and documentation
You'll need to create a free user account on NXP's webpage to download the user manual and schematic, otherwise the last two links will not work.
Clone the repo ( git clone https://github.com/RT-Thread/rt-thread.git ) to your system, (e.g. to D:\RT-ThreadGithub)
First project
Start RT-Thread Studio
File -> Import -> RT-Thread Bsp Project into Workspace
Bsp Location (within the Github Repo): D:\RT-ThreadGithub\rt-thread\bsp\imxrt\imxrt1060-nxp-evk
Project Name: Whatever you want, I chose blinky
Chip Name: MIMXRT1060
Debugger: DAP-LINK
click finish
This will lead to an error pointing you to the workspace folder (e.g. mine is D:\RT-ThreadStudio\workspace\.metadata) where a .log file resides. Open it up and scroll to the end. At the time of writing, there seems to be an error with the initial project generation, shown as: !MESSAGE D:\RT-ThreadGithub\rt-thread\bsp\imxrt\imxrt1060-nxp-evk>scons --dist-ide --project-path=D:\RT-ThreadStudio\workspace/blinky --project-name=blinky. To fix this issue, navigate with the Windows Explorer to the folder D:\RT-ThreadGithub\rt-thread\bsp\imxrt\imxrt1060-nxp-evk. Within the folder, hold Shift, right-click and choose "ConEmu Here" - the envTools will open up. Just copy and paste the complete scons command (scons --dist-ide --project-path=D:\RT-ThreadStudio\workspace/blinky --project-name=blinky) into the envTools window and press Enter. It should compile now.
After this step, click finish in the still open import menu in RT-Thread Studio again, it should work now and generate the new project.
Navigate within the Project Explorer through the project name to applications\ and open the main.c file.
Replace
/* defined the LED pin: GPIO1_IO9 */
#define LED0_PIN GET_PIN(1, 9)
int main(void)
{
#ifndef PHY_USING_KSZ8081
/* set LED0 pin mode to output */
rt_pin_mode(LED0_PIN, PIN_MODE_OUTPUT);
while (1)
{
rt_pin_write(LED0_PIN, PIN_HIGH);
rt_thread_mdelay(500);
rt_pin_write(LED0_PIN, PIN_LOW);
rt_thread_mdelay(500);
}
#endif
}
with
/* defined the LED pin: GPIO1_IO8 */
#define LED0_PIN GET_PIN(1, 8)
int main(void)
{
#ifndef PHY_USING_KSZ8081
/* set LED0 pin mode to output */
rt_pin_mode(LED0_PIN, PIN_MODE_OUTPUT);
while (1)
{
rt_pin_write(LED0_PIN, PIN_HIGH);
rt_thread_mdelay(500);
rt_pin_write(LED0_PIN, PIN_LOW);
rt_thread_mdelay(500);
}
#endif
}
and save the file. The User LED on the EVKB board is not on IO pin 9, but 8. Also, this pin is shared with the ethernet controller - so if we enable ethernet later, the User LED will not work anymore.
Click on the hammer icon ("Build 'Debug'") and it should compile the new software.
Click on the downward green arrow ("Flash Download") to download the program to the hardware board. The User LED should now be flashing.
There can be multiple issues during download:
A window with "J-Link Emulator selection" pops up and asks for connection methods. This means that RT-Thread Studio tries to program via a Segger J-Link, which is the incorrect flash tool for the EVKB. If this comes up, click No on the J-Link screen. Then check, via the little black arrow attached to the Flash Download icon, that "DAP-LINK" is selected. Afterwards try downloading again.
"pyocd.core.exceptions.TargetSupportError: Target type 'mimxrt1060' not recognized." If this error arises it can mean two things:
You did not enter the chip name correctly. Please check whether the name in the error message really is mimxrt1060, without any spelling issues. If there are some, go to the cogwheel icon ("Debug configuration"), Debugger tab, and correct the chip name within the Device name area. Click OK to save and try again.
Scroll up through the error list and you might see the path of the pyocd software, e.g. RealThread\PyOCD\0.1.3 - this would mean you're running the default PyOCD 0.1.3, which has some bugs that prevent downloading to flash. Directly next to the "Flash Download" icon is the "SDK Manager"; open it up and scroll down to "Debugger_Support_Packages", "PyOCD". Choose the latest version (e.g. 0.2.0) and click on "Install packages". You can then select the old version(s) you have installed and click on "Delete packages". Afterwards close the SDK Manager. This should fix the issue.
If there are no issues with the download, you can also "Open a Terminal" (computer screen icon close to "Flash Download") and start it with the correct settings (e.g. 115200 baud and the correct serial port, which should be chosen automatically if you already flashed a program before). You should see the RT-Thread msh console running on your EVKB and be able to send a "help" to get an overview of the device:
\ | /
- RT - Thread Operating System
/ | \ 5.0.1 build May 28 2023 14:25:59
2006 - 2022 Copyright by RT-Thread team
msh >help
RT-Thread shell commands:
clear - clear the terminal screen
version - show RT-Thread version information
list - list objects
help - RT-Thread shell help.
ps - List threads in the system.
free - Show the memory usage in the system.
pin - pin [option]
reboot - reset system
msh >
To get a little bit further into a project, I replaced the main.c PIN definition and main() with the following code:
// wrong definition, GPIO1_IO9 is ethernet leds on EVKB
// #define LED0_PIN GET_PIN(1, 9)
// D8 (GPIO01-08) is the user led
#define LED0_PIN GET_PIN(1, 8)
// SW5 (GPIO5-00) is the user button
#define SW5_PIN GET_PIN(5, 0)
int main(void)
{
#ifndef PHY_USING_KSZ8081
// set LED0 pin mode to output
rt_pin_mode(LED0_PIN, PIN_MODE_OUTPUT);
// set SW5 pin mode to pullup
rt_pin_mode(SW5_PIN, PIN_MODE_INPUT_PULLUP);
while (1)
{
if (!rt_pin_read(SW5_PIN)) {
rt_pin_write(LED0_PIN, PIN_HIGH);
} else {
rt_pin_write(LED0_PIN, PIN_LOW);
}
/*
rt_pin_write(LED0_PIN, PIN_HIGH);
rt_thread_mdelay(500);
rt_pin_write(LED0_PIN, PIN_LOW);
rt_thread_mdelay(500);
*/
}
#endif
}
This couples the LED to the status of the user switch (SW5): If it's pressed, the LED will turn on; if it's not pressed, the LED will stay off. Just save, re-compile and re-download.
Bugs
As seen above, I already found some bugs:
RT-Thread Studio error upon trying to import a project
RT-Thread Studio failing to choose the correct debugger even though it was selected on creation/import of the project
pyocd error upon download of firmware to the NXP MCU due to old pyocd version shipped with RT-Thread Studio
Integration of the menuconfig tooling as "RT-Thread Settings" within RT-Thread Studio, which just does not have any effect on the project
I hope that these issues get solved soon - but with the info above you should be able to get started. I will see you in the next post - probably going through the project I made :).
I remarked that the hardware was great, however the software support and update capability of the system was severely lagging behind for an "industrial floor, always on" type of machine. Luckily, that's exactly what balena has been created for.
Even better, their environment already supports Nvidia Jetson devices - including Nvidia Jetson Xavier NX modules. With the AIR-020X being a really nice carrier board (and housing) for this module, I went to work.
Installing balenaOS on the AIR-020X
1.) I set up an Ubuntu 20.04 LTS machine, installed npm and set up jetson-flash
2.) I went to https://www.balena.io/os and downloaded the latest NVIDIA JETSON XAVIER NX DEVKIT EMMC image (2.107.10) in the development version.
3.) Unzip the file after setting up jetson-flash, then get your AIR-020X into recovery mode. This means opening the bottom of the case by unscrewing the 4 Phillips head screws, connecting the Micro USB port of the AIR-020X with your Ubuntu host computer and applying power to the AIR-020X - but do not press the power switch yet.
4.) There is a foil/recovery switch next to the Micro USB connector and LAN port. You need to press and hold this switch and at the same time press the power on button of the unit for about 4 seconds.
5.) On Ubuntu, run lsusb | grep Nvidia - this should return a similar line to this
Bus 003 Device 005: ID 0955:7023 NVIDIA Corp. APX
Important is the ending "APX", which means it is in recovery mode.
6.) Run the jetson-flash tool. The .img value points to the unzipped image file, and -m tells the jetson-flash tool that we are running a Xavier NX system and want to install balenaOS on the internal eMMC module.
7.) This will now start the process, which will take some minutes and also ask you for your sudo password. At the end you should see something like this:
[ 255.8670 ] Flashing completed
[ 255.8670 ] Coldbooting the device
[ 255.8696 ] tegrarcm_v2 --ismb2
[ 255.9454 ]
[ 255.9502 ] tegradevflash_v2 --reboot coldboot
[ 255.9530 ] Bootloader version 01.00.0000
[ 255.9984 ]
*** The target t186ref has been flashed successfully. ***
Reset the board to boot from internal eMMC.
8.) As soon as you reboot the device, you will be greeted with the balenaOS logo and can use it as any other balenaOS device.
Adding the AIR-020X to a fleet
If you want to use it e.g. in a fleet, I would recommend creating a new one with the device type Nvidia Jetson Xavier. This is important to allow sample projects to work correctly: it's basically the same thing as the more specialized device type "jetson-xavier-nx-devkit-emmc", but most demo projects just implement the former one :).
To now join the installed device to your new fleet, download and install balenaCLI, log in to your balena Cloud account and do a balena scan using balenaCLI to find your AIR-020X on the network.
.\balena join 192.168.178.112
? Select fleet <yourFleetNameToSelect>
? Check for updates every X minutes 10
[Success] Device successfully joined balena-cloud.com!
... and voila, it's online!
What does work?
The AIR-020X has a lot of custom GPIO chips, 2x RS485/RS232 interfaces, 1x CANbus interface, a second network interface and even an NVMe slot. Luckily, everything just works out of the box.
- HDMI works
- USB works
- onboard network card (dmesg + dhcp test, gets ip / works)
[ 29.231807] eqos 2490000.ether_qos eth0: Link is Up - 1Gbps/Full - flow control rx/tx
- 2nd network card (dmesg + dhcp test, get ip / works)
[ 104.307175] igb 0004:05:00.0 enP4p5s0: igb: enP4p5s0 NIC Link is Up 1000 Mbps Full Duplex, Flow
- NVMe is recognized (lsblk)
nvme0n1 259:0 0 119.2G 0 disk
|-nvme0n1p1 259:1 0 96G 0 part
|-nvme0n1p2 259:2 0 64M 0 part
|-nvme0n1p3 259:3 0 64M 0 part
|-nvme0n1p4 259:4 0 448K 0 part
|-nvme0n1p5 259:5 0 448K 0 part
|-nvme0n1p6 259:6 0 63M 0 part
|-nvme0n1p7 259:7 0 512K 0 part
|-nvme0n1p8 259:8 0 256K 0 part
|-nvme0n1p9 259:9 0 256K 0 part
|-nvme0n1p10 259:10 0 300M 0 part
`-nvme0n1p11 259:11 0 22.8G 0 part
- can bus interface is auto loaded on boot (see ifconfig -a)
can0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
NOARP MTU:16 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:10
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Interrupt:63
- gpio/dio works, but bit3 sadly does not
( more info: http://ess-wiki.advantech.com.tw/view/File:AIR-020-nVidia_GPIO.docx )
Pin Number AIR-020X AIR-020T AIR-020N
GPIO bit1 393 269 38
GPIO bit2 421 425 149
GPIO bit3 265 411 65
GPIO bit4 424 264 168
GPIO bit5 418 476 202
GPIO bit6 436 396 246
GPIO bit7 417 337 169
GPIO bit8 268 338 194
# set bit 1 as GPIO pin
echo 393 > /sys/class/gpio/export
# get value 0=low, 1=high
cat /sys/class/gpio/gpio393/value
# set direction out or in
echo out > /sys/class/gpio/gpio393/direction
# get direction
cat /sys/class/gpio/gpio393/direction
out
# set value on out pin
echo 1 > /sys/class/gpio/gpio393/value
test:
# 265, bit3 did not work on export
echo 393 > /sys/class/gpio/export
echo 421 > /sys/class/gpio/export
echo 265 > /sys/class/gpio/export
echo 424 > /sys/class/gpio/export
echo 418 > /sys/class/gpio/export
echo 436 > /sys/class/gpio/export
echo 417 > /sys/class/gpio/export
echo 268 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio393/direction
echo out > /sys/class/gpio/gpio421/direction
echo out > /sys/class/gpio/gpio265/direction
echo out > /sys/class/gpio/gpio424/direction
echo out > /sys/class/gpio/gpio418/direction
echo out > /sys/class/gpio/gpio436/direction
echo out > /sys/class/gpio/gpio417/direction
echo out > /sys/class/gpio/gpio268/direction
echo 1 > /sys/class/gpio/gpio393/value
echo 1 > /sys/class/gpio/gpio421/value
echo 1 > /sys/class/gpio/gpio265/value
echo 1 > /sys/class/gpio/gpio424/value
echo 1 > /sys/class/gpio/gpio418/value
echo 1 > /sys/class/gpio/gpio436/value
echo 1 > /sys/class/gpio/gpio417/value
echo 1 > /sys/class/gpio/gpio268/value
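For scripting, the sysfs sequence above can be wrapped in a small helper. A Python sketch (the sysfs paths are the ones used above; the helper name and the bit-to-pin dict are my own additions, with the pin numbers copied from the AIR-020X column of the table):

```python
from pathlib import Path

SYSFS_GPIO = Path("/sys/class/gpio")  # the real sysfs root on the device

def set_output(pin: int, value: int, root: Path = SYSFS_GPIO) -> None:
    """Export a pin if needed, switch it to output and drive it high/low -
    the same sequence as the echo commands above."""
    pin_dir = root / f"gpio{pin}"
    if not pin_dir.exists():
        # the kernel creates gpio<pin>/ in response to this write
        (root / "export").write_text(str(pin))
    (pin_dir / "direction").write_text("out")
    (pin_dir / "value").write_text(str(value))

# AIR-020X pin numbers from the table above
# (bit3 / pin 265 left out, as its export failed on my unit)
AIR020X_BITS = {1: 393, 2: 421, 4: 424, 5: 418, 6: 436, 7: 417, 8: 268}

# e.g. drive DIO bit 1 high:
# set_output(AIR020X_BITS[1], 1)
```

Note that the script needs the same permissions as the shell commands (typically root) when run against the real sysfs tree.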
- com ports, running as RS-232 or RS-485 (not tested, but recognized)
( more info: http://ess-wiki.advantech.com.tw/view/AIR-020-RS-485 )
root@56e8bf3:/# ls /dev/ | grep ttyTH
ttyTHS0 <- COM1
ttyTHS1 <- COM2
ttyTHS4
More info to the hardware can be found in the Advantech Wiki.
GPU Demos
Last but not least I want to point you towards the nice balena Jetson tutorial which can be found here.
It will help you get started with the Jetson samples that are hosted here.
In the end I was able to also get CUDA acceleration to work and see this smoke demo:
With that I am closing this post. It was surprisingly easy to get this device to work - the only thing left would be to get it to boot and work from its internal NVMe storage, but other than that it's a nice tool for working with GPU workloads like Edge Impulse.
Normally, I do not get review units. This is due to the fact that I am only hosting this small weblog, along with some conference talks - and most companies would probably be better off sending their units to someone with the reach of Linus Tech Tips, or similar.
On the other hand - when I do get the chance to do a review, it can be a bit worrisome for the companies as well, as I am a very honest person. I have been working in tech for some time now and have had the honor of building stuff that went to space - and came back to tell the tale. I know what I want in a unit - and what could be a problem.
With this out of the way: I was one lucky winner of the Advantech Edge AI Challenge 2022 and got an AIR-020X-S9A1 unit at no charge to be able to realize my labSentinel 2 project. By doing this project I learned a bit about the box and thought it would not be a bad idea to share my impressions with the readers of my blog - and also with Advantech, so that they can improve upon their product. This review is not paid for and reflects my own thoughts; I got the mentioned unit for my project, and the review was not a part of that deal. Let's get started.
The hardware
The AIR-020X comes very well packaged - having its own foam jacket which will save it from all but the most horrible abuse by postal services. Not that it would matter: The roughly 14 cm x 12 cm x 4.5 cm compact unit weighs in at nearly 850 g and is built sturdy and robust - like a tank:
The most obvious part of the unit is its heatsink, which it puts to good use - but more on that topic later. Along with the computer itself comes a Chinese printed starting guide and a short USB A to Micro B cable, which will be needed to factory reset and reflash the unit.
All in all, the AIR-020X is an impressive unit, including an Nvidia Jetson Xavier NX module with 8 GB RAM, 16 GB onboard eMMC, 128 GB M.2 flash, 2x RS232/422/485, 1x CANbus, 1x DIO ("GPIO"), 2x 1 Gbit ethernet, 1x full-size mPCIe with nano SIM holder, 1x 4k HDMI output, 2x USB 3.0 Type A and 1x USB Type C. The unit is powered by a 12-24 V DC power supply, which is an optional accessory.
Being an industrial unit, it uses an industrial type connector for power, which is an HT5.08 2 pole type:
As this connector is not part of the base package either, and the USB C connector does not accept Power Delivery (and neither works in DisplayPort mode), it becomes a bit harder to power up the unit after receiving it. Finding a usable power supply within the sizable voltage range of 12-24 V (e.g. from an old laptop) is fairly easy, but without the connector it remains a dead end until the next delivery arrives. It would be useful to at least include one connector with the base unit. The USB cable is a nice addition, but could be left out (even though it's of very high quality) - along with the Chinese manual. This could be replaced with a small card with direct links to the English and Chinese PDF versions of the manual.
Opening up the unit reveals the internals - but not without a fight:
The screws are perfectly fixed to the structure using blue Loctite - a touch I cannot recommend enough for the vibration resistance of the overall unit - but the screws themselves are made from extremely soft metal, so that - even using the correct screwdriver - I stripped nearly all of them and had real issues removing them. This problem seems to exist for all the external black screws; the internal silver ones were of much higher quality. In my case I fixed the issue by replacing the screws with new ones and never had a problem with them again.
The internal structure is very well laid out, raising the M.2 drive onto a pole to keep it a bit further from the heat source / Xavier NX module which is just sitting on the other side of the PCB and directly sandwiches with the big heatsink.
Also very welcome is the addition of the two Raspberry Pi style camera connectors, although they are a bit hidden by the serial console cables. I understand that the unit should be as closed as possible for use in factories, but I would have loved to see two small slits (possibly even with some IP/EMC gaskets to allow for protective shielding of those entry points) so that cameras on the outside of the case can be easily attached.
The mPCIe slot gives the system an additional expansion slot for e.g. UMTS or LoRaWAN modules, and the internal CR2032 cell for the RTC is a small but valuable detail.
The AIR-020X has some mounting points available on both system sides for additional wall mounting rails. Looking at the mounting points and the obvious use of the AIR-020 series in lab and factory settings, the inclusion of a DIN rail mount as available accessory could prove very useful to directly mount this small computer into an electrical cabinet.
The software
Booting up the system greets one with a very familiar picture: Ubuntu 18.04 is running on the machine in the form of a tailored version of Nvidia JetPack. This version by Advantech only uses the eMMC of the Xavier NX module to start the bootloader; the actual data is kept on the M.2. This is a great idea for the longevity of the eMMC on the (currently hard to find) Xavier NX module - but it means additional customization is needed beyond "only" the PCB, included hardware, drivers and other changes made by Advantech in comparison to an Nvidia developer board for the same module.
This is a problem I also learned the hard way: I realized that the board was delivered with L4T 32.5.2 - not the current 32.7.x (JetPack 4.6.1) - so I updated this by hand. Just to have the board bootloop. This was the moment I took a closer look at the online presence of Advantech and the manual - just to learn that the recovery process was neither described, nor was the image available for download. I got the needed recovery file as well as the documentation (which also included vital information on how to use the DIO (GPIO), RS422 and CANbus interfaces) and was able to restore the board to working order. Obviously there were multiple problems with this: First, the online available manual should contain all needed information regarding settings, ports, recovery, etc. - secondly, the current (and maybe even older) images also need to be available online on their website - with checksums, to be able to deploy these images safely.
I also voiced my concerns regarding the high impact security issues / CVEs found in 32.5.2 - which would make the use of AIR-020 series an absolute liability in a production environment. I am glad to report that Advantech reacted to these concerns with providing a beta version of a new JetPack 4.6.1 Image. A short time afterwards, Advantech did add some information to their wiki:
On the download page you can find the AIR020A2AIM20UIV00004 entry for the Jetson NX JetPack 4.6.1 from 2022-07-20. This links to a Dropbox folder containing the latest image (AIR020A2AIM20UIV00004_194.tar.gz / 2022-09-16).
With this latest image I was able to upgrade the AIR-020X to JetPack 4.6.1 and even do an apt upgrade to L4T 32.7.2, at the time the latest L4T. However, this did not go as planned: After doing the upgrade and rebooting the device, it got caught in a bootloop. This bootloop kept repeating for about 10 minutes until the device mysteriously started working and came back up without issues. Obviously this is not a graceful upgrade, and it instilled some concern that this was a reproducible issue.
I am glad to report that Advantech has provided the latest image - which eliminates several security issues. However, the needed changes in the manual as well as the provision of the recovery images (now via Dropbox?) and the secure provision of security updates for the unit remain. Maybe Advantech should think about starting to use balena.io to handle these issues?
Verdict
The Advantech AIR-020X is an extremely capable unit in a small form factor, sturdily built and highly reliable. Even with the latest JetPack 4.6.1 and abuse of the formerly unavailable 20 Watt mode, I could not get this unit to heat up too much in my testing with labSentinel 2. There is still enough headroom available to use it in any kind of environment, which makes it a perfect choice for labs and factories - if Advantech can tackle the presented issues, especially the ones regarding timely and secure availability of security patches and software updates. This also means availability of these images, fast adaption after the release of official Nvidia updates and all needed documentation in one manual for public download. With these exceptions and some small kinks, Advantech is so close to building the perfect unit for their envisioned use case. I really hope they can close that last (security/software/manual) gap in an otherwise nearly perfect piece of hardware - and with that create a recommendable product.
I have been using multiple CAM-M8Q breakouts by Watterott and really love these units. They are small, reasonably priced and have the advantage of an integrated chip antenna. However, this is also their small shortcoming: While the antenna is good enough for most outdoor jobs, you can run into sensitivity issues when deploying it indoors - if not set up next to a window. Luckily, the module has two additional u.fl connectors for RFin and RFout, meaning you can use an external antenna.
To accomplish this, you just need to move the resistor from position R3 to position R1 - as outlined in the schematics:
Overview over the CAM-M8Q, copyright by Watterott
With this, you could attach a passive antenna, but an active one will not work, as no power is supplied by the module. However, you can add this power insert yourself with an inductor and a capacitor.
I did this with some SMD components, but did not add the insert "behind" the u.fl connector; instead I placed it between the two jumper pads R1 would be using, so I could make it a part of the module.
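As a rough sanity check of such a power insert at the GPS L1 frequency (the component values here are generic bias-tee assumptions, not necessarily the exact parts I used): the series capacitor must be nearly transparent to RF while the inductor feeds DC to the antenna and blocks RF from leaking into the supply:

```latex
f = 1575.42\,\mathrm{MHz} \quad (\text{GPS L1})

X_L = 2\pi f L \approx 2\pi \cdot 1.575\times10^{9}\,\mathrm{Hz} \cdot 27\,\mathrm{nH} \approx 267\,\Omega \quad (\text{choke: blocks RF towards the supply})

X_C = \frac{1}{2\pi f C} \approx \frac{1}{2\pi \cdot 1.575\times10^{9}\,\mathrm{Hz} \cdot 100\,\mathrm{pF}} \approx 1\,\Omega \quad (\text{series cap: passes RF, blocks DC})
```

So with values in this ballpark, the RF path barely notices the insert while the antenna still gets its supply voltage.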
This worked perfectly and the reception is great:
As an antenna I am using the Navilock NL-202AA - I have not received any Galileo signal (even though it should be possible), but other than that I am very happy with the solution.
Thanks again to Mr. Watterott for pointing me to this StackExchange post, which contained the solution for the power insert.
Nearly a year ago, I wrote the labSentinel project for my Nvidia Jetson AI Specialist certification. The basic idea of the project is to be able to supervise old lab equipment which does not possess any kind of log output or interface other than a graphical user interface, running on a Windows 3.11 / 95 / NT - maybe even XP - system. I solved this issue by using a video grabber attached to a Jetson Nano, "out-of-band" grabbing the screen output of the experiment computer. I then learned good and bad system states via Nvidia's inference tools and finally got the system to report via MQTT as soon as something went wrong. (As a "test system" I designed a flashy GUI application to try to mimic the old interfaces - specifically thinking about a lab power supply with multiple outputs - with the ability to simulate errors.) (https://developer.nvidia.com/embedded/community/jetson-projects#labsentinel / https://github.com/nmaas87/labsentinel)
While the project did work, there was still a lot left to be desired:
The system captured the complete screen in full size. Running inference on a 1024x768 or even higher resolution picture is not efficient and has a high failure rate.
Training, testing and improving the model was time consuming and did not yield the precision and results I was hoping for.
The system could differentiate between "good" and "error" states - however, if an error occurred, I would have loved to get more information by "reading the GUI" and its output. For example, in the lab power supply use case, getting the specific voltages of the different lines to see which line failed or what is wrong - maybe even with the possibility to cross-check whether the detected error is an error in the first place.
While the Nvidia Jetson Nano development board is an awesome tool for development, it is not hardened enough for a lab or even factory floor environment.
These were all points I wanted to address, but as time was lacking, I did not take up the project again - until at the start of this year Advantech and Edge Impulse started their Advantech Edge AI Challenge 2022. They wanted to know about specific use cases and how to solve them with factory-hardened Jetson products (e.g. Advantech's AIR-020 series) and Edge Impulse Studio.
Well, that reminded me of the first labSentinel - and I thought I'd give it a try. As luck would have it, I actually was one of the two lucky people picked to realize their project. Advantech sent me one of their AIR-020X boards (the review is here :)) and I was good to go:
Let me introduce you to labSentinel 2:
Built from the ground up, it solves the above mentioned issues:
The actual GUI window is found and extracted from the full-size desktop screenshots via OpenCV 2 - and resized to 320x320 pixels to neatly fit the inference model
All model training, testing and optimization is done with Edge Impulse, which makes handling a breeze
If an error is detected, an included OCR module using Tesseract can extract text from predesignated/labeled areas of the non-resized GUI and send this information along with the MQTT alert
The AIR-020X board is more than robust enough for all normal lab and factory floors
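The window-extraction step above can be illustrated with a stripped-down stand-in for the actual OpenCV 2 code: threshold the grayscale screenshot, take the bounding box of everything that is not background, and crop to it. A toy sketch in plain Python (names, sizes and the fake "desktop" are illustrative, not from the real project):

```python
def find_window(screenshot, background=0):
    """Return (top, left, bottom, right) of all non-background pixels -
    a stand-in for OpenCV's findContours/boundingRect step."""
    rows = [y for y, row in enumerate(screenshot)
            if any(p != background for p in row)]
    cols = [x for x in range(len(screenshot[0]))
            if any(row[x] != background for row in screenshot)]
    if not rows or not cols:
        return None  # nothing but background on screen
    return rows[0], cols[0], rows[-1], cols[-1]

def crop(screenshot, box):
    """Cut the detected GUI window out of the full screenshot."""
    top, left, bottom, right = box
    return [row[left:right + 1] for row in screenshot[top:bottom + 1]]

# toy 6x8 "desktop" with a bright 2x3 "GUI window" in the middle
desktop = [[0] * 8 for _ in range(6)]
for y in (2, 3):
    for x in (3, 4, 5):
        desktop[y][x] = 255

box = find_window(desktop)
print(box)  # → (2, 3, 3, 5)
```

In the real pipeline the crop is then resized to 320x320 for inference, while the OCR regions are read from the uncropped, full-resolution image.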
It's been a while - but good things come to those who wait ;).
Trying to work out a new system you're unfamiliar with can be quite a challenge. In my case, I got my first LoRaWAN concentrator along with some CubeCell HTCC-AB01 nodes and tried to get them to work. It turned out to be quite hit and miss: on the one hand, the firmware support for the RAK5146 with USB, GPS and LBT was not really ready yet - on the other hand, the CubeCell Arduino code had a fatal flaw with the preamble size, so those nodes could not join a LoRaWAN network in the EU868 band (the perfect fix by 1rabbit is linked as well!).
In the end, as I wanted to get this working as well as possible, I bought the RAK2287 Pi Hat and started modifying it. I was quite sure the I2C signals should be available somewhere on that board - as well as 5V and 3v3, along with the raw PPS signal of the GPS module inside the RAK5146. I was right and could bridge the PPS signal to an unused RPi GPIO pin.
PPS Pin bridged to GPIO 04
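With the PPS line bridged to GPIO 4, the Raspberry Pi kernel can expose it as a PPS device via the stock pps-gpio device tree overlay - a sketch of the relevant boot configuration (assuming GPIO 4, as in the picture):

```
# /boot/config.txt (excerpt): expose the bridged PPS signal as /dev/pps0
dtoverlay=pps-gpio,gpiopin=4
```

After a reboot, `ppstest /dev/pps0` (from the pps-tools package) should show one pulse per second once the GPS has a fix.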
Using the I2C signal lines, ground and 3v3, I added an I2C sensor interface (call it an ugly QWIIC connector ;)).
PPS hack and "poor mans QWIIC connector added"
I installed the latest UDP Packet Forwarder package by Xose - and everything has been working perfectly since then.
I even added brocaar's Packet Multiplexer and started running a local ChirpStack instance on my home server. Now my sensors feed their data directly into my local InfluxDBv2 and Grafana - but at the same time my gateway is still available for TTNv3 users to receive their data. It's awesome, and this way I even receive my private data during WAN outages. Nice!
As an added bonus, my gpsTime project is running on the same RPi, using the GPS time of the RAK5146 and its PPS signal to act as an extremely precise GPS time server on my network - and an additional BME280 serves as the room sensor, because adding another battery-operated device is really not needed when you already have more than enough CPU power (in the form of the RPi ;)). The whole device fits behind the TV.
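The multiplexer setup boils down to pointing the packet forwarder at the multiplexer and fanning the UDP traffic out to several backends. A hedged sketch of a chirpstack-packet-multiplexer configuration - the hostnames and the gateway EUI are placeholders, and the exact option names should be checked against the project's README:

```
[packet_multiplexer]
bind="0.0.0.0:1700"

  # Fan out to the local ChirpStack instance (full uplink + downlink)
  [[packet_multiplexer.backend]]
  host="localhost:1701"
  uplink_only=false

  # ...and mirror uplinks to The Things Network (placeholder hostname)
  [[packet_multiplexer.backend]]
  host="eu1.cloud.thethings.network:1700"
  uplink_only=true
```

The `uplink_only` flag on the second backend is what keeps two network servers from fighting over downlink scheduling on the same gateway.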
"Not Great, Not Terrible"
All in all, the project was very successful. I am working on some new ideas regarding the sensors, but those are pending on my KiCad 6 skills and deliveries of new RAK hardware currently on the way ;).
I still keep all the info in the balena Forums, so head over there if you want more details.
I think there is nothing more pleasing than having extremely precise measurements at your fingertips. Like time. While in the past it was quite problematic to measure time accurately (not talking about sundials, but... why not? ;)), mankind has created one precise time source as a byproduct (read: "waste") of accurate navigation: GNSS, in its different flavors like GPS, GLONASS, Galileo, BeiDou and others.
Tapping into this time source and providing it to your local computer network via NTP has been done by countless people and is an extremely rewarding task. Is it necessary? Maybe not. Is it really cool? Yes. And now it is even easier, as you don't need to configure it yourself but can use balenaHub and the preconfigured gpsTime project.
We do not waste time on fancy logos 😉
Basically, you just need an RPi B+ (2/3/4), a Micro SD card, a power supply and a 3v3 TTL-level GPS module with PPS output. The rest is done by going to the balenaHub entry shown above, creating a free account, flashing balenaOS onto your SD card, booting the RPi with internet access for the first time and letting it pull the needed containers. Afterwards you can use the RPi offline and still enjoy your precise time source.
More details can be found in the GitHub repo and you can work on and improve the project to your heart's content. I will probably do a PiAndMore talk about it - and use the project myself as a building block for precise timing in some support equipment.
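Under the hood, a setup like this typically combines gpsd (decoding the NMEA stream) with chrony (disciplining the clock on the PPS edge). A sketch of the relevant chrony configuration - an assumption about how such a stack is usually wired up, not necessarily the exact gpsTime configuration; the offset value and subnet are placeholders to tune for your hardware:

```
# /etc/chrony/chrony.conf (excerpt)
# Coarse time from the NMEA sentences via gpsd's shared memory segment;
# the offset compensates the serial delivery delay of the sentences.
refclock SHM 0 refid NMEA offset 0.2

# Sub-microsecond discipline from the PPS edge, locked to the NMEA source
refclock PPS /dev/pps0 lock NMEA refid PPS

# Serve time to the local network (placeholder subnet)
allow 192.168.0.0/16
```

`chronyc sources -v` then shows both refclocks and how tightly the system clock tracks the PPS signal.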