Nearly a year ago, I wrote the labSentinel project for my Nvidia Jetson AI Specialist certification. The basic idea of the project is to be able to supervise old lab equipment which does not possess any kind of log output or interface other than a graphical user interface, running on a Windows 3.11 / 95 / NT - maybe even XP - system. I solved this issue by using a video grabber attached to a Jetson Nano and "out-of-band" grabbing the screen output of the experiment computer. I then learned good and bad system states via Nvidia's inference tools and finally got the system to report via MQTT as soon as something went wrong. (As a "test system" I designed a flashy GUI application to mimic the old interfaces - specifically thinking about a lab power supply with multiple outputs - with the ability to simulate errors.) (https://developer.nvidia.com/embedded/community/jetson-projects#labsentinel / https://github.com/nmaas87/labsentinel)
While the project did work, there was still a lot left to be desired:
The system captured the complete screen in full size. Running inference on a 1024x768 or even higher-resolution picture is not efficient and has a high failure rate.
Training, testing and improving the model was time-consuming and did not yield the precision and results I was hoping for.
The system could differentiate between "good" and "error" states - however, if an error occurred, I would have loved to get more information by "reading the GUI" and its output. For example, in the lab power supply use case: getting the specific voltages of the different lines to see which line failed or what is wrong - maybe even with the possibility to cross-check whether the detected error is an error in the first place.
While the Nvidia Jetson Nano Development Board is an awesome tool for development, it is not hardened enough for - or suited to - a lab or even factory floor environment.
These were all points I wanted to address, but as time was lacking, I did not take up the project again - until the start of this year, when Advantech and Edge Impulse started their Advantech Edge AI Challenge 2022. They wanted to hear about specific use cases and how to solve them with factory-hardened Jetson products (e.g. Advantech's AIR-020 series) and Edge Impulse Studio.
Well, that reminded me of the first labSentinel - and I thought I'd give it a try. As luck would have it, I actually was one of the two lucky guys picked to realize their project. Advantech sent me one of their AIR-020X boards (review is here :)) and I was good to go:
Let me introduce you to labSentinel 2:
Built from the ground up, it solves the above-mentioned issues:
The actual GUI window is found and extracted from the full-size desktop screenshots via OpenCV (cv2) - and resized to 320x320 pixels to neatly fit the inference model
All model training, testing and optimization is done with Edge Impulse, which makes handling a breeze
If an error is detected, an included OCR module using Tesseract can extract text from predesignated / labeled areas on the non-resized GUI and send this information along with the MQTT alert
The AIR-020X board is more than robust enough for all normal lab and factory floors
Since 2017 I have been using a Western Digital My Cloud Mirror Gen 2, which I bought during Amazon's Black Friday (or similar) - because the included 2x 8 TB WD Red were even cheaper with the NAS than standalone. Using the NAS has been quite ok - especially the included Docker engine and Plex support were nice to have; the included backdoor in older versions, not so much. Recently WD had their new "My Cloud OS 5" replace the old My Cloud OS 3 - and made things worse for a lot of people. As I don't want any more surprises - and do want more control over my hardware - I decided to finally go down that road and get Debian 11 with an LTS (5.15) kernel running on the hardware. This is how it went.
Warning
These are just my notes on how to convert a My Cloud OS 3 / My Cloud Mirror Gen 2 device to a "real" Debian system. You will need to take your device fully apart, solder wires and void the warranty. Additionally, you will lose all your data and may even brick the hardware if something goes wrong. I take no responsibility whatsoever, nor can I give support. You're on your own now.
Step 0: Get Serial Console Access
Without a serial console, you will not be able to do anything here. You will need to completely disassemble the NAS and will lose all warranty. The bare motherboard will look like this. On the far right side you will see the pins of the UART interface you will need to solder to.
When you're done with that, connect your 3v3 TTL UART USB device like this:
... and connect to it at 115200 baud with PuTTY, TeraTerm Pro or any other software (do not connect the 3v3 pin :)). It would be wise to start without the hard drives installed.
Step 1: Flashing U-Boot
The current U-Boot on the NAS is flawed; you need to replace it. I will be following CyberPK here, who did an awesome job explaining everything:
We have to prepare a USB drive formatted as FAT32, extract the U-Boot image from the link onto it and connect it to USB port #2.
Connect the device to the serial adapter, power on the device and keep pressing '1' (one) during boot until you can see the 'Marvell>>' command prompt, then press Ctrl+C.
From here on we will start changing - and breaking - stuff. But if I could give you one tip before you start: please execute printenv once. Copy and paste all env variables and everything U-Boot spits out. It could save your hardware one day. Thanks, Nico out!
usb start
bubt u-boot-a38x-GrandTeton_2014T3_PQ-nand.bin nand usb
reset
This will reboot the device. Access the command prompt again and add the following envs, a modified version of the ones provided by bodhi in this post:
setenv set_bootargs_stock 'setenv bootargs root=/dev/ram console=ttyS0,115200'
setenv bootcmd_stock 'echo Booting from stock ... ; run set_bootargs_stock; printenv bootargs; nand read.e 0xa00000 0x500000 0x500000;nand read.e 0xf00000 0xa00000 0x500000;bootm 0xa00000 0xf00000'
setenv bootdev 'usb'
setenv device '0:1'
setenv load_image_addr '0x02000020'
setenv load_initrd_addr '0x2900000'
setenv load_image 'echo loading Image ...; fatload $bootdev $device $load_image_addr /boot/uImage'
setenv load_initrd 'echo loading uInitrd ...; fatload $bootdev $device $load_initrd_addr /boot/uInitrd'
setenv usb_set_bootargs 'setenv bootargs "console=ttyS0,115200 root=LABEL=rootfs rootdelay=10 $mtdparts earlyprintk=serial init=/bin/systemd"'
setenv bootcmd_usb 'echo Booting from USB ...; usb start; run usb_set_bootargs; if run load_image; then if run load_initrd; then bootm $load_image_addr $load_initrd_addr; else bootm $load_image_addr; fi; fi; usb stop'
setenv bootcmd 'setenv fdt_skip_update yes; setenv usbActive 0; run bootcmd_usb; setenv usbActive 1; run bootcmd_usb; setenv fdt_skip_update no; run bootcmd_stock; reset'
saveenv
reset
(I also modified this code to use fatload instead of ext2load.)
Step 2: Building the Kernel and Rootfs
Get all dependencies installed according to this repo; I installed them on a Debian 11 machine
Replace the file content of wdmc2-kernel/dts/armada-375-wdmc-gen2.dts with the content of the real and improved dts for the WDMCMG2 (original from this link, copy available here) - but keep the file name armada-375-wdmc-gen2.dts
Replace the file content of wdmc2-kernel/config/linux-5.15.y.config with the file from here (please know this config ain't perfect, but it will get you running. You can always file a PR and help me out ;))
Start the build process in wdmc2-kernel with ./build.sh
Mark: Linux Kernel, Clean Kernel sources, Debian Rootfs, Enable ZRAM on rootfs
Kernel -> Kernel 5.15 LTS
Build initramfs -> Yes
Debian -> Bullseye
Fstab -> usb
Rootpw -> whateverYouWant
Hostname -> whateverYouWant
Locales -> no changes, accept (or whatever you want)
Default locale for system -> en_US.UTF-8 (or whatever you want)
Tzdata -> Your region
Now your kernel and rootfs will be built
While this is on-going, get yourself a nice USB 2.0 or USB 3.0 stick prepared with
partition table: msdos
1st partition: 192 MB, FAT32, label set to boot, boot flag enabled
2nd partition: rest, ext4, label set to rootfs
When the kernel is done compiling and your USB stick is prepared, copy all the files (sda is the name of my USB stick):
tar -xvzf wdmc2-kernel/output/bullseye-rootfs.tar.gz --directory=/mnt/root/
rm -rf /mnt/root/etc/fstab
cp /mnt/root/etc/fstab.usb /mnt/root/etc/fstab
Within /mnt/root/etc/fstab:
change all /dev/sdb to /dev/sdc if both drive slots of the NAS are used <- this!
change all /dev/sdb to /dev/sda if no drive slots of the NAS are used
umount /mnt/boot /mnt/root
Step 3: First boot and getting things running
Insert the USB stick into USB port #2 of the NAS. Leave the drives out for now and boot it up for the first time, watching via the serial terminal. Log in at the end with root and your chosen password.
If it boots, you can shut it down again with shutdown -P now, unplug power, insert the drives and reboot.
First thing after the first boot with drives: build your own initramfs / ramdisk for your current setup:
cd /root/
./build_initramfs.sh
cp initramfs/uRamdisk /boot/boot/uInitrd
Second, install MDADM for the RAID:
apt update
apt install mdadm
mkdir /mnt/HD
Edit your /etc/fstab and add a mount point for your md RAID. I used the old drives with my old data on them (mind which mdX the array comes up as...).
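A sketch of what such an fstab entry could look like - assuming the array comes up as /dev/md0 and carries an ext4 filesystem (both are assumptions; adjust to your setup):

```
/dev/md0    /mnt/HD    ext4    defaults,noatime    0    2
```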
(You can change the low and high temperature values in /usr/sbin/fan-daemon.py to make the fan kick in later, and also set DEBUG = True if you want to see some details in systemctl status fan-daemon)
Having to discuss this topic in 2022 is something I would not have dreamed of - but still, we're here to address the elephant in the room: yes, Windows 10 does support long path names - no, it does not support them by default.
You need to enable it using the AD config or the registry.
* launch regedit with admin rights
* navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem
* Add a new DWORD (32-bit) named LongPathsEnabled with value 1
* reboot
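If you prefer to script the change instead of clicking through regedit, the same DWORD can be set with Python's winreg module - a sketch only, to be run from an elevated prompt on Windows:

```python
import sys

# Registry location and value from the steps above.
KEY_PATH = r"SYSTEM\CurrentControlSet\Control\FileSystem"
VALUE_NAME = "LongPathsEnabled"

def enable_long_paths():
    """Set LongPathsEnabled=1 (Windows only, requires admin rights)."""
    import winreg  # imported here so the module loads on non-Windows too
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_DWORD, 1)

if __name__ == "__main__" and sys.platform == "win32":
    enable_long_paths()
```

A reboot is still needed afterwards, just as with the manual method.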
Alternatively, you can achieve the same result by enabling this value in the Group Policy (Computer Configuration > Administrative Templates > System > Filesystem > Enable Win32 long paths).
However, this will only help with Explorer and "new" applications; some old apps can still suffer from issues (and we're not going to talk about potential WSL/WSLv2/Docker issues with mounted paths here...)
Another interesting task is finding files / paths which are "a little too long". There is a nice tool called TLPD for this - HOWEVER (warning), I need to highlight that only version 4.6 is considered ok ( https://sourceforge.net/projects/tlpd/files/v4.6/ ). The latest version, 4.6.0.1, is infected with some kind of Trojan - and about half of all scanners on VirusTotal confirm this issue. So if you want to use this tool, please only download the 4.6 version - and for good measure scan it before use. Just to be sure that no one plays around with the files in the future...
It's been a while - but good things come to those who wait ;).
Trying to work out a new system you're unfamiliar with can be quite a challenge. In my case, I got my first LoRaWAN concentrator along with some CubeCell HTCC-AB01 boards and tried to get them to work. It turned out to be quite hit and miss. On the one hand, the firmware support for the RAK5146 with USB, GPS and LBT was not really ready yet - on the other hand, the CubeCell Arduino code has a fatal flaw with the preamble size, so that those boards cannot join a LoRa network in the EU868 MHz band (the perfect fix by 1rabbit is linked as well!).
In the end, as I wanted to get this working as well as possible, I bought myself the RAK2287 Pi Hat and started modifying it. I was quite sure that the I2C signals would be available somewhere on that board - as well as 5V + 3v3, along with the raw PPS signal of the GPS module within the RAK5146. I was right and could bridge the PPS signal to an unused RPi GPIO pin.
PPS Pin bridged to GPIO 04
Using the I2C signal lines, Ground and 3v3 I added an I2C sensor interface (call it an ugly QWIIC connector ;)).
PPS hack and "poor mans QWIIC connector added"
I installed the latest UDP Packet Forwarder package by Xose - and everything has been working perfectly since then.
I even added brocaar's Packet Multiplexer and started running a local ChirpStack instance on my home server. Now my sensors are feeding their data directly into my local InfluxDBv2 and Grafana - but at the same time my gateway is still available for TTNv3 users to receive their data. It's awesome, and with that I even receive my private data during WAN outages. Nice!
As an added "bonus", my gpsTime project is running on the same RPi, using the GPS time of the RAK5146 and its PPS signal to be an extremely precise GPS timeserver in my network - and an additional BME280 serves as the "room sensor", because adding another battery-operated device - when you have more than enough CPU power (in the form of the RPi ;)) - is really not needed. The whole device fits behind the TV.
"Not Great, Not Terrible"
All in all, the project was very successful. I am working on some new ideas regarding the sensors, but this depends on my KiCad 6 skills and on deliveries of new RAK hardware currently on the way ;).
I keep all the info in the balena forums, so head over there if you want more details.
Working with the guys over at balena comes with some perks - including the peer pressure of getting into new technologies and trying them out. My go-to person this time was balena Developer Advocate Marc Pous - who is not only no stranger to soldering - but also deeply rooted in the LoRaWAN field (and he is also doing balenaHub projects for LoRaWAN applications 1, 2).
I heard about LoRaWAN a while back, when I was working with Sigfox - but never really gave it a try, other than running a base station for a certain satellite using LoRa for its communication.
This changed through Marc's constant LoRa presentations and work - I just gave in and wanted to try it. I already had three projects at hand: one was getting some BME280s into a network (because, due to the 2.4 GHz spectrum hell induced by too many neighbours with too many WiFi APs cranked to 11, WiFi with the ESP8266 did not work anymore...). This project would also be the "first hello-world" to maybe be allowed to deploy a network in my company to care for our sensors - and lastly, strapping LoRa technology to a rocket to see whether we can manage the roughly 250 km+ downrange.
But things start small, so I wanted to get into the field with best-in-class technology - and the latest available products (especially thinking about ultra-long-range communications as in my last use case). Luckily, the new Semtech SX1303 came to market 8 months ago and RAKwireless decided to build their first concentrator based on it, the RAKwireless WisLink RAK5146 - and I wanted to be an "early adopter".
Wrong expectations
When getting into a new product, I am used to finding a certain amount of good documentation, software and support. I must admit that I was really spoiled by my latest endeavors with Paul Stoffregen's excellent Teensy series (loyal customer since the Teensy 2.0 / 2010 ;)), some new STM32s and also NVIDIA's Jetson lineup.
So I went into this whole project with wrong expectations, as RAKwireless had made a ton of useful stuff available already for their older systems like the RAK2287 - and I thought everything would be in place for the new system as well, which was wrong.
I started the project more than a month ago, and at that time not even a real firmware was available. The only project I found was this GitHub repo - which could not even install on my RAK5146 USB, because someone forgot the chmod +x on the install script. That gave a bad first impression of how well tested this official project would be - and I was not disappointed: I had to figure out on my own how to get it working with TTN, because the important step (after the installation was done) of changing the server address in the configuration file was not included in the setup instructions - so my gateway never connected.
Also, no RPi Hat was available - which led me to look for my own solution, which I found in a USB WWAN adapter card stacked onto the RPi.
A last thing that was confusing, especially when trying to figure out how to connect everything, were the pinout and block diagram. Remember, there are 6 different configurations of the RAK5146:
SPI (always without LBT)
    with GPS
    without GPS
USB
    with LBT
        with GPS
        without GPS
    without LBT
        with GPS
        without GPS
Both the block diagram and pinout for all these variations were handled in one graphic. While one could cope with the block diagram, the pinout is just confusing. Which pins are actually in use on the card in the USB version with LBT and GPS? What is going on with the SPI pins? Are they not even routed?
I tried to use my new TTNv3 gateway with Heltec CubeCells - and while I got it working, data transmission was just unreliable, with huge packet loss.
I must admit, I went into this project with wrong expectations - because I saw the RAK5146 freely available in the RAKwireless shop, thinking it would be another product like the RAK2287 - but that was wrong. It seems to be a product primarily marketed towards big OEM customers - and not towards the hackers at heart.
Moving forwards
To get these issues resolved, I wrote a bunch of PRs and issues on the GitHub repo and documented my findings on balena's forum. Luckily, those comments did not fall on deaf ears and the situation improved:
RAKwireless worked on enhancing the documentation of the RAK5146 - but sadly the block diagram and pinout are still in the same state. Also, no quickstart guide was added.
Thanks to Taylor there is now a firmware package made available, which can be downloaded from the RAK5146 page.
Documentation on using it is not available yet. Please stick to the RAK2287 quickstart guide. You can set it up as "RAK Gateway LoRa Concentrator" for TTN as shown there - but you need to edit the packet-forwarder config afterwards (see the main menu of gateway-config) and replace "server_address": "router.us.thethings.network" with your Things Network router (e.g. eu1.cloud.thethings.network for TTNv3 in Europe). You should then restart the gateway / forwarder. As for adding it to the TTNv3 stack, the quickstart guide is a bit outdated. You can still find out the EUI of your gateway using gateway-version and use this for adding it to TTNv3 - but please be careful not to choose the frequency plan "Europe 863-870 MHz (SF9 for RX2 - recommended)", but rather the RX2 SF12 option to improve signal quality.
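For illustration, the relevant part of the packet-forwarder JSON config would end up looking roughly like this (the surrounding keys follow the usual Semtech UDP packet forwarder layout; your file and port numbers may differ):

```json
{
  "gateway_conf": {
    "server_address": "eu1.cloud.thethings.network",
    "serv_port_up": 1700,
    "serv_port_down": 1700
  }
}
```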
I still have problems with bad performance - and the helpful Jose Marcelino decided to send me an RPi Hat to replace my contraption, which should improve quality. However, after waiting a month, I still have not received this Hat and could therefore not test it.
Xose Pérez also tried to help - and I am grateful to both of them for trying to resolve the situation and getting the RAK5146 working in the hands of power users as well.
After all, it looks like an extremely capable platform - and I would really like to see it performing accordingly.
What is next?
Even though the start was a bit rocky, I want to continue working on this project - after all, my room sensors need some LoRa, as do my other projects - and once these issues are resolved, the RAK5146 looks like a good offer. I will report back on my blog with updates - and wait for the RPi Hat, which hopefully will resolve the issues - or help me pin down the problem, if it turns out to be somewhere else.
I took part in balena's IoT Happy Hour #63 on 30.07.2021 to talk with balena's Marc and David - as well as DynamicDevices' Alex - about bees, gpsTime and space communications. You can watch the talk here on balena's YouTube channel.
I think there is nothing more pleasing than having extremely precise measurements at your fingertips. Like time. While in the past it was quite problematic to measure time accurately (not talking about sundials, but... why not? ;)), mankind has since created one precise time source as a byproduct (read: "waste") of accurate navigation: GNSS, in its different flavors like GPS, GLONASS, Galileo, BeiDou and others.
Tapping into this time source and providing it to your local computer network via NTP has been done by countless people and is an extremely rewarding task. Is it necessary? Maybe not. Is it really cool? Yes. And now it is even easier, as you don't need to configure it yourself but can use balenaHub and the preconfigured gpsTime project.
We do not waste time on fancy logos
Basically, you just need an RPi B+ (2/3/4), a micro SD card, a power supply and a 3v3 TTL-level GPS module with PPS output. The rest is done by going to the balenaHub entry shown above, creating a free account, flashing balenaOS onto your SD card, booting the RPi with internet access for the first time and letting it pull the needed containers. Afterwards, you can use the RPi offline and still enjoy your precise time source.
More details can be found in the GitHub repo, and you can work on and improve that project to your heart's content. I am probably going to do a PiAndMore talk about it - and use the project myself as a building block for precise timing in some support equipment.
I took part in balena's IoT Happy Hour #56 on 11.06.2021 to talk about the past, present and upcoming apex missions. You can watch the talk here on balena's YouTube channel.