For my German readers: I took part in Leonid Lezner's Allesnetz Podcast 11 - about Resin.io and Docker - together with Resin.io's very own Jonas Hermsmeier - you can check out the podcast here.
Category: Docker
CUDA and Tensorflow in Docker
In this howto we will get CUDA working in Docker. And - as a bonus - add Tensorflow on top! However, please note that you'll need the following prerequisites:
- GNU/Linux x86_64 with kernel version > 3.10
- Docker >= 1.9 (official docker-engine, docker-ce or docker-ee only)
- NVIDIA GPU with Architecture > Fermi (2.1)
- NVIDIA drivers >= 340.29 with binary nvidia-modprobe
We will install the NVIDIA drivers in this tutorial, so you only need the right kernel and Docker version installed already; we're using an Ubuntu 15.05 x64 machine here. For CUDA, you'll need a Fermi 2.1 CUDA card (or better), for Tensorflow a card with compute capability >= 3.0...
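A quick sanity check of the software prerequisites could look like this (just a minimal sketch, the exact version strings on your machine will of course differ):

# kernel must be > 3.10
uname -r
# Docker must be >= 1.9 (docker-engine, docker-ce or docker-ee)
docker --version
# nvidia-modprobe should be available once the NVIDIA driver is installed
which nvidia-modprobe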
Which Graphics Card Model do I own?
lspci | grep VGA
sudo lshw -C video
Example output:
product: GF108 [GeForce GT 430]
vendor: NVIDIA Corporation
You should look up whether it works with CUDA / Fermi 2.1, e.g. on https://developer.nvidia.com/cuda-gpus
GeForce GT 430 - Compute: 2.1
Ok, that one works!
I got additional info from: https://www.geforce.com/hardware/desktop-gpus/geforce-gt-430/specifications
CUDA and Docker?
You can find out more about that topic on https://github.com/NVIDIA/nvidia-docker
Getting it to work will be the next step:
Download right CUDA / NVIDIA Driver
from http://www.nvidia.com/object/unix.html
I chose Linux x86_64/AMD64/EM64T, Latest Long Lived Branch version: 375.66, but please check in the description of the file whether your graphics card is supported!
After downloading, install the driver:
chmod +x NVIDIA-Linux-x86_64-375.66.run
sudo ./NVIDIA-Linux-x86_64-375.66.run
It will ask for permission; accept it. If it tells you that the nouveau driver needs to be disabled, accept that as well: in the next step it will generate a blacklist file and exit the setup. Afterwards, run
sudo update-initramfs -u
and reboot your server. Then, rerun the setup with
sudo ./NVIDIA-Linux-x86_64-375.66.run
You can check the installation with
nvidia-smi
and get an output similar to this one:
Mon Jul 24 09:03:47 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 430      Off  | 0000:01:00.0     N/A |                  N/A |
| N/A   40C    P0    N/A /  N/A |      0MiB /   963MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+
which means that it worked!
Install nvidia-docker and nvidia-docker-plugin
wget -P /tmp https://github.com/NVIDIA/nvidia-docker/releases/download/v1.0.1/nvidia-docker_1.0.1-1_amd64.deb
sudo dpkg -i /tmp/nvidia-docker*.deb && rm /tmp/nvidia-docker*.deb
Test nvidia-smi from Docker
nvidia-docker run --rm nvidia/cuda nvidia-smi
should output:
Using default tag: latest
latest: Pulling from nvidia/cuda
e0a742c2abfd: Pull complete
486cb8339a27: Pull complete
dc6f0d824617: Pull complete
4f7a5649a30e: Pull complete
672363445ad2: Pull complete
ba1240a1e18b: Pull complete
e875cd2ab63c: Pull complete
e87b2e3b4b38: Pull complete
17f7df84dc83: Pull complete
6c05bfef6324: Pull complete
Digest: sha256:c8c492ec656ecd4472891cd01d61ed3628d195459d967f833d83ffc3770a9d80
Status: Downloaded newer image for nvidia/cuda:latest
Mon Jul 24 07:07:12 2017
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.66                 Driver Version: 375.66                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 430      Off  | 0000:01:00.0     N/A |                  N/A |
| N/A   40C    P8    N/A /  N/A |      0MiB /   963MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+
Yep, you got it working in Docker!
Running an interactive CUDA session isolating the first GPU
NV_GPU=0 nvidia-docker run -ti --rm nvidia/cuda
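For reference: under the hood, nvidia-docker (v1) basically expands to a plain docker run with the GPU device nodes plus a driver volume mounted into the container. A rough sketch of the equivalent call could look like this - the device nodes and the name of the driver volume (created by nvidia-docker-plugin) depend on your GPU count and driver version:

# roughly what nvidia-docker does behind the scenes (names are examples for this setup)
docker run -ti --rm \
  --device=/dev/nvidiactl --device=/dev/nvidia-uvm --device=/dev/nvidia0 \
  --volume=nvidia_driver_375.66:/usr/local/nvidia:ro \
  nvidia/cuda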
Input our first Hello World program
echo '#include <stdio.h>
// Kernel-execution with __global__: empty function at this point
__global__ void kernel(void) {
  // printf("Hello, Cuda!\n");
}
int main(void) {
  // Kernel execution with <<<1,1>>>
  kernel<<<1,1>>>();
  printf("Hello, World!\n");
  return 0;
}' > helloWorld.cu
Compile it within the Docker container
nvcc helloWorld.cu -o helloWorld
Execute it...
./helloWorld
and you get,...
Hello, World!
Congrats, you got it working!
Encore, Tensorflow
Getting Tensorflow to work is straightforward:
nvidia-docker run -it -p 8888:8888 tensorflow/tensorflow:latest-gpu
It will output something like:
Copy/paste this URL into your browser when you connect for the first time, to login with a token: http://localhost:8888/?token=d747247b33023883c1a929bc97d9a115e8b2dd0db9437620
you should do that 🙂
Then enter the 1_hello_tensorflow notebook and run the first sample:
from __future__ import print_function
import tensorflow as tf

with tf.Session():
    input1 = tf.constant([1.0, 1.0, 1.0, 1.0])
    input2 = tf.constant([2.0, 2.0, 2.0, 2.0])
    output = tf.add(input1, input2)
    result = output.eval()
    print("result: ", result)
by selecting it and clicking on the >| (run cell, select below) button.
This worked for me:
result: [ 3. 3. 3. 3.]
however... sadly, it was not the GPU that calculated the results, as the Docker CLI output shows:
Kernel started: 2bc4c3b0-61f3-4ec8-b95b-88ed06379d85
[I 07:31:45.544 NotebookApp] Adapting to protocol v5.1 for kernel 2bc4c3b0-61f3-4ec8-b95b-88ed06379d85
2017-07-24 07:32:17.780122: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-07-24 07:32:17.837112: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2017-07-24 07:32:17.837440: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: GeForce GT 430
major: 2 minor: 1 memoryClockRate (GHz) 1.4
pciBusID 0000:01:00.0
Total memory: 963.19MiB
Free memory: 954.56MiB
2017-07-24 07:32:17.837498: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0
2017-07-24 07:32:17.837522: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0:   Y
2017-07-24 07:32:17.837549: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] Ignoring visible gpu device (device: 0, name: GeForce GT 430, pci bus id: 0000:01:00.0) with Cuda compute capability 2.1. The minimum required Cuda capability is 3.0.
So, only devices with CUDA compute capability >= 3.0 for Tensorflow 🙁 - but it still works, as it falls back to the CPU (however, not as fast as it could be :/)
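If you want to check directly which devices Tensorflow actually sees, a quick one-liner against the GPU image works as well (just a sketch; with a compute 2.1 card like this one it will only list the CPU device):

# list the devices Tensorflow can use inside the container
nvidia-docker run --rm tensorflow/tensorflow:latest-gpu \
  python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"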
Infos taken from:
https://github.com/NVIDIA/nvidia-docker
https://developer.nvidia.com/cuda-gpus
https://hub.docker.com/r/tensorflow/tensorflow/
[resinOS] Build resinOS from scratch
At the time of writing, resinOS is available for download at version 2.0.6+rev3.dev for the Raspberry Pi 3. This build, however, is nearly 2 weeks old, and in the meantime something great happened: Docker has finally been updated to version 17.03.1 - upgraded from the old ~1.10 (ten-ish) version - which was not that cool (and came without Swarm ;)). So it is a good idea to learn how to build your own resinOS in case you really want to live on the bleeding edge ;).
Install Dependencies (Ubuntu 16.04 LTS)
sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib \
  build-essential chrpath socat cpio python python3 python3-pip python3-pexpect \
  xz-utils debianutils iputils-ping libsdl1.2-dev xterm
go to /, because this build will create very long file names
cd /
clone the repo, maybe some root power is needed here 😉
git clone https://github.com/resin-os/resin-raspberrypi
cd resin-raspberrypi
git submodule update --init --recursive
you would be done here and could build your own resinOS with the build command,
however, if you really want to pull the latest upgrades...
cd layers/meta-resin
git checkout master
git pull
cd ../..
finally build resinOS for Raspberry Pi 3
./resin-yocto-scripts/build/barys -r --shared-downloads $(pwd)/shared-downloads/ --shared-sstate $(pwd)/shared-sstate/ -m raspberrypi3
after quite some time, you'll find the image in
build/tmp/deploy/images/raspberrypi3/resin-image-raspberrypi3.resinos-img
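The resulting image can then be flashed like any other resinOS image, e.g. with Etcher or plain dd (a sketch only - /dev/sdX is a placeholder for your SD card device, double-check it before writing!):

sudo dd if=build/tmp/deploy/images/raspberrypi3/resin-image-raspberrypi3.resinos-img of=/dev/sdX bs=4M status=progress
sync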
There is quite a lot of stuff you can change on your resinOS, so be sure to check out https://resinos.io/docs/custombuild/ for more documentation on that topic. Have fun :)!
[resinOS] Dockerize your own Flask GUI with resinOS
I have been working with Docker and resin.io as well as resinOS for quite some time now and I am actively using those services for different projects: The possibility to just throw away a container and build it anew in record-breaking time is just awesome - and so is the fact that I can now just ship images to some device, deploy them there - and it will just work. As some of you know, I am working as a volunteer in different NPOs, e.g. the RepairCafé Trier, in which I repair computers and mobile phones for free, the PiAndMore / CMD e.V., at which I teach interested guys and gals about the Raspberry Pi and Linux (and even did two presentations on resin.io and resinOS, both linked on this blog - and as video :)) - and also one other NPO which is trying to spread information about Japanese culture. For those events I already created a ticketing system called JCTixx, which uses QR codes to sell and check tickets for those named festivals. However, I created the scanner hardware in 2010, just starting my second year of an apprenticeship as an IT systems electronics guy, without anything like a Raspberry Pi available. Being a bit creative, I just took some old Linux routers, threw the good old OpenWRT on those and soldered directly to their test points / CPU to get the GPIO pins I needed ;). That worked very well - however, nowadays TLS 1.2 and such eat away at those small MIPS cores - and I could - with a lot of work - just barely get them working with said crypto in 2015. Being 7 years old now, I need new scanners - and a more flexible way of delivering software as well. So resinOS it is! 🙂
Full disclosure: As I am currently developing the hardware for the scanners and I am always looking for some Raspberry Pis and other hardware, I stumbled upon resin.io's #blog4swag program and decided to make this tutorial for two reasons: 1.) I actually need a dockerized GUI for the scanners and therefore am working on a good solution to the problem - and, as usual, I want my dear readers to get some interesting stuff to read. 2.) Maybe I can acquire some hardware support from the resin guys, which would then help me to get my NPO helped :). So in the end, everyone wins and no one dies - to cite Doctor Who :).
So, lets get started!
1.) Getting some GUI working at all
I saw some users experimenting with Docker and Chromium on x86 systems - however, the first resin project I stumbled upon which really made me think about putting the GUI within a container was resin.io's ElectronJS example. This thing really rocked, but I saw that I needed to work on something a bit easier which would just deploy some sort of GUI - and be as simple as it gets. So I started to look around and stumbled across different projects, namely
RPiBrowser-resin.io and resin-electron-app. Those two were a great starting point, and using resin.io's ElectronJS example as well, I started hacking the code together, which I finally submitted to this Git repo:
docker-raspbian_xinit. With the great help of Gergely Imreh from the resin.io team (thanks a lot :)!) I got the project working and pushed it onto Docker Hub. To walk you through the Git repo: I made two versions, one pinned to the Raspbian release of 26.04.2017 and one made from the latest version. I basically grabbed resin.io's Raspbian from Docker Hub, injected the qemu files (which they showed in one tutorial) and used this as a starting point. It enables Docker Hub to build the Raspberry Pi images, so that you can use them on your Pi.
If you want to build the docker-raspbian_xinit image yourself, you need to remove the RUN [ "cross-build-start" ] and RUN [ "cross-build-end" ] commands from the Dockerfile :).
The Dockerfile itself builds a basic system with xinit / xorg, alsa and even touchscreen support (xinput-calibrator). It generates German locales and a user called pi (both are not needed, but I included them as a matter of personal preference), copies the needed files and enables systemd. Systemd will start the xinit-docker.service, which invokes start.sh, which prepares the system: it adds the container name to /etc/hostname so that sudo will work, activates its own SSH server on port 22, includes a touchscreen calibration file - if available, creates a much-needed config folder, removes files from old X11 sessions, sets the volume to 100% on speakers and headphones and then starts the xinit process with launch_app.sh :).
The launch_app.sh then works in the xinit context and does xinit-specific stuff: It disables screen blanking, sets the keyboard to the generated German locale (you can comment this out as well if you want, or overwrite the file ;)) - then - finally - starts the matchbox-window-manager without a titlebar, but WITH keyboard and mouse support, and launches....
... gedit.
Yeah. Right. I know that is a real disappointment. But I needed a small tool which would not add too much file size and would show that keyboard and mouse are working - so I just went for gedit ^^'. Sorry if you were hoping to find something really awesome and cool at this place. But nonetheless, it works, and this image as well as the Git files should serve as a starting point for your own xinit adventure - so, that's the reason :).
After I finally got it working, I thought about my personal use case: I will be using a lot of Python and thought about using Flask as a GUI. However, Flask is just a web framework and does not have the ability to show something "GUI-like" - thus needing some kind of web browser - and this part could be found in the shape of pywebview. Pywebview just wraps some website, app or similar into a small GUI frame with a WebKit browser in it. Cool! Exactly what I needed. However, I did not have time to get to work on my own UI - and wanted to jumpstart the project by getting the Docker container to work - so I decided to grab a cool small Flask web GUI project and use it to showcase how easy it is to build a self-starting Docker container GUI - on resinOS. And with that in mind, I went for the very cool helloflask Calculator by Grey Li.
2.) Getting a Flask GUI working - using 1. 😉
Ok, as soon as I got my xinit project working, I decided to use it as a base image, just overwriting changes in the system and injecting the files needed to run the pywebview'd Calculator.
With that in mind, the Dockerfile became quite small (you'll find the Source Files on Github and the Image on Docker Hub as well :))
FROM nmaas87/docker-raspbian_xinit:jessie-20170426
MAINTAINER Nico Maas <mail@nico-maas.de>
ENV DEBIAN_FRONTEND noninteractive
RUN [ "cross-build-start" ]
RUN apt-get update \
    && apt-get install -yq --no-install-recommends \
    python3-pip python3-pyqt5 python-gi gir1.2-webkit-3.0=2.4.9-1~deb8u1+rpi1 gir1.2-javascriptcoregtk-3.0=2.4.9-1~deb8u1+rpi1 libjavascriptcoregtk-3.0-0=2.4.9-1~deb8u1+rpi1 libwebkitgtk-3.0-0=2.4.9-1~deb8u1+rpi1 \
    && apt-get autoremove -qqy \
    && apt-get autoclean -y \
    && apt-get clean && rm -rf /var/lib/apt/lists/* && mkdir /var/lib/apt/lists/partial \
    && pip3 install Flask pywebview \
    && mkdir /usr/src/app/templates /usr/src/app/static
# copy program
COPY src /usr/src/app
# start init system
ENV INITSYSTEM on
RUN [ "cross-build-end" ]
I just needed to include the cross-build-start and end for Docker Hub again, installed python3-pip and pyqt5 dependencies with webkit (in a specific version, otherwise it did not work...) and then installed Flask and pywebview. I then proceeded to inject the pywebview-erized Calculator:
def start_server():
    app.run(host="0.0.0.0", port="80")

if __name__ == '__main__':
    t = threading.Thread(target=start_server)
    t.daemon = True
    t.start()
    webview.create_window("Calculator", "http://127.0.0.1:80/")
    webview.toggle_fullscreen()
    sys.exit()
I had to create a start_server function to let Flask run in its own thread, while pywebview shows the GUI in fullscreen mode and connects to the Flask server.
As a last step, I needed to rewrite launch_app.sh:
#!/bin/bash
# Disable DPMS / Screen blanking
xset -dpms
xset s off
xset s noblank
# Change Keyboard Layout from US to German
setxkbmap de
# Debug Tools
#xinput --list
#evtest
# Start Window Manager
sudo matchbox-window-manager -use_cursor yes -use_titlebar no &
#sudo matchbox-window-manager -use_cursor no -use_titlebar no &
# Start Payload App
#gedit
python3 /app/app.py
As you can see, I only changed the "Start Payload App" line and now start Python 3 with the pywebview/Flask/Calculator app.
And that's it :).
To use this app with resinOS, just go to resinos.io, download the latest 2.0.3 release for the RPi, flash the image to your SD card using e.g. Etcher, boot your RPi and SSH to your system IP, using user root and port 22222. From then on, you can just run the app via
docker run --name pywebview --privileged --restart=unless-stopped nmaas87/docker-raspbian_pywebview:jessie-20170426
or you upload the Git src to the /mnt/data folder and build your own version of this pywebview using
docker build -t pywebview .
Please do NOT forget to comment out the RUN [ "cross-build-start" ] and RUN [ "cross-build-end" ] commands!
After that worked, you can start the app via
docker run --name pywebview --privileged --restart=unless-stopped pywebview
You can also use the app with resin.io by creating a resin.io account and a new RPi app and pushing either the latest or jessie-20170426 tag to resin - it should build and work :). However, I am more a fan of the flexibility resinOS offers in terms of developing Docker apps - so I decided to describe this way here.
And with that said, you can now start working on your own GUI apps - running on resinOS / resin.io on your RPi or similar device :)! Have fun - and if you'll excuse me, I now have 4 JCTixx ticket scanners to build ^^'.
[Talk] Docker Grundlagen Workshop
My Docker 101 Workshop @ T3C Trier (Trier Tech Talk Conference) (29.04.2017, Hochschule Trier, https://t3c.uni-trier.de/).
Docker_T3C2017.pdf (0,74 MB, PDF).
Sourcecode / Example can be found on https://github.com/nmaas87/docker-demo
How to run pi-hole in a Docker container
Pi-hole is an awesome little DNS server with blacklists for ad sites and the ideal tool to install a small and powerful ad filter for the whole network (intro video here).
As diginc designed a Docker image around the Pi-hole server (which was normally run on an RPi :)) - and made it x86 - you can also run it on your normal home server :)!
Important things just before we start: The Docker container needs to bind to ports 53 (DNS) and 80 (HTTP) - so, if you need to run your own DNS, that could interfere. If you need port 80 for some other website, you'll have to set up a reverse proxy.
To make the setup easier, I wrote a little docker-compose.yml:
pihole:
  restart: unless-stopped
  container_name: pihole
  image: diginc/pi-hole:alpine
  volumes:
    - /var/pihole:/etc/pihole
  environment:
    - ServerIP=YOURLANIPHERE
  cap_add:
    - NET_ADMIN
  ports:
    - "53:53/tcp"
    - "53:53/udp"
    - "80:80"
You'll need to change YOURLANIPHERE to the IP address of your server's LAN interface - and you'll need to create the folder /var/pihole and make it writable for your Docker user.
sudo mkdir /var/pihole
sudo chown -R MYLINUXUSER:MYLINUXUSER /var/pihole
After that, we can start the service via docker-compose up -d.
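Before going on, you can quickly check that the container actually came up (just standard Docker commands, nothing Pi-hole specific):

# show the running container and follow its logs
docker ps --filter name=pihole
docker logs -f pihole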
You'll have access to the Web interface of pihole on YOURLANIPHERE/admin
However, this interface is NOT protected - so we'll do this now:
docker exec -it pihole /bin/bash
# create a password protection for your pihole web interface
pihole -a -p somepasswordhere
# You can also remove the password by not passing an argument.
pihole -a -p
Also, Pi-hole creates a lot of log files, which should be removed from time to time; the block lists should be updated and Pi-hole itself should be updated as well. This can be achieved via a cron file, available here.
# [...]
# Your container name goes here:
DOCKER_NAME=pihole
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Pi-hole: Update the ad sources once a week on Sunday at 01:59
# Download any updates from the adlists
59 1 * * 7 root PATH="$PATH:/usr/local/bin/" docker exec $DOCKER_NAME pihole updateGravity > /dev/null

# Update docker-pi-hole by pulling the latest docker image and re-creating your container.
# pihole software update commands are unsupported in docker!
30 2 * * 7 root PATH="$PATH:/usr/local/bin/" docker exec $DOCKER_NAME pihole updatePihole > /dev/null

# Pi-hole: Flush the log daily at 00:00 so it doesn't get out of control
# Stats will be viewable in the Web interface thanks to the cron job above
00 00 * * * root PATH="$PATH:/usr/local/bin/" docker exec $DOCKER_NAME pihole flush > /dev/null
I actually just opened my crontab with crontab -e and entered the last lines there - so that should work. You can now test your new ad blocker by entering the IP of your server as DNS on your clients - and if you're happy with it, just replace the DNS server entry on your DHCP server with that IP to roll out Pi-hole to your complete network :).
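One way to test the blocking from a client is dig (a small sketch - replace 192.168.1.10 with your own server's IP); a blacklisted ad domain should resolve to the Pi-hole itself, while a normal domain resolves to its real address:

# query a known ad domain via the Pi-hole
dig @192.168.1.10 doubleclick.net +short
# compare with a normal domain
dig @192.168.1.10 example.org +short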
More Info:
https://github.com/diginc/docker-pi-hole
https://discourse.pi-hole.net/t/how-do-i-set-or-reset-the-web-interface-password/1328
https://www.reddit.com/r/pihole/comments/5rudb3/running_pihole_in_a_docker_container/
Odroid U3 Kernel Upgrade + Docker
I wrote this back in January 2017. Since then I have not had much time to work on the Odroid - however, user hexdump has just come up with a new repo, supporting the Odroid U3 with kernel 5.4.x - you can find the overview of his awesome work here and the repo with complete releases (i.e. Ubuntu Bionic or Debian Buster, e.g. odroid_u3-armv7l-debian.img.gz) here
I am using a trusty old Odroid U3 which I acquired years ago. With its Samsung Exynos 4412 Cortex-A9 quad core at 1.7 GHz, 1 MB L2 cache and 2 GB of RAM, this little puppy was a real beast - compared to the Raspberry Pi 1 at that time. However, Hardkernel did drop the support - again - which left the users with very old kernel versions. Thanks to some users and the fact that all needed support for the Exynos is now included in the current kernel - well, we can build our own. This write-up is the distilled result of days of work and a lot of research - and the work of other people which I found on the net (to whom I will try to give proper credit at the right locations :)).
EDIT: I updated the kernel configuration gist for my kernel config + Docker on 10.02.2017. Thanks to an e-mail from Tobias Jakobi I found the pieces I had missed about adding the kernel-internal fan service to the config. This works now, however - I still like my tweaked program a bit better, as it cools the system more aggressively, while the kernel default one is a lot more silent, but runs in the 80s °C while mine will stay at 70 °C on max load.
It is important to note that these instructions - especially when it comes down to installing stuff - are written for the usage of eMMC memory, NOT THE SD CARD! Also, there be dragons and something could go wrong - so please, as usual, advance at your own pace and risk! 🙂
0.) Get a Serial Interface for 1.8V
Important: The UART is 1.8V LVTTL ONLY! If you connect 3.3V or 5V, you'll blow the U3! I used a regular 5V TTL USB adapter as well as a Sparkfun BiDir Level Converter: https://www.sparkfun.com/products/12009 With that set to 1.8V from the UART of the U3, it worked flawlessly with the usual 115200 baud.
Pinout:
http://odroid.com/dokuwiki/doku.php?id=en:u3_hardware
_____UART____
|Pin 4 - GND|
|Pin 3 - RXD|
|Pin 2 - TXD|
|Pin 1 - VCC|
 ___________|
  1.8V LVTTL
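Once the level converter is wired up, you can attach to the console from your PC with e.g. screen (a sketch - the device name /dev/ttyUSB0 depends on your USB serial adapter):

# open the serial console of the U3
screen /dev/ttyUSB0 115200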
1.) Build U-Boot
A lot of stuff is taken from here, thanks a lot for your great work, SnakeBite!
We assume you're working as root, as all this stuff will need root rights :).
# update your packages
apt-get update
# needed for building u-boot
apt-get install device-tree-compiler
# get ODROID signed u-boot
wget http://odroid.in/guides/ubuntu-lfs/boot.tar.gz
tar xzf boot.tar.gz
# get patched u-boot & build for the U3
git clone https://github.com/tobiasjakobi/u-boot
cd u-boot
make odroid_config
make
# copy fresh u-boot to ODROID directory
cp u-boot-dtb.bin ../boot/u-boot.bin
cd ../boot
## install on SDCard - not what we want, just as a remark for me
#bash sd_fusing.sh /dev/mmcblk0
Copy the needed files (u-boot.bin, E4412_S.bl1.HardKernel.bin, bl2.signed.bin, E4412_S.tzsw.signed.bin) to your PC and reboot your Odroid U3 into fastboot by connecting the UART to the U3 and aborting the boot. After that, you can issue the fastboot command on the UART. The U3 will now wait for a file transfer over the micro USB port, which you'll need to connect to your PC. Also, for the sake of an easy upgrade, use a Linux PC (more info here: http://odroid.com/dokuwiki/doku.php?id=en:u3_building_u-boot ).
# install needed programs
sudo apt-get update
sudo apt-get install android-tools-adb android-tools-fastboot
# and - being in the right folder - start the transfer
# u-boot.bin install
sudo fastboot flash bootloader u-boot.bin
# bl1.bin install
sudo fastboot flash fwbl1 bl1.HardKernel
# bl2.bin install
sudo fastboot flash bl2 bl2.HardKernel
# tzsw.bin install
sudo fastboot flash tzsw tzsw.HardKernel
# If installation is done, you can reboot your ODROID-U3 with fastboot.
sudo fastboot reboot
You should now have a more recent U-Boot install.
Old: U-Boot 2010.12-svn (May 12 2014 - 15:05:46) for Exynox4412
New: U-Boot 2016.11-rc3-g8a65327 (Jan 07 2017 - 23:00:56 +0100)
By the way, we needed to download this boot.tar.gz because it contains the keys needed to sign our new U-Boot install. More info about U-Boot and keys: https://github.com/dsd/u-boot/blob/master/doc/README.odroid
The installation of a more recent U-Boot version was necessary to facilitate booting the to-be-built new kernel zImage with bootz.
1b.) eMMC recovery in case something goes wrong:
http://forum.odroid.com/viewtopic.php?f=53&t=969
Download the tool [ exynos4412_emmc_recovery_from_sd_20140629.zip ]
- Prepare a microSD card and flash the attached image.
- Insert microSD into U2/U3, disconnect eMMC
- Turn on U2/U3 and wait for a few seconds and blue LED will blink.
- Plug your eMMC module into U2/U3
- Wait 10 seconds, then plug the micro-USB cable into the U2/U3 and connect the other side to your PC USB host or ODROID's USB host port. (This is the trigger to start the recovery.)
- After recovery process (only a few seconds), the blue LED will turn off automatically.
- Finish. Install the OS on your eMMC as usual.
2.) Building Next Kernel for Odroid U3 with eMMC
And now to start the real work:
apt-get update
apt-get install live-boot u-boot-tools
cd ~
git clone --depth 1 git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git linux_odroid
cd linux_odroid
# we could make a default config, but this is not needed, we take rglinuxtech's config in the next step
# make exynos_defconfig
# Odroid Config Kernel 4.4 from http://rglinuxtech.com/?p=1656
curl -o .config http://pastebin.com/raw/NveRajaZ
# Or you can use my config which enables Docker as well (Gist at the end of the page)
curl -o .config https://gist.githubusercontent.com/nmaas87/81818c1db9dc292a4c21125bd2602658/raw/7e4e14fa15d7c68b177f31b9e2348d62c52cf83c/u3_docker_config
make menuconfig
make prepare modules_prepare
make -j4 zImage modules dtbs
make modules_install
cp arch/arm/boot/dts/exynos4412-odroidu3.dtb /media/boot/exynos4412-odroidu3_next.dtb
cp arch/arm/boot/zImage /media/boot/zImage_next
cp .config /boot/config-`cat include/config/kernel.release`
update-initramfs -c -k `cat include/config/kernel.release`
mkimage -A arm -O linux -T ramdisk -C none -a 0 -e 0 -n uInitrd -d /boot/initrd.img-`cat include/config/kernel.release` /boot/uInitrd-`cat include/config/kernel.release`
cp /boot/uInitrd-`cat include/config/kernel.release` /media/boot/
cd /media/boot/
vi boot.txt
# now we have to rework the boot.txt / config
# comment out the old values and set in the new ones
# please do NOT copy blindly, you need to adjust the zImage, uInitrd and exynos4412***.dtb file names according to your system!
setenv initrd_high "0xffffffff"
setenv fdt_high "0xffffffff"
#setenv bootcmd "fatload mmc 0:1 0x40008000 zImage; fatload mmc 0:1 0x42000000 uInitrd; bootm 0x40008000 0x42000000"
setenv bootcmd "fatload mmc 0:1 0x40008000 zImage_next; fatload mmc 0:1 0x42000000 uInitrd-4.10.0-rc2-next-20170106-v7; fatload mmc 0:1 0x44000000 exynos4412-odroidu3_next.dtb; bootz 0x40008000 0x42000000 0x44000000"
#setenv bootargs "console=tty1 console=ttySAC1,115200n8 root=/dev/mmcblk0p2 rootwait ro mem=2047M"
setenv bootargs "console=tty1 console=ttySAC1,115200n8 root=/dev/mmcblk1p2 rootwait ro mem=2047M"
boot
# After you have done that, write the commands to the boot.scr file
mkimage -C none -A arm -T script -d boot.txt boot.scr
# sync and reboot and it should work
sync
reboot now
With this in mind I really upgraded my system from kernel 3.8.13 from 2015 - to the most recent 4.10.0-rc2 next Kernel 🙂
Old: Linux odroid 3.8.13.30 #1 SMP PREEMPT Fri Sep 4 23:45:57 BRT 2015 armv7l armv7l armv7l GNU/Linux
New: Linux odroid 4.10.0-rc2-next-20170106-v7 #3 SMP PREEMPT Mon Jan 9 19:17:32 CET 2017 armv7l armv7l armv7l GNU/Linux
2b.) FAN does not work, warning!
The CPU fan does somehow not work right out of the box, so we will now enable it manually. [EDIT: with the new kernel config it works out of the box, but you can still decide to use this software to have more aggressive cooling :)]
# Fan to full speed
echo 255 > /sys/devices/platform/pwm-fan/hwmon/hwmon0/pwm1
# Read out current temperature in °C
cat /sys/devices/virtual/thermal/thermal_zone0/temp
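If you just want a quick and dirty control loop based on these two sysfs files before installing the proper fan service below, a minimal sketch could look like this (the thresholds are only examples; the thermal_zone value is reported in milli-degrees Celsius):

#!/bin/bash
# crude fan control for the Odroid U3: full speed above 70 degC, half speed below
while true; do
    temp=$(cat /sys/devices/virtual/thermal/thermal_zone0/temp)
    if [ "$temp" -gt 70000 ]; then
        echo 255 > /sys/devices/platform/pwm-fan/hwmon/hwmon0/pwm1
    else
        echo 128 > /sys/devices/platform/pwm-fan/hwmon/hwmon0/pwm1
    fi
    sleep 5
done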
To get things working again, I forked and updated the odroidu2 fan tool. Install it via:
git clone --depth 1 https://github.com/nmaas87/odroidu2-fan-service.git
cd odroidu2-fan-service
make
# install it as upstart service, i.e. < Ubuntu 16.04
make usi
# install it as systemd service, i.e. Ubuntu 16.04 / Xenial
make systemd
reboot
Useful Commands:
# Read max CPU Speed:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
# Get current CPU Speed:
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq
# Torture Test:
openssl speed -multi 4
2c.) Upgrade to Xenial
As I upgraded to Xenial with do-release-upgrade, I had some problems:
Authentication Problem:
It was not possible to authenticate some packages. This may be a transient network problem. You may want to try again later. See below for a list of unauthenticated packages. Create /etc/update-manager/release-upgrades.d/unauth.cfg with:
[Distro] AllowUnauthenticated=yes
After upgrade, remove this file.
from: http://askubuntu.com/questions/425355/error-authenticating-some-packages-while-upgrade
After that, apt-get clean did not work:
apt-get clean
W: Problem unlinking the file apt-fast - Clean (21: Is a directory)
Solution was:
rm -rf /var/cache/apt/archives/apt-fast
from: http://askubuntu.com/questions/765274/error-problem-unlinking-in-apt-get-clean
2d.) MAC address changes every reboot:
One solution, which did not work, was the following:
rm /etc/smsc95xx_mac_addr
from: http://forum.odroid.com/viewtopic.php?f=7&t=1070
Which worked better, was to really set the MAC address to a static value:
add in /etc/network/interfaces
auto eth0
iface eth0 inet dhcp
hwaddress ether bb:aa:ee:cc:dd:ff
from: http://forum.odroid.com/viewtopic.php?f=111&t=8198
2e.) Control the CPU speeds via cpufrequtils:
apt-get install cpufrequtils
vi /etc/default/cpufrequtils

ENABLE="true"
GOVERNOR="ondemand"
MAX_SPEED=1704000
MIN_SPEED=200000
However, I chose "performance" as GOVERNOR and a MIN_SPEED=800000
from: http://forum.odroid.com/viewtopic.php?f=65&t=2795
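To verify that the governor and frequency limits actually took effect, cpufrequtils also ships cpufreq-info:

# show governor, frequency limits and current speed per core
cpufreq-info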
2f.) Install Docker
If you chose my .config with Docker enabled, you can install Docker with a quick
curl -sSL https://get.docker.com/ | sh
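A quick check that the engine really came up on the new kernel (plain Docker commands, nothing Odroid-specific):

# show engine details, storage driver and any missing kernel features
docker info
docker version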
Thanks a lot to the guys over at Hypriot; I took their RPi kernel configs as an example and merged those with the U3 configs to get to these results. And yes, AUFS is still missing, but... it is ok 😉
Additional stuff:
- Gist of my Kernel Config: https://gist.github.com/nmaas87/81818c1db9dc292a4c21125bd2602658
Following sites helped:
- https://blogs.s-osg.org/install-ubuntu-run-mainline-kernel-odroid-xu4/
- http://rtisat.blogspot.de/search/label/odroid-u3
- https://github.com/umiddelb/armhf/wiki/How-To-compile-a-custom-Linux-kernel-for-your-ARM-device
- http://rglinuxtech.com/?p=1622
- http://rglinuxtech.com/?p=1656
- http://forum.odroid.com/viewtopic.php?f=81&t=9342
[Talk] Docker Grundlagen
My Docker 101 presentation @ Softwaretest, Testautomatisierung und -management Saar 2017 (23.01.2017, https://www.meetup.com/de-DE/Saarland-Softwarequalitatssicherung-Testautomatisierung/).
Docker_Testen2017.pdf (0,44 MB, PDF).
Sourcecode / Example can be found on https://github.com/nmaas87/docker-demo
[Docker] Upgrade Sonarqube from 5.6 to 6.2
I just updated Sonarqube from 5.6 (LTS) to 6.2. Before that, I upgraded all my plugins in Sonarqube itself and made a backup of my installation and database. Then, I replaced image: sonarqube:lts-alpine in my docker-compose.yaml with image: sonarqube:6.2-alpine. I did a docker-compose up and it started; however, I had some errors, so it just kept on restarting. Following this advice, I then proceeded to delete $SONAR_HOME/data/es/. I then restarted Sonarqube, which worked. I then pointed my web browser to the URL of my Sonarqube instance, added /setup and allowed the database upgrade. After that, I had a working new Sonarqube instance :).
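For reference, the whole procedure boils down to something like this (a sketch only - the service name, compose file and data path depend on your own setup):

# upgrade plugins and make a backup first, then:
docker-compose stop sonarqube
# switch the image tag in docker-compose.yaml from sonarqube:lts-alpine to sonarqube:6.2-alpine
rm -rf "$SONAR_HOME/data/es/"
docker-compose up -d sonarqube
# then open http://<your-sonarqube-host>/setup and allow the database upgrade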
[PiAndMore] resinOS Presentation
Here is my presentation on resinOS @ PiAndMore 9 1/2 (Hochschule Niederrhein / Krefeld, 14.01.2017)
resinOS_PiAndMore9_1_2.pdf (0,6 MB, PDF)
A video recording of the talk can be found here