Time Format after Devuan 3 and Debian 10 Update

After updating to Devuan 3 the date command shows a 12-hour am/pm clock, but my days still have 24 hours. The locale has always been en_US.UTF-8, to keep sane command and error output.

Debian 10 apparently decided it had to "fix" the correct hour display to the complicated one.

Therefore sysadmins like me have to apply the following workaround to keep both sane command output and a reasonable time format.

update-locale LC_TIME=C.UTF-8

This sets the locale for time output to C.UTF-8 in /etc/default/locale. Date looks correct again:

# date
Mon Jun  8 21:20:33 CEST 2020
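You can also check the effect without touching any config file, by overriding LC_TIME for a single command (no root needed):

```shell
# Override LC_TIME just for this command; C.UTF-8 is built into glibc,
# so no locale generation is needed:
LC_TIME=C.UTF-8 date
# -> 24-hour clock, e.g. "Mon Jun  8 21:20:33 CEST 2020"
```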

Mikrotik OSPF Routing Distance Ignored

Discussion: Every routing protocol has a default administrative distance that helps the router decide which route to use in case of multiple routes for the same destination. On Mikrotik routers the defaults include 1 for static routes and 110 for OSPF.
If you want to configure a backup link that is only activated when the OSPF main route is missing, you can use a static route with distance 120, which is higher than the OSPF default of 110.

If you enter this route on a Mikrotik after the OSPF route has been learned, it works as expected: the static 120 route is ignored until the OSPF route vanishes.

But the other way around it does not work. If the 120 route is active, the OSPF route is ignored even though it has the better distance of 110. Even worse, the Mikrotik keeps this route in its own OSPF announcement.

This is a known bug: https://forum.mikrotik.com/viewtopic.php?t=119493

Workaround: You can configure two smaller routes for the preferred path,
e.g. two /26 routes should always overrule a /25 route.
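A sketch in RouterOS CLI syntax; the addresses and gateways are made up, and the /26 pair is one way to read the "two smaller routes" workaround:

```routeros
# Backup route, only used when no better route (e.g. OSPF, distance 110) exists:
/ip route add dst-address=192.0.2.0/25 gateway=10.0.0.1 distance=120
# Workaround for the bug: two more specific /26 routes for the preferred
# gateway always overrule the /25 backup route, regardless of distance:
/ip route add dst-address=192.0.2.0/26 gateway=10.0.0.2
/ip route add dst-address=192.0.2.64/26 gateway=10.0.0.2
```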

Version: Mikrotik RouterOS 6.46.1

ARP and Broadcast Packets Missing

Problem: A Linux box with Debian 9 (kernel 4.9) on an HP server with Intel i40e (X710) network cards is not reachable from neighbor machines, because ARP does not work.

Discussion: While testing with tcpdump, ARP worked, but later ARP stopped working again. When tcpdump is used with “-p” (non-promiscuous mode) you can see the problem: the server does not receive any broadcasts, which means neighbors cannot find the machine via ARP. Outgoing ARP does work, though, because ARP responses are not sent to the broadcast address (ff:ff:ff:ff:ff:ff).

Solution: A quick fix was to use “ifconfig eth0 promisc”. In this mode broadcasts are received and ARP works. The better fix is to upgrade the Linux kernel to the Debian 9 backports version (4.19) or to upgrade to Debian 10.
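The diagnosis and the quick fix, sketched as commands (the interface name eth0 is an example, and these need root on a machine with the affected card):

```shell
# Diagnosis: -p keeps tcpdump in non-promiscuous mode; with the bug
# present, no incoming broadcasts show up here:
tcpdump -p -n -i eth0 ether broadcast
# Quick fix until the kernel is upgraded (ip equivalent of "ifconfig promisc"):
ip link set eth0 promisc on
```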

Versions: Debian 9 kernel 4.9 intel driver: 2.3.2-k intel firmware: 6.00 0x800034ea 18.3.6

MITMProxy and iOS 13

Problem: If you want to debug an iOS app with MITMProxy, the iPhone needs to trust the MITMProxy CA. This is done by going to http://mitm.it/ and tapping the Apple symbol. Then you have to accept the “profile” in Settings under “downloaded profiles”, and then trust the new CA cert under “Settings” → “General” → “About” → “Trust Root Cert” → “mitmproxy”. But even then the certs generated by MITMProxy are still not trusted.

Discussion: Starting with iOS 13, TLS server certificates must have a validity period of 825 days or fewer, but MITMProxy generates certs with a validity of 1095 days.

Solution: I shortened the cert validity by changing the Python source of MITMProxy, in the file /usr/lib/python2.7/dist-packages/netlib/certutils.py:

# DEFAULT_EXP = 94608000  # = 24 * 60 * 60 * 365 * 3
DEFAULT_EXP = 31536000  # = 24 * 60 * 60 * 365
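The numbers behind these constants, spelled out in seconds:

```shell
echo $((24 * 60 * 60 * 365 * 3))   # 94608000 - MITMProxy default (3 years)
echo $((24 * 60 * 60 * 825))       # 71280000 - iOS 13 maximum (825 days)
echo $((24 * 60 * 60 * 365))       # 31536000 - the value set above (1 year)
```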

Versions: tested with MITMProxy 0.18.2-6+deb9u2, but it looks as if current versions of mitmproxy on GitHub still use 3 years as the default expiration.

Linux Live-boot Fails after Debian/Devuan Update

Problem: after updating from Debian 8 to Devuan 2 the overlay live-boot fails with “no such device”.

Discussion: I use a bootable USB stick combined with live-boot. In this case partition 3 of the USB stick is a normal ext4 file system used as a read-only “plainroot” filesystem, which live-boot overlays with a ramfs.
As I don’t know the /dev/sdaX device name on the target system, I use “root=LABEL=KROOT” to find the USB root image. This worked before, but does not any more. The reason is the following line in /lib/live/boot/9990-overlay.sh in the “plain root system” section:

mount -t $(get_fstype "${image_directory}") -o ro,noatime "${image_directory}" "${croot}/filesystem"

get_fstype “LABEL=KROOT” returns “unknown”, so this mount command fails.

Solution: I removed the get_fstype part -t $(get_fstype “${image_directory}”) from /lib/live/boot/9990-overlay.sh; mount then guesses the filesystem type automatically.
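With the get_fstype call removed, the line in 9990-overlay.sh reads (the variables are the script's own):

```shell
mount -o ro,noatime "${image_directory}" "${croot}/filesystem"
```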

After that you have to rebuild the initial ramdisk with update-initramfs -u.

Version: tested with Devuan 2.1 and these kernel boot options: “read-only boot=live root=LABEL=KROOT rootdelay=10 ignore_uuid plainroot”

Greenlock(-express) Letsencrypt Fails with ECONNRESET

Problem: after upgrading from greenlock-express v2.0 to v2.5 and switching from ACME v1 to ACME v2, every attempt to register a new TLS cert with Letsencrypt fails with “ECONNRESET”.

Discussion: the new version of greenlock tries to validate the .well-known/acme-challenge file itself before asking Letsencrypt for the certificate.
If your webserver is behind a loadbalancer or firewall and cannot request itself via the official public IP, this loopback request fails. In this case only this cryptic error message is shown:

[acme-v2] handled(?) rejection as errback:
Error: read ECONNRESET
    at TCP.onStreamRead (internal/stream_base_commons.js:200:27)
Error loading/registering certificate for 'your.webserver':
Error: read ECONNRESET
    at TCP.onStreamRead (internal/stream_base_commons.js:200:27) {
  errno: 'ECONNRESET',
  code: 'ECONNRESET',
  syscall: 'read'

Solution: You can redirect these local loopback web requests to the local web server using iptables, bypassing the loadbalancer/firewall:

iptables -t nat -I OUTPUT -d PUBLIC_WEBSERVER_IP -p tcp --dport 80 -j REDIRECT --to-port LOCAL_WEBSERVER_TCP_PORT

Apache Start Hangs during Reboot of a KVM Virtual Server

Problem: Apache takes very long to start on a virtual server running as a KVM/QEMU virtual machine.

Solution: Apache needs an RNG (random number generator) during startup, probably because of TLS. A pure virtual machine has no RNG device by default. If you add an RNG device to the virtual machine configuration, Apache startup is lightning fast.

    <rng model='virtio'>
      <backend model='random'>/dev/random</backend>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </rng>
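Inside the guest you can check how much entropy is actually available; very low values here are what makes startups block (on recent kernels, 5.18 and later, this simply reports 256 once the pool is initialized):

```shell
# available entropy in bits, read from the standard Linux procfs path:
cat /proc/sys/kernel/random/entropy_avail
```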

Versions: tested with libvirt 3.0.0, qemu-kvm 2.8 on devuan 9

Sparse Files Howto

Unix file systems like ext3/4 can store files which are partly empty more efficiently by not storing blocks that contain only zeros. These files are called sparse files. Reading them works as normal, but the “all zero” blocks don’t waste space on the drive.

This can be useful for different applications. For example, a database can create a big file for random access without actually using the space on the drive; the size on disk grows with every block used. Another example are raw disk images for virtualization like KVM: you can create a 10GB disk image which uses almost no space and only grows when used.

Useful sysadmin commands:

"du -h --apparent-size FILE" shows the full file size including sparse areas
"du -h FILE" shows the actual space used on the file system
"ls -lh FILE" show the full file size
"ls -sh FILE" shows the actual space used on the file system
"fallocate -d FILE" make a file sparse, which means "digs holes" for "all zero" blocks
"rsync -S ..." the -S option makes rsync sparse file aware and produces sparse files at the receiver
"truncate -s 1G FILE" makes a sparse file with 1GB that uses no file system space

DELL iDRAC6 with Java8

Problem: The remote console feature of a Dell R710 server does not open on a Linux client, with errors like these:

Connection failed, Unsigned Java Applet, etc

Solution: I had to change three things.
In /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security/java.security I changed these lines:

# from
# jdk.jar.disabledAlgorithms=MD2, MD5, RSA keySize < 1024, DSA keySize < 1024
# to
jdk.jar.disabledAlgorithms=MD2, RSA keySize < 1024, DSA keySize < 1024

# from
#jdk.tls.disabledAlgorithms=SSLv3, RC4, DES, MD5withRSA, DH keySize < 1024, EC keySize < 224, 3DES_EDE_CBC, anon, NULL
# to
jdk.tls.disabledAlgorithms=RC4, DES, MD5withRSA, DH keySize < 1024, EC keySize < 224, 3DES_EDE_CBC, anon, NULL

And in ~/.java/deployment/deployment.properties I changed these lines:

# from
# to

And there is still a small bug: cursor keys won’t work, but numpad cursor keys do.

Akamai High Traffic Volume

Problem: Akamai shows very high edge traffic volume, while other sources of accounting show less traffic, and Akamai bills this high volume!

Discussion: I activated log shipping to compare the volume of the shipped data with the volume reported by Akamai. The difference was +66% on the Akamai side. Support was not helpful, so I investigated deeper and found out that the ratio was much lower for images and much higher for css/html/js files. So I checked compression.

Even though the origin servers can do compression, they did not use compression with Akamai. The reason: IIS disables compression when it receives a “Via:” header, because an old part of the HTTP spec says you should disable compression when the request comes through a proxy, as the proxy might not be capable of compression.

You may say: I don’t care about uncompressed data from the origin, because I have a 99% cache hit rate and can handle 1% of the traffic being uncompressed.

But! When Akamai receives an object uncompressed, it counts and bills every download of this object at the uncompressed size, even though all deliveries from the edge to the clients are compressed.

A small calculation: if you publish a css/js/html file of 5 MegaBytes 1 million times per day, Akamai charges you 5 TeraBytes of traffic. But in reality they ship only about 625 GigaBytes to the users, assuming an average compression ratio of 1/8 for these kinds of files!

This means you can cut about 87% of your Akamai bill for well compressible files.
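The arithmetic behind the calculation, in decimal units and with the 1/8 compression ratio assumed above:

```shell
echo $((5 * 1000000)) MB        # billed: 5000000 MB = 5 TB uncompressed
echo $((5 * 1000000 / 8)) MB    # actually shipped: 625000 MB = 625 GB
# savings when the origin compresses: 1 - 1/8 = 87.5 %
```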

Solution: Make sure that your origin servers always send compressed data to Akamai, by ignoring or removing the “Via:” header. And check this repeatedly, because Akamai may add other headers that disable compression on the origin side.

Ref: Origin impact is explained here: https://community.akamai.com/customers/s/article/Beware-the-Via-header-and-its-bandwidth-impact-on-your-origin?language=en_US

Billing is explained here: https://community.akamai.com/customers/s/article/Reporting-of-compressed-content-on-Akamai-1386938153725

We will show and bill the uncompressed content size in the following circumstances:
– The origin (including NetStorage) sent the object uncompressed
– An edge server received the object compressed from the origin or a peer edge server, but had to uncompress it because of an internal process, including, but not limited to, Edge Side Include (ESI)
– The client does not support compression (the request did not include an Accept-Encoding: gzip header)