Akamai High Traffic Volume

Problem: Akamai shows a very high edge traffic volume, other sources of accounting show much less traffic, and Akamai bills this high volume!

Discussion: I activated log shipping to compare the volume of the shipped data with the volume reported by Akamai. Akamai reported 66% more volume. Support was not helpful, so I investigated deeper and found that the discrepancy was much lower for images and much higher for css/html/js files. So I checked compression.

Even though the origin servers can do compression, they did not use compression with Akamai. The reason: IIS disables compression when it receives a “Via:” header, because an old HTTP spec says you should disable compression when the request comes through a proxy, as the proxy might not be capable of handling compressed content.

You may say: I don’t care about uncompressed data from the origin, because I have a 99% cache hit rate and can handle 1% of the traffic being uncompressed.

But! When Akamai receives an object uncompressed, it counts and bills every download of this object at the uncompressed size, even though all deliveries from the edge to the clients are compressed.

A small calculation: if you publish a 5 MB css/js/html file 1 million times per day, Akamai charges you for 5 TB of traffic per day. In reality only about 625 GB are shipped to the users, with an average compression ratio of 1/8 for this kind of file!

This means you can cut your Akamai bill for well-compressible files by about 87.5% (7/8).

Solution: Make sure that your origin servers always send compressed data to Akamai, e.g. by ignoring or removing the “Via:” header. And check this repeatedly, because Akamai may add other headers that disable compression on the origin side.
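
You can test this from the command line by requesting a file from the origin directly, once with a simulated proxy “Via:” header and once without, and comparing the download sizes. A sketch; origin.example.com and the path are placeholders for your own origin:

# request as a proxy like Akamai would send it, with a Via header
curl -s -o /dev/null -w '%{size_download}\n' -H 'Accept-Encoding: gzip' -H 'Via: 1.1 akamai.net' https://origin.example.com/static/app.css
# the same request without the Via header
curl -s -o /dev/null -w '%{size_download}\n' -H 'Accept-Encoding: gzip' https://origin.example.com/static/app.css

If the first number is roughly eight times the second, your origin switches to uncompressed responses as soon as a proxy is involved.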

Ref: Origin impact is explained here: https://community.akamai.com/customers/s/article/Beware-the-Via-header-and-its-bandwidth-impact-on-your-origin?language=en_US

Billing is explained here: https://community.akamai.com/customers/s/article/Reporting-of-compressed-content-on-Akamai-1386938153725

We will show and bill the uncompressed content size in the following circumstances:
– The origin (including NetStorage) sent the object uncompressed
– An edge server receives the object compressed from the origin or a peer edge server, but has to uncompress it because of an internal process, including, but not limited to:
  – Edge Side Include (ESI)
  – Prefetching
– The client does not support compression (the request did not include an Accept-Encoding: gzip header)

Remove ID3 Tags from FLAC Files

Problem: Some FLAC players refuse to play certain FLAC files, and even tools like an old ffmpeg can’t handle them.

Solution: These FLAC files might have ID3v2 tags, which they really should not, because FLAC uses Vorbis-style comments and not ID3. Remove the ID3v2 tags with this command:

> id3v2 --delete-all song.flac

This removes the ID3v2 tags from the FLAC file in place.

Discussion: Those FLAC files were made with EAC. In the encoder settings “Add ID3 Tags” was checked, and EAC added ID3 tags even though FLAC files don’t need and must not have ID3 tags. If you want to know whether your FLAC files have these bogus ID3 tags, you can run “id3v2 -l song.flac” or look into the files with hexdump.

hexdump -C song-with-id3.flac | head
00000000  49 44 33 03 00 00 00 06  44 0b 54 49 54 32 00 00  |ID3.....D.TIT2..|
00000010  00 27 00 00 01 ff fe 54  00 68 00 65 00 20 00 46  |.'.....T.h.e. .F|
00000020  00 69 00 72 00 65 00 20  00 54 00 68 00 69 00 73  |.i.r.e. .T.h.i.s|
00000030  00 20 00 54 00 69 00 6d  00 65 00 54 50 45 31 00  |. .T.i.m.e.TPE1.|

hexdump -C song-without-id3.flac | head
00000000  66 4c 61 43 00 00 00 22  10 00 10 00 00 00 10 00  |fLaC..."........|
00000010  2b 54 0a c4 42 f0 00 bf  39 4c 3d e0 59 d1 58 72  |+T..B...9L=.Y.Xr|
00000020  49 b7 d4 56 99 08 c4 ae  45 b5 03 00 02 0a 00 00  |I..V....E.......|
00000030  00 00 00 00 00 00 00 00  00 00 00 00 00 00 10 00  |................|

Correct FLAC files start with “fLaC”, not “ID3”.
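
To scan a whole music collection for affected files, a small shell loop is enough. This is a sketch that only checks the first three bytes of each file; adjust /music to your own path:

# print every FLAC file that starts with "ID3" instead of "fLaC"
find /music -name '*.flac' | while read -r f; do
    [ "$(head -c 3 "$f")" = "ID3" ] && echo "$f"
done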

Version: EAC “Exact Audio Copy” Sept. 2019

Chrome Destroys URLs in the Location Line

Problem: Current Chrome browsers cut off URL parts and hide the correct URL: https://www.derstandard.at/ is shown as derstandard.at. This is simply wrong from a technical point of view, and for some URLs it might even be dangerous from a security standpoint.

Solution: There is an option to disable this bug:
chrome://flags/#omnibox-ui-hide-steady-state-url-trivial-subdomains

Update: Google dropped this option. The new solution starting with Chrome 79 is this startup parameter:

--disable-features=OmniboxUIExperimentHideSteadyStateUrlTrivialSubdomains
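
On Linux you would start Chrome like this, for example; depending on the installation the binary may be called google-chrome or chromium:

google-chrome --disable-features=OmniboxUIExperimentHideSteadyStateUrlTrivialSubdomains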

Why does Google insist on breaking the URL scheme? What’s behind this?

APT sources list

Problem: When Debian moves from “testing” to “stable” to “oldstable”, the package sources change; e.g. jessie-updates was removed, and the same happened to jessie-backports.

The current file /etc/apt/sources.list for jessie (currently oldstable) could look like this:

deb http://ftp.debian.org/debian/ jessie main contrib non-free
deb http://security.debian.org/ jessie/updates main contrib non-free
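
Once a release drops out of “oldstable” completely, the packages move to the Debian archive. The sources.list entry then points at archive.debian.org, and the security.debian.org line goes away, because archived releases receive no more security updates:

deb http://archive.debian.org/debian/ jessie main contrib non-free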

Configure WLAN Statically in Debian/Linux

If you want to configure WLAN settings on a Linux machine statically, you can use the normal /etc/network/interfaces configuration method of Debian. For WPA-PSK you can use these three steps:

Install the “wpasupplicant” package

Generate a psk line with “wpa_passphrase” and copy the hex string after “psk=”

root@server:~# wpa_passphrase WLANNAME
# reading passphrase from stdin
thepassword
network={
    ssid="WLANNAME"
    #psk="thepassword"
    psk=fe5409c4831b3daafff41fe2e6ed15ba7ed18c87bab254315e0be5f9180573d3
}

Add some lines to /etc/network/interfaces using this hex string:

allow-hotplug wlan0
iface wlan0 inet dhcp
    metric 4
    wpa-essid WLANNAME
    wpa-scan-ssid 1
    wpa-psk fe5409c4831b3daafff41fe2e6ed15ba7ed18c87bab254315e0be5f9180573d3

The line “wpa-scan-ssid 1” makes it possible to use hidden WLANs whose SSID is not broadcast. With “metric 4” you can make the WLAN less preferred when there is a second LAN connection that should be preferred (the default is “metric 1”).
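
After that, bring the interface up and check that it received an address. Nothing is assumed here beyond the wlan0 interface name from above:

root@server:~# ifup wlan0
root@server:~# ip addr show wlan0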

Debugging Akamai

Akamai just works … most of the time. But sometimes you have to check what’s going on, and Akamai gives you a handy tool for this.

There is an HTTP request header that tells Akamai to respond with some internal information.

Pragma: akamai-x-cache-on, akamai-x-cache-remote-on, akamai-x-check-cacheable, akamai-x-get-cache-key, akamai-x-get-ssl-client-session-id, akamai-x-get-true-cache-key, akamai-x-get-request-id

With this request header, Akamai includes the following in the response headers:

X-Cache: TCP_MISS from a84-53-161-127.deploy.akamaitechnologies.com (AkamaiGHost/9.6.2.0.1-25325260) (-)
X-Cache-Key: S/L/16382/612780/0s/www.yourdomain.de/ cid=what_TOKEN=dings_
X-Cache-Key-Extended-Internal-Use-Only: S/L/16382/612780/0s/www.yourdomain.de/ vcd=1948 cid=what_TOKEN=dings_
X-True-Cache-Key: /L/www.yourdomain.de/ vcd=1948 cid=what_TOKEN=dings_
X-Akamai-SSL-Client-Sid: lZWwRTj17XXXXXXXXXU5Cw==
X-Check-Cacheable: NO
X-Akamai-Request-ID: f82516c

Some important parts:

  • TCP_MISS shows that Akamai didn’t use its cache for this request but went to the origin
  • X-Cache-Key shows what Akamai used to reference the cache position. In this case the URL was http://www.yourdomain.de/?what and a cookie named TOKEN was included in the cache ID (“cid=…”)
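
From the command line, a quick check can look like this; a sketch using curl with the placeholder domain from above (any subset of the Pragma values works):

curl -sI -H 'Pragma: akamai-x-cache-on, akamai-x-check-cacheable, akamai-x-get-cache-key' 'http://www.yourdomain.de/?what' | grep -i '^x-'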

MikroTik Automatic IPSec Failover

Problem: MikroTik allows only one IPsec policy per network-to-network pair. If you want redundant tunnels between two locations with two upstreams, you cannot configure IPsec redundancy on MikroTik, because one of the two IPsec policies is always marked “invalid” by the OS.

Solution: I made a MikroTik script that checks the status and reachability of the IPsec tunnel and endpoint, and switches between a primary and a secondary tunnel policy and peer. You can add this script to the scheduler for automatic failover. (Run it manually with “/system script run 0” if this script is script “0”.)

{
# entry numbers of the primary/secondary IPsec policy and peer
# (look them up with "/ip ipsec policy print" and "/ip ipsec peer print")
:local PrimaryPolicy 2
:local SecondaryPolicy 3
:local PrimaryPeer 0
:local SecondaryPeer 1

# localAip/remoteAip and localBip/remoteBip are placeholders for the
# tunnel addresses of the primary and secondary path
:local PrimaryOK [:ping count=3 src-address=localAip remoteAip];
:local SecondaryOK [:ping count=3 src-address=localBip remoteBip];
:local PrimaryActive [/ip ipsec policy get $PrimaryPolicy active];

# :log info "Status: $PrimaryOK $SecondaryOK $PrimaryActive";
# Test case: :set PrimaryOK 0;

# primary path is dead, secondary path answers (at least two of three pings)
# and the primary policy is still active: fail over to the secondary tunnel
:if ($PrimaryOK < 1 && $SecondaryOK > 1 && $PrimaryActive) do={
    :log warn "switch to failover";
    /ip ipsec policy disable $PrimaryPolicy;
    /ip ipsec policy enable $SecondaryPolicy;
    /ip ipsec peer disable $PrimaryPeer;
    /ip ipsec peer enable $SecondaryPeer;
}
# primary path answers all three pings again and its policy is inactive:
# switch back to the primary tunnel
:if ($PrimaryOK = 3 && !$PrimaryActive) do={
    :log warn "switch to primary";
    /ip ipsec policy disable $SecondaryPolicy;
    /ip ipsec policy enable $PrimaryPolicy;
    /ip ipsec peer disable $SecondaryPeer;
    /ip ipsec peer enable $PrimaryPeer;
}
}
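
To run the check periodically, you can attach the script to the scheduler; a minimal sketch, assuming the script above was saved under the name “ipsec-failover”:

/system scheduler add name=ipsec-failover-check interval=00:01:00 on-event=ipsec-failover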

Version: tested with RouterOS 6.44.1

FortiGate HA Synchronization Fail

Problem: Two FortiGate firewalls show “not synchronized” in the HA status.

Discussion: The problem with this is that FortiGate does not show why it fails; I found no log file with a reasonable error message. So I tried to synchronize the config myself, which is exactly what should NOT be necessary when using HA synchronization.
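
One thing that can at least narrow down where the configs diverge is comparing the HA checksums of the cluster members. The exact output varies between FortiOS versions:

FortiGate-Master # diagnose sys ha checksum cluster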

Solution: When an ipsec-phase1 setting is removed on the master while the slave is not online, the phase1 removal fails during synchronization. Why, Fortinet, doesn’t your box log this? Removing the phase1 section by hand did not work either:

FortiGate-Master # execute ha manage 1

FortiGate-Slave $ config vpn ipsec phase1-interface
FortiGate-Slave (phase1-interface) $ delete VPN-PEER
This phase1-interface is currently used
command_cli_delete:5937 delete table entry VPN-PEER unset oper error ret=-23
Command fail. Return code -23

As with most cheap software, I had to reboot the slave; after that I could remove the phase1-interface section, and the synchronization worked again.

I don’t remember ever having to reboot a Cisco or Linux box to fix a bug.

Version: FortiGate 6.0.4