Greenlock(-express) Let's Encrypt Fails with ECONNRESET

Problem: after upgrading from greenlock-express v2.0 to v2.5 and switching from ACME v1 to ACME v2, every attempt to register a new TLS certificate with Let's Encrypt fails with “ECONNRESET”.

Discussion: the new version of greenlock tries to validate the .well-known/acme-challenge file itself before asking Let's Encrypt for the certificate.
If your web server sits behind a load balancer or firewall and cannot reach itself via its official public IP, this loopback request fails. In that case only this cryptic error message is shown:

[acme-v2] handled(?) rejection as errback:
Error: read ECONNRESET
    at TCP.onStreamRead (internal/stream_base_commons.js:200:27)
Error loading/registering certificate for 'your.webserver':
Error: read ECONNRESET
    at TCP.onStreamRead (internal/stream_base_commons.js:200:27) {
  errno: 'ECONNRESET',
  code: 'ECONNRESET',
  syscall: 'read'
}

Solution: You can redirect these local loopback web requests to the local web server with iptables, bypassing the load balancer/firewall:

iptables -t nat -I OUTPUT -d PUBLIC_WEBSERVER_IP -p tcp --dport 80 -j REDIRECT --to-port LOCAL_WEBSERVER_TCP_PORT
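
For example, assuming the public address is 203.0.113.10 and the local web server listens on port 8080 (both values are placeholders for your own setup):

# Rewrite locally generated requests (OUTPUT chain) to the public IP
# so they hit the local web server directly instead of the load balancer
iptables -t nat -I OUTPUT -d 203.0.113.10 -p tcp --dport 80 -j REDIRECT --to-port 8080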

Akamai High Traffic Volume

Problem: Akamai shows a very high edge traffic volume while other accounting sources show considerably less, and Akamai bills the high volume!

Discussion: I activated log shipping to compare the volume of the shipped data with the volume reported by Akamai. Akamai's figure was 66% higher. Support was not helpful, so I investigated deeper and found that the ratio was far lower for images and far higher for CSS/HTML/JS files. So I checked compression.

Even though the origin servers can do compression, they did not compress responses to Akamai. The reason: IIS disables compression when it receives a “Via:” header, following an old HTTP recommendation that says you should disable compression when the request comes through a proxy, because the proxy might not be able to handle compressed content.

You may say: I don't care about uncompressed data from the origin, because I have a 99% cache hit rate and can live with 1% of the traffic being uncompressed.

But! When Akamai receives an object uncompressed, it counts and bills every download of this object at the uncompressed size, even though all deliveries from the edge to the clients are compressed.

A small calculation: if you publish a 5 MB CSS/JS/HTML file 1 million times per day, Akamai charges you for 5 TB of traffic. In reality it ships only about 625 GB to the users, assuming an average compression ratio of 1:8 for these kinds of files: 5 MB × 1,000,000 = 5 TB billed, but 5 TB ÷ 8 = 625 GB delivered.

This means you can cut about 87% (1 − 1/8 = 87.5%) off your Akamai bill for well-compressible files.

Solution: Make sure that your origin servers always send compressed data to Akamai by ignoring or removing the “Via:” header. And check this repeatedly, because Akamai may add other headers that disable compression on the origin side.
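
On IIS this “Via:” behavior has a dedicated switch: the noCompressionForProxies attribute of the httpCompression section. A sketch using appcmd; verify the attribute against your IIS version before relying on it:

rem Allow IIS to compress responses even when a proxy added a Via: header
%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /noCompressionForProxies:"False" /commit:apphost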

Ref: Origin impact is explained here: https://community.akamai.com/customers/s/article/Beware-the-Via-header-and-its-bandwidth-impact-on-your-origin?language=en_US

Billing is explained here: https://community.akamai.com/customers/s/article/Reporting-of-compressed-content-on-Akamai-1386938153725

Quoting that article, Akamai shows and bills the uncompressed content size in the following circumstances:
– The origin (including NetStorage) sent the object uncompressed
– An edge server receives the object compressed from the origin, or a peer edge server, but has to uncompress it because of an internal process, including, but not limited to:
  – Edge Side Include (ESI)
  – Prefetching
– The client does not support compression (the request did not include an “Accept-Encoding: gzip” header)

Chrome Destroys URLs in the Location Line

Problem: current Chrome browsers cut off parts of the URL and hide the correct URL: https://www.derstandard.at/ is shown as derstandard.at. This is simply wrong from a technical point of view, and for some URLs it may even be dangerous from a security standpoint.

Solution: there is an option to disable this bug:
chrome://flags/#omnibox-ui-hide-steady-state-url-trivial-subdomains

Update: Google dropped this option. The new solution, starting with Chrome 79, is this startup parameter:

--disable-features=OmniboxUIExperimentHideSteadyStateUrlTrivialSubdomains

Update 2: “They are evil now” did it again, and URLs are broken again. The option of the day to repair the URLs in Chrome 80 is:

--disable-features=OmniboxUIExperimentHideSteadyStateUrlScheme,OmniboxUIExperimentHideSteadyStateUrlTrivialSubdomains

Why does Google insist on breaking the URL scheme? What's behind this?

Update 3: A new method to work around the Chrome URL bug in version 83:
Enable the setting chrome://flags/#omnibox-context-menu-show-full-urls,
then right-click in the location input field and activate “Always show full URLs”.

Debugging Akamai

Akamai just works … most of the time. But sometimes you have to check what's going on, and Akamai gives you a handy tool for this.

There is an HTTP request header that tells Akamai to respond with some internal information.

Pragma: akamai-x-cache-on, akamai-x-cache-remote-on, akamai-x-check-cacheable, akamai-x-get-cache-key, akamai-x-get-ssl-client-session-id, akamai-x-get-true-cache-key, akamai-x-get-request-id
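
A quick way to send this header is curl; a sketch, using the placeholder host from the response example below:

curl -s -o /dev/null -D - \
  -H "Pragma: akamai-x-cache-on, akamai-x-cache-remote-on, akamai-x-check-cacheable, akamai-x-get-cache-key, akamai-x-get-ssl-client-session-id, akamai-x-get-true-cache-key, akamai-x-get-request-id" \
  https://www.yourdomain.de/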

With this request header set, Akamai includes headers like these in the response:

X-Cache: TCP_MISS from a84-53-161-127.deploy.akamaitechnologies.com (AkamaiGHost/9.6.2.0.1-25325260) (-)
X-Cache-Key: S/L/16382/612780/0s/www.yourdomain.de/ cid=what_TOKEN=dings_
X-Cache-Key-Extended-Internal-Use-Only: S/L/16382/612780/0s/www.yourdomain.de/ vcd=1948 cid=what_TOKEN=dings_
X-True-Cache-Key: /L/www.yourdomain.de/ vcd=1948 cid=what_TOKEN=dings_
X-Akamai-SSL-Client-Sid: lZWwRTj17XXXXXXXXXU5Cw==
X-Check-Cacheable: NO
X-Akamai-Request-ID: f82516c

Some important parts:

  • TCP_MISS shows that Akamai didn't use its cache for this request but went to the origin
  • X-Cache-Key shows what Akamai used to reference the cache position. In this case the URL was http://www.yourdomain.de/?what and a cookie named TOKEN was included in the cache ID (“cid=…”)

Web Audio Silence

Problem: I had problems with an audio driver (no, not on Linux). The sound started with a delay after every gap of silence, which cut off about half a second of the attack of each sound. This is a problem in particular when you try to make music.

Workaround: I made a little web page that plays “Total Silence” or an “Almost Silent” sound. This keeps the audio driver busy and prevents it from shutting down the sound.

–> http://seven.mail.at/silence.html
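
If you want to build such a page yourself, a few lines of Web Audio API code are enough. A minimal sketch; the 0.0001 gain (and the default 440 Hz oscillator tone) are arbitrary values chosen for illustration:

<script>
// Play a practically inaudible tone forever so the audio driver
// never detects silence and never powers down the output.
var ctx = new AudioContext();
var osc = ctx.createOscillator();   // default: 440 Hz sine wave
var gain = ctx.createGain();
gain.gain.value = 0.0001;           // near zero: "almost silent"
osc.connect(gain).connect(ctx.destination);
osc.start();
// Browsers only allow audio to start after a user gesture:
document.addEventListener('click', function () { ctx.resume(); }, { once: true });
</script>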

Google Maps Marker on Mobile

Problem: A responsive web app shows a Google map with clickable markers. On desktop everything works as expected, but on mobile the markers are not clickable.

Discussion: After debugging with the Chrome remote inspector, I found that a div->frame with opacity:0 and an explicit z-index:2 was lying above the clickable markers.

I don't know what this frame is for, but it covers the markers and swallows their click events.

Workaround: The frame is only loaded when the user is logged into Google. You can remove this frame by removing the “signed_in” parameter from the script tag.
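
Concretely, drop the parameter from the loader URL (the “before” line is the one used in the embedding sample further below):

<!-- before: Google injects the invisible signed-in frame -->
<script src="//maps.googleapis.com/maps/api/js?v=3.exp&signed_in=true"></script>
<!-- after: no overlay frame, the markers stay clickable -->
<script src="//maps.googleapis.com/maps/api/js?v=3.exp"></script>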

Versions: https://maps.googleapis.com/maps/api/js as of 23.2.2016; Chrome 48 on Android 5.1.1.

Google Map from RSS Feed

Problem: Google had a nice feature that built a Google map from RSS geo information with a simple iframe tag, but this service has been discontinued:

<iframe width="920" height="450" frameborder="0" scrolling="no" marginheight="0" marginwidth="0" 
 src="https://maps.google.com/?q=http:%2F%2Ftothepin.blogspot.com%2Ffeeds%2Fposts%2Fdefault&amp;ie=UTF8&amp;t=t&amp;source=embed&amp;output=embed">
</iframe>

You could simply add ?q=rssfeed to the maps.google.com URL and it produced a map from all the geo data in that RSS feed.

Solution: The new Google Maps API is different, but you can still do the same. Here is some sample code:

<script src="//maps.googleapis.com/maps/api/js?v=3.exp&signed_in=true"></script>
<script>
    function initialize() {
        // Center on Vienna, zoomed out to a country-level view
        var myLatlng = new google.maps.LatLng(48.2084900, 16.3720800);
        var mapOptions = { zoom: 4, center: myLatlng };

        var map = new google.maps.Map(document.getElementById('map-canvas'), mapOptions);

        // KmlLayer also understands GeoRSS, so it renders all
        // geo-tagged entries of the feed as markers on the map
        var georssLayer = new google.maps.KmlLayer({
            url: 'http://tothepin.blogspot.com/feeds/posts/default?alt=rss'
        });
        georssLayer.setMap(map);
    }

    // Build the map once the page has finished loading
    google.maps.event.addDomListener(window, 'load', initialize);
</script>
<style>
#map-canvas { width:800px; height:500px; }
</style>
<div id="map-canvas">Map</div>