OpenSearch on Google Kubernetes Engine

GKE version: 1.20.12-gke.1500
Helm Chart version: 1.8.0

In pursuit of Elasticsearch knowledge, I attempted to install OpenSearch on a GKE cluster. Following the instructions at https://opensearch.org/docs/latest/opensearch/install/helm/, the pods crashed.

NAME READY STATUS RESTARTS AGE
opensearch-cluster-master-1 0/1 CrashLoopBackOff 1 13s
opensearch-cluster-master-0 0/1 CrashLoopBackOff 1 13s

Looking at the logs from the pod:

[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Looking at values.yml, I tried to add:

sysctl:
  enabled: false
sysctlVmMaxMapCount: 262144

But I was greeted with the following when I described the pods:

Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning SysctlForbidden 1s kubelet forbidden sysctl: "vm.max_map_count" not whitelisted

Since I am using Terraform, I looked at the documentation and found that the vm.max_map_count parameter is not supported by the linux_node_config block in the google_container_cluster resource.

Searching the Internet, I found this excellent post: A Guide to Deploy Elasticsearch Cluster on Google Kubernetes Engine. However, my init container still failed: it wasn't able to change the vm.max_map_count value.

sysctl: error setting key 'vm.max_map_count': Permission denied

So I tried adding "runAsUser: 0" so that the init container runs as root, and it worked. The final block in values.yml for my init containers (specifically to deal with the sysctl issue) is below:

extraInitContainers:
  - name: increase-the-vm-max-map-count
    image: busybox
    command:
    - sysctl
    - -w
    - vm.max_map_count=262144
    securityContext:
      privileged: true
      runAsUser: 0
  - name: increase-the-ulimit
    image: busybox
    command:
    - sh
    - -c
    - ulimit -n 65536
    securityContext:
      privileged: true
      runAsUser: 0


helm upgrade opensearch -n opensearch -f values.yml opensearch/opensearch --version 1.8.0
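
After running the upgrade, I like to watch the pods come up before declaring victory. This isn't from the chart docs, just a sanity check; the StatefulSet name below matches the pod names shown earlier:

kubectl -n opensearch rollout status statefulset/opensearch-cluster-master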

All is well

NAME READY STATUS RESTARTS AGE
opensearch-cluster-master-0 1/1 Running 0 12m
opensearch-cluster-master-1 1/1 Running 0 12m
opensearch-cluster-master-2 1/1 Running 0 12m
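
To confirm the init container actually took effect, the kernel setting can be read back from inside a running pod. A minimal check, using the pod and namespace names from the listing above:

kubectl -n opensearch exec opensearch-cluster-master-0 -- cat /proc/sys/vm/max_map_count

It should print 262144.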

Adding Storage To Lenovo Laptop

Before purchasing anything, I would usually do a lot of research about reliability, pricing, and support. Another factor is upgradability.

I wanted to buy an IdeaPad laptop directly from Lenovo, but it was not as customizable as I would like it to be.

The laptop has multiple configuration options: there are models with both an SSD and an HDD, and some models with only an SSD. I wanted to buy one with only a single 512 GB M.2 2242 SSD and upgrade in the future. When I contacted Lenovo sales via chat, they told me that it is impossible to add a new drive and that doing so would void the warranty.

I decided to buy one with a single M.2 SSD anyway.

The interesting thing is that a disk caddy (including cables) is included, indicating that we are allowed to add a disk on our own.

I haven’t tried to open the bottom panel yet, but I guess it’s normal these days to be able to add a disk into the disk bay.

There you go, I should be able to add a drive when needed 👍🏼

Old Blog No More

As I explained in the previous post, Domain Change, I decided to move on and let the domain romantika.name expire. Today, I found out that a new WordPress blog has been set up on that domain, with technical topics as its focus.

Some even overlap with my old topics, such as exporting Blogger to WordPress and building a collapsible plugin. This gives the impression that the site is owned by me, especially since my old WordPress plugins still point to romantika.name, as they are outdated in the plugin repository.

I need to tell everyone that romantika.name is no longer owned by me, and anything that is written there is certainly not mine. I am not responsible for anything that is published there.

Please be aware.

Domain Change

Since I haven't had a lot of time to write for the last few years, I have decided to let the domain romantika.name expire at the end of this month. From my experience with domains, someone will buy it and squat on it with some ads, especially a domain that has been around for 15 years!

   Domain Name: ROMANTIKA.NAME
   Created On: 2004-12-29T08:00:57Z
   Expires On: 2020-12-29T08:00:57Z
   Updated On: 2019-12-27T05:51:05Z

For now, I have enabled redirection so that all links to the old domain will redirect to this one.

Here is the nginx config to achieve this.

    server_name www.romantika.name romantika.name;
    rewrite ^\/v2(\/?.*) https://blog.adyromantika.com$1 last;
    rewrite ^(.*) https://blog.adyromantika.com$1 permanent;
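
To confirm the rewrites behave as intended, a quick check with curl works; the path below is just a made-up example, and this obviously only applies while the old domain still points at this server:

curl -sI https://www.romantika.name/v2/some-old-post/ | grep -i '^location'

It should print the corresponding blog.adyromantika.com URL.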

As for WordPress, it’s pretty easy using WP-CLI:

wp db export backup.sql
wp search-replace 'https://www.romantika.name' 'https://blog.adyromantika.com'
wp search-replace 'https://romantika.name' 'https://blog.adyromantika.com'
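
WP-CLI also supports a dry run for search-replace, which is worth doing first to see how many rows would change before touching the database:

wp search-replace 'https://www.romantika.name' 'https://blog.adyromantika.com' --dry-run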

I have also transferred all of the domains I own to Cloudflare and saved more than 50%.

Let’s hope 2021 will be a better year for all of us.

Restricting xmlrpc.php

Earlier today, I saw some spikes on the load graph for the new server (where this site is hosted).

Upon checking the logs I saw a lot of these:

134.122.53.221 - - [01/May/2020:12:21:54 +0000] "POST //xmlrpc.php HTTP/1.1" 200 264 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"
134.122.53.221 - - [01/May/2020:12:21:55 +0000] "POST //xmlrpc.php HTTP/1.1" 200 264 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"
198.98.183.150 - - [01/May/2020:13:44:24 +0000] "POST //xmlrpc.php HTTP/1.1" 200 265 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"
198.98.183.150 - - [01/May/2020:13:44:25 +0000] "POST //xmlrpc.php HTTP/1.1" 200 265 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"

I'm not mentioning the source IP owner; technical readers can look it up if they are interested. However, the second IP comes from an IP range that is rather interesting.

Searching the Internet, I found out that many people consider xmlrpc.php a problem. For those who are not familiar with WordPress, this file is responsible for external communications, for example when using the mobile application to manage your site, or when you use Jetpack.

There are plugins to disable XML-RPC such as this one, but I use the app from time to time, so I would like to keep xmlrpc.php working.

The official Jetpack website provides this list for whitelisting purposes.

I have been restricting access to my /wp-admin URL for ages, using Nginx. I think it is a good idea to do the same for xmlrpc.php.

location ~ ^/(xmlrpc\.php$) {
    include conf.d/includes/jetpack-ipvs-v4.conf;
    deny all;
 
    include fastcgi.conf;
    fastcgi_intercept_errors on;
    fastcgi_pass php;
}

Here is the simple script that updates this IP list into the Nginx configuration file consumed by the configuration above:

#!/bin/bash
 
FILENAME=jetpack-ipvs-v4.conf
CONF_FILE=/etc/nginx/conf.d/includes/${FILENAME}
 
wget -q -O /tmp/ips-v4.txt https://jetpack.com/ips-v4.txt
 
if [ -s /tmp/ips-v4.txt ]; then
  awk '{print "allow " $1 ";"}' /tmp/ips-v4.txt > /tmp/${FILENAME}
 
  [ -s ${CONF_FILE} ] || touch ${CONF_FILE}
 
  if [ "$(diff /tmp/${FILENAME} ${CONF_FILE})" != "" ]; then
    echo "Files different, replacing ${CONF_FILE} and reloading nginx"
    mv -fv /tmp/${FILENAME} ${CONF_FILE}
    systemctl reload nginx
  else
    echo "File /tmp/${FILENAME} match ${CONF_FILE}, not doing anything"
  fi
fi
 
rm -f /tmp/ips-v4.txt

It can be periodically executed by cron so that when the IP list changes, the configuration gets updated.
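
For example, a crontab entry along these lines keeps the list fresh (the script path is a placeholder for wherever you saved the script above):

# refresh the Jetpack allow-list every 6 hours
0 */6 * * * /usr/local/bin/update-jetpack-ips.sh >/dev/null 2>&1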

Now, if any IP other than Jetpack's tries to access /xmlrpc.php, it will receive a 403 Forbidden error.
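
A quick way to confirm the restriction from a non-whitelisted machine is to POST to the file and look at the status code; a small check using this site's URL as the example target:

curl -s -o /dev/null -w '%{http_code}\n' -X POST https://blog.adyromantika.com/xmlrpc.php

It should print 403 for any address that is not in the Jetpack list.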

Have fun!

Moving Domains to Cloudflare

While clicking around in Cloudflare today, I found this button.

It got me thinking about what I paid last year for a .com domain, and this is what I saw when I looked at the order history.

I was surprised to see the difference. At the current exchange rate, Cloudflare will only cost me MYR34.89 (US$8.03 with ICANN fee) for a .com domain, which is a saving of MYR40.33 per domain, per year.

Looking at all domains transferable to Cloudflare, this is what it would cost me if I renew them right now:

If I transfer them to Cloudflare, it will cost me US$60.28 (MYR261.93) per year, which is a saving of US$66.75 (MYR290.07) equivalent to a year of mid-size shared hosting in Malaysia, or a full year of hosting cost with my current cloud provider.

Mind. Blown. 🤯

Cloudflare offers at-cost pricing for registration and renewal of many TLDs, but unfortunately .name is not one of them so I won’t be able to move romantika.name to them. For now.

Happy New Year 2020

Yes, yes, I know it is already April. The year 2020 has proven challenging not only to some of us but to all of us globally. I hope everyone is staying safe with their loved ones.

Server Updates

I recently had to move this blog to a new server because:

  • The old server was an OpenVZ instance, and it is not straightforward to upgrade the outdated OS, Ubuntu 14.04, which has reached the end of standard support.
  • Since Ubuntu 14.04 has reached the end of support, it was impossible to upgrade PHP. It was running PHP 5.5.9.
  • Since WordPress 5.4 requires at least PHP 5.6, it was impossible to upgrade from WordPress 5.1.4.
  • The old hosting company decided to increase the hosting price by €20, which is not a small amount for a slow OpenVZ instance.

So I decided to move on to another hosting provider, one where I have control over my own kernel and am charged on a pay-per-use basis.

The new server has a smaller spec but uses a different virtualization technology; it runs Ubuntu 18.04.4 with PHP 7.4.5.

Improvements I made on the new server are:

  • Use certbot on all hosted domains so that communication between Cloudflare and my server is always over valid HTTPS (see the example command after this list)
  • The site is now 100% HTTPS via Cloudflare. It can be served directly via HTTPS too.
  • All scripts are now pushed using Ansible
  • Git based updates for projects such as the automated prayer times
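
For the certbot point above, the stock nginx plugin does the job; a sketch for a single domain, using this blog's hostname as the example, with renewals handled by certbot's bundled timer or cron job:

sudo certbot --nginx -d blog.adyromantika.com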

Many things have changed as well, such as better responsive themes and more robust plugins.

Historically, this blog was started in 2005 on WordPress version 1.5 and has moved through 5 different hosting providers.

Personal Update

Since my last post, I have moved through 6 different jobs. Time flies.

I am also older now so I don’t really have much energy to rant, but expect more mixed ramblings from me from time to time 😉

In Malaysia, the country has been in the Movement Control Order (MCO) for more than a month now and it is certainly a significant change in lifestyle.

Let’s hope that COVID-19 goes away as soon as possible so that we can live “the new normal”.

Until the next post, stay safe people, wherever you are.

Flask + GitLab OAuth

I'm back. A lot of things have changed since I last wrote, and one of them is my go-to language.

Earlier today, I needed to write a simple Flask application using GitLab as the OAuth2 provider.

I immediately turned to Flask-OAuth to do the job, but it kept failing with:

SSLHandshakeError: [Errno 1] _ssl.c:510: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

It seems to be a problem with httplib2.

After struggling for quite some time, I found Flask-OAuthlib, which claims to be a replacement for the outdated Flask-OAuth. It worked like a charm.
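
For reference, installation is a single pip package; the name below is the one published on PyPI:

pip install Flask-OAuthlib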

GitLab’s documentation on consuming its OAuth2 is quite basic. Below is a basic implementation that works.

All you need to do is change gitlab.example.com to your GitLab server and fill in the consumer_key and consumer_secret. If successful, the main page will display JSON with the logged-in user's details.

from flask import Flask, render_template, redirect, url_for, session, request, jsonify
from flask_oauthlib.client import OAuth
 
app = Flask(__name__)
app.debug = True
app.secret_key = 'development'
oauth = OAuth(app)
 
gitlab = oauth.remote_app('gitlab',
    base_url='https://gitlab.example.com/api/v3/',
    request_token_url=None,
    access_token_url='https://gitlab.example.com/oauth/token',
    authorize_url='https://gitlab.example.com/oauth/authorize',
    access_token_method='POST',
    consumer_key='',
    consumer_secret=''
)
 
@app.route('/')
def index():
    if 'gitlab_token' in session:
        me = gitlab.get('user')
        return jsonify(me.data)
    return redirect(url_for('login'))
 
 
@app.route('/login')
def login():
    return gitlab.authorize(callback=url_for('authorized', _external=True, _scheme='https'))
 
 
@app.route('/logout')
def logout():
    # pop instead of del so that visiting /logout without a session does not raise KeyError
    session.pop('gitlab_token', None)
    return redirect(url_for('index'))
 
@app.route('/login/authorized')
def authorized():
    resp = gitlab.authorized_response()
    if resp is None:
        return 'Access denied: reason=%s error=%s' % (
            request.args['error'],
            request.args['error_description']
        )
    session['gitlab_token'] = (resp['access_token'], '')
    return redirect(url_for('index'))
 
@gitlab.tokengetter
def get_gitlab_oauth_token():
    return session.get('gitlab_token')
 
if __name__ == "__main__":
    app.run()

I hope it saves someone some time.

CloudWatch INSUFFICIENT_DATA for Linux System Metric

I recently had to recreate images for our production systems on EC2 because they didn't have the ephemeral storage that we require to keep our temporary TCP dumps. Considering that they are EC2 instances, it was quite easy.

We use mon-get-instance-stats.pl to monitor system metrics such as memory utilization and disk space.

Naturally, I copied the alarms from the old instances and just replaced the InstanceId with the new ones. However, I was baffled to see CloudWatch complaining that the alarms had INSUFFICIENT_DATA. When I attempted to verify, mon-get-instance-stats.pl --verify showed the wrong InstanceId.

It wasn't until I ransacked the whole filesystem that I realized the Perl scripts cache information in /var/tmp/aws-mon. Remove (or move) that directory and all is well again.
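
In practice that is just a couple of commands on the new instance, followed by a re-check; moving the directory rather than deleting it keeps a copy around just in case (run the script from wherever the monitoring tools are installed):

mv /var/tmp/aws-mon /var/tmp/aws-mon.bak
./mon-get-instance-stats.pl --verify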

I hope this saves someone some time.

[SOLD] Frame LCD Display Touch Digitizer Screen for LG Google Nexus 5 D820 D821

I'm selling this component for RM300 to buyers in Malaysia. I ordered it on the 4th of July 2014 and received it on the 22nd of July 2014. I bought it for US$114.89, which is around RM364. So it had only been used for 4 days before the phone stopped working. Local repair shops and retailers are selling this unit for RM500, not including the service charge.

The story of why I am selling this display will follow further below.



DD-WRT: OpenVPN Server Using Certificates

GUIs confuse me sometimes, so I prefer to make configurations in text files. For DD-WRT, the OpenVPN server is available in the OpenVPN, OpenVPN Small, Big, Mega, and Giga builds (see K2.6 Build Features). Since I have never used any router with USB storage capabilities, I can't be sure, but I think OpenVPN can be installed using ipkg as well.

For this post I am going to assume you’re an OS X user, but Windows procedures shouldn’t be too different.

1. Generating certificates and keys

  1. Get Easy-RSA. You can either clone the git repository or download the package as zip. Navigate to the folder where you downloaded/cloned Easy-RSA and get into the directory easy-rsa/2.0.
  2. Edit the file vars. I’m showing the variables that you might want to change. Take note of the KEY_SIZE variable. If you’re paranoid like me, leave it at 2048. It takes longer to generate DH parms but not that long.
    # Increase this to 2048 if you
    # are paranoid.  This will slow
    # down TLS negotiation performance
    # as well as the one-time DH parms
    # generation process.
    export KEY_SIZE=2048
     
    # In how many days should the root CA key expire?
    export CA_EXPIRE=3650
     
    # In how many days should certificates expire?
    export KEY_EXPIRE=3650
     
    # These are the default values for fields
    # which will be placed in the certificate.
    # Don't leave any of these fields blank.
    export KEY_COUNTRY="MY"
    export KEY_PROVINCE="SELANGOR"
    export KEY_CITY="Puchong"
    export KEY_ORG="AdyRomantika"
    export KEY_EMAIL="[email protected]"
    export KEY_OU="RomantikaName"
     
    # X509 Subject Field
    export KEY_NAME="MYKEY1"
  3. Import the variables into the current shell:
    $ source vars
  4. Clean existing keys if any (WARNING: This deletes all existing certificates and keys)
    $ ./clean-all
  5. Generate the CA certificate and key. The script will still ask for the parameters you entered in vars, so just press ENTER if you're satisfied
    • This will produce 2 files: ca.key and ca.crt
    $ ./build-ca
  6. Generate Diffie Hellman parameters
    • This will produce the file: dh{n}.pem where {n} is the key size specified in the vars file.
    $ ./build-dh
  7. Generate key for the server.
    • When asked for a password, just press ENTER otherwise the key password will be asked each time service is being brought up.
    • When asked whether to sign the certificate, say Yes.
    • This will produce 3 files: server1.crt, server1.csr, server1.key
    $ ./build-key-server server1
  8. Generate key for the clients. This step can be repeated in the future for more clients as needed (a quick verification sketch follows this list).
    • When asked for a password, you can enter a password so that when connecting to the service, the key password will be asked. I recommend this to make it more secure.
    • When asked whether to sign the certificate, say Yes.
    • This will produce 3 files: client1.crt, client1.csr, client1.key
    $ ./build-key client1
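
Once the certificates are generated, it doesn't hurt to confirm that the server and client certificates verify against the CA before copying anything to the router. A minimal check, assuming Easy-RSA 2.0's default keys/ output directory and the names used above:

$ cd keys
$ openssl verify -CAfile ca.crt server1.crt client1.crt

Both files should simply report OK.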


WordPress Update: Upgrade package not available (3.5)

I used to upgrade WordPress manually using FTP. I would update a local copy of the website, make sure everything worked on my laptop, and then upload it to the server. It's not that I don't trust the WordPress automatic upgrade, but I am paranoid that my custom plugins and changes will break the site.

However, starting from early 2012 I began to use the upgrade functionality within WordPress. Everything went really well until after I upgraded to 3.5.

I wasn't able to upgrade to 3.5.1, but I was able to upgrade another blog running a version earlier than 3.5 to 3.5.1. So I thought it might be a problem with the settings or permissions on this site.


Today, I looked at the issue again. What I found is that there is a discrepancy between the upgrade code and the data returned by the API.

The code failed here, in wp-admin/includes/class-wp-upgrader.php:

111   function download_package($package) {
116     if ( empty($package) )
117       return new WP_Error('no_package', $this->strings['no_package']);
127   }

Called within the same file:

878     $download = $this->download_package( $current->package );
879     if ( is_wp_error($download) )
880       return $download;

There was a lot more tracing done, but ultimately line 878 will always fail because $current does not have the property package:

stdClass Object
(
    [response] => upgrade
    [download] => http://wordpress.org/wordpress-3.6.zip
    [locale] => en_US
    [packages] => stdClass Object
        (
            [full] => http://wordpress.org/wordpress-3.6.zip
            [no_content] => http://wordpress.org/wordpress-3.6-no-content.zip
            [new_bundled] => http://wordpress.org/wordpress-3.6-new-bundled.zip
            [partial] => 
        )
 
    [current] => 3.6
    [php_version] => 5.2.4
    [mysql_version] => 5.0
    [new_bundled] => 3.6
    [partial_version] => 
    [dismissed] => 
)

I wanted to create a Trac ticket, but realized that this should already have been fixed, and it is a backward-compatibility issue anyway. Looking at the new code at http://core.trac.wordpress.org/browser/trunk/wp-admin/includes/class-wp-upgrader.php?rev=24474 I can see that the packages property has been handled.

A quick look at 3.5.1 also suggests that the new data returned has been handled correctly.

   1036     // If partial update is returned from the API, use that, unless we're doing a reinstall.
   1037     // If we cross the new_bundled version number, then use the new_bundled zip.
   1038     // Don't though if the constant is set to skip bundled items.
   1039     // If the API returns a no_content zip, go with it. Finally, default to the full zip.
   1040     if ( $current->packages->partial && 'reinstall' != $current->response && $wp_version == $current->partial_version )
   1041       $to_download = 'partial';
   1042     elseif ( $current->packages->new_bundled && version_compare( $wp_version, $current->new_bundled, '<' )
   1043       && ( ! defined( 'CORE_UPGRADE_SKIP_NEW_BUNDLED' ) || ! CORE_UPGRADE_SKIP_NEW_BUNDLED ) )
   1044       $to_download = 'new_bundled';
   1045     elseif ( $current->packages->no_content )
   1046       $to_download = 'no_content';
   1047     else
   1048       $to_download = 'full';
   1049 
   1050     $download = $this->download_package( $current->packages->$to_download );
   1051     if ( is_wp_error($download) )
   1052       return $download;

My quick and dirty hack is to edit wp-admin/includes/class-wp-upgrader.php

    878     $download = $this->download_package( 'http://wordpress.org/wordpress-3.6-no-content.zip' );

Just like that, the upgrade was quick. If you're stuck on 3.5, you can try it out.


Happy 10th Anniversary WordPress!

Today marks the 10th anniversary of WordPress which was first released on May 27th, 2003.

WordPress now powers a countless number of blogs on the Internet via the community-driven project WordPress.org and the hosted solution at WordPress.com.

This site has been running on WordPress since the beginning, in 2005.

Being sick today, I will not be able to make it to any meetups 🙁

Happy 10th Anniversary, WordPress!

CrashPlan 3.5.3 Headless Upgrade

A headless installation of CrashPlan will fail when it tries to update itself.

This short post assumes that you already have it set up and running successfully, and is intended only to help you save some time by identifying the important files to copy.

Running the installer again will also work, but we would actually spend more time fixing the scripts, and the identity file might get overwritten, costing more time to figure out what happened.

So here goes. This is how we extract the tar archive and the cpio archive within it.

# tar -zxf CrashPlan_3.5.3_Linux.tgz
# cd CrashPlan-install
# cat CrashPlan_3.5.3.cpi | gzip -dc - | cpio -i --no-preserve-owner

Changed files from 3.4.1 to 3.5.3 (found thanks to rsync) are:

lang/txt.properties
lang/txt_sv.properties
lang/txt_th.properties
lang/txt_tr.properties
lang/txt_zh.properties
lib/com.backup42.desktop.jar
lib/com.jniwrapper.jniwrap.jar
lib/com.jniwrapper.winpack.jar

All I did was replace those files, and my CrashPlan installation is working fine.
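
The replacement itself is just copying those files over the existing installation with the engine stopped. A rough sketch, assuming the engine lives in /usr/local/crashplan and the files above sit in the extracted upgrade directory (adjust the paths for your setup):

# stop the engine first (the path or init script may differ on a NAS install)
/usr/local/crashplan/bin/CrashPlanEngine stop

# copy only the files that changed between 3.4.1 and 3.5.3
for f in lang/txt.properties lang/txt_sv.properties lang/txt_th.properties \
         lang/txt_tr.properties lang/txt_zh.properties \
         lib/com.backup42.desktop.jar lib/com.jniwrapper.jniwrap.jar \
         lib/com.jniwrapper.winpack.jar; do
  cp -v "$f" "/usr/local/crashplan/$f"
done

/usr/local/crashplan/bin/CrashPlanEngine start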

If you actually arrived here looking for information on installing for the first time, this post can help you if you're using a D-Link DNS-32x series. Follow it from start to end (with some adaptation to the paths) and you'll be fine.

However, you might have to change paths and also do extra steps to get it working. At one point, CrashPlan will run fine but you’ll see that it’s not uploading files.

This post can help you troubleshoot the Java issues by replacing libraries.

Off the top of my head, I remember having to insert a new library with the correct architecture into jna-3.2.5.jar, replace libmd5.so, and replace libjtux.so. I also had to link /ffp/usr/local/crashplan/libffi.so.5 to a location accessible by the system loader.

Good luck!