Restricting xmlrpc.php

Earlier today, I saw some spikes on the load graph for the new server (where this site is hosted).

Upon checking the logs I saw a lot of these:

134.122.53.221 - - [01/May/2020:12:21:54 +0000] "POST //xmlrpc.php HTTP/1.1" 200 264 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"
134.122.53.221 - - [01/May/2020:12:21:55 +0000] "POST //xmlrpc.php HTTP/1.1" 200 264 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"
198.98.183.150 - - [01/May/2020:13:44:24 +0000] "POST //xmlrpc.php HTTP/1.1" 200 265 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"
198.98.183.150 - - [01/May/2020:13:44:25 +0000] "POST //xmlrpc.php HTTP/1.1" 200 265 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"

I’m not naming the owners of the source IPs; technical readers can look them up if they are interested. The second IP, however, comes from a rather interesting IP range.

Searching the Internet, I found that many people consider xmlrpc.php a problem. For those not familiar with WordPress, this file handles external communications: for example, when you manage your site with the mobile application, or when you use Jetpack.

There are plugins to disable XML-RPC, such as this one, but I use the app from time to time, so I would like to keep xmlrpc.php working.

The official Jetpack website provides this list for whitelisting purposes.

I have been restricting access to my /wp-admin URL for ages, using Nginx. I think it is a good idea to do the same for xmlrpc.php.

# Allow only Jetpack's published IPv4 ranges to reach xmlrpc.php; everyone else gets 403
location ~ ^/(xmlrpc\.php$) {
    include conf.d/includes/jetpack-ipvs-v4.conf;  # "allow" lines generated by the script below
    deny all;

    include fastcgi.conf;
    fastcgi_intercept_errors on;
    fastcgi_pass php;
}

Here is a simple script that pulls the IP list and writes it into the Nginx include file consumed by the configuration above:

#!/bin/bash

FILENAME=jetpack-ipvs-v4.conf
CONF_FILE=/etc/nginx/conf.d/includes/${FILENAME}

# Fetch the current list of Jetpack IPv4 ranges
wget -q -O /tmp/ips-v4.txt https://jetpack.com/ips-v4.txt

if [ -s /tmp/ips-v4.txt ]; then
  # Turn each range into an nginx "allow" directive
  awk '{print "allow "$1";"}' /tmp/ips-v4.txt > /tmp/${FILENAME}

  [ -s ${CONF_FILE} ] || touch ${CONF_FILE}

  if [ "$(diff /tmp/${FILENAME} ${CONF_FILE})" != "" ]; then
    echo "Files differ, replacing ${CONF_FILE} and reloading nginx"
    mv -fv /tmp/${FILENAME} ${CONF_FILE}
    systemctl reload nginx
  else
    echo "File /tmp/${FILENAME} matches ${CONF_FILE}, not doing anything"
  fi
fi

rm -f /tmp/ips-v4.txt

It can be periodically executed by cron so that when the IP list changes, the configuration gets updated.
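For example, a cron entry along these lines would refresh the list nightly (the script name and path are just placeholders for wherever you saved the script above):

# /etc/cron.d/jetpack-ipvs (illustrative path and schedule)
30 4 * * * root /usr/local/sbin/update-jetpack-ips.sh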

Now, if any IP other than Jetpack’s tries to access /xmlrpc.php, it will receive a 403 Forbidden error.

Have fun!

CloudWatch INSUFFICIENT_DATA for Linux System Metric

I recently had to recreate the images for our production systems on EC2 because they didn’t have the ephemeral storage we require for our temporary tcpdump captures. Since they are EC2 instances, that part was quite easy.

We use mon-get-instance-stats.pl to monitor system metrics such as memory utilization and disk space.

Naturally, I copied the alarms from the old instances and just replaced the InstanceId values with the new ones. However, I was baffled to see CloudWatch complaining that the alarms were in INSUFFICIENT_DATA. When I tried to verify, mon-get-instance-stats.pl --verify showed the wrong InstanceId.

It wasn’t until I had ransacked the whole filesystem that I realized the Perl scripts cache instance information in /var/tmp/aws-mon. Remove (or move) that directory and all is well again.
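In other words, something like this (assuming the standard mon-scripts layout, and keeping a backup rather than deleting outright):

# move the stale cache out of the way, then re-verify
mv /var/tmp/aws-mon /var/tmp/aws-mon.bak
./mon-get-instance-stats.pl --verify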

I hope this saves someone some time.

DD-WRT: OpenVPN Server Using Certificates

GUIs confuse me sometimes, so I prefer to do configuration in text files. For DD-WRT, the OpenVPN server is available in the OpenVPN, OpenVPN Small, Big, Mega, and Giga builds: K2.6 Build Features. Since I have never used a router with USB storage, I can’t be sure, but I think OpenVPN can be installed using ipkg as well.

For this post I am going to assume you’re an OS X user, but Windows procedures shouldn’t be too different.

1. Generating certificates and keys

  1. Get Easy-RSA. You can either clone the git repository or download the package as a zip. Navigate to the folder where you downloaded/cloned Easy-RSA and change into the easy-rsa/2.0 directory.
  2. Edit the file vars. Below are the variables you might want to change. Take note of the KEY_SIZE variable; if you’re paranoid like me, leave it at 2048. It takes longer to generate the DH parameters, but not that much longer.
    # Increase this to 2048 if you
    # are paranoid.  This will slow
    # down TLS negotiation performance
    # as well as the one-time DH parms
    # generation process.
    export KEY_SIZE=2048
     
    # In how many days should the root CA key expire?
    export CA_EXPIRE=3650
     
    # In how many days should certificates expire?
    export KEY_EXPIRE=3650
     
    # These are the default values for fields
    # which will be placed in the certificate.
    # Don't leave any of these fields blank.
    export KEY_COUNTRY="MY"
    export KEY_PROVINCE="SELANGOR"
    export KEY_CITY="Puchong"
    export KEY_ORG="AdyRomantika"
    export KEY_EMAIL="[email protected]"
    export KEY_OU="RomantikaName"
     
    # X509 Subject Field
    export KEY_NAME="MYKEY1"
  3. Import the variables into the current shell:
    $ source vars
  4. Clean existing keys if any (WARNING: This deletes all existing certificates and keys)
    $ ./clean-all
  5. Generate the Certificate Authority (CA). The script will ask again for the parameters you entered in vars, so just press ENTER if you’re satisfied.
    • This will produce 2 files: ca.key and ca.crt
    $ ./build-ca
  6. Generate Diffie Hellman parameters
    • This will produce the file: dh{n}.pem where {n} is the key size specified in the vars file.
    $ ./build-dh
  7. Generate key for the server.
    • When asked for a password, just press ENTER; otherwise the key password will be prompted for each time the service is brought up.
    • When asked whether to sign the certificate, say Yes.
    • This will produce 3 files: server.crt, server.csr, server.key
    $ ./build-key-server server1
  8. Generate key for the clients. This step can be repeated in the future for more clients as needed.
    • When asked for a password, you can enter one so that the key password is prompted for whenever you connect. I recommend this to make things more secure.
    • When asked whether to sign the certificate, say Yes.
    • This will produce 3 files: client1.crt, client1.csr, client1.key
    $ ./build-key client1
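To recap, the whole sequence run inside easy-rsa/2.0 looks like this (server1 and client1 are just example names):

$ source vars
$ ./clean-all
$ ./build-ca
$ ./build-dh
$ ./build-key-server server1
$ ./build-key client1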

Continue reading DD-WRT: OpenVPN Server Using Certificates

CrashPlan 3.5.3 Headless Upgrade

A headless installation of CrashPlan will fail when it tries to update itself.

This short post assumes that you already had it set up and running successfully before, and aims only to save you some time by identifying the important files to copy.

Running the installer again would also work, but then more time goes into fixing the scripts, and the identity file might get overwritten, which costs even more time to figure out what happened.

So here goes. This is how we extract the tar archive and the cpio archive within it.

# tar zxf CrashPlan_3.5.3_Linux.tgz
# cd CrashPlan-install
# gzip -dc CrashPlan_3.5.3.cpi | cpio -i --no-preserve-owner

The files that changed from 3.4.1 to 3.5.3 (found thanks to rsync) are:

lang/txt.properties
lang/txt_sv.properties
lang/txt_th.properties
lang/txt_tr.properties
lang/txt_zh.properties
lib/com.backup42.desktop.jar
lib/com.jniwrapper.jniwrap.jar
lib/com.jniwrapper.winpack.jar

All I did was replace those files, and my CrashPlan installation is working fine.

If you actually arrived here looking for information on installing for the first time, this post can help you if you’re using a D-Link DNS-32x series NAS. Follow it from start to end (with some adaptation of the paths) and you’ll be fine.

However, you might have to change paths and do a few extra steps to get it working. At one point CrashPlan will appear to run fine, but you’ll see that it’s not uploading files.

This post can help you troubleshoot the Java issues by replacing libraries.

Off the top of my head, I remember having to insert a new library with the correct architecture into jna-3.2.5.jar, replace libmd5.so, and replace libjtux.so. I also had to link /ffp/usr/local/crashplan/libffi.so.5 into a location accessible by the system loader.
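Roughly, those fixes looked like this; all paths and the target architecture below are from memory and purely illustrative, so adapt them to your own NAS and CrashPlan location:

# cd /ffp/usr/local/crashplan
# jar uf lib/jna-3.2.5.jar com/sun/jna/linux-arm/libjnidispatch.so   # native lib matching your CPU; path inside the jar varies by JNA build
# cp /path/to/arch-correct/libmd5.so .
# cp /path/to/arch-correct/libjtux.so .
# ln -s /ffp/usr/local/crashplan/libffi.so.5 /ffp/lib/libffi.so.5    # make libffi visible to the system loader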

Good luck!

Error Compiling djbdns and daemontools

While attempting to compile djbdns 1.05 and daemontools 0.76 on CentOS 5.5, I received this error:

/usr/bin/ld: errno: TLS definition in /lib/libc.so.6 section .tbss mismatches non-TLS reference in envdir.o

The problem can be eliminated by adding:

-include /usr/include/errno.h

to the conf-cc file in each tarball. Don’t forget to install gcc first if you have a minimal installation.
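So conf-cc in each source directory ends up looking roughly like this (the default compiler flags may differ slightly between the two packages):

gcc -O2 -include /usr/include/errno.h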

By the way, please remember to follow the installation instructions for daemontools exactly as described, or you’ll end up with the software somewhere undesirable. Well, you can change /package to be elsewhere, but I stupidly used /root as a test, so the svscanboot process was unable to execute programs inside /root because they run as unprivileged users.

Although this software feels really old-school to me, it has a very small memory footprint and runs very fast. If you’re looking into DNS, consider PowerDNS too, as it has very good statistics capabilities.

Do I need to reboot the machine after increasing the maximum number of open files at /etc/security/limits.conf?

No, you don’t need to. This morning I struggled to convince someone that the server does not need a reboot. It was because of this: Increasing the number of file handles on Linux workstations.

ulimit – Provides control over the resources available to the shell and to processes started by it, on systems that allow such control.

limits.conf – Configuration file for the pam_limits module

It takes effect immediately upon re-login. It’s hard to explain things that only you understand internally; I wish I had formal Red Hat training so that I could explain it better.
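For example, with entries like these in /etc/security/limits.conf (the values here are just illustrative):

*    soft    nofile    4096
*    hard    nofile    65535

you only need to log out and log back in, then check from the new shell:

$ ulimit -n
4096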

In the end I just rebooted the system so that the other person, who thinks they know everything, would be satisfied. Now they are.

If you’re struggling to find some proof, you can probably forward this URL, which should have a better reputation than this blog.

Setting DD-WRT Cron Job Through Command Line

I managed to get OpenVPN running on my DD-WRT v2.2 router, with the instructions from the wiki.

However, after a few reboot tests I saw that OpenVPN died immediately after it started, with no traceable reason.

Sep 12 00:51:10 192.168.xx.xx openvpn[3940]: TUN/TAP device tap0 opened
. . .
Sep 12 00:51:11 192.168.xx.xx openvpn[3949]: Initialization Sequence Completed

I suspect it has to do with the fact that my ppp0 (ADSL) connection takes some time to come up.

So I thought of doing a check using cron – if OpenVPN is not running, run it.

The command I wrote was:
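It was something to this effect, checking whether the process exists and starting it if not (the paths here are illustrative, so adjust them to where your OpenVPN binary and config actually live):

*/5 * * * * root ps | grep -v grep | grep 'openvpn' > /dev/null || /usr/sbin/openvpn --config /tmp/openvpn.conf --daemon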

But the bad news is that when I entered this command in the cron box inside the Web Administration GUI, the single quotes got translated into HTML entities, and that ended up permanently in nvram and in /tmp/cron.d/cron_jobs. Damn.

So I thought of using the command line. Here’s what I did in the SSH shell:
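The GUI cron box is stored in the cron_jobs nvram variable, so setting it directly from the shell keeps the quotes intact. Roughly (reusing the illustrative check from above):

nvram set cron_jobs="*/5 * * * * root ps | grep -v grep | grep 'openvpn' > /dev/null || /usr/sbin/openvpn --config /tmp/openvpn.conf --daemon"
nvram commit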

At this point, if you don’t want to reboot your router, put the same line into /tmp/cron.d/cron_jobs and restart cron using stopservice cron && startservice cron.

And I’m all set!

I hope the IT team from my company is not reading this, but I also have a vpnc daemon running on the router to connect to my company network and I do the same check as above 😉

OpenVZ On Ubuntu Or Debian

As a SysAdmin I have been using OpenVZ since it was introduced, and trust me, it has not always been this easy. I used to take care of 20 physical servers, with about 5 machines replaced yearly. Since the servers were running different Linux distributions on different hardware, it was decided that, to standardize everything, OpenVZ would be deployed so that all hosts run Debian stable.

OpenVZ is container-based virtualization for Linux; it only separates the guest servers in terms of resources. This differs from implementations such as VMware, Xen, and VirtualBox, which involve hardware virtualization. Because of this, the guests (called VEs or VPSes) share the host kernel and can only run Linux. Which distribution as the guest? The choice is yours.

Undoubtedly most of you have heard of Virtuozzo – it is built on OpenVZ. As a matter of fact, the company that produces Virtuozzo is the one funding and supporting the development of OpenVZ.

The fact that it can run any distribution you like means you can study and learn how to maintain different distributions. Even the smallest difference can confuse a rookie SysAdmin, for example:

  • Debian’s Apache init script is shipped as /etc/init.d/apache or /etc/init.d/apache2, while in CentOS it’s called /etc/init.d/httpd
  • In Debian, to manage init scripts and runlevels we use update-rc.d, while in CentOS we use chkconfig, even though both do exactly the same thing (see the sketch below)
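For example, using Apache as the service:

# Debian: enable the service at the default runlevels
update-rc.d apache2 defaults
# CentOS: enable the service at runlevels 2, 3, 4 and 5
chkconfig --level 2345 httpd on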

There are many other differences in terms of implementation that I rather not discuss here.

Click on Continue Reading if you’re interested to read more…
Continue reading OpenVZ On Ubuntu Or Debian

OSCC: The Silent Mirror

All hyped up about sharing Linux knowledge with friends, especially dirn, I wanted to download Ubuntu for my own use, mainly because I am a strict Debian user. Browsing the mirror list on the official Ubuntu site, I was disappointed by the speed of most of the mirrors I selected; the fastest I could get had an ETA of 4 hours.

Then a bell rang in my head and I went looking for OSCC. This is the closest mirror I can reach over my ADSL. The problem with it is that it sometimes has old files, especially for the Debian and CentOS repositories, so it is always my last choice when looking for files. I don’t blame them, as the rsync process must be really slow when pulling files from the first-level mirrors.

The speed is very satisfactory because the mirror is located in Cyberjaya, Malaysia as you can see below:

I come back to this mirror every time I am disappointed with the speed of overseas mirrors, as it carries several other projects too.

Back in 2002 I almost became an employee of OSCC after scoring good marks in a Linux test done at DRB-HICOM, and went for an interview at OSCC. I failed to get the position because when asked “How do you change init scripts and runlevels in Red Hat?”, I answered “I use ln to make symbolic links from /etc/init.d to /etc/rc.{0-6}”. They said, “No, you should use chkconfig“. My answer was not incorrect for a self-taught Linux user, but by-the-book users will feel otherwise. I was annoyed, but I don’t hold any grudge against them. I do, however, feel lucky I didn’t get the job.

Checking Limits on OpenVZ / Virtuozzo

Do you use virtual server hosting for your websites? It’s commonly known as a VPS. Most hosting companies now use Virtuozzo, a proprietary operating system virtualization product from SWsoft, Inc.

The OpenVZ project is an open source community project supported by SWsoft and is intended to provide access to the code and ultimately for the open source community to test, develop and further the OS virtualization effort.

A couple of months ago, before I had tried OpenVZ, a friend asked me about a problem he was facing with his VPS, which hosts streaming videos and receives millions of hits per day. He was getting errors such as:

  • cannot fork
  • Error running script: not enough memory
  • Fork failed

Now that I know OpenVZ well, I understand what caused it: his software and services were using more resources than the hosting company had allocated. If you are using such a service, one good way to check is to run this command:

# cat /proc/user_beancounters

The output would look like this:

   uid  resource           held    maxheld    barrier      limit    failcnt
  101:  kmemsize         473318     927071    2752512    2936012          0
        lockedpages           0          0         32         32          0
        privvmpages        1611      62436       4915       5357         40
        shmpages              1         31       8192       8192          0
        dummy                 0          0          0          0          0
        numproc               9         15         65         65          0
        physpages           887      32985          0 2147483647          0
        vmguarpages           0          0       6144 2147483647          0
        oomguarpages        888      32985       6144 2147483647          0
        numtcpsock            0          4         80         80          0
        numflock              1          3        100        110          0
        numpty                1          1         16         16          0
        numsiginfo            0          3        256        256          0
        tcpsndbuf             0       7856     319488     524288          0
        tcprcvbuf             0      95460     319488     524288          0
        othersockbuf       6660       8880     132096     336896          0
        dgramrcvbuf           0       8364     132096     132096          0
        numothersock          5          8         80         80          0
        dcachesize            0          0    1048576    1097728          0
        numfile             168        399       2048       2048          0
        dummy                 0          0          0          0          0
        dummy                 0          0          0          0          0
        dummy                 0          0          0          0          0
        numiptent            10         10        128        128          0

This information is important because you most likely can’t see what configuration your VPS is running with otherwise.

Simple meanings of the columns:

  • resource – name of the resource
  • held – current usage
  • maxheld – max ever used
  • barrier – soft limit of the resource
  • limit – hard limit that the VPS can never exceed
  • failcnt – failure counter: how many times a request for the resource was denied

The most important thing to look at is the failcnt column; in an ideal situation you should see only zeros. In this case, you can see that privvmpages has failed 40 times, because I purposely lowered the memory allocated to the VPS and ran some programs.
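A quick way to spot the problem rows is to print only the lines whose failcnt is non-zero, for example:

# awk '$NF ~ /^[0-9]+$/ && $NF > 0' /proc/user_beancounters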

You will never be able to change the resource allocation from within the VPS, but at least you know what your problem is, and it is a good starting point for a discussion with the hosting company.

And oh yes, the page-based values are in 4 KB blocks, which means that if the setting is 4915 the actual value is 19660 KB (4915 * 4). Of course this only applies to some parameters, not to countable values such as numpty.

Good luck!

PHP 5 In CentOS 4.5

Just a short note: for users of CentOS 4.5 who are looking to move PHP to version 5 instead of the default 4.3.9, there is a clean and easy way to upgrade.

  1. Open up /etc/yum.repos.d/CentOS-Base.repo and look for the section centosplus:

    [centosplus]
    name=CentOS-$releasever - Plus
    mirrorlist=http://mirrorlist.centos.org/...
    #baseurl=http://mirror.centos.org/...
    gpgcheck=1
    enabled=0
    gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-centos4
    priority=2
    protect=1

  2. Change enabled=0 to enabled=1
  3. Save the file
  4. Run yum update php*

And the rest is up to you… when it finishes, restart Apache (service httpd restart) and you’ll be up and running with PHP 5.

How do you check the PHP version on the server?

Use rpm -qa | grep php and you’ll see the list of installed PHP packages. In this case PHP on the server has been upgraded to PHP 5.

php-pdo-5.1.6-3.el4s1.7
php-cli-5.1.6-3.el4s1.7
php-pear-1.4.11-1.el4s1.1
php-ncurses-5.1.6-3.el4s1.7
php-mbstring-5.1.6-3.el4s1.7
php-pgsql-5.1.6-3.el4s1.7
php-gd-5.1.6-3.el4s1.7
php-odbc-5.1.6-3.el4s1.7
php-common-5.1.6-3.el4s1.7
php-5.1.6-3.el4s1.7
php-snmp-5.1.6-3.el4s1.7
php-ldap-5.1.6-3.el4s1.7
php-mysql-5.1.6-3.el4s1.7
php-devel-5.1.6-3.el4s1.7
php-xmlrpc-5.1.6-3.el4s1.7
php-imap-5.1.6-3.el4s1.7
php-xml-5.1.6-3.el4s1.7

Good luck!

Nikon Capture NX on Linux

I am feeling a little bit slow today, because my notebook is slow. LOL. I think this relates directly to the fact that every day at work I use a new Lenovo T60 notebook, which is much faster than my two-year-old personal notebook. I used to dual-boot the personal notebook with Debian, where the speed is acceptable, but since I acquired a DSLR it has become a hassle to switch OS. And I am not supposed to install non-approved software on the company computers.

My main reason for not running 100% Linux is that most graphics editors will not run properly, and most of the time fail to run under Wine. One piece of software I use a lot is Adobe Photoshop. A couple of months ago I tried running CS2 under Wine and it didn’t work, so I gave up on that. Recently CS3 was released, but I didn’t bother to try it at all, to avoid any disappointment.

Nikon Capture NX

Since I take all of my photos in RAW, or to be more precise in NEF (Nikon Electronic Format), I need either Photoshop or Nikon’s software to process the pictures I take. I’ve tried using dcraw and other open-source RAW programs, but the results just ain’t the same. Too bad. Or perhaps I’m simply not an expert at using those tools, because some people do get better output. Quote from dcraw: “when used skillfully, produces better quality output than the tools provided by the camera vendor”.

Anyone have ever tried running CS3 or Nikon Capture on Linux (and succeeded)?

Iceweasel


Have you ever heard of the browser named Iceweasel? Of course not, if you’re not using Debian. One of my machines at home runs a Debian Etch installation (my torrent box), and a few days ago I ran apt-get upgrade to upgrade the packages.

I was quite annoyed at first, as it was trying to install a new package (not to mention the huge size), but I let it proceed anyway. Earlier today I launched the web browser in Xfce, and Iceweasel was what loaded…

Iceweasel is a rebranded Firefox, and exists as two independent projects: one by Gnuzilla, and the other by Debian.

Iceweasel was created after Mozilla demanded that Debian comply with policies and terms that Debian found unacceptable.

The other products were also rebranded: Thunderbird became Icedove and SeaMonkey became Iceape.

The current release of Gnuzilla IceWeasel is based on the 1.5.0.7 version of Mozilla Firefox, while the current version of Debian Iceweasel is based on the 2.0.0.1 release of Firefox.

deer_park_globe.png

The most obvious reason for the name change was that Mozilla demanded Debian retain all of Mozilla’s branding if it was to continue using the Firefox name. However, because the Debian Free Software Guidelines do not allow non-free artwork and plugins, Debian was unable to comply. The generic, non-branded icon on the right was what Debian used for Firefox.

What I can see so far is that only the name has changed. All of my plugins can still be used and upgraded normally. On my active machines, however, I always use the extracted package from Mozilla, so I would never have noticed Iceweasel’s existence there.

Iceweasel. Cute name?

ALSA Support in Skype

Finally, Skype has released a beta version with ALSA support: 1.3.0.30_API

Skype Beta with ALSA

Hopefully all the troubles with “Problem with Sound Device” will be history. For users with very old kernels who prefer to use OSS, the option is still there. The problem with Skype using OSS on modern systems is that it keeps failing to close /dev/dsp after using it, and the only way to make it work again is to restart Skype. It’s a hassle and a headache. Believe me, I used to be a SysAdmin (until a week ago) for a 99.99% Linux desktop company, with Skype as one of the primary communication tools.