Remove/Hide WooCommerce Mine Status Filter

So this has been annoying me for ages.

I don’t really get what it’s for, more so because I’m the only shop admin! Either all the orders are “Mine” or none of them are.
I remember searching for why previously and came across a bug during a WooCommerce database update that caused orders to be re-assigned to owner ID 1 if they were set to 0, or something similar. But it didn’t really explain the purpose of this filter, and I didn’t get any further in understanding why anyone might find it useful.
There was a suggestion that it would come into play for orders created using the ‘Add Order’ button. I can see that may be useful; I do occasionally create orders manually for customers who are having problems and this could be a way of finding them, but there’s no way I’ve manually created the hundreds or thousands that are there (depending on which site I’m looking at, we have a few).

Not getting anywhere on finding how to disable or hide it, and just finding lots of info on custom statuses, I gave up. I did consider using CSS to just hide it, but I really hate working with CSS when I have to. This wasn’t annoying enough that I had to.

But today I noticed our ‘Pending Payment’ count was rather high. We’ve had a busy month and I haven’t had time to pay attention to the small details, but almost 300 orders ‘Pending Payment’ raised my interest. It was quite simply that the cron job to clear them wasn’t there, and I vaguely remembered this happening a few years back. Clearing the inventory hold timeout, saving, then setting it back to 60 minutes (and saving again) recreated the cron job. That fired straight away and cancelled half of the orders there. I gave it a few minutes and force-ran the cron job again, which cleared the queue right down to about 8 orders. 2 of them are valid carts going through now, and 6 are manual orders that may or may not have been paid (I need to check each of them) but were holding orders (not really shipping anything), so not an issue.
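For reference, a quick way to check whether that cleanup event actually exists (and to kick it off by hand) is something like this, run via wp eval-file or dropped into a temporary snippet. It’s only a sketch and assumes the hook is still called woocommerce_cancel_unpaid_orders:

// Is the unpaid-order cleanup scheduled? (Sketch; hook name per current WooCommerce.)
$next = wp_next_scheduled( 'woocommerce_cancel_unpaid_orders' );
if ( $next ) {
    echo 'Next cleanup run: ' . gmdate( 'Y-m-d H:i:s', $next ) . "\n";
} else {
    echo "Cleanup event missing - re-save the hold stock setting to recreate it.\n";
    // Or fire the cleanup once by hand:
    // do_action( 'woocommerce_cancel_unpaid_orders' );
}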

Once I finished with ‘Pending Payment’ I spotted the ‘Mine’ filter again, which brought back ‘why is it there, can I get rid of it?’ It’s totally pointless for me. It just takes up room.

Well people, I can say you can hide it 😀 and it’s pretty easy to do.

You can add the following to your functions.php:

function hide_mine_filter_shop_orders( $views ) {
    // Unset the option from the views
    unset( $views['mine'] );
    return $views;
}
add_filter( 'views_edit-shop_order', 'hide_mine_filter_shop_orders' );

And it’s now gone:

** The pics are from 2 different sites. I’ll be applying it to all shortly 🙂

I know it’s not much, but I’ve never used it, and with lots of different order statuses it’s much better to have one less.

I should add, just in case I ever read this again, that I actually put that in one of our own plugins, the one that restores the Items Purchased column that was removed from WooCommerce core but is crucial for dealing with orders quickly (that still should have been a tick option in Screen Options, but the devs wouldn’t take that on board).

WooCommerce Nag Notice

We all (mostly) understand updates are important, and I’m sure there were only good intentions in adding it, but the
‘Connect your store to WooCommerce.com to receive extensions updates and support.’
nag notice is ridiculous. A non-dismissable notice should never be allowed. I get that you don’t want people to just quickly click the dismiss button, so why not put an option at the bottom of the connect page: ‘Dismiss this alert. I understand what this means’. Even if it only dismissed it for, say, 3 months and you then had to do it again, it would still be annoying but at least easier to deal with.

But no, in someone’s infinite wisdom they’ve decided you absolutely must connect your store and have no other option. Well, here’s the code to add to stop that nag notice:

add_filter( 'woocommerce_helper_suppress_admin_notices', '__return_true' );

** It is your own responsibility to keep your site up to date.
** Disabling this notice may disable other WooCommerce notices.

There are of course legitimate reasons why you wouldn’t want to connect your store; managing your updates your own way should always be allowed. So devs, stop trying to dictate how you want/think things should run, choice is the key.

WooCommerce 3.3.0+

Yesterday I upgraded a store to WooCommerce 3.3.1 from whatever the hell it was on before.

Today I’ve spent the day putting things right 😠 All the issues are around the new Orders UI, and it seems like petty small stuff, but it’s safe to say I’m hating the new UI because I’ve wasted the day dealing with over 50 complaints.
For those unfamiliar, here are the proposed changes (the end result is a little different): https://woocommerce.wordpress.com/2017/11/16/wc-3-3-order-screen-changes-including-a-new-preview-function-ready-for-feedback/#comment-4137

I’ve so far fixed some of the issues, such as rearranging the columns (why the actual fu*k status was moved I’ll never know or understand). The code below is probably not the best way to do it, but it works:

// Function to change the order of the columns
function woocommerce_myinitials_restore_columns( $columns ) {
    // The keys below set the order; the labels get filled in from the original array.
    $new_order = array(
        'cb'                => '',
        'order_status'      => '',
        'csv_export_status' => '', // I don't think this one's standard, it's part of a plugin we use.
        'order_number'      => '',
        'order_items'       => '',
        'billing_address'   => '',
        'shipping_address'  => '',
        'order_date'        => '',
        'order_total'       => '',
        'wc_actions'        => '',
    );
    // Merge in the real labels (any columns not listed above get appended at the end).
    foreach ( $columns as $key => $value ) {
        $new_order[ $key ] = $value;
    }

    return $new_order;
}
add_filter( 'manage_edit-shop_order_columns', 'woocommerce_myinitials_restore_columns', 99 );

So that’s one issue solved. The next was that clicking anywhere in the row opens the order (yeah, I’m sure that’s nice, but if you rely on tapping a touchscreen, i.e. click and drag to scroll, then this causes problems). The following code adds the no-link class to the tr and stops this shitty behaviour:

// Add the 'no-link' class to rows on the admin orders list so the
// whole-row click-to-open behaviour is disabled.
function woocommerce_myinitials_restore_columns_add_nolink_class( $classes ) {
    if ( is_admin() ) {
        $current_screen = get_current_screen();
        if ( $current_screen && 'edit' === $current_screen->base && 'shop_order' === $current_screen->post_type ) {
            $classes[] = 'no-link';
        }
    }
    return $classes;
}
add_filter( 'post_class', 'woocommerce_myinitials_restore_columns_add_nolink_class' );

Thanks on this one goes to ‘rodrigoprimo’ for the initial fix and others who picked it up and added a bit to it https://github.com/woocommerce/woocommerce/pull/18708.

I’ve added the following as a stylesheet

.post-type-shop_order .wp-list-table td, .post-type-shop_order .wp-list-table th {
   vertical-align: unset;
}

.post-type-shop_order .wp-list-table td.order_status {
   vertical-align: middle;
   text-align: center;
}

This places the order details back at the top of the row and stops the previously restored items sold link jumping around, but I’d rather the new status text stayed in the middle, in line with the checkbox. Hopefully we’ll get this back to an icon soon.

All of the above I’ve added to our custom plugin; you could do the same or add it to your functions.php.

Outstanding issues:
1. Getting back the email address. There is some hope this may come back officially, but I’ll be fixing it for us tomorrow (a rough sketch of the sort of thing I mean is below).
2. The status being text not icons. I understand this makes more sense to new users, but if you have some long statuses like we do, the text doesn’t fit and we’ve got 10 statuses all looking the same. Having coloured icons worked for us, and if you weren’t sure, hover the mouse. I’ll be looking to get them back to icons tomorrow.
3. The date column, just why! Why would anyone think not putting the actual date and time of the order here is a good idea? The stupid ‘X mins ago’ is no use at all.
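For issue 1, here’s a rough sketch of the sort of thing I mean, not the official fix, just reusing the same hooks as the column code above (the function names are only examples; wc_get_order() and get_billing_email() are standard WooCommerce 3.x):

// Sketch only: add a Billing Email column to the orders list.
function myinitials_add_email_column( $columns ) {
    $columns['billing_email'] = 'Email';
    return $columns;
}
add_filter( 'manage_edit-shop_order_columns', 'myinitials_add_email_column', 100 );

// Sketch only: render the column contents for each order row.
function myinitials_render_email_column( $column, $post_id ) {
    if ( 'billing_email' === $column ) {
        $order = wc_get_order( $post_id );
        if ( $order ) {
            echo esc_html( $order->get_billing_email() );
        }
    }
}
add_action( 'manage_shop_order_posts_custom_column', 'myinitials_render_email_column', 10, 2 );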

The new preview window looks good, but I really don’t see it getting much use; we need the important data on the front. If it’s not that important, just open the order. The WooCommerce devs decided to screw with it, but I don’t think there’s an understanding that if you’re going as far as opening the preview window, then you were probably used to just editing the order, which is probably still going to be the case.

So summing up today: I’ve had a shit day of people moaning at me because some developers decided to improve something that really didn’t need touching. Doesn’t sound like any developer I’ve ever known 😂. I’m now getting something to eat before I go near anything I’d planned on working on today.

Setting NGINX + PHPLDAPADMIN location & php for subfolder

Spent far too long trying out different ways to get this to work. I needed to set up a server block listening on the local IP address to restrict things like phpldapadmin to internal requests, but I hit problems with nginx appending the location to the root path, and PHP having no idea where to get the files from.

Here’s the config that ended up working:

server {
    listen 80;
    root /var/www/default/;

    index index.html index.htm index.nginx-debian.html;

    server_name 192.168.0.3;

    location = / {
        try_files $uri $uri/ =404;
    }

    location /phpldapadmin {
        alias /usr/share/phpldapadmin/htdocs;
        index index.php index.html index.htm;

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;

            # With php7.0-fpm:
            fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            fastcgi_intercept_errors on;
        }
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;

        # With php7.0-cgi alone:
        # fastcgi_pass 127.0.0.1:9000;
        # With php7.0-fpm:
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_intercept_errors on;
    }

    access_log /var/log/nginx/localip-access.log;
    error_log /var/log/nginx/localip-error.log;
}
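Nothing clever, but after dropping that in I just sanity-check it with the usual (assuming you’re on the internal network and the IP matches your server_name):

sudo nginx -t && sudo systemctl reload nginx
curl -I http://192.168.0.3/phpldapadmin/index.php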

Redis Sentinel PHP Session Handler

In the last few weeks I’ve been rebuilding servers & all services. I like to do this every so often to clear out any old junk, and take the opportunity to improve/upgrade systems and documentation.

This time around it’s kind of a big hit though. While I’ve had some services running with HA in mind, most would require some manual intervention to recover. So the idea this time was to complete what I’d previously started.

So far there are 2 main changes I’ve made:

  1. Move from a MySQL master/master setup to a MariaDB cluster.
  2. Get Redis working as HA (which is why you’re here).

I’ll bore everyone in another post with the issues on the MariaDB cluster; this one concentrates on Redis.

The original setup was 2 Redis servers, 1 master and 1 slave, with the PHP session handler configured against a hostname listed in /etc/hosts. However, this time, as I’m running a MariaDB cluster, it kind of made sense to try out a Redis cluster (don’t stop reading yet). After reading lots, I decided on 3 servers, each running a master and 2 slaves. A picture here would probably help, but you’ll just have to visualize it. 1, 2 & 3 are servers, Mx is master of x, Sx is slave of x. So I had 1M1, 1S2, 1S3, 2S1, 2M2, 2S3, 3S1, 3S2, 3M3.

This worked, in that if server 1 died, its master role was taken over by either 2 or 3, and some nagios checks and handlers brought it back as the master once it was back online. Now I have no idea if this was really a good setup (I didn’t test it for long enough), but one of the problems I encountered was where the PHP sessions ended up. I (wrongly) thought the PHP session handler would work with the cluster to PUT and GET the data, so I gave it all 3 master addresses. Reading up on Redis, if the server you’re asking doesn’t have the data, it will tell you which one does, so I thought it wasn’t really a problem if 1M1 goes down and the master becomes 2M1, because the other 2 masters will know and will say the data is over there. In manual testing this worked, but PHP sessions don’t seem to work with being redirected (and this is also a problem later).

So after seeing this as a problem, I thought maybe a cluster was a bit overkill anyway and simplifying it to 1 master and 2 slaves would be fine.

I wiped out the cluster configuration and started again, but I knew I was also going to need Sentinel this time to manage which one is the master (I’d looked at it before, but went for cluster instead. Time to read up again).

After getting a master up and running and then adding 2 slaves, I pointed PHP sessions at all 3 servers (again, a mistake). I was hoping (again) that the handler would be sensible enough to connect to each one and, if it’s a slave (read only), detect that it can’t write and move on to the next. It doesn’t. It happily writes errors in the logs for UNKNOWN.

So I need a way for the session handlers to also know which is the current master, and just use this.

My setup is currently 3 MariaDB/redis servers S1, S2 & S3 and 2 Nginx servers N1 & N2.

I decided to install redis-sentinel on all 5, with a quorum of 3. The important bit in my sentinel config is:

sentinel client-reconfig-script mymaster /root/SCRIPTS/redis-reconfigure.sh
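For context, the rest of my sentinel config is pretty standard; roughly this on each box (the IP is a placeholder, the quorum of 3 is as mentioned above, and the timeout values here are just examples rather than anything you must copy):

port 26379
sentinel monitor mymaster 192.168.0.x 6379 3
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1
sentinel client-reconfig-script mymaster /root/SCRIPTS/redis-reconfigure.sh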

and the redis-reconfigure.sh script:

#!/bin/bash
adddate() {
	while IFS= read -r line; do
		echo "$(date) $line"
	done
}

addrecord() {
	echo "## Auto Added REDIS-MASTER ##" >> /etc/hosts
	echo "$1 REDIS-MASTER" >> /etc/hosts
}

deleterecord() {
	sed -i '/REDIS-MASTER/d' /etc/hosts
}

# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
# $1 $2 $3 $4 $5 $6 $7

if [ "$#" -eq "7" ]; then
	if grep -q "REDIS-MASTER" /etc/hosts; then
		echo "Delete and Re-Add $6 REDIS-MASTER" | adddate >> /var/log/redis/reconfigure.log
		deleterecord
		addrecord "$6"
	else
		echo "Add $6 REDIS-MASTER" | adddate >> /var/log/redis/reconfigure.log
		addrecord "$6"
	fi
fi

Basically this script is run whenever the master changes. (I need to add some further checks to make sure <role> and <state> are valid, but this is how it stands for quick testing.)
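If you want to test the whole chain without actually killing a Redis process, Sentinel will fail over on demand; something like this from any of the sentinel boxes:

# force a failover, then watch the reconfigure script do its thing
redis-cli -p 26379 SENTINEL failover mymaster
tail -f /var/log/redis/reconfigure.log
grep REDIS-MASTER /etc/hosts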

I’ve then changed the session path:

session.save_path = "tcp://REDIS-MASTER:6379"

and again in the pool.d/www.conf:

php_admin_value[session.save_path] = "tcp://REDIS-MASTER:6379"

Quite simply, I’m back to using a hosts entry which points at the master, but the good thing (which I wasn’t expecting) is that I don’t have to restart php-fpm for it to detect the change in IP address.
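If you ever want to double-check that the hosts entry matches what Sentinel currently thinks, any of the sentinels will tell you:

# returns the IP and port of the current master for 'mymaster'
redis-cli -p 26379 SENTINEL get-master-addr-by-name mymaster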

It’s probably not the most elegant way of handling sessions, but it works for me. The whole point of this is to be able to handle spikes in traffic by adding more servers N3, N4, etc., and providing they are sentinels running the script, they will each continue to point at the master.

I did think about writing this out as a step-by-step guide with each configuration I use, but the configs are pretty standard, and as it’s a bit more advanced than just installing a single Redis instance to handle sessions, I think the above info should really only be needed by anyone with an idea of what’s where anyway.

I still don’t really get why the Redis stuff for PHP doesn’t follow what the Redis server tells it, i.e. ‘the data is over there’. I suppose it will evolve. If you’re programming up your own use of Redis like the big boys, then you’ll be programming for that. I feel like I’m doing something wrong or missing something; I’m certain I’m not the only one who uses Redis for PHP sessions across multiple servers behind a load balancer, but there is very little I could find googling beyond those that just drop a single Redis instance into place. As this becomes a single point of failure, it’s pretty bizarre to me that every guide seems to go this way.

If anyone does read this and actually wants a step by step, let me know in the comments.

Just to finish off I’ll give an idea of my current (it’s always changing) topology.

2 Nginx Load Balancers (1 active, 1 standby)

2 Nginx Web Servers with PHP (running Syncthing to keep files in sync; I tried GlusterFS, which everyone raves about, but I found it crippling, so I’m still looking for a better solution).

3 Database Servers (running MariaDB in cluster setup and Redis in 1 Master, 2 Slaves configuration).

1 Monitoring Server (running nagios, checking pings, disk space, processes, ports, users, RAM, CPU, MariaDB cluster status, Redis, that the DB & Redis are accessible from the web servers, VPNs, DNS and probably a load more).

How I started using WordPress

I should make it clear from the outset, this post isn’t going to be solving anything. I’ve spent about 3 days working on stuff and this is just bugging me.

Let’s go back to around this time last year. A couple of friends and I were working on how to get some money back from a facebook page one of them had set up a few years prior. I had been working with him on it since a few weeks after he set it up, and we’d pushed around ideas to sell some merchandise alongside it a few times, but never really got anywhere.

He had made a rash decision one night to use an ‘online website creation’ provider to get a site online. From the start I hated it. Not the idea, I thought it was about time we got our own site running, but he spent a few weeks tweaking about 8 pages to look really good using their WYSIWYG editor, then wanted me to change a few things in the code that weren’t right. It was an absolute nightmare! I can’t remember the name of the site but I think it had a W, E and X somewhere in the title.

It was a paid “solution”, and I think it cost about £40 per month by the time he’d added a mailing list option to capture email addresses (not actually handle any emails) and a few other bits.

Coming from more of an IT position, my main concern was around load/spikes being handled. There was very little information about how well they could handle this (and I think we found out the reason why). We finally put this live, posting the link to about 40k followers.

Watching the nice Google Analytics (which I had to add to each page, because they didn’t have a drop-in tracking code option), within seconds we hit around 150 hits per second. This continued for a few hours, but 2 problems became apparent:

  1. The site was struggling, and we would probably have been hitting a higher number if it hadn’t been. We were getting positive feedback and people understood it was busy, but I still wasn’t happy that we were paying for them to handle this and it just wasn’t being handled.
  2. And this really feeds into point 1: he’d set up a site that was pretty static! There was nothing to get people coming back for more. Yes, there was a news page that we could update, but other than the mailing list form there was nothing interactive. (So, back to point 1, static content should really have been handled 100x better.)

Anyway we took that for what it was, a basic site with a bit of info and something to get us started.

We already had a ‘Shop coming soon’ page, so the next thing we were trying to figure out was what are we going to sell and how?

Initially we thought t-shirts, and started looking at some of the ways we could do this. 2 main providers seemed to jump out: zazzle and cottonpress (I think, it was a while ago). While both had some good offerings, neither really grabbed us. I can’t remember which, but one of them deleted an image we uploaded for copyright reasons (it was our logo, we had it plastered everywhere, and the account was signed up with a domain name using the same logo), and they wanted us to fill in and snailmail/fax some copyright forms and re-upload the logo. Considering we were only seeing what we could do at the time, we decided to drop them as an option. If we have to jump through these hoops with everything we do, we’ll spend more time filling in their paperwork than anything else.

Time went on, visits to the site died down (did I mention lack of content/interactivity), and we still hadn’t sorted out products, a store, a business.

We continued with the facebook page and still poked around ideas on how to get a site/shop running. I spent a good few weeks working with osCommerce (I’d previously used it a little for another project idea, but it never went live) and finally had something to show: a semi-working shop front (it had no products).

We discussed that neither of us really had a clue about setting up a proper business. I’m all IT and have no interest in writing business plans or doing business meetings (I should mention that in a previous role I was an IT manager and regularly had to be part of “grown up” meetings; I’m a tech, I hate people, I hate meetings, give me something broken and I’ll fix it, give me a problem and I’ll work out a solution. But in NO WAY do I want to be taking part in any more business meetings).

A few months later I was helping him move. Another friend of his was also helping. I’d met him once before but didn’t get chatting then. He mentioned he was in his last year of university and was studying business and finance! Just how he hadn’t thought of this before I don’t know, but instantly we knew he was coming on board 🙂

We spent about 3 hours in McDonald’s discussing what we had, what we’d like to do, and just how crap we’d been so far. Within this conversation we talked about selling t-shirts/mugs/bags etc. Just like a genie out of a lamp, our 3rd comes out with ‘oh, my almost father-in-law does printing stuff, I know he does mugs. Shall I speak to him?’ Just like a match made in heaven, we suddenly had our missing piece! Someone who should have more of an idea on the business side (or at least know someone to ask) and connections to a printer for the kind of stuff we want to sell. You just couldn’t make it up; he’d known this guy for a few years and never thought to ask him about business stuff.

Things started moving forward, slowly at first, but at least they were moving. We met up with “almost” father-in-law and went through some designs and processes. We set up a business. I continued work on the shop website and we took down the other one that was costing too much and not really doing anything.

In around October we were set. Nothing spectacular, about 9 mugs and a few t-shirts. The mugs would be the easiest; we just send the order to “almost” and he takes care of printing, packaging and sending. The t-shirts would be a little different, as we’d have to get a template made for them and couldn’t afford the cost until we had some orders in.

We launched. I had tried to over-spec the server(s), but in itself this was tricky. There were no real stats on how well osCommerce could perform on certain hardware, and scalable VPSes such as DigitalOcean’s current system just didn’t exist. Scaling would mean taking out a new 30-day server and moving everything over to it. Certainly not a 5 minute job, and definitely not something to start an hour after we’d launched. We’d just have to bite the bullet and see.

My memory of launch night is fuzzy to say the least. I think I’d been working 36-48 hours trying to finish stuff off. I had a big list of checks and can’t remember doing half of them.

Our page audience had grown to about 70k, so I was very nervous. We launched the shop and watched. Ping, email, it’s an order; ping, another; ping, another. It was working. I have to say the server(s) held up pretty well. It wasn’t without problems; we did start seeing the site timing out on new connections for about 10 mins, but a swift kick of Apache sorted that and it didn’t cause a problem again.

Finally we were running. The feedback again was good. We had a bunch of concerns, like:

  1. Will the system work?
  2. Will the server(s) hold up?
  3. What happens if it goes mental and we sell a thousand mugs?
  4. Can we really do this?

I think all in all it went well. We could have done better, but it also could have been a lot worse. We ran with osCommerce for a few months. Shortly after launching the shop, I had a discussion on just how we were going to get facebook and the shop incorporated. There was no obvious answer, then we hit a problem. One of the posts to facebook got reported (it’s a humorous page, and we only ever post stuff sent in to us), and this showed us just how reliant we are on facebook. Suddenly we were all logged out and the poster was blocked for 24 hours. Luckily facebook pretty much left the page alone (just deleted the one post), so we played on it that one of our admins was in the dog house for an earlier post. But it still didn’t take long to realise that if facebook wanted, they could delete the page on a whim and we’d suddenly lose all our content and fans!

This just didn’t sit right with me, and I started looking at how the hell to get a backup of OUR page and content. There was nothing. So I started looking at how we could do things differently. Enter WORDPRESS.

I’d seen the name floating about for the last few months, but never really saw the point in using it. I don’t blog, we don’t blog, so what’s the point? (I’m still not entirely sure I understand the point.) But it’s close to one of the best things I’ve spent weeks fiddling with.

I’d installed WordPress on our VPS to have a look around. It still wasn’t a site that we could really use, but as a CMS, maybe I could find a way to connect to facebook and back up our stuff. There must be people who do this, right? WRONG. There are loads of plugins for WordPress & facebook, but I’ve only ever found 1 that takes your page and puts it up as posts. To make matters worse, it’s flaky as f**k, hadn’t been worked on in god knows how long, and the very few comments in the code are in Chinese.

Now I would never describe myself as a coder. I’ve used Delphi and VB for writing some functional programs in the past, and had to program a few in VB.net when I was an IT manager (the old problem/solution thing). I could also write some ASP and PHP; really, most of my stuff was dragging and dropping boxes and programming them up. I did quite a bit of database stuff within them, but that was it. There was absolutely no such thing as using classes (I don’t even think they existed). But as part of my job I had a dev team who did develop in PHP and VB.net, and they were always amazed, when trying to tell me how something couldn’t work, that I could not only follow along but tell them why they were wrong, and on several occasions when something broke I could actually read their code and work out (normally the simple thing) a temporary fix.

And so it begins: I now have no dev team, and a bunch of PHP code and classes that really didn’t make much sense to me. Bit by bit I managed to work out what each bit was doing, then moved on to changing it so that it would run for us. I know it will seem like simple stuff (especially looking back), but things like:

  • Changing a hard-coded loop that only pulled 10 facebook posts to take a limit from a setup interface where you can specify how many to pull.
  • Adding in date ranges to pull from and to.
  • Improving the cron job, so it looks from the time of the latest post it has + 1 second.
  • Downloading any attached image and saving it to the server (huge accomplishment).
  • Changing the post content that gets published and updating links back to the post it just posted.

There’s loads of stuff I’ve had to do to this plugin to get it working, let alone working better and working for us. Eventually I’d finished (you’re never really finished; I have a list of new changes to get to sometime). After running it on a fresh WordPress install, we suddenly had a complete backup of our facebook page, around 9k posts and images, all sat in WordPress, and what’s more, automatically grabbing new stuff 🙂

I showed off my new achievement. Personally I don’t think it was appreciated just how much time and effort I had put into this, but it went down really well. We now had a blog! We now had a blog alongside our store and it was really starting to come together.

Over the next few weeks I kept working on improving the blog while managing the shop. Then suddenly a new disaster: our server had for some reason gone offline. Trying to connect via the backup terminal access just gave me a blank screen; something was wrong and I couldn’t get access to see what. To make matters worse, our provider had very nicely decided to cut back on its 24/7 support and now only operated 8-8. At around midnight, it’s not exactly what you want to be finding out that the support has changed and no-one bothered to tell you. The ONLY thing I could do was email them and hope someone picked it up soon. They didn’t! I spent a good few hours trying everything I could think of to get hold of someone or find a way to the console, but nothing.

This had the effect of making me sleep through my alarm at 8am, but I woke at 9am and called them. After a few choice words, I was assured the tech team would look at it right away. I was so tired I fell back to sleep and woke again about midday. The first thing I did was open our site, or I should say attempted to! It was still down!! Another call, more choice words, and me advising them I’m not going anywhere and if they cut me off I’ll just keep calling until I can speak to a “Tech”. Explaining the problem and what I had tried to the tier 1 monkey quickly got me escalated.

I couldn’t stop laughing when I finally did get to tier 2: their tier 1 had placed me on hold to get someone, then come back to me and said ‘I need you to take this call, this bloke knows what the f**k he’s on about, I’d put him to tier 3 but I can’t direct transfer’, to which I replied ‘Yes, and I know how to work a phone system. Line 1 is the customer, you should be on Line 2 for that conversation’ 🙂 I have to be honest, just that mistake made my day. Tier 1, tier 2 and the manager that called me back an hour later were mortified, but as I explained to him, I’ve been a senior tech on phone support, I’ve been an IT manager; I’m guessing I hit a newish person and scared the crap out of them. I only care about getting this back online. To be fair to tier 2, I was connected to the console while he was apologising. (This part really could have had its own post.)

Anyway, getting over that failure, I started looking for another VPS provider. I had no problem with their VPS and generally it was a very stable system, but 8-8 support, with no ‘out of hours, we’re really f**ked’ option, forced my hand. It had gone down very badly with the others that this had cost us money, and there was no way I could argue it as I agreed the situation was crap.

I found another provider and started moving stuff, but it just wasn’t right. It was actually a previous colleague’s company, but something just wasn’t right, so I kept looking. Then I found DigitalOcean. Initially I started using them to test some WordPress plugins, but I loved that I could very quickly bring up new servers in a matter of minutes. This surely had to be better than waiting hours. And it was. Testing was going well, so I started moving everything over. Everything just worked, and where I had to contact their support for a few little things (1 account related, can’t remember the others), I had a reply very quickly, sometimes within minutes, other times within 30 mins. I couldn’t fault their support and I wasn’t bounced around; they knew exactly what I needed and sorted it.

So here we have a medium-spec’d DigitalOcean server, running our WordPress and osCommerce solutions and handling both pretty well.

But being one to never settle, I kept tweaking stuff and looking at our options. I set up another server (droplet) for testing; another WordPress install later and I’m going through trying out the ecommerce plugins. I was blown away by WooCommerce! Yes, osCommerce worked for us, and yes, I had put in quite a bit of time customising it and getting it to work with our processes, but the whole feel of the interface was crap. WooCommerce was like a breath of fresh air. It had a bunch of functionality, there are loads more plugins, it’s far easier to customise, it works from the WordPress themes, and it fits right in with our blog and doesn’t look disjointed.

I proposed we move over to this and it went down well. Well enough, in fact, that the others wanted to get more involved. We spent weeks working on changes to the theme (that we’d paid about $50 for), then I moved the shop over and made it live without telling our facebook audience. We started getting some sales via WooCommerce, and it was obvious that this just integrated well.

We were going to have a relaunch to show off the new blog and store. I think I managed to p*ss the others off when WordPress brought in a new standard theme that worked even better with WooCommerce and I changed to it to show them. It was obvious that it did, and that we should stick with it, but it also meant the last few weeks of customising was wiped out (and they still bring up the time I wiped out a few weeks’ work when I changed the theme).

I would never say WordPress/WooCommerce is perfect. I’ve found many issues along the way and had to find workarounds for a lot of stuff. I still don’t truly feel like I know what I’m doing, and there’s no way that we use WordPress to its full potential. Currently we have the blog and shop running; we have somewhere in the region of 10k posts and around 15k sales. We still don’t publish to the blog independently of facebook/twitter, but it’s on the roadmap.

One thing that has caught us out a few times is DigitalOcean scaling. Because we very often have little traffic, I always keep the servers scaled down with the intention of boosting them up before we push anything new. On at least 2 occasions, we’ve forgotten this and overloaded our site.

I’ve also gone through a few different configurations just trying to find the best solution.

1st: We had 1 server that was mid-range and just worked, but I knew this alone wouldn’t handle the traffic.

2nd: I brought up 2 web servers and a database server. This wasn’t an ideal setup; load balancing was at DNS level, syncing was done via cron jobs, and the whole thing was held together via a VPN to keep database connections secure. This had a bunch of problems.

Next I moved back to a single web server but kept a separate database. This was better, and it was around the time DigitalOcean let you scale up more easily (but not down; you still had to wipe out the server to do that).

Because having a single web server just wasn’t enough, I went back to 2, but added the new(ish) Cloudflare CDN in front of the servers. This really helped (though I’m still not convinced it really does CDN for us).

As part of the above, I tried incorporating GlusterFS (an absolute disaster). From every web search I did, GlusterFS looked to be THE solution. In practice, for us, it took a website responding (with some heavy graphics) in 2 seconds on average (3 seconds at worst) to 18 seconds on average (30 seconds at worst). I know everyone raves on about how great it is and how, if it’s slow, it’s something you’ve done. I don’t believe this for a second. I’ve spent days at a time trying to make it better, but the simple truth is: if the files are pulled locally, I get the 2/3 secs above; when using a Gluster mount point to the files (which are still actually local, Gluster on both web servers, mounted back to themselves), I get the 18-30 secs. Both web servers have a private LAN connection to use Gluster in the same DigitalOcean location, and NO amount of tweaking or testing seemed to ever really improve this. It was only made worse during testing when I took down one of the servers so that the other could only use itself to serve the files; this managed to take out the mount point until I restarted, and it still served up the pages slowly. I thought the whole point in using Gluster (at least for me) was HA, no single point of failure. Having both servers offline if one goes down does not seem very HA to me.

The ONE thing I really want DigitalOcean to sort out is their private LAN. In order to solve the issue of anyone else on the private LAN being able to see my traffic between servers, I’ve had to use VPNs between them. This adds complications to the entire setup, and a private LAN per account would be very welcome.

The setup I’m currently in the middle of deploying is:

a) Cloudflare

b) 2x Nginx load balancing proxies (which also serve up maintenance pages if they can’t connect to a backend)

c) 2xNginx Backend servers

d) 2xMySql+Redis Servers

e) 2xNFS Servers

I’m happy with the load balancers, though I would love for DigitalOcean to offer a proper load balancing solution.

The MySQL servers took some config to get replication working properly while also using SSL for the connection to each other and from the backend web servers.

I still haven’t managed to configure MySQL to be HA from the web servers, so at the moment this would be a manual switch. I’ve found HyperDB for WordPress, which should resolve this, but since I had to slightly change the WordPress config to do SSL for MySQL, and HyperDB doesn’t seem to be able to use SSL, I need to work out how to do this. I find this really weird, as one of the suggestions is to have your database remote; I really would have thought being remote (especially if using something like Amazon for the database) would mean you’d want to use SSL to keep your database traffic secure. It seems strange that this isn’t a fundamental option in HyperDB (unless I’m just not seeing it).
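For anyone wondering, the WordPress config change I mean is roughly this (a sketch; MYSQL_CLIENT_FLAGS is read by core’s wpdb when it connects via mysqli, and MYSQLI_CLIENT_SSL is the standard PHP constant):

// In wp-config.php: ask wpdb to open its MySQL connection over SSL.
define( 'MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL );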

And the last part, the NFS servers. I still need to find out how to keep these in sync (without using Gluster). I’ve previously used Syncthing to keep servers in sync; it works but is pretty much held together with tape (my configuration of it, not the actual program). Once I have the NFS synced, I also need to find a way for the web servers to use both of them, HA.

I do feel like this configuration is the closest to the best I can achieve on a budget. Once I have the MySQL and NFS stuff worked out, I will then be able to scale any server without completely taking the site offline, which will really help in being able to deal with spikes. It is now much easier to scale with DigitalOcean, but I’d still really want to know that doing so, or taking a server out for maintenance, is fine because everything will just keep running.

If you’ve got this far, I really thank you for reading. I hope the next couple of posts will be my solutions to the MySQL SSL and NFS problems. It’s now 2:41am and I think I’ve been writing this for about 2 hours, so I’m going to sleep 🙂 Leave a comment if you got this far, and include the words ‘sleep deprived’ so I don’t think it’s spam.

Setting Featured image on WordPress Posts in BULK

I’m in the middle of changing another site to a new theme. The problem being, the front page uses the featured image from each post to build up the display, and none of the posts have a featured image set. The site uses a facebook-to-wordpress plugin (heavily customised; the original no longer pulls the images, and really couldn’t handle pulling what was needed).

With this plugin, each post on facebook is posted to the blog, the first image on the post (this needs changing to all images) is downloaded to the WordPress server, and then the source of the post uses the local image. But at no point did I ever anticipate needing featured images to be set.

So here’s the code I’ve put together to:

  • Pull a list of posts without a thumbnail/featured image set.
  • Pull the contents of each post in the list and look for img src
  • Download the image
  • Add the image to the media library against the current post
  • Pull the ID of the attachment and set it as the featured image

Again, this is meant to run from the command line, NOT as a plugin.

<?php
$counter = 0;
$limit   = 20;
$updated = 0;

if ( php_sapi_name() !== 'cli' ) {
    die("Meant to be run from command line");
}

function find_wordpress_base_path() {
    $dir = dirname(__FILE__);
    do {
        // it is possible to check for other files here
        if ( file_exists($dir . "/wp-config.php") ) {
            return $dir;
        }
    } while ( $dir = realpath("$dir/..") );
    return null;
}

define( 'BASE_PATH', find_wordpress_base_path() . "/" );
define( 'WP_USE_THEMES', false );
global $wp, $wp_query, $wp_the_query, $wp_rewrite, $wp_did_header;
require( BASE_PATH . 'wp-load.php' );
echo "Site URL: " . get_site_url() . "\n\r";
echo "Base: " . find_wordpress_base_path() . "/\n\r";
echo "Upload DIR: " . wp_upload_dir()['path'] . "\n\r";
echo "Posts: " . wp_count_posts()->publish . "\n\r";

// Count posts that already have a featured image (_thumbnail_id meta exists).
$query = array(
    'posts_per_page' => -1,
    'post_type'      => 'post',
    'meta_key'       => '_thumbnail_id',
    'meta_compare'   => 'EXISTS',
);
$my_query = new WP_Query($query);
$posts_with_thumbs = $my_query->post_count;
echo "Posts with thumbnail_id: " . $posts_with_thumbs . "\n\r";

// And the ones we actually want to work on: posts without a featured image.
$query = array(
    'posts_per_page' => -1,
    'post_type'      => 'post',
    'meta_key'       => '_thumbnail_id',
    'meta_compare'   => 'NOT EXISTS',
);
$my_query = new WP_Query($query);
$posts_without_thumbs = $my_query->post_count;
echo "Posts without thumbnail_id: " . $posts_without_thumbs . "\n\r";

$counter = 0;
while ( $my_query->have_posts() && ( $counter < $limit ) ) {
    $my_query->the_post();
    echo "\n\r\n\r";
    echo "Post ID: " . $my_query->post->ID . "\n\r";
    echo "Post Title: " . $my_query->post->post_title . "\n\r";
    $content     = $my_query->post->post_content;
    $sub         = "";
    $video_id    = "";
    $sub_image   = false;
    $sub_changed = false;
    $sub_youtube = false;

    // First choice: an embedded image in the post content.
    if ( strpos($content, 'img src') !== false ) {
        $re = "/<img.*?src='([^\"]*)'.*\/>/i";
        preg_match_all($re, $content, $matches);
        $sub = $matches[1][0];
        echo "Image URL: " . $sub . "\n\r";
        $sub_image = true;
        if ( substr( $sub, 0, 11 ) === "/wp-content" ) {
            // Relative URL - prefix the site URL so it can be downloaded.
            $sub = get_site_url() . $sub;
            echo "Real URL: " . $sub . "\n\r";
            $sub_changed = true;
        }
    }

    // Fallback: a YouTube embed - use its thumbnail instead.
    if ( $sub == "" ) {
        if ( strpos($content, 'youtube') !== false ) {
            $re = "/\[[{embed\}]]([^\"]*)\[\/embed\]/i";
            preg_match_all($re, $content, $matches);
            $sub = $matches[1][0];
            echo "Youtube Detected. URL: " . $sub . "\n\r";
            $re = "/watch\?v=([^\"]*)\[\/embed\]/i";
            preg_match_all($re, $content, $matches);
            $video_id = $matches[1][0];
            $sub = "http://img.youtube.com/vi/" . $video_id . "/hqdefault.jpg";
            echo "Youtube Thumbnail: " . $sub . "\n\r";
            $sub_youtube = true;
        }
    }

    echo "Content: " . $content . "\n\r";

    if ( ($sub_image === true) || ($sub_youtube === true) ) {
        echo "Image or Youtube detected!\n\r";
        if ( $sub_image === true ) {
            echo "Image!!\n\r";
            $media = media_sideload_image($sub, $my_query->post->ID, $my_query->post->post_title);
        } elseif ( $sub_youtube === true ) {
            echo "Youtube!!\n\r";
            // Download file to temp location
            $tmp = download_url( $sub );
            // Set variables for storage
            // fix file filename for query strings
            preg_match('/[^\?]+\.(jpg|JPG|jpe|JPE|jpeg|JPEG|gif|GIF|png|PNG)/', $sub, $matches);
            // $file_array['name'] = basename($matches[0]);
            $file_array['name']     = $video_id . ".jpg";
            $file_array['tmp_name'] = $tmp;
            // If error storing temporarily, unlink
            if ( is_wp_error( $tmp ) ) {
                @unlink($file_array['tmp_name']);
                $file_array['tmp_name'] = '';
            }
            // do the validation and storage stuff
            $media = media_handle_sideload( $file_array, $my_query->post->ID, "YouTube: " . $my_query->post->post_title );
            // If error storing permanently, unlink
            if ( is_wp_error($media) ) {
                @unlink($file_array['tmp_name']);
            }
        }

        if ( !empty($media) && !is_wp_error($media) ) {
            echo "File Downloaded!\n\r";
            // The sideload created an attachment; find it via its post_parent to get the ID.
            $args = array(
                'post_type'      => 'attachment',
                'posts_per_page' => 1,
                'post_status'    => 'any',
                'post_parent'    => $my_query->post->ID,
            );

            $attachments = new WP_Query($args);
            while ( $attachments->have_posts() ) {
                $attachments->the_post(); // advance the loop (otherwise have_posts() never returns false)
                echo "Attachment ID: " . $attachments->post->ID . "\n\r";
                set_post_thumbnail($my_query->post->ID, $attachments->post->ID);
                if ( $sub_image ) {
                    $atturl = wp_get_attachment_url($attachments->post->ID);
                    $atturl = preg_replace("(^https?:)", "", $atturl );
                    echo "Attachment URL: " . $atturl . "\n\r";
                    if ( $sub_changed ) {
                        $sub = str_replace(get_site_url(), "", $sub);
                    }
                    // Point the post content at the local copy of the image.
                    $newcontent = str_replace($sub, $atturl, $content);

                    if ( $newcontent != $content ) {
                        $update_post = array(
                            'ID'           => $my_query->post->ID,
                            'post_content' => $newcontent,
                        );
                        wp_update_post($update_post);
                    }
                    $updated++;
                    break;
                }
            }
            echo "\n\r";
        }
    }
    $counter++;
}
echo "With Thumbs: " . $posts_with_thumbs . " and " . $posts_without_thumbs . " Without.\n\r";
echo "Updated: " . $updated . " of " . $counter . "\n\r";
?>

BE WARNED: this is designed to make changes to your WordPress posts. The usual advice about taking backups is a MUST, both of the database and your WordPress www folder.

  • The “Limit” is set low; you’ll need to adjust this as needed.
  • Within the script there is

[[{embed\}]]

You will need to change this to

[ embed\ ]

WITHOUT the spaces. WordPress decided I was trying to embed something, so I can’t paste it without a slight change.

  • I’ve tried to keep it as generic as possible, so there shouldn’t be any mention of my site in forced links or searches. When it searches for an image, it also checks whether it’s local to the site (using /wp-content instead of a full URL); it should get around this using get_site_url(), but you could force your domain or a different path here if needed.
  • It also looks for any YouTube content in the post and tries to pull the relevant image. I just wish it would pull one with a play button on it (I don’t want to use CSS to get around this).
  • It handles the YouTube download differently (this was made over a few days, fixing problems as they came up), but I think it’s needed to give each image a unique filename.
  • You will have to run it several times, probably increasing the limit each time. On the 2nd run it again checks all the previous posts it couldn’t get an image for, so you should end up with ‘0 updated of x’, where x is probably all your posts without an image. I’ll be updating it to use an image (or a random one of a few) relevant to the site so everything has a featured image (roughly as sketched below).
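When I get to that fallback, it’ll probably be something as simple as this at the end of the main loop (a sketch only; 123 is a placeholder attachment ID for an image already in the media library):

// No featured image could be set for this post: fall back to an existing library image.
// 123 is a placeholder attachment ID - swap in your own (or pick randomly from a few).
if ( ! has_post_thumbnail( $my_query->post->ID ) ) {
    set_post_thumbnail( $my_query->post->ID, 123 );
}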

WordPress Posts without Featured Thumbnail Count

I have another script I’ve been building up to download the first embedded image in a post, add it to the media library, attach it to the post and set it as the featured image.

Here’s some quick code to query how many posts do and do not have a featured image:

<?php
if ( php_sapi_name() !== 'cli' ) {
    die("Meant to be run from command line");
}

function find_wordpress_base_path() {
    $dir = dirname(__FILE__);
    do {
        // it is possible to check for other files here
        if ( file_exists($dir . "/wp-config.php") ) {
            return $dir;
        }
    } while ( $dir = realpath("$dir/..") );
    return null;
}

define( 'BASE_PATH', find_wordpress_base_path() . "/" );
define( 'WP_USE_THEMES', false );
global $wp, $wp_query, $wp_the_query, $wp_rewrite, $wp_did_header;
require( BASE_PATH . 'wp-load.php' );
echo "Base: " . find_wordpress_base_path() . "/\n\r";
echo "Upload DIR: " . wp_upload_dir()['path'] . "\n\r";
echo "Posts: " . wp_count_posts()->publish . "\n\r";

// Posts that already have a featured image (_thumbnail_id meta exists).
$query = array(
    'posts_per_page' => -1,
    'post_type'      => 'post',
    'meta_key'       => '_thumbnail_id',
    'meta_compare'   => 'EXISTS',
);
$my_query = new WP_Query($query);
$posts_with_thumb = $my_query->post_count;
echo "Posts with thumbnail_id: " . $posts_with_thumb . "\n\r";

// Posts without a featured image.
$query = array(
    'posts_per_page' => -1,
    'post_type'      => 'post',
    'meta_key'       => '_thumbnail_id',
    'meta_compare'   => 'NOT EXISTS',
);
$my_query = new WP_Query($query);
$posts_without_thumb = $my_query->post_count;
echo "Posts without thumbnail_id: " . $posts_without_thumb . "\n\r";
?>

It’s made to run from the terminal, NOT as a WordPress plugin. Just place it in the root WordPress folder and run it with php <filename>.

It will output something like:

Base: /var/www/{folder}/
Upload DIR: /var/www/{folder}/wp-content/uploads
Posts: 8547
Posts with thumbnail_id: 6824
Posts without thumbnail_id: 1723

It took a few seconds to return. It’s probably not the best way of doing it, but it worked for what I wanted. I’d like to give credit for each bit of code, but I’ve really no idea where I got the different bits from.