Remove/Hide WooCommerce Mine Status Filter

So this has been annoying me for ages.

I don’t really get what it’s for, more so because I’m the only shop admin! Either all the orders are “Mine” or none of them are.
I remember searching for why previously and came across a bug during a WooCommerce database update that caused orders to be re-assigned to owner ID 1 if they were set to 0, or something similar. But it didn’t really explain the purpose of this filter, and I didn’t get any further in understanding why anyone might find it useful.
There was a suggestion that it would come into play for orders that were created using the ‘Add Order’ button. Now I can see that may be useful; I do occasionally create orders manually for customers who are having problems and this could be a way of finding them, but there’s no way I’ve manually created the hundreds or thousands that are there (depending on which site I’m looking at, we have a few).

Not getting anywhere on finding how to disable or hide it, and just finding lots of info on custom statuses, I gave up. I did consider using CSS to just hide it, but I really hate working on CSS when I have to. This wasn’t annoying enough that I had to.

But today I noticed our ‘Pending Payment’ count was rather high. We’ve had a busy month and I haven’t had time to pay attention to the small details, but almost 300 orders ‘Pending Payment’ raised my interest. That was quite simply because the cron job to clear them wasn’t there, and I vaguely remembered this happening a few years back. Clearing the inventory hold timeout, saving, then setting it back to 60 mins (and saving again) recreated the cron job. That fired straight away and cancelled half of the orders there. I gave it a few minutes and force-ran the cron job again, which cleared the queue right down to about 8 orders. 2 of them are valid carts going through now and 6 are manual orders that may or may not have been paid (I need to check each of them) but were holding orders (not really shipping anything), so not an issue.

Once I finished with ‘Pending Payment’ I spotted the ‘Mine’ filter again, which brought back ‘why is it there, can I get rid of it?’. It’s totally pointless for me; it just takes up room.

Well people I can say, you can hide it 😀 and it’s pretty easy to do.

You can add the following to your functions.php

function hide_mine_filter_shop_orders( $views ) {
    // Unset the option from the views
    unset( $views['mine'] );
    return $views;
}
add_filter( 'views_edit-shop_order', 'hide_mine_filter_shop_orders' );

And it’s now gone:

** The pics are from 2 different sites. I’ll be applying it to all shortly 🙂

I know it’s not much, but I’ve never used it, and with lots of different order statuses it’s much better to have one less.

I should add, just in case I ever read this again, that I actually put that in one of our own plugins: the one that restores the Items Purchased column that was removed from WooCommerce core but is crucial for speedily dealing with orders (that still should have been a tick option in Screen Options, but the devs wouldn’t take that on board).

WooCommerce Nag Notice

We all (mostly) understand updates are important, and I’m sure there were only good intentions behind it, but the
‘Connect your store to WooCommerce.com to receive extensions updates and support.’
nag notice is ridiculous. A non-dismissable notice should never be allowed. I get that you don’t want people to just quickly click the dismiss button, so why not put an option at the bottom of the connect page: ‘Dismiss this alert. I understand what this means’, even if it only dismissed it for, say, 3 months and then you had to do it again. It would still be annoying but at least easier to deal with.

But no, in someone’s infinite wisdom they’ve decided you absolutely must connect your store and have no other option. Well, here’s the code to add to stop that nag notice:

add_filter( 'woocommerce_helper_suppress_admin_notices', '__return_true' );

** It is your own responsibility to keep your site up to date.
** Disabling this notice may disable other WooCommerce notices.

There are of course legitimate reasons why you wouldn’t want to connect your store; managing your updates your own way should always be allowed. So, devs, stop trying to dictate how you want/think things should run. Choice is the key.

WooCommerce 3.3.0+

Yesterday I upgraded a store to WooCommerce 3.3.1 from whatever the hell it was on before.

Today I’ve spent the day putting things right 😠 All the issues are around the new Orders UI, and it seems like petty small stuff, but it’s safe to say I’m hating the new UI because I’ve wasted the day dealing with over 50 complaints.
For those unfamiliar here’s the proposed changes (the end result is a little different) https://woocommerce.wordpress.com/2017/11/16/wc-3-3-order-screen-changes-including-a-new-preview-function-ready-for-feedback/#comment-4137

I’ve so far fixed some of the issues, such as rearranging the columns (why the actual fu*k status was moved I’ll never know or understand). The code below is probably not the best way to do it, but it works:

// Function to change the order of the columns.
// Columns are output in the key order below; anything not listed here is appended afterwards in its original order.
function woocommerce_myinitials_restore_columns( $columns ) {
    $new_order = array(
        'cb' => '',
        'order_status' => '',
        'csv_export_status' => '', // don't think this one's standard, but it's part of a plugin we use.
        'order_number' => '',
        'order_items' => '',
        'billing_address' => '',
        'shipping_address' => '',
        'order_date' => '',
        'order_total' => '',
        'wc_actions' => '',
    );
    // Fill in the labels from the original columns array; the key order above wins.
    foreach ( $columns as $key => $value ) {
        $new_order[ $key ] = $value;
    }

    return $new_order;
}
add_filter( 'manage_edit-shop_order_columns', 'woocommerce_myinitials_restore_columns', 99 );

So that’s one issue solved. The next was that clicking anywhere in the row opens the order (yeah, I’m sure that’s nice, but if you rely on tapping a touchscreen, i.e. click and drag to scroll, then this causes problems). The following code adds the no-link class to the tr and stops this shitty behaviour:

function woocommerce_myinitials_restore_columns_add_nolink_class( $classes ) {
    if ( is_admin() ) {
        $current_screen = get_current_screen();
        if ( $current_screen->base == 'edit' && $current_screen->post_type == 'shop_order' ) {
            $classes[] = 'no-link';
        }
    }
    return $classes;
}
add_filter( 'post_class', 'woocommerce_myinitials_restore_columns_add_nolink_class' );

Thanks on this one goes to ‘rodrigoprimo’ for the initial fix and others who picked it up and added a bit to it https://github.com/woocommerce/woocommerce/pull/18708.

I’ve added the following as a stylesheet

.post-type-shop_order .wp-list-table td, .post-type-shop_order .wp-list-table th {
   vertical-align: unset;
}

.post-type-shop_order .wp-list-table td.order_status {
   vertical-align: middle;
   text-align: center;
}

This places the orders back at the top of the row, and stops the previously restored Items Purchased link jumping around. But I’d rather the new status text stayed in the middle, in line with the checkbox; hopefully we’ll get this back to an icon soon.

All of the above I’ve added to our custom plugin; you could either do this or add it to your functions.php.

Outstanding issues:
1. Getting back the email address. There is some hope this may come back officially, but I’ll be fixing it for us tomorrow (a rough sketch of the kind of thing I mean is below).
2. The status being text not icons. I understand this makes more sense to new users, but if you have some long statuses like we do, the text doesn’t fit and we’ve got 10 statuses all looking the same. Having coloured icons worked for us, and if you weren’t sure you could hover the mouse. I’ll be looking to get them back to icons tomorrow.
3. The date column. Just why! Why would anyone think not putting the actual date and time of the order here is a good idea? Stupid ‘X mins ago’ is no use at all.
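
Since I haven’t written the email fix yet, here’s the rough direction I’m planning to take. This is only a sketch, assuming the standard WordPress column hooks and the WooCommerce 3.x order getters; the ‘billing_email’ column key and the function names are placeholders of my own, not anything official.

// Sketch: add a Billing Email column back to the Orders list.
function myinitials_add_email_column( $columns ) {
    $columns['billing_email'] = 'Email';
    return $columns;
}
add_filter( 'manage_edit-shop_order_columns', 'myinitials_add_email_column', 100 );

// Sketch: render the column content for each order row.
function myinitials_render_email_column( $column, $post_id ) {
    if ( 'billing_email' === $column ) {
        $order = wc_get_order( $post_id );
        if ( $order ) {
            echo esc_html( $order->get_billing_email() );
        }
    }
}
add_action( 'manage_shop_order_posts_custom_column', 'myinitials_render_email_column', 10, 2 );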

The new preview window looks good, but I really don’t see it getting much use; we need the important data on the front. If it’s not that important, just open the order. The WooCommerce devs decided to screw with it, but I don’t think there’s an understanding that if you’re going as far as opening the preview window then I’m pretty sure you were used to just editing the order, which is probably still going to be the case.

So summing up today: I’ve had a shit day of people moaning at me because some developers decided to improve something that really didn’t need touching. Doesn’t sound like any developer I’ve ever known 😂. I’m now getting something to eat before I go near everything I’d planned on working on today.

Nginx + WordPress + Fail2Ban + CloudFlare

I hate being woken at 2am to be told “we’re under attack!”. Well, that was pretty much this morning 🙁

Now to be fair, it wouldn’t even have been noticed. Our servers and setup handled it very well, and it was only spotted during a switch over to a backup server with fewer resources during maintenance.

On checking the logs I could see lots of attempts like

X.Y.X.Y - - [28/Aug/2016:03:12:16 +0100] "POST /wp-login.php HTTP/1.0" 200 5649 "-" "-"

We’re talking 114,893 requests to just the one server. My first instinct was to add the offending IP address to our iptables BLOCK list and just drop the traffic altogether. Except this no longer works with CloudFlare, since the IP addresses the firewall will see are theirs, not the offender’s!

No problem, CloudFlare deals with this(!?) I can just turn on ‘Under Attack’ mode and let them handle it. This is where you quickly learn it doesn’t really do much. Personally I got a lovely 5 second delay when accessing our websites with ‘Under Attack’ activated, but our servers were still being bombarded with requests. So I added the offending IP addresses to the firewall on CloudFlare. Surely that will block them from even reaching our servers! While I can’t say it had no effect, of the IP addresses I had added, some were still hitting our servers quite heavily.

So the question becomes ‘How can I drop the traffic at nginx level?’. Nginx is configured to restore the real IP addresses, so I should be able to block the real offenders here, not CloudFlare.

Turns out to be pretty easy. Add:

### Block spammers and other unwanted visitors ###
include blockips.conf;

Into the http section in /etc/nginx/nginx.conf

http {

### Block spammers and other unwanted visitors ###
include blockips.conf;

...

}

Then create the /etc/nginx/blockips.conf:

deny X.Y.X.Y;

Just add a deny line for each offending IP. I’d recommend testing your new config first (nginx -t), then reload nginx (service nginx reload).

Now with that done on both servers I’m still seeing requests, but they are now getting 403 errors and not allowed to hit any of the websites on our servers 🙂 After another 5 minutes of attacking they clearly gave up, as all the requests stopped.

But we’re not done. What if they just change IP addresses? We need this to be automatic. Enter Fail2Ban. I haven’t used this in some time, but I know what I need it to do:

  1. Check all the websites log files for wp-login.php attempts
  2. Write to /etc/nginx/blockips.conf
  3. Reload nginx

Should be quite simple. Turns out it is, but it took hours trying to figure out how, since all the guides seem to assume you’ll be using Fail2Ban to configure iptables.

Here are the files/configuration I added for Fail2Ban:

/etc/fail2ban/filter.d/wordpress-auth.conf

# WordPress brute force auth filter
#
# Block IPs trying to auth wp wordpress
#
# Matches e.g.
# X.Y.X.Y - - [28/Aug/2016:03:12:16 +0100] "POST /wp-login.php HTTP/1.0" 200 5649 "-" "-"
#

[Definition]
failregex = ^<HOST> .* "POST /wp-login.php.*200
ignoreregex =

/etc/fail2ban/jail.d/wordpress-auth.conf

[wordpress-auth]
enabled = true
filter = wordpress-auth
action = wordpress-auth
logpath = /var/log/nginx/*access*.log
bantime = 1200
maxretry = 8

/etc/fail2ban/action.d/wordpress-auth.conf

# Fail2Ban configuration file based on dummy.conf
#
# Author: JD
#
#

[Definition]

# Option: actionstart
# Notes.: command executed once at the start of Fail2Ban.
# Values: CMD
#
actionstart = touch /etc/nginx/blockips.conf
service nginx reload

# Option: actionstop
# Notes.: command executed once at the end of Fail2Ban
# Values: CMD
#
# don't do this: actionstop = echo "" > /etc/nginx/blockips.conf
#                 service nginx reload
actionstop =

# Option: actioncheck
# Notes.: command executed once before each actionban command
# Values: CMD
#
actioncheck =

# Option: actionban
# Notes.: command executed when banning an IP. Take care that the
# command is executed with Fail2Ban user rights.
# Tags: See jail.conf(5) man page
# Values: CMD
#
actionban = echo "deny <ip>;" >> /etc/nginx/blockips.conf
service nginx reload

# Option: actionunban
# Notes.: command executed when unbanning an IP. Take care that the
# command is executed with Fail2Ban user rights.
# Tags: See jail.conf(5) man page
# Values: CMD
#
actionunban = sed -i "/<ip>/d" /etc/nginx/blockips.conf
service nginx reload

[Init]

init = 123

It’s pretty simple and will need some tweaking. I’m not so sure 8 requests in 20 mins is very good; we do have customers on some of our sites who regularly forget their password. The regex does look at the 200 code; I read that a successful auth would actually give a 304. Not sure if this is correct, so it will need some testing. I also found other information on how to change the response code to a 403 for failed login attempts. I think this would be a huge plus, but I’m not looking into that tonight.
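
For reference on the 403 idea, something roughly like this in functions.php is the approach I’ve seen described; an untested sketch on my part, assuming the standard wp_login_failed hook (the function name is just mine):

// Untested sketch: send a 403 status when a login attempt fails, so the
// Fail2Ban failregex could match on 403 instead of the generic 200.
add_action( 'wp_login_failed', 'myinitials_login_failed_403' );
function myinitials_login_failed_403( $username ) {
    status_header( 403 );
}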

A few tests using curl and I can see Fail2Ban has done its job: added the IP to the nginx blockips file and reloaded nginx 😀 I’m not too worried about syncing this across servers; as shown tonight they balance well, and I’m confident that within a few minutes of being bombarded they would both independently block the IP.

So there’s my morning of working around using CloudFlare while still keeping some form of block list. Hope this helps someone. Please comment to let me know if you found any of this useful/worth reading.

How I started using WordPress

I should make it clear from the outset, this post isn’t going to be solving anything. I’ve spent about 3 days working on stuff and this is just bugging me.

Let’s go back to around this time last year. A couple of friends and I were working out how to get some money back from a Facebook page one of them had set up a few years prior. I had been working with him on it since a few weeks after he set it up, and we’d pushed around ideas to sell some merchandise alongside it a few times, but never really got anywhere.

He had made a rash decision one night to use an ‘online website creation’ provider to get a site online. From the start I hated it. Not the idea (I think it was about time to get our own site running), but he spent a few weeks tweaking about 8 pages to look really good using their WYSIWYG editor, then wanted me to change a few things in code that wasn’t right. It was an absolute nightmare! I can’t remember the name of the site, but I think it had a W, E and X somewhere in the title.

It was a paid “solution”, and I think it cost about £40 per month by the time he’d added a mailing list option to capture email addresses (not actually handle any emails) and a few other bits.

Coming from more of an IT position, my main concern was around how load/spikes would be handled. There was very little information about how well they could handle this (and I think we found out the reason why). We finally put it live, posting the link to about 40k followers.

Watching the nice Google Analytics stats (which I had to add to each page, because they didn’t have a drop-in tracking code option), within seconds we hit around 150 hits per second. This continued for a few hours, but 2 problems became apparent:

  1. The site was struggling, and we would probably have been hitting a higher number otherwise. We were getting positive feedback and people understood it was busy, but I still wasn’t happy: we’re paying for them to handle this and it’s just not being handled.
  2. And this really ties into point 1. He’d set up a site that was pretty static! There was nothing to get people coming back for more. Yes, there was a news page that we could update, but other than the mailing list form there was nothing interactive. (So back to point 1: static content should really have been handled 100x better.)

Anyway we took that for what it was, a basic site with a bit of info and something to get us started.

We already had a ‘Shop coming soon’ page, so the next thing we were trying to figure out was what are we going to sell and how?

Initially we thought t-shirts, and started looking at some of the ways we could do this. 2 main providers seemed to jump out: Zazzle and cottonpress (I think, it was a while ago). While both had some good offerings, neither really grabbed us. I can’t remember which, but one of them deleted an image we uploaded for copyright reasons (it was our logo, we had it plastered everywhere, and the account was signed up with a domain name using the same logo), and they wanted us to fill in and snail-mail/fax some copyright forms and re-upload the logo. Considering we were only seeing what we could do at the time, we decided to drop them as an option. If we have to jump through these hoops with everything we do, we’ll spend more time filling in their paperwork than anything else.

Time went on, visits to the site died down (did I mention lack of content/interactivity), and we still hadn’t sorted out products, a store, a business.

We continued with the Facebook page and still poked around ideas on how to get a site/shop running. I spent a good few weeks working with osCommerce (I’d previously used it a little for another project idea, but it never went live) and finally had something to show: a semi-working shop front (it had no products).

We discussed that neither of us really had a clue about setting up a proper business. I’m all IT and have no interest in writing business plans or doing business meetings (I should mention that in a previous role I was an IT Manager and regularly had to be part of “grown up” meetings; I’m a tech, I hate people, I hate meetings, give me something broken and I’ll fix it, give me a problem and I’ll work out a solution, but in NO WAY do I want to be taking part in any more business meetings).

A few months later I was helping him move. Another friend of his was also helping; I’d met him once before but didn’t get chatting then. He mentioned he was in his last year of university and was studying business and finance! Just how he hadn’t thought of this before I don’t know, but instantly we knew he was coming on board 🙂

We spent about 3 hours in McDonald’s discussing what we had, what we’d like to do, and just how crap we’d been so far. Within this conversation we mentioned selling t-shirts/mugs/bags etc. Just like a genie out of a lamp, our 3rd comes out with ‘oh, my almost father-in-law does printing stuff, I know he does mugs. Shall I speak to him?’ Just like a match made in heaven, we suddenly had our missing piece! Someone who should have more of an idea on the business side (or at least know someone to ask) and connections to a printer for the kind of stuff we want to sell. You just couldn’t make it up: he’d known this guy for a few years and never thought of asking him about business stuff.

Things started moving forward, slowly at first, but at least they were moving. We met up with the “almost” father-in-law and went through some designs and processes. We set up a business. I continued work on the shop website, and we took down the other site that was costing too much and not really doing anything.

Around October time we were set. Nothing spectacular: about 9 mugs and a few t-shirts. The mugs would be the easiest, we just send the order to “almost” and he takes care of printing, packaging and sending. The t-shirts would be a little different, as we’d have to get a template made for them and couldn’t afford the cost until we had some orders in.

We launched. I had tried to over-spec the server(s), but in itself this was tricky. There were no real stats on how well osCommerce could perform on certain hardware, and scalable VPSs such as DigitalOcean’s current system just didn’t exist. Scaling would mean taking a new 30 day server and moving everything over to it. Certainly not a 5 min job, and definitely not something to start an hour after we’ve launched. We’d just have to bite the bullet and see.

My memory of launch night is fuzzy to say the least. I think I’d been working 36-48 hours trying to finish stuff off. I had a big list of checks and can’t remember doing half of them.

Our page audience had grown to about 70k, so I was very nervous. We launched the shop and watched. Ping, email: it’s an order. Ping, another. Ping, another. It was working. I have to say the server(s) held up pretty well. It wasn’t without problems; we did start seeing the site timing out on new connections for about 10 mins, but a swift kick of Apache sorted that and it didn’t cause a problem again.

Finally we were running. The feedback again was good. We had a bunch of concerns, like:

  1. Will the system work?
  2. Will the server(s) hold up?
  3. What happens if it goes mental and we sell a thousand mugs?
  4. Can we really do this?

I think all in all it went well. We could have done better, but it also could have been a lot worse. We ran with osCommerce for a few months. Shortly after launching the shop I had a discussion on just how we were going to get Facebook and the shop incorporated. There was no obvious answer, then we hit a problem. One of the posts to Facebook got reported (it’s a humorous page, and we only ever post stuff sent in to us), and this showed us just how reliant we are on Facebook. Suddenly we’re all logged out and the poster was blocked for 24 hours. Luckily Facebook pretty much left the page alone (just deleted the one post), so we played on it that one of our admins was in the dog house for an earlier post. But it still didn’t take long to realise that if Facebook wanted they could delete the page at a whim and we’d suddenly have lost all our content and fans!

This just didn’t sit right with me, and I started looking at how the hell to get a backup of OUR page and content. There was nothing. So I started looking at how we could do things differently. Enter WORDPRESS.

I’d seen the name floating about for the last few months, but never really saw the point in using it. I don’t blog, we don’t blog, so what’s the point? (I’m still not entirely sure I understand the point.) But it’s close to one of the best things I’ve spent weeks fiddling with.

I’d installed WordPress on our VPS to have a look around. It still wasn’t a site that we could really use, but as a CMS maybe I could find a way to connect to Facebook and back up our stuff. There must be people who do this, right? WRONG. There are loads of plugins for WordPress & Facebook, but I’ve only ever found 1 that takes your page and puts it in as posts. To make matters worse, it’s flakey as f**k, hadn’t been worked on in god knows how long, and the very few comments in the code are in Chinese.

Now I would never describe myself as a coder. I’ve used Delphi and VB for writing some functional programs in the past, and had to write a few in VB.net when I was an IT Manager (the old problem/solution thing). I could also write some ASP and PHP; really most of my stuff was dragging and dropping boxes and programming them up. I did quite a bit of database stuff within them, but that was it. There was absolutely no such thing as using classes (I don’t even think they existed). But as part of my job I had a dev team who did develop in PHP and VB.net, and they were always amazed, when trying to tell me how something couldn’t work, that I could not only follow along but tell them why they were wrong, and on several occasions when something broke I could actually read their code and work out (normally the simple thing) a temporary fix.

And so it began: I now had no dev team, and a bunch of PHP code and classes that really didn’t make much sense to me. Bit by bit I managed to work out what each bit was doing, then moved on to changing it so that it would run for us. I know it will seem like simple stuff (especially looking back), but things like:

  • Changing a hard-coded loop that only pulled 10 Facebook posts, to take a limit from a setup interface where you can specify how many to pull.
  • Adding in date ranges to pull from and to.
  • Improving the cron job, so it looks from the time of the latest post it saw + 1 second.
  • Downloading any attached image and saving it to the server (huge accomplishment).
  • Changing the post content that gets published and updating links back to the post it just posted.

There’s loads of stuff I’ve had to do to this plugin to get it working, let alone working better for us. Eventually I’d finished (you’re never really finished; I have a list of new changes to get to sometime). After running it on a fresh WordPress install, we suddenly had a complete backup of our Facebook page, around 9k posts and images, all sat in WordPress, and what’s more, automatically grabbing new stuff 🙂

I showed off my new achievement. Personally I don’t think it was appreciated just how much time and effort I had put into this, but it went down really well. We now had a blog! We now had a blog alongside our store, and it was really starting to come together.

Over the next few weeks I kept working on improving the blog while managing the shop. Then suddenly a new disaster: our server had for some reason gone offline. Trying to connect via the backup terminal access just gave me a blank screen; something was wrong and I couldn’t get access to see what. To make matters worse, our provider had very nicely decided to cut back on its 24/7 support, and now only operated 8-8. At around midnight, it’s not exactly what you want to be finding out that the support has changed and no-one bothered to tell you. The ONLY thing I could do was email them and hope someone picked it up soon. They didn’t! I spent a good few hours trying everything I could think of to get hold of someone or find a way to the console, but nothing. This had the effect of making me sleep through my alarm at 8am, but I woke at 9am and called them. After a few choice words, I was assured the tech team would look at it right away. I was so tired I fell back to sleep and woke again about midday. The first thing I did was open our site, or I should say attempted to! It was still down!! Another call, more choice words, and me advising them I’m not going anywhere and if they cut me off I’ll just keep calling until I can speak to a “Tech”. Explaining the problem and what I had tried to the tier 1 monkey quickly got me escalated. I couldn’t stop laughing when I finally did get to tier 2: their tier 1 had placed me on hold to get someone, then come back to me and said ‘I need you to take this call, this bloke knows what the f**k he’s on about, I’d put him to tier 3 but I can’t direct transfer’, to which I replied ‘Yes, and I know how to work a phone system. Line 1 is the customer; you should be on Line 2 for that conversation’ 🙂 I have to be honest, just that mistake made my day. Tier 1, tier 2 and the manager that called me back an hour later were mortified, but as I explained to him, I’ve been a senior tech on phone support, I’ve been an IT manager, I’m guessing I hit a newish person and scared the crap out of them; I only care about getting this back online. To be fair to tier 2, I was connected to the console while he was apologising. (This part really could have had its own post.)

Anyway, getting over that failure, I started looking for another VPS provider. I had no problem with their VPS and generally it was a very stable system, but 8-8 support with no out-of-hours “we’re really f**ked” option forced my hand. It had gone down very badly with the others that this had cost us money, and there was no way I could argue it as I agreed the situation was crap.

I found another provider and started moving stuff, but it just wasn’t right. It was actually a previous colleague’s company, but something just wasn’t right. So I kept looking. Then I found DigitalOcean. Initially I started using them to test some WordPress plugins, but I loved that I could very quickly bring up new servers in a matter of minutes. This surely has to be better than waiting hours. And it was. Testing was going well, so I started moving everything over. Everything just worked, and where I had to contact their support for a few little things (1 was account related, can’t remember the others), I had a reply very quickly, sometimes within minutes, other times within 30 mins. I couldn’t fault their support and I wasn’t bounced around; they knew exactly what I needed and sorted it.

So here we have a medium spec’d Digital Ocean server, running our WordPress and OsCommerce solutions and handling both pretty well.

But being one to never settle, I kept tweaking stuff and looking at our options. I set up another server (droplet) for testing; another WordPress install later and I’m going through trying out the ecommerce plugins. I was blown away with WooCommerce! Yes, osCommerce worked for us, and yes, I had put in quite a bit of time customising it and getting it to work with our processes, but the whole feel of the interface was crap. WooCommerce was like a breath of fresh air. It had a bunch of functionality, there are loads more plugins, it’s far easier to customise, it works from the WordPress themes, and it fits right in with our blog and doesn’t look disjointed.

I proposed we move over to this and it went down well. Well enough in fact that the others wanted to get more involved. We spent weeks working on changes to the theme (that we’d paid about $50 for), I moved the shop over and made it live without telling our Facebook audience. We started getting some sales via WooCommerce, and it was obvious that this just integrated well.

We were going to have a relaunch to show off the new blog and store. I think I managed to p*ss the others off when WordPress brought in a new standard theme that worked even better with WooCommerce and I changed to it to show them. It was obvious that it did, and we should stick with it, but it also meant the last few weeks of customising was wiped out (and they still bring up the time I wiped out a few weeks’ work when I changed the theme).

I would never say WordPress/WooCommerce is perfect; I’ve found many issues along the way and had to find workarounds for a lot of stuff. I still don’t truly feel like I know what I’m doing, and there’s no way that we use WordPress to its full potential. Currently we have the blog and shop running, with somewhere in the region of 10k posts and around 15k sales. We still don’t publish to the blog independently of Facebook/Twitter, but it’s on the roadmap.

One thing that has caught us out a few times is DigitalOcean scaling. Because we very often have little traffic, I always keep the servers scaled down with the intention of boosting them up before we push anything new. On at least 2 occasions, we’ve forgotten this and overloaded our site.

I’ve also gone through a few different configurations just trying to find the best solution.

1st: We had 1 server that was mid-range and just worked, but I knew this alone wouldn’t handle the traffic.

2nd: I brought up 2 web servers and a database server. This wasn’t an ideal setup; load balancing was at DNS level, syncing was done via cron jobs, and the whole thing was held together via a VPN to keep database connections secure. This had a bunch of problems.

Next I moved back to a single web server but kept a separate database. This was better, and around this time DigitalOcean allowed you to scale up more easily (but not down; you still had to wipe out the server to do that).

Because having a single web server just wasn’t enough, I went back to 2, but added the new(ish) CloudFlare CDN in front of the servers. This really helped (though I’m still not convinced it really does CDN for us).

As part of the above, I tried incorporating GlusterFS (absolute disaster). From every web search I did, GlusterFS looks to be THE solution. In practice, for us it took a website responding (with some heavy graphics) in 3 seconds at the longest (2 secs average) to 30 secs at the longest (18 secs average). I know everyone raves on about how great it is and how if it’s slow it’s something you’ve done. I don’t believe this for a second. I’ve spent days at a time trying to make it better, but the simple truth is if the files are pulled locally I get the 2/3 secs above. When using a Gluster mount point to the files (which are still actually local: Gluster on both web servers, mounted back to themselves), I get the 18-30 secs. Both web servers have a private LAN connection to use Gluster in the same DigitalOcean location, and NO amount of tweaking or testing seems to ever really improve this. It was only made worse during testing when I took down one of the servers, so that the other could only use itself to serve the files: this managed to take out the mount point until I restarted, and it still served up the pages slowly. I thought the whole point in using Gluster (at least for me) was HA: no single point of failure. Having both servers offline if one goes down does not seem very HA to me.

The ONE thing I really want DigitalOcean to sort out is their private LAN. In order to solve the issue of anyone else on the private LAN being able to see my traffic between servers, I’ve had to use VPNs between them. This adds complications to the entire setup, and a private LAN per account would be very welcome.

The setup I’m currently in the middle of deploying is:

a) Cloudflare

b) 2x Nginx load balancing proxies (these also serve up maintenance pages if they can’t connect to a backend).

c) 2xNginx Backend servers

d) 2xMySql+Redis Servers

e) 2xNFS Servers

I’m happy with the load balancers, though I would love for DigitalOcean to offer a proper load balancing solution.

The MySQL servers took some config to get replication working properly while also using SSL for the connection to each other and from the backend web servers.

I still haven’t managed to configure MySQL to be HA from the web servers, so at the moment this would be a manual switch. I’ve found HyperDB for WordPress, which should resolve this, but since I had to slightly change the WordPress config to do SSL for MySQL, and HyperDB doesn’t seem to be able to use SSL, I need to work out how to do this. I find this really weird, as one of the suggestions is to have your database remote; I really would have thought being remote (especially if using something like Amazon for the database) would mean you’d want to use SSL to keep your database traffic secure. It seems strange that this isn’t a fundamental option in HyperDB (unless I’m just not seeing it).
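
For context, the kind of wp-config.php change I mean is along these lines; a minimal sketch, assuming the default wpdb (mysqli) which passes the MYSQL_CLIENT_FLAGS constant through to the connection, and a MySQL server already set up with certificates (as per the replication config above):

// In wp-config.php: ask WordPress to negotiate SSL for the MySQL connection.
define( 'MYSQL_CLIENT_FLAGS', MYSQLI_CLIENT_SSL );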

And the last part, the NFS servers. I still need to find out how to keep these in sync (without using Gluster). I’ve previously used Syncthing to keep servers in sync; it works but is pretty much held together with tape (my configuration of it, not the actual program). Once I have the NFS synced, I also need to find a way for the web servers to use both of them in an HA fashion.

I do feel like this configuration is the closest to the best I can achieve on a budget. Once I have the MySQL and NFS stuff worked out, I will then be able to scale any server without completely taking the site offline, which will really help in being able to deal with spikes. It is now much easier to scale with DigitalOcean, but I’d still really want to know that doing so, or taking a server out for maintenance, is fine because everything will just keep running.

If you’ve got this far, I really thank you for reading. I hope the next couple of posts will be my solutions to the MySQL SSL and NFS problems. It’s now 2:41am and I think I’ve been writing this for about 2 hours, so I’m going to sleep 🙂 Leave a comment if you got this far; include the words ‘sleep deprived’ so I don’t think it’s spam.

Setting Featured image on WordPress Posts in BULK

I’m in the middle of changing another site to a new theme. The problem being the front page uses the featured image from each post to build up the display, and none of the posts have a featured image set. The site uses a Facebook-to-WordPress plugin (heavily customised; the original no longer pulls the images, and really couldn’t handle pulling what was needed).

With this plugin, each post on Facebook is posted to the blog, the first image on the post (this needs changing to all images) is downloaded to the WordPress server, and then the source of the post uses the local image. But at no point did I ever anticipate needing featured images to be set.

So here’s the code I’ve put together to:

  • Pull a list of posts without a thumbnail/featured image set.
  • Pull the contents of each post in the list and look for img src
  • Download the image
  • Add the image to the media library against the current post
  • Pull the ID of the attachment and set it as the featured image

This is again meant to run from the command line, NOT as a plugin.

<?php
 $counter=0;
 $limit = 20;
 $updated=0;

 if( php_sapi_name() !== 'cli' ) {
 die("Meant to be run from command line");
 }

 function find_wordpress_base_path() {
 $dir = dirname(__FILE__);
 do {
 //it is possible to check for other files here
 if( file_exists($dir."/wp-config.php") ) {
 return $dir;
 }
 } while( $dir = realpath("$dir/..") );
 return null;
 }

 define( 'BASE_PATH', find_wordpress_base_path()."/" );
 define('WP_USE_THEMES', false);
 global $wp, $wp_query, $wp_the_query, $wp_rewrite, $wp_did_header;
 require(BASE_PATH . 'wp-load.php');
 echo "Site URL: " . get_site_url() . "\n\r";
 echo "Base: " . find_wordpress_base_path()."/\n\r";
 echo "Upload DIR: " . wp_upload_dir()['path'] . "\n\r";
 echo "Posts: " . wp_count_posts()->publish."\n\r";

 $query = array (
 'posts_per_page' => -1,
 'post_type' => 'post',
 'meta_key' => '_thumbnail_id',
 'meta_compare' => 'EXISTS',
 );
 $my_query = new WP_Query($query);
 $posts_with_thumbs = $my_query->post_count;
 echo "Posts with thumbnail_id: " . $posts_with_thumbs . "\n\r";

 $query = array (
 'posts_per_page' => -1,
 'post_type' => 'post',
 'meta_key' => '_thumbnail_id',
 'meta_compare' => 'NOT EXISTS',
 );
 $my_query = new WP_Query($query);
 $posts_without_thumbs = $my_query->post_count;
 echo "Posts without thumbnail_id: " . $posts_without_thumbs . "\n\r";

 $counter = 0;
 while(( $my_query->have_posts() ) and ($counter < $limit)) {
 $my_query->the_post();
 echo "\n\r\n\r";
 echo "Post ID: " . $my_query->post->ID . "\n\r";
 echo "Post Title: " . $my_query->post->post_title . "\n\r";
 $content = $my_query->post->post_content;
 $sub="";
 $video_id="";
 $sub_image=false;
 $sub_changed=false;
 $sub_youtube=false;

 if ( strpos($content, 'img src') !== false ) {
 $re = "/<img.*?src='([^\"]*)'.*\/>/i";
 preg_match_all($re, $content, $matches);
 $sub = $matches[1][0];
 echo "Image URL: " . $sub . "\n\r";
 $sub_image=true;
 if ( substr( $sub,0,11 ) === "/wp-content" ) {
 $sub = get_site_url() . $sub;
 echo "Real URL: " . $sub . "\n\r";
 $sub_changed=true;
 }
 }

 if ( $sub == "" ) {
 if (strpos($content, 'youtube') !== false) {
 $re = "/\[[{embed\}]]([^\"]*)\[\/embed\]/i";
 preg_match_all($re, $content, $matches);
 $sub = $matches[1][0];
 echo "Youtube Detected. URL: " . $sub . "\n\r";
 $re = "/watch\?v=([^\"]*)\[\/embed\]/i";
 preg_match_all($re, $content, $matches);
 $video_id = $matches[1][0];
 $sub = "http://img.youtube.com/vi/". $video_id . "/hqdefault.jpg";
 echo "Youtube Thumbnail: " . $sub . "\n\r";
 $sub_youtube = true;
 }
 }

 echo "Content: " . $content . "\n\r";

 if ( ($sub_image === true) || ($sub_youtube === true) ) {
echo "Image or Youtube detected!\n\r";
 if ($sub_image === true) {
echo "Image!!\n\r";
 $media = media_sideload_image($sub, $my_query->post->ID, $my_query->post->post_title);
 } elseif ($sub_youtube === true) {
echo "Youtube!!\n\r";
 // Download file to temp location
 $tmp = download_url( $sub );
 // Set variables for storage
 // fix file filename for query strings
 preg_match('/[^\?]+\.(jpg|JPG|jpe|JPE|jpeg|JPEG|gif|GIF|png|PNG)/', $sub, $matches);
// $file_array['name'] = basename($matches[0]);
 $file_array['name'] = $video_id . ".jpg";
 $file_array['tmp_name'] = $tmp;
 // If error storing temporarily, unlink
 if ( is_wp_error( $tmp ) ) {
 @unlink($file_array['tmp_name']);
 $file_array['tmp_name'] = '';
 }
 // do the validation and storage stuff
 $media = media_handle_sideload( $file_array, $my_query->post->ID, "YouTube: " . $my_query->post->post_title );
 // If error storing permanently, unlink
 if ( is_wp_error($media) ) {@unlink($file_array['tmp_name']);}
 }

 if(!empty($media) && !is_wp_error($media)){
 echo "File Downloaded!\n\r";
 $args = array(
 'post_type' => 'attachment',
 'posts_per_page' => 1,
 'post_status' => 'any',
 'post_parent' => $my_query->post->ID,
 );

 $attachments = new WP_Query($args);
 while( $attachments->have_posts() ) {
 $attachments->the_post();
 echo "Attachment ID: " . $attachments->post->ID . "\n\r";
 set_post_thumbnail($my_query->post->ID, $attachments->post->ID);
 if ($sub_image) {
 $atturl = wp_get_attachment_url($attachments->post->ID);
 $atturl = preg_replace("(^https?:)", "", $atturl );
 echo "Attachment URL: " . $atturl . "\n\r";
 if ($sub_changed) {
 $sub = str_replace(get_site_url(), "", $sub);
 }
 $newcontent = str_replace($sub,$atturl,$content);

 if ($newcontent != $content){
 $update_post = array(
 'ID' => $my_query->post->ID,
 'post_content' => $newcontent,
 );
 wp_update_post($update_post);
 }
 $updated++;
 break;
 }
 }
 echo "\n\r";
 }
 }
 $counter++;
 }
 echo "With Thumbs: " . $posts_with_thumbs . " and " . $posts_without_thumbs . " Without.\n\r";
 echo "Updated: " . $updated . " of " . $counter . "\n\r";
?>

BE WARNED: this is designed to make changes to your WordPress posts. The usual advice about taking backups is a MUST, both of the database and your WordPress www folder.

  • The $limit is set low; you’ll need to adjust it as needed.
  • When this was originally pasted, the [embed] part of the regex had to be slightly mangled because WordPress decided I was trying to embed something; the script above is shown with the real [embed] and [/embed] tags.

  • I’ve tried to keep it as generic as possible, so there shouldn’t be any mention of my site in forced links or searches. When it searches for an image, it also looks to see if it’s local to the site (using /wp-content instead of a full URL); it should get around this using get_site_url(), but you could force your domain or a different path here if needed.
  • It also looks for any YouTube content in the post and tries to pull the relevant image. I just wish it would pull one with a play button on it (I don’t want to use CSS to get around this).
  • It handles the YouTube download differently (this was made over a few days, fixing problems as they came up), but I think it’s needed to give each image a unique filename.
  • You will have to run it several times, probably increasing the limit each time. On the 2nd run it again checks all the previous posts that it couldn’t get an image for, so you should end up with 0 updated of x, where x is probably all your posts without an image. I’ll be updating it to use an image (or a random one of a few) relevant to the site so everything has a featured image.

WordPress Posts without Featured Thumbnail Count

I have another script I’ve been building up to download the first embedded image in a post, add it to the media library, attach it to the post and set it as the featured image.

Here’s some quick code to query how many posts do and do not have a featured image

<?php
 if( php_sapi_name() !== 'cli' ) {
 die("Meant to be run from command line");
 }

 function find_wordpress_base_path() {
 $dir = dirname(__FILE__);
 do {
 //it is possible to check for other files here
 if( file_exists($dir."/wp-config.php") ) {
 return $dir;
 }
 } while( $dir = realpath("$dir/..") );
 return null;
 }

 define( 'BASE_PATH', find_wordpress_base_path()."/" );
 define('WP_USE_THEMES', false);
 global $wp, $wp_query, $wp_the_query, $wp_rewrite, $wp_did_header;
 require(BASE_PATH . 'wp-load.php');
 echo "Base: " . find_wordpress_base_path()."/\n\r";
 echo "Upload DIR: " . wp_upload_dir()['path'] . "\n\r";
 echo "Posts: " . wp_count_posts()->publish."\n\r";

 $query = array (
 'posts_per_page' => -1,
 'post_type' => 'post',
 'meta_key' => '_thumbnail_id',
 'meta_compare' => 'EXISTS',
 );
 $my_query = new WP_Query($query);
 $posts_with_thumb = $my_query->post_count;
 echo "Posts with thumbnail_id: " . $posts_with_thumb . "\n\r";

 $query = array (
 'posts_per_page' => -1,
 'post_type' => 'post',
 'meta_key' => '_thumbnail_id',
 'meta_compare' => 'NOT EXISTS',
 );
 $my_query = new WP_Query($query);
 $posts_without_thumb = $my_query->post_count;
 echo "Posts without thumbnail_id: " . $posts_without_thumb . "\n\r";
?>

It’s made to run from the terminal, NOT as a WordPress plugin. Just place it in the root WordPress folder and run it with php <filename>

It will output something like

Base: /var/www/{folder}/
Upload DIR: /var/www/{folder}/wp-content/uploads
Posts: 8547
Posts with thumbnail_id: 6824
Posts without thumbnail_id: 1723

It took a few seconds to return. It’s probably not the best way of doing it, but it worked for what I wanted. I’d like to give credit for each bit of code, but I really have no idea where I got the different bits from.

GlusterFS woes

If you’re looking for the Gluster error ‘brick2.mount_dir not present’, jump to the end.

Time for another post 🙂

I’ve been using DigitalOcean for some time now, and I’m still tweaking my setup. One thing I really hope they sort soon is a proper private LAN between your own droplets; for now we just have to use a VPN between them.

Being responsible for a new website can give a lot of headaches, especially when you have to try to guess just how popular it will be. So about a year ago I set up a new droplet to host the new site. Testing was going well and I increased the droplet size before we launched to handle a spike. Sadly I underestimated just how busy it would get; based on the numbers I was given I think I was about right, but unfortunately those numbers were way off.

But each failure is just another learning curve 🙂 so as quickly as possible that was fixed, then the site got back to normal volume so we scaled it back down (yes, it’s a whole cost exercise, especially when you’re paying for it). Then we had the lead up to Christmas; in an attempt to not repeat the problems at launch, I changed the whole configuration so that I could (if needed) take a server down while staying partly operational. This kind of worked, and was needed when some brightspark promoted the site a day early and we hadn’t scaled back up!

Come the new year I decided it was time to seriously sort the infrastructure for the site. It now has an online store and it’s important it keeps running; it’s not just a blog anymore. So I put in place the following setup (working around various obstacles).

DNS:

  • All websites’ name servers are pointing to CloudFlare and they handle the first web connection. It works really well on their free tier, and changes (adding new servers) are pretty quick to take effect.

DigitalOcean Droplets:

  • 1x Server running as a load balancer.
  • 2x Servers running as webservers.
  • 1x Server running as database server.
  • 1x Server running as email (not quite running).
  • At the same time as making this setup I decided to ditch Apache and move to nginx, so the load balancer and webservers are running that.

Software:

  • 4x Nginx (loadbalancer and webservers, and installed on database server for stats).
  • 2x Syncthing (webservers) to keep the www folders in sync.
  • 1x MySQL Server.
  • 4x OpenVPN (connections between loadbalancer and webservers, webservers and database).
  • 1x Redis Server (for session data; I tried nginx load balancing options but it still screwed up if I had to take one of the web servers out for maintenance, so I installed Redis on the database server).

As this progressed I dropped the VPN between the load balancer and webservers and just use HTTPS/SSL instead. Syncthing already has its own SSL built in, so I could leave that over the semi-private LAN. But I really would like to change MySQL to be encrypted and drop the VPN from that too; info on doing this for WordPress seems to be non-existent at the moment.

Roll forward a few months: this has been working but still has areas to resolve. Such as Syncthing: yes, it keeps the folders in sync, and it’s actually really handy that I can also store them on another system easily, but it doesn’t listen to the OS for changes to files; instead it polls every x seconds. Although there’s not much changing, updating plugins became a problem: if you click the update button it downloads, but then nginx sends your next request to another server and now the plugin.zip isn’t there, so WordPress throws an error.

My whole reason for running Syncthing was I wanted the files to be available on each server independently. So if server A goes down it doesn’t matter, server B has all the files locally anyway. NFS would still give me a single point of failure. On looking into resolving this, though, I remembered GlusterFS. I’d played with it a long time ago but dropped it as a solution (can’t remember what I was doing or why it wasn’t working). Now it’s time to try it again. Downside: I’m back to needing VPNs, and OpenVPN isn’t the easiest to quickly add a new server to.

So I’ve done the following:

  • Added a new server just for the files (I don’t like Gluster being in a 2-replica setup in case there’s a problem; there should be a majority who think they are holding the correct file).
  • Swapped out OpenVPN for Tinc. I have to say, one of the best decisions. Yes, there are downsides: it creates a mesh (only doable with OpenVPN by running quagga and manually forcing routes), so I have no idea which server is actually connected to which servers. There’s no VPN status and I can’t see how much traffic has gone between 2 particular servers (iptables helps, but it’s not 100%).
  • Added another new server for Nagios and central logging.

There were a load of changes within a few weeks of each other, but I now have a setup I’m confident I can scale more quickly than ever before. Yes, it has single points of failure (load balancer, MySQL), but I know if the load balancer has a problem it’s pretty static, so it can be wiped and redeployed quickly, and it would only take a few minutes to open the webservers to the world and let CloudFlare hit them directly. So MySQL is the real problem, and I’ll be addressing that one soon enough.

 

So now onto today’s problem 🙂

I’ve had Gluster running a few weeks, and I have our testing website (for theme changes etc.) set up on our webservers behind the load balancer. The last few days I’ve needed to do more extensive testing than just changing bits in a theme, so I’ve decided to split the tester site onto its own droplet (still behind the load balancer and with a VPN to the databases). I thought I may as well make use of Gluster here too (yes, it would be in a 2-replica setup of itself and the fileserver; I don’t like that idea). So I brought up a new server and configured it: new users, firewall rules, Tinc, nginx, PHP, etc.

I added Gluster and copied the /etc/hosts entries over from the other servers. All looked good. I ran gluster peer probe ServerX and it worked; gluster peer status and I could see it fine. But on trying to add a new volume:

gluster volume create xxx-yyy-zzz replica 2 transport tcp FILESERVER:/GLUSTER/xxx.yyy-zzz TESTSERVER:/GLUSTER/xxx.yyy-zzz force

I was getting the error:

volume create: xxx-yyy-zzz: failed: Commit failed on localhost. Please check the log file for more details.

Checking the logs on both servers would show (maybe slight variation):

[2015-07-28 16:00:41.612907] E [glusterd-hooks.c:328:glusterd_hooks_run_hooks] 0-management: Failed to open dir /var/lib/glusterd/hooks/1/create/pre, due to No such file or directory
[2015-07-28 16:00:41.614499] E [glusterd-volume-ops.c:1811:glusterd_op_create_volume] 0-management: brick2.mount_dir not present
[2015-07-28 16:00:41.614587] E [glusterd-syncop.c:1288:gd_commit_op_phase] 0-management: Commit of operation 'Volume Create' failed on localhost

I tried a series of things to fix it:

  • I thought maybe the /GLUSTER/xxx-yyy-zzz directory needed to be created (I already made /GLUSTER) – Nope.

  • I detached the peer and reattached – No.
  • I reboot the file server and test server – No.
  • I detached, reboot, reattached – No.
  • I tried creating the volume with just the test server and no replica – No.
  • I tried creating the volume on just the fileserver with no replica – Yes.

So the problem points to the new system, but it’s a brand new system. They’re peers and connected.

  • I tried uninstalling and reinstalling gluster – No.
  • I tried uninstalling, purging and reinstalling – No.
  • I tried uninstalling, purging, manually deleting the /var/lib/gluster (probably a mistake that I didn’t detach first :() and reinstalling – No.
  • I have no idea why this won’t WORK!!!!!

Let’s go further back, check the VPN, ping the servers.

  • Ping fileserver from testserver – Yes.
  • Ping testserver from fileserver – Yes. Hang on, that’s the wrong IP!! Yes, I’d copied an entry from webserverB into /etc/hosts, updated the name but missed the IP address. Idiot! Corrected that. Ping – OK.
  • Try gluster again – Yes.

So if you’re having problems and seeing brickX.mount_dir not present, make sure your name resolution (DNS or /etc/hosts) between servers is correct.

I don’t really know how the peer probe worked, but I think I must have done that from a server whose hosts file was correct.

Today’s WordPress Adventures

Well, it had to happen at some point. Today I had a nice email from DigitalOcean saying they’ve disabled networking on one of my servers. This was because its IP address had been reported to RBLs by several other servers.

Looking at the logs they included, I was beating the s**t out of other people’s wp-admin login pages. Now I know I wasn’t doing it personally; it was the first time in a long time I was in bed early, and this seemed to start at 2am.

Luckily I could access my droplet using the Console page, so after logging in I sat thinking ‘um…..’ Where exactly do you start? The server normally has quite a bit of traffic so the logs are always cluttered. Needle in a haystack springs to mind.

I decided to run htop and see if the server was doing much without any traffic coming in. Oh yes, /usr/bin/host is eating resources. So do I kill it or not? I decided not to at this point. Without networking I’m not doing any more harm, and leaving it running may help find out what’s calling it.

It was a good call. I can’t give details of everything I did; I spent a few hours checking through stuff. I do remember checking lsof and finding a link between a process ID for host and a file within wpallimport’s uploads directory. So I had a look in there, followed by some further searching of Google. One file in particular, .sd0, seems to bring back results, and this seems to be what caused it.

To get my server running again, I disabled the entire affected site within Apache (luckily not a major site) and rebooted the server. Once I was happy there were no cronjobs or anything calling this script, I emailed DigitalOcean and asked them to re-enable networking. They’re pretty speedy, and within 15 mins they had done it. A further reboot and my server was back online, minus the one site I’ve disabled.

I expect the cleanup for this is going to take weeks of checking files against backups while keeping as much as possible online.

I’m pretty confident I know what caused it: an out of date WordPress install with an out of date wpallimport install. It really goes to show that you have to check old stuff and keep it up to date.

The most annoying thing for me is that WordPress has a multisite option (which I use on 2 installs), and this allows me to keep plugins and everything up to date easily across sub-sites that are barely used, but it doesn’t extend to multiple domains, which would really allow WordPress to be used across all my domains from one central console, and then everything would be kept up to date in one go.

I know there’s a plugin for multisite domains, but I feel this is more of a hack of the WordPress system than WordPress properly designed to function with this in mind. I don’t want to install it and encounter many more problems.

It’s very bad admin on my part not having kept this site up to date; I’ll be the first to admit that, but it’s easy to forget about installs you don’t use regularly. There must be some kind of Nagios plugin to alert me to out of date plugins/versions for WordPress, so I’ll be looking for that later in the week 🙂