New to Ansible

So if you read my last post (sorry, it was really long), you’ll have seen the current deployment right at the end. I had tried a few configuration managers to deploy/scale the whole system, but they really overcomplicated everything. Chef looked really good (I can’t remember the other one), but it was problematic and just didn’t suit.

Instead I stuck with the scripts I had written, for the time being. They are in no way good enough to share as they are very customised to my setup, but they achieve what I need. However, running them takes quite a bit of initial manual work.

So what do I need from a system?

  1. It has to just work, without pulling in a pile of its own dependencies to run.
  2. It has to be able to split the setup into an initial and a running level.
  3. It has to be easy to tell it about a new server and what role it will be, then do the initial setup and move on to the relevant running level.
  4. It MUST be simple to use and understand. I’m getting a bit sick of having to read through how to configure weird stuff because someone decided to do things completely differently to how you’d expect them to work.
  5. It must have very low resource requirements. I want to run this from a management droplet that already runs nagios, so it can’t sit there eating resources while it has nothing to do.

I saw a very quick video of someone using Ansible with Raspberry Pis (something else I have far too many of) and thought right away I should look into it. So here I am. I’m going to do some testing with Ansible.
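
Before touching the real servers, the first test I have in mind is just proving Ansible can reach a droplet at all. A rough sketch (the inventory file and host name below are placeholders, not my real setup):

cat > hosts <<'EOF'
[web]
web1.example.com
EOF

ansible all -i hosts -m ping                                       # checks SSH + Python on each host
ansible all -i hosts -m setup -a 'filter=ansible_distribution*'    # pull a few facts as a sanity check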

Initially all I’m looking for it to do is handle the initial setup that I already have scripts for.

  1. Create a new user, add it to the sudo group, set a password and copy the SSH keys.
  2. Reconfigure sshd to deny root connections.
  3. Set the server’s timezone.
  4. Set the keypad option in nanorc.
  5. Set the .bashrc for root to use a red prompt.
  6. Install some basic packages such as screen and htop.
  7. Create a swapfile.
  8. Install and configure NTP.
  9. Set up iptables (I don’t use UFW; I prefer to deploy an iptables rules file and have it restored when the loopback interface comes up, as sketched below).
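
In case that iptables approach is unfamiliar, here’s a minimal sketch of what I mean (the rules file path is just an example):

# Save the current ruleset once it's how you want it
iptables-save > /etc/iptables.rules

# Then in /etc/network/interfaces, restore it whenever lo is brought up:
#   auto lo
#   iface lo inet loopback
#       pre-up iptables-restore < /etc/iptables.rules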

I’ve had a quick look on the DigitalOcean community (I love the resources there), but the stuff about using Ansible seems a little more “throw it all in one file” rather than properly splitting things out like I saw in the video. I think splitting out each task is a must for understanding what’s going on and making changes.

The video I’m referring to is https://www.youtube.com/watch?v=ZNB1at8mJWY

So that’s the start. Let’s get going and see what I can screw up.

Part 1 is Here

How I started using WordPress

I should make it clear from the outset, this post isn’t going to be solving anything. I’ve spent about 3 days working on stuff and this is just bugging me.

Let’s go back to around this time last year. A couple of friends and I were working out how to get some money back from a facebook page one of them had set up a few years prior. I had been working with him on it since a few weeks after he set it up, and we’d pushed around ideas to sell some merchandise alongside it a few times, but never really got anywhere.

He had made a rash decision one night to use an ‘online website creation’ provider to get a site online. From the start I hated it. Not the idea, I think it was about time to get our own site running, but he spent a few weeks tweaking about 8 pages to look really good using their WYSIWYG editor, then wanted me to change a few things in the code that weren’t right. It was an absolute nightmare! I can’t remember the name of the site but I think it had a W, E and X somewhere in the title.

It was a paid “solution”, and I think it cost about £40 per month by the time he’d added a mailing list option to capture email addresses (not actually handle any emails) and a few other bits.

Coming from more of an IT position, my main concern was how load/spikes would be handled. There was very little information about how well they could cope with this (and I think we later found out why). We finally put it live, posting the link to about 40k followers.

Watching the nice google analytics (which I had to add to each page, because they didn’t have a drop-in tracking code option), within seconds we hit around 150 hits per second. This continued for a few hours, but 2 problems became apparent:

  1. The site was struggling, and we would probably have been hitting a higher number otherwise. We were getting positive feedback and people understood it was busy, but I still wasn’t happy: we were paying for them to handle this and they just weren’t.
  2. And this really ties into point 1: he’d set up a site that was pretty static! There was nothing to keep people coming back for more. Yes, there was a news page that we could update, but other than the mailing list form there was nothing interactive. (So, back to point 1, static content really should have been handled 100x better.)

Anyway we took that for what it was, a basic site with a bit of info and something to get us started.

We already had a ‘Shop coming soon’ page, so the next thing we were trying to figure out was what are we going to sell and how?

Initially we thought t-shirts, and started looking at some of the ways we could do this. Two main providers seemed to jump out, zazzle and cottonpress (I think, it was a while ago). While both had some good offerings, neither really grabbed us. I can’t remember which, but one of them deleted an image we uploaded for copyright reasons (it was our logo, we had it plastered everywhere, and the account was signed up with a domain name using the same logo), and they wanted us to fill in and snail-mail/fax some copyright forms and re-upload the logo. Considering we were only seeing what we could do at the time, we decided to drop them as an option. If we had to jump through these hoops with everything we did, we’d spend more time filling in their paperwork than anything else.

Time went on, visits to the site died down (did I mention lack of content/interactivity), and we still hadn’t sorted out products, a store, a business.

We continued with the facebook page and still poked around ideas on how to get a site/shop running. I spent a good few weeks working with oscommerce (I’d previously used it a little for another project idea, but it never went live) and finally had something to show: a semi-working shop front (it had no products).

We discussed that neither of us really had a clue about setting up a proper business. I’m all IT and have no interest in writing business plans or doing business meetings. (I should mention that in a previous role I was an IT Manager and regularly had to be part of “grown up” meetings. I’m a tech: I hate people, I hate meetings; give me something broken and I’ll fix it, give me a problem and I’ll work out a solution. But in NO WAY do I want to be taking part in any more business meetings.)

A few months later I was helping him move. Another friend of his was also helping; I’d met him once before but didn’t get chatting then. He mentioned he was in his last year of university and was studying business and finance! Just how he hadn’t thought of this before I don’t know, but instantly we knew he was coming on board 🙂

We spent about 3 hours in McDonalds discussing what we had, what we’d like to do, and just how crap we’d been so far. Within this conversation we talked about selling t-shirts/mugs/bags etc. Just like a genie out of a lamp, our 3rd comes out with ‘oh, my almost father-in-law does printing stuff, I know he does mugs. Shall I speak to him?’ Like a match made in heaven, we suddenly had our missing piece: someone who should have more of an idea on the business side (or at least know someone to ask) and connections to a printer for the kind of stuff we wanted to sell. You just couldn’t make it up; he’d known this guy for a few years and never thought to ask him about business stuff.

Things started moving forward, slowly at first, but at least they were moving. We met up with the “almost” father-in-law and went through some designs and processes. We set up a business. I continued work on the shop website and we took down the other site, which was costing too much and not really doing anything.

By around October we were set. Nothing spectacular: about 9 mugs and a few t-shirts. The mugs would be the easiest: we just send the order to “almost” and he takes care of printing, packaging and sending. The t-shirts would be a little different, as we’d have to get a template made for them and couldn’t afford the cost until we had some orders in.

We launched. I had tried to over-spec the server(s), but in itself this was tricky. There were no real stats on how well oscommerce could perform on certain hardware, and scalable VPSs such as DigitalOcean’s current system just didn’t exist. Scaling would mean taking on a new 30-day server and moving everything over to it. Certainly not a 5-minute job, and definitely not something to start an hour after we’d launched. We’d just have to bite the bullet and see.

My memory of launch night is fuzzy to say the least. I think I’d been working 36-48 hours trying to finish stuff off. I had a big list of checks and can’t remember doing half of them.

Our page audience had grown to about 70k, so I was very nervous. We launched the shop and watched. Ping, email: it’s an order. Ping, another. Ping, another. It was working. I have to say the server(s) held up pretty well. It wasn’t without problems; we did start seeing the site timing out on new connections for about 10 mins, but a swift kick of apache sorted that and it didn’t cause a problem again.

Finally we were running. The feedback again was good. We’d had a bunch of concerns, like:

  1. Will the system work?
  2. Will the server(s) hold up?
  3. What happens if it goes mental and we sell a thousand mugs?
  4. Can we really do this?

I think all in all it went well. We could have done better, but it also could have been a lot worse. We ran with oscommerce for a few months. Shortly after launching the shop I had a discussion on just how we were going to get facebook and the shop incorporated. There was no obvious answer; then we hit a problem. One of the posts to facebook got reported (it’s a humorous page, and we only ever post stuff sent in to us), and this showed us just how reliant on facebook we were. Suddenly we were all logged out and the poster was blocked for 24 hours. Luckily facebook pretty much left the page alone (just deleted the one post), so we played on it that one of our admins was in the dog house for an earlier post. But it still didn’t take long to realise that if facebook wanted, they could delete the page on a whim and we’d suddenly lose all our content and fans!

This just didn’t sit right with me, and I started looking at how the hell to get a backup of OUR page and content. There was nothing. So I started looking at how we could do things differently. Enter WORDPRESS.

I’d seen the name floating about for the last few months, but never really saw the point in using it. I don’t blog, we don’t blog, so what’s the point? (I’m still not entirely sure I understand the point.) But it’s close to one of the best things I’ve spent weeks fiddling with.

I’d installed wordpress on our VPS for me to have a look around. It still wasn’t a site that we could really use, but as a CMS maybe I could find a way to connect to facebook and back up our stuff. There must be people who do this, right? WRONG. There are loads of plugins for wordpress & facebook, but I’ve only ever found 1 that takes your page and turns it into posts. To make matters worse, it’s flaky as f**k, hadn’t been worked on in god knows how long, and the very few comments in the code are in Chinese.

Now I would never describe myself as a coder. I’ve used Delphi and VB for writing some functional programs in the past, and had to write a few in VB.net when I was an IT Manager (the old problem/solution thing). I could also write some ASP and PHP; really, most of my stuff was dragging and dropping boxes and programming them up. I did quite a bit of database work within them, but that was it. There was absolutely no such thing as using classes (I don’t even think they existed). But as part of my job I had a dev team who developed in PHP and VB.net, and they were always amazed, when trying to tell me how something couldn’t work, that I could not only follow along but tell them why they were wrong, and on several occasions when something broke I could actually read their code and work out (normally the simple thing) a temporary fix.

And so it begins: I now have no dev team, just a bunch of PHP code and classes that really didn’t make much sense to me. Bit by bit I managed to work out what each bit was doing, then moved on to changing it so that it would run for us. I know it will seem like simple stuff (especially looking back), but things like:

  • Changing a hard-coded loop that only pulled 10 facebook posts to take a limit from a setup interface where you can specify how many to pull.
  • Adding in date ranges to pull from and to.
  • Improving the cron job, so it pulls from the time of the latest post it already has + 1 second.
  • Downloading any attached image and saving it to the server (huge accomplishment).
  • Changing the post content that gets published and updating links back to the post it has just created.

There’s loads of stuff I’ve had to do to this plugin to get it working, let alone better and working for us. Eventually I’d finished (you’re never really finished, I have a list of new changes to get to sometime). After running it on a fresh wordpress install, we suddenly had a complete backup of our facebook page, around 9k posts and images, all sat in wordpress, and what’s more, automatically grabbing new stuff 🙂

I showed off my new achievement. Personally I don’t think it was appreciated just how much time and effort I had put into this, but it went down really well. We now had a blog! A blog alongside our store, and it was really starting to come together.

Over the next few weeks I kept working on improving the blog while managing the shop. Then suddenly, a new disaster: our server had for some reason gone offline. Trying to connect via the backup terminal access just gave me a blank screen; something was wrong and I couldn’t get access to see what. To make matters worse, our provider had very nicely decided to cut back on its 24/7 support and now only operated 8-8. At around midnight, it’s not exactly what you want to be finding out that the support has changed and no-one bothered to tell you. The ONLY thing I could do was email them and hope someone picked it up soon. They didn’t! I spent a good few hours trying everything I could think of to get hold of someone or find a way to the console, but nothing.

This had the effect of making me sleep through my alarm at 8am, but I woke at 9am and called them. After a few choice words, I was assured the tech team would look at it right away. I was so tired I fell back to sleep and woke again about midday. The first thing I did was open our site, or I should say attempted to! It was still down!! Another call, more choice words, and me advising them I wasn’t going anywhere and that if they cut me off I’d just keep calling until I could speak to a “Tech”. Explaining the problem and what I had tried to the tier 1 monkey quickly got me escalated. I couldn’t stop laughing when I finally did get to tier 2: their tier 1 had placed me on hold to get someone, then come back to me and said ‘I need you to take this call, this bloke knows what the f**k he’s on about, I’d put him to tier 3 but I can’t direct transfer’, to which I replied ‘Yes, and I know how to work a phone system. Line 1 is the customer, you should be on Line 2 for that conversation’ 🙂 I have to be honest, just that mistake made my day. Tier 1, tier 2 and the manager that called me back an hour later were mortified, but as I explained to him, I’ve been a senior tech on phone support and I’ve been an IT manager; I’m guessing I hit a newish person and scared the crap out of them. All I cared about was getting this back online. To be fair to tier 2, I was connected to the console while he was apologising. (This part really could have had its own post.)

Anyway, getting over that failure, I started looking for another VPS provider. I had no problem with their VPS and generally it was a very stable system, but 8-8 support with no out-of-hours “we’re really f**ked” option forced my hand. It had gone down very badly with the others that this had cost us money, and there was no way I could argue it as I agreed the situation was crap.

I found another provider and started moving stuff, but it just wasn’t right. It was actually a previous colleague’s company, but something just wasn’t right, so I kept looking. Then I found DigitalOcean. Initially I started using them to test some wordpress plugins, but I loved that I could bring up new servers in a matter of minutes. This surely had to be better than waiting hours. And it was. Testing was going well, so I started moving everything over. Everything just worked, and where I had to contact their support for a few little things (1 account related, I can’t remember the others), I had a reply very quickly, sometimes within minutes, other times within 30 mins. I couldn’t fault their support and I wasn’t bounced around; they knew exactly what I needed and sorted it.

So here we have a medium-spec’d DigitalOcean server, running our WordPress and OsCommerce solutions and handling both pretty well.

But being one to never settle, I kept tweaking stuff and looking at our options. I set up another server (droplet) for testing; another wordpress install later and I was trying out the ecommerce plugins. I was blown away with WooCommerce! Yes, OsCommerce worked for us, and yes, I had put in quite a bit of time customising it and getting it to work with our processes, but the whole feel of the interface was crap. WooCommerce was like a breath of fresh air. It had a bunch of functionality, there are loads more plugins, it’s far easier to customise, it works from the wordpress themes, and it fits right in with our blog without looking disjointed.

I proposed we move over to this and it went down well. Well enough in fact that the others wanted to get more involved. We spent weeks working on changes to the theme (that we’d paid about $50 for), then I moved the shop over and made it live without telling our facebook audience. We started getting some sales via WooCommerce, and it was obvious that it just integrated well.

We were going to have a relaunch to show off the new blog and store. I think I managed to p*ss the others off when WordPress brought in a new standard theme that worked even better with WooCommerce and I changed to it to show them. It was obvious that it did, and that we should stick with it, but it also meant the last few weeks of customising were wiped out (and they still bring up the time I wiped out a few weeks’ work when I changed the theme).

I would never say wordpress/woocommerce is perfect. I’ve found many issues along the way and had to find workarounds for a lot of stuff. I still don’t truly feel like I know what I’m doing, and there’s no way that we use wordpress to its full potential. Currently we have the blog and shop running, with somewhere in the region of 10k posts and around 15k sales. We still don’t publish to the blog independently of facebook/twitter, but it’s on the roadmap.

One thing that has caught us out a few times is DigitalOcean scaling. Because we very often have little traffic, I always keep the servers scaled down with the intention of boosting them up before we push anything new. On at least 2 occasions, we’ve forgotten this and overloaded our site.

I’ve also gone through a few different configurations just trying to find the best solution.

1st: We had 1 server that was mid-range and just worked, but I knew this alone wouldn’t handle the traffic.

2nd: I brought up 2 web servers and a database server. This wasn’t an ideal setup: load balancing was at DNS level, syncing was done via cron jobs, and the whole thing was held together via a VPN to keep database connections secure. It had a bunch of problems.

Next I moved back to a single web server but kept a separate database. This was better, and it was around the time DigitalOcean made it easier to scale up (but not down; you still had to wipe out the server to do that).

Because having a single web server just wasn’t enough, I went back to 2, but added the new(ish) Cloudflare CDN in front of the servers. This really helped (though I’m still not convinced it really does CDN for us).

As part of the above, I tried incorporating GlusterFS (absolute disaster). From every web search I did, GlusterFS looks to be THE solution. In practice, for us, it took a website responding (with some heavy graphics) in 3 seconds at worst and 2 seconds on average, to 30 seconds at worst and 18 seconds on average. I know everyone raves on about how great it is and how, if it’s slow, it’s something you’ve done. I don’t believe this for a second. I’ve spent days at a time trying to make it better, but the simple truth is that if the files are pulled locally I get the 2/3 secs above. When using a Gluster mount point to the files (which are still actually local; Gluster runs on both web servers, mounted back to themselves), I get the 18-30 secs. Both web servers have a private LAN connection for gluster in the same DigitalOcean location, and NO amount of tweaking or testing seems to ever really improve this. It was only made worse during testing when I took down one of the servers, so that the other could only use itself to serve the files: this managed to take out the mount point until I restarted it, and even then it still served up the pages slowly. I thought the whole point in using Gluster (at least for me) was HA, no single point of failure. Having both servers offline if one goes down does not seem very HA to me.
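
If you want to reproduce that sort of comparison yourself, curl’s timing output against the same page served locally and then via the Gluster mount is enough (the URL is a placeholder):

for i in 1 2 3 4 5; do
    curl -s -o /dev/null -w '%{time_total}\n' https://www.example.com/some-heavy-page/
done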

The ONE thing I really want DigitalOcean to sort out is their private LAN. In order to solve the issue of anyone else on the private LAN being able to see my traffic between servers, I’ve had to use VPNs between them. This adds complications to the entire setup, and a private LAN per account would be very welcome.

The setup I’m currently in the middle of deploying is:

a) Cloudflare

b) 2x Nginx load-balancing proxies (these also serve up maintenance pages if they can’t connect to a backend)

c) 2x Nginx backend servers

d) 2x MySQL+Redis servers

e) 2x NFS servers

I’m happy with the load balancers (a rough sketch of the proxy side is below), though I would love for DigitalOcean to offer a proper load-balancing solution.
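
For anyone curious, here’s a stripped-down sketch of that kind of nginx load balancer with a maintenance fallback (IPs, hostname and paths are placeholders and SSL directives are omitted; this is not my live config):

cat > /etc/nginx/sites-available/loadbalancer <<'EOF'
upstream backend {
    server 10.0.0.11;    # backend web server 1
    server 10.0.0.12;    # backend web server 2
}

server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        # if nginx can't reach either backend it generates the 502/504 itself,
        # so error_page kicks in and the maintenance page is served instead
        error_page 502 504 = @maintenance;
    }

    location @maintenance {
        root /var/www/maintenance;
        try_files /index.html =503;
    }
}
EOF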

The MySQL servers took some config to get replication working properly while also using SSL for the connections to each other and from the backend web servers.
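
A couple of quick checks that help while getting that working (host name, user and certificate path are placeholders):

# From a backend web server: confirm the client connection really is encrypted
mysql -h db1.example.com -u wpuser -p --ssl-ca=/etc/mysql/ssl/ca-cert.pem \
      -e "SHOW STATUS LIKE 'Ssl_cipher';"

# On the replica: confirm replication is running and allowed to use SSL
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Master_SSL_Allowed'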

I still haven’t managed to configure MySQL to be HA from the web servers, so at the moment this would be a manual switch. I’ve found HyperDB for wordpress, which should resolve this, but since I had to slightly change the wordpress config to do SSL for MySQL, and HyperDB doesn’t seem to be able to use SSL, I need to work out how to do this. I find it really weird, as one of the suggestions is to have your database remote; I really would have thought being remote (especially if using something like Amazon for the database) would mean you’d want to use SSL to keep your database traffic secure. It seems strange that this isn’t a fundamental option in HyperDB (unless I’m just not seeing it).

And the last part, the NFS servers: I still need to work out how to keep these in sync (without using Gluster). I’ve previously used Syncthing to keep servers in sync; it works, but is pretty much held together with tape (my configuration of it, not the actual program). Once I have the NFS servers synced, I also need to find a way for the web servers to use both of them, for HA.

I do feel like this configuration is the closest to the best I can achieve on a budget. Once I have the MySQL and NFS stuff worked out, I will then be able to scale any server without completely taking the site offline, which will really help in being able to deal with spikes. It is now much easier to scale with DigitalOcean, but I’d still really want to know that doing so, or taking a server out for maintenance, is fine because everything will just keep running.

If you’ve got this far, I really thank you for reading. I hope the next couple of posts will be my solutions to the MySQL SSL and NFS problems. It’s now 2:41am and I think I’ve been writing this for about 2 hours, so I’m going to sleep 🙂 Leave a comment if you got this far, and include the words ‘sleep deprived’ so I don’t think it’s spam.

Setting Featured image on WordPress Posts in BULK

I’m in the middle of changing another site to a new theme. The problem being that the front page uses the featured image from each post to build up the display, and none of the posts have a featured image set. The site uses a facebooktowordpress plugin (heavily customised; the original no longer pulls the images, and really couldn’t handle pulling what was needed).

With this plugin, each post on facebook is posted to the blog, the first image on the post (this needs changing to all images) is downloaded to the wordpress server, and the source of the post is updated to use the local image. But at no point did I ever anticipate needing featured images to be set.

So here’s the code I’ve put together to:

  • Pull a list of posts without a thumbnail/featured image set.
  • Pull the contents of each post in the list and look for img src
  • Download the image
  • Add the image to the media library against the current post
  • Pull the ID of the attachment and set it as the featured image

This is meant again to run from the command line NOT as a plugin.

<?php
 $counter=0;
 $limit = 20;
 $updated=0;

 if( php_sapi_name() !== 'cli' ) {
 die("Meant to be run from command line");
 }

 function find_wordpress_base_path() {
 $dir = dirname(__FILE__);
 do {
 //it is possible to check for other files here
 if( file_exists($dir."/wp-config.php") ) {
 return $dir;
 }
 } while( $dir = realpath("$dir/..") );
 return null;
 }

 define( 'BASE_PATH', find_wordpress_base_path()."/" );
 define('WP_USE_THEMES', false);
 global $wp, $wp_query, $wp_the_query, $wp_rewrite, $wp_did_header;
 require(BASE_PATH . 'wp-load.php');
 echo "Site URL: " . get_site_url() . "\n\r";
 echo "Base: " . find_wordpress_base_path()."/\n\r";
 echo "Upload DIR: " . wp_upload_dir()['path'] . "\n\r";
 echo "Posts: " . wp_count_posts()->publish."\n\r";

 $query = array (
 'posts_per_page' => -1,
 'post_type' => 'post',
 'meta_key' => '_thumbnail_id',
 'meta_compare' => 'EXISTS',
 );
 $my_query = new WP_Query($query);
 $posts_with_thumbs = $my_query->post_count;
 echo "Posts with thumbnail_id: " . $posts_with_thumbs . "\n\r";

 $query = array (
 'posts_per_page' => -1,
 'post_type' => 'post',
 'meta_key' => '_thumbnail_id',
 'meta_compare' => 'NOT EXISTS',
 );
 $my_query = new WP_Query($query);
 $posts_without_thumbs = $my_query->post_count;
 echo "Posts without thumbnail_id: " . $posts_without_thumbs . "\n\r";

 $counter = 0;
 while(( $my_query->have_posts() ) and ($counter < $limit)) {
 $my_query->the_post();
 echo "\n\r\n\r";
 echo "Post ID: " . $my_query->post->ID . "\n\r";
 echo "Post Title: " . $my_query->post->post_title . "\n\r";
 $content = $my_query->post->post_content;
 $sub="";
 $video_id="";
 $sub_image=false;
 $sub_changed=false;
 $sub_youtube=false;

 if ( strpos($content, 'img src') !== false ) {
 $re = "/<img.*?src='([^\"]*)'.*\/>/i";
 preg_match_all($re, $content, $matches);
 $sub = $matches[1][0];
 echo "Image URL: " . $sub . "\n\r";
 $sub_image=true;
 if ( substr( $sub,0,11 ) === "/wp-content" ) {
 $sub = get_site_url() . $sub;
 echo "Real URL: " . $sub . "\n\r";
 $sub_changed=true;
 }
 }

 if ( $sub == "" ) {
 if (strpos($content, 'youtube') !== false) {
 $re = "/\[[{embed\}]]([^\"]*)\[\/embed\]/i";
 preg_match_all($re, $content, $matches);
 $sub = $matches[1][0];
 echo "Youtube Detected. URL: " . $sub . "\n\r";
 $re = "/watch\?v=([^\"]*)\[\/embed\]/i";
 preg_match_all($re, $content, $matches);
 $video_id = $matches[1][0];
 $sub = "http://img.youtube.com/vi/". $video_id . "/hqdefault.jpg";
 echo "Youtube Thumbnail: " . $sub . "\n\r";
 $sub_youtube = true;
 }
 }

 echo "Content: " . $content . "\n\r";

 if ( ($sub_image === true) || ($sub_youtube === true) ) {
echo "Image or Youtube detected!\n\r";
 if ($sub_image === true) {
echo "Image!!\n\r";
 $media = media_sideload_image($sub, $my_query->post->ID, $my_query->post->post_title);
 } elseif ($sub_youtube === true) {
echo "Youtube!!\n\r";
 // Download file to temp location
 $tmp = download_url( $sub );
 // Set variables for storage
 // fix file filename for query strings
 preg_match('/[^\?]+\.(jpg|JPG|jpe|JPE|jpeg|JPEG|gif|GIF|png|PNG)/', $sub, $matches);
// $file_array['name'] = basename($matches[0]);
 $file_array['name'] = $video_id . ".jpg";
 $file_array['tmp_name'] = $tmp;
 // If error storing temporarily, unlink
 if ( is_wp_error( $tmp ) ) {
 @unlink($file_array['tmp_name']);
 $file_array['tmp_name'] = '';
 }
 // do the validation and storage stuff
 $media = media_handle_sideload( $file_array, $my_query->post->ID, "YouTube: " . $my_query->post->post_title );
 // If error storing permanently, unlink
 if ( is_wp_error($media) ) {@unlink($file_array['tmp_name']);}
 }

 if(!empty($media) && !is_wp_error($media)){
 echo "File Downloaded!\n\r";
 $args = array(
 'post_type' => 'attachment',
 'posts_per_page' => 1,
 'post_status' => 'any',
 'post_parent' => $my_query->post->ID,
 );

 $attachments = new WP_Query($args);
 while( $attachments->have_posts() ) {
 echo "Attachment ID: " . $attachments->post->ID . "\n\r";
 set_post_thumbnail($my_query->post->ID, $attachments->post->ID);
 if ($sub_image) {
 $atturl = wp_get_attachment_url($attachments->post->ID);
 $atturl = preg_replace("(^https?:)", "", $atturl );
 echo "Attachment URL: " . $atturl . "\n\r";
 if ($sub_changed) {
 $sub = str_replace(get_site_url(), "", $sub);
 }
 $newcontent = str_replace($sub,$atturl,$content);

 if ($newcontent != $content){
 $update_post = array(
 'ID' => $my_query->post->ID,
 'post_content' => $newcontent,
 );
 wp_update_post($update_post);
 }
 $updated++;
 break;
 }
 }
 echo "\n\r";
 }
 }
 $counter++;
 }
 echo "With Thumbs: " . $posts_with_thumbs . " and " . $posts_without_thumbs . " Without.\n\r";
 echo "Updated: " . $updated . " of " . $counter . "\n\r";
?>

BE WARNED: this is designed to make changes to your wordpress posts. The usual advice about taking backups is a MUST, both of the database and your wordpress www folder.

  • The “Limit” is set low, you’ll need to adjust this as needed.
  • Within the script there is
[[{embed\}]]

You will need to change this to

[ embed\ ]

WITHOUT the spaces; wordpress decided I was trying to embed something, so I can’t paste it without a slight change.

  • I’ve tried to keep it as generic as possible, so there shouldn’t be any mention of my site in forced links or searches. When it searches for an image it also looks to see if it’s local to the site using /wp-content instead of a full url, it should get around this problem using get_site_url() but you could force your domain or a different path if needed here.
  • It also looks for any youtube content in the post and tries to pull the relevant image. I just wish it would pull one with a play button on it (I don’t want to use CSS to get around this).
  • It handles the youtube download differently (this was made over a few days, fixing problems as they came up), but I think it’s needed to give each image a unique filename.
  • You will have to run it several times, probably increasing the limit each time. On the 2nd run it again checks all the previous ones that it couldn’t get an image for, so you should eventually end up with ‘0 updated of x’, where x is probably all your posts without an image. I’ll be updating it to use a/(random of a few) image relevant to the site so everything has a featured image.

WordPress Posts without Featured Thumbnail Count

I have another script I’ve been building up to download the first embedded image in a post, add it to the media library, attach it to the post and set it as the featured image.

Here’s some quick code to query how many posts do and do not have a featured image:

<?php
 if( php_sapi_name() !== 'cli' ) {
 die("Meant to be run from command line");
 }

 function find_wordpress_base_path() {
 $dir = dirname(__FILE__);
 do {
 //it is possible to check for other files here
 if( file_exists($dir."/wp-config.php") ) {
 return $dir;
 }
 } while( $dir = realpath("$dir/..") );
 return null;
 }

 define( 'BASE_PATH', find_wordpress_base_path()."/" );
 define('WP_USE_THEMES', false);
 global $wp, $wp_query, $wp_the_query, $wp_rewrite, $wp_did_header;
 require(BASE_PATH . 'wp-load.php');
 echo "Base: " . find_wordpress_base_path()."/\n\r";
 echo "Upload DIR: " . wp_upload_dir()['path'] . "\n\r";
 echo "Posts: " . wp_count_posts()->publish."\n\r";

 $query = array (
 'posts_per_page' => -1,
 'post_type' => 'post',
 'meta_key' => '_thumbnail_id',
 'meta_compare' => 'EXISTS',
 );
 $my_query = new WP_Query($query);
 $posts_with_thumb = $my_query->post_count;
 echo "Posts with thumbnail_id: " . $posts_with_thumb . "\n\r";

 $query = array (
 'posts_per_page' => -1,
 'post_type' => 'post',
 'meta_key' => '_thumbnail_id',
 'meta_compare' => 'NOT EXISTS',
 );
 $my_query = new WP_Query($query);
 $posts_without_thumb = $my_query->post_count;
 echo "Posts without thumbnail_id: " . $posts_without_thumb . "\n\r";
?>

It’s made to run from the terminal, NOT as a wordpress plugin. Just place it in the root wordpress folder and run it with php <filename>

It will output something like

Base: /var/www/{folder}/
Upload DIR: /var/www/{folder}/wp-content/uploads
Posts: 8547
Posts with thumbnail_id: 6824
Posts without thumbnail_id: 1723

It took a few seconds to return. It’s probably not the best way of doing it, but it worked for what I wanted. I’d like to give credit for each bit of code, but I really have no idea where I got the different bits from.

GlusterFS woes

If you’re looking for the gluster error ‘brick2.mount_dir not present’, jump to the end.

Time for another post 🙂

I’ve been using DigitalOcean for some time now, and I’m still tweaking my setup. One thing I really hope they sort soon is a proper private LAN between your own droplets; for now we just have to use a VPN between them.

Being responsible for a new website can give a lot of headaches, especially when you have to try to guess just how popular it will be. So about a year ago I set up a new droplet to host the new site; testing was going well and I increased the droplet size before we launched to handle a spike. Sadly I underestimated just how busy it would get. Based on the numbers I was given I think I was about right, but unfortunately those numbers were way off.

But each failure is just another learning curve 🙂 So as quickly as possible that was fixed, then the site got back to normal volume so we scaled it back down (yes, it’s a whole cost exercise, especially when you’re paying for it). Then we had the lead-up to Christmas. In an attempt not to repeat the problems at launch, I changed the whole configuration so that I could (if needed) take a server down while staying partly operational. This kind of worked, and was needed when some brightspark promoted the site a day early and we hadn’t scaled back up!

Come the new year I decided it was time to seriously sort the infrastructure for the site. It now has an online store and it’s important it keeps running; it’s not just a blog anymore. So I put in place the following setup (working around various obstacles).

DNS:

  • All the websites’ name servers are pointing to Cloudflare and they handle the first web connection. It works really well on their free tier, and changes (adding new servers) are pretty quick to take effect.

DigitalOcean Droplets:

  • 1x Server running as a load balancer.
  • 2x Servers running as webservers.
  • 1x Server running as database server.
  • 1x Server running as email (not quite running).
  • At the same time as making this setup I decided to ditch apache and move to nginx, so loadbalancer and webservers are running that.

Software:

  • 4x Nginx (loadbalancer and webservers, and installed on database server for stats).
  • 2x Syncthing (webservers) to keep the www folders in sync.
  • 1x MySQL Server.
  • 4x OpenVPN (connections between loadbalancer and webservers, webservers and database).
  • 1x Redis server (for session data; I tried nginx load-balancing options but it still screwed up if I had to take one of the web servers out for maintenance, so I installed Redis on the database server).

As this progressed I dropped the VPN between the loadbalancer and webservers and just use HTTPS/SSL instead. Syncthing already has its own SSL built in, so I could leave that over the semi-private LAN. But I really would like to change MySQL to be encrypted and drop the VPN from that too; info on doing this for wordpress seems to be non-existent at the moment.

Roll forward a few months: this has been working, but still has areas to resolve. Such as Syncthing: yes, it keeps the folders in sync, and it’s actually really good that I can also store them on another system easily, but it doesn’t listen to the OS for changes to files. Instead it polls every x seconds. Although there’s not much changing, updating plugins became a problem: if you click the update button it downloads, but then nginx sends your next request to another server, and now the plugin.zip isn’t there, so wordpress throws an error.

My whole reason for running Syncthing was that I wanted the files to be available on each server independently. So if server A goes down it doesn’t matter; server B has all the files locally anyway. NFS would still give me a single point of failure. On looking into resolving this, though, I remembered GlusterFS. I’d played with it a long time ago, but dropped it as a solution (can’t remember what I was doing or why it wasn’t working). Now it’s time to try it again. The downside: I’m back to needing VPNs, and OpenVPN isn’t the easiest for quickly adding a new server.

So I’ve done the following:

  • Added a new server just for the files (I don’t like gluster being in a 2-replica setup; in case there’s a problem, there should be a majority who think they are holding the correct file).
  • Swapped out OpenVPN for tinc. I have to say, one of the best decisions. Yes, there are downsides: it creates a mesh (something only doable with OpenVPN by running quagga or manually forcing routes), so I have no idea which server is actually connected to which servers; there’s no VPN status and I can’t see how much traffic has gone between 2 particular servers (iptables helps but it’s not 100%).
  • Added another new server for nagios and central logging.

There were a load of changes within a few weeks of each other, but I now have a setup I’m confident I can scale more quickly than ever before. Yes, it has single points of failure (load balancer, MySQL), but I know that if the load balancer has a problem it’s pretty static, so it can be wiped and redeployed quickly, and it would only take a few minutes to open the webservers to the world and let Cloudflare hit them directly. So MySQL is the real problem, and I’ll be addressing that one soon enough.

 

So now onto today’s problem 🙂

I’ve had gluster running a few weeks, and I have our testing website (for theme changes etc) set up on our webservers behind the loadbalancer. The last few days I’ve needed to do more extensive testing than just changing bits in a theme, so I’ve decided to split the tester site onto its own droplet (still behind the loadbalancer and with a VPN to the databases). I thought I may as well make use of Gluster here too (yes, it would be in a 2-replica setup, just itself and the fileserver; I don’t like that idea). So I brought up a new server and configured it: new users, firewall rules, tinc, nginx, php, etc.

I added gluster and copied the /etc/hosts entries over from the other servers. All looked good. I ran gluster peer probe ServerX and it worked; gluster peer status and I could see it fine. But on trying to add a new volume:

gluster volume create xxx-yyy-zzz replica 2 transport tcp FILESERVER:/GLUSTER/xxx.yyy-zzz TESTSERVER:/GLUSTER/xxx.yyy-zzz force

I was getting the error:

volume create: xxx-yyy-zzz: failed: Commit failed on localhost. Please check the log file for more details.

Checking the logs on both servers would show (with maybe a slight variation):

[2015-07-28 16:00:41.612907] E [glusterd-hooks.c:328:glusterd_hooks_run_hooks] 0-management: Failed to open dir /var/lib/glusterd/hooks/1/create/pre, due to No such file or directory
[2015-07-28 16:00:41.614499] E [glusterd-volume-ops.c:1811:glusterd_op_create_volume] 0-management: brick2.mount_dir not present
[2015-07-28 16:00:41.614587] E [glusterd-syncop.c:1288:gd_commit_op_phase] 0-management: Commit of operation 'Volume Create' failed on localhost

I tried a series of things to fix it:

  • I thought maybe the /GLUSTER/xxx-yyy-zzz needed to be created (I had already made /GLUSTER) – Nope.

  • I detached the peer and reattached – No.
  • I reboot the file server and test server – No.
  • I detached, reboot, reattached – No.
  • I tried creating the volume with just the test server and no replica – No.
  • I tried creating the volume on just the fileserver with no replica – Yes.

So the problem is pointing to the new system, but it’s a brand-new system. They’re peers and connected.

  • I tried uninstalling and reinstalling gluster – No.
  • I tried uninstalling, purging and reinstalling – No.
  • I tried uninstalling, purging, manually deleting the /var/lib/gluster (probably a mistake that I didn’t detach first :() and reinstalling – No.
  • I have no idea why this won’t WORK!!!!!

Let’s go further back, check the VPN, ping the servers.

  • Ping fileserver from testserver – Yes.
  • Ping testserver from fileserver – Yes/Hang on, that’s the wrong IP!! Yes, I’d copied an entry from webserverB into /etc/hosts, updated the name but missed the IP address. Idiot! Corrected that. Ping – OK.
  • Try gluster again – Yes.

So if you’re having problems and seeing brickX.mount_dir not present, make sure your DNS (or /etc/hosts entries) between servers is correct.

I don’t really know how the peer probe worked, but I think I must have done that from a server whose hosts file was correct.
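
If you want to rule that out quickly before tearing gluster apart like I did, a sanity check run from each peer is enough (the server names are placeholders):

getent hosts fileserver testserver    # do the /etc/hosts entries resolve to the IPs you expect?
ping -c1 fileserver && ping -c1 testserver
gluster peer status                   # the hostnames/IPs here should match the above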

Hyperion with Sunrise and Sunset

You may have gotten here from my hyperion with nagios write-up; this doesn’t follow on from that and is separate, but may be of interest.

The basic idea: I’ve now got LEDs in my room and would like them to come on before I go to bed so I can see without falling over. The ones on the stairs I just leave running, but like hell am I going to sleep with such a bright LED in the room (I probably could, I can sleep in the day, but I thought it would be a better idea for them to come on ready).

I could have just set this up on a basic cron and picked a time early enough, accounting for summer/winter, before I go to bed. But where’s the fun in that? I know my Pi can work out when sunrise/sunset is, so it can’t be that difficult to set something up.

After a little bit of searching I came across sunwait; you will need this or a similar program. I won’t cover installing sunwait on the Pi here, just the config I use with hyperion.

First I need a script that can be told whether it’s sunrise or sunset and will set hyperion accordingly. Here’s my sun-lights.sh

#!/bin/bash

COMMAND=hyperion-remote
COMMAND_PRIORITY=50
COMMAND_PATH="/usr/bin"
case "$1" in
sunset)
        /usr/bin/hyperion-remote -p 50 -e "Knight rider"
;;
sunrise)
        /usr/bin/hyperion-remote -p 50 -e "Little Chaser Blue"
;;

*)
        echo "Usage: $0 {sunrise|sunset}"
        exit 1
esac

exit 0

Don’t forget to make this script executable ‘chmod +x sun-lights.sh’

The script is basically told either sunset or sunrise and will then call hyperion-remote, passing the relevant priority and effect. (Little Chaser Blue is a copy and customisation of Knight Rider.)
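
It’s worth firing the script by hand before trusting it to cron, just to check hyperion picks up the effects:

./sun-lights.sh sunset      # LEDs should switch to "Knight rider"
./sun-lights.sh sunrise     # and then to "Little Chaser Blue"
./sun-lights.sh             # anything else just prints the usage line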

Then I added the following to /etc/crontab

0 02   * * *   root    sunwait -p sun up 51.xxxxN 3.xxxxW ; /root/sun-lights.sh sunrise
12 02   * * *   root    sunwait -p sun down 51.xxxxN 3.xxxxW ; /root/sun-lights.sh sunset

This basically runs sunwait, which waits until the sun is either coming up or going down at the specified co-ordinates, before running the bit after the ;

The important bit to get your head around is that you must have cron run this at a time well before the sun will rise or set. Midnight and midday seem like a good safe bet.

I know I’ve skipped over the actual installation of sunwait and more details on hyperion and running the scripts to check it works, but it’s 1am and I just want to save this 🙂 So if you’ve got this far and are still confused, comment below and I’ll expand on the relevant bits.

Hyperion LED’s & Nagios (Part 3)

Hopefully you’ve read Part 2. If not you’ll need to, or this won’t work.

So in this part we’re going to set up the nagios side to set off the alert LEDs. As a little bit of background, my nagios installation is on a completely separate Pi to the hyperion LEDs, but I have installed hyperion on this Pi to make use of hyperion-remote. Yes, I was being lazy; I could have used other methods instead of installing the whole thing.

First, my nagios installation is in ‘/usr/local/nagios’. I’m not going to go through the commands to cd and edit; if you’ve installed and configured nagios I’ll assume you can do them 🙂

This is my /usr/local/nagios/libexec/notify-hyperion.sh

#!/bin/bash
STATE=$1
DURATION=23000
case $STATE in
"CRITICAL")
   EFFECT="Red Alert"
   ;;
"WARNING")
   EFFECT="Yellow Alert"
   ;;
"OK")
   EFFECT="Green Alert"
   ;;
*)
   ;;
esac

hyperion-remote -a osmc-l:19444 -d $DURATION -p 10 -e "$EFFECT" &
hyperion-remote -a rasp-light:19444 -d $DURATION -p 10 -e "$EFFECT" &
hyperion-remote -a webcam-pi:19444 -d $DURATION -p 10 -e "$EFFECT" &

For the nagios-savvy amongst you, you’ll see I account for CRITICAL, WARNING & OK. Yes, I do need to add DOWN, UNREACHABLE & UP for the host alerts (a rough sketch of how that might look is below).
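
Something like the following is probably all that’s needed, though I haven’t wired it in yet, so treat the host-state colours as a guess:

# extra case branches for notify-hyperion.sh; the hyperion-remote calls stay the same
case $STATE in
"CRITICAL"|"DOWN")
   EFFECT="Red Alert"
   ;;
"WARNING"|"UNREACHABLE")
   EFFECT="Yellow Alert"
   ;;
"OK"|"UP")
   EFFECT="Green Alert"
   ;;
*)
   ;;
esac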

The DURATION sets how long, in ms, hyperion will run this effect for (best worked out in conjunction with the speed, freq & step from the hyperion config). I’ve got this just right to cut off the alert after (I think) 4 fades. I force the priority to 10 with ‘-p 10’; anything else I do with hyperion generally has a priority of 50, 100 or 1000, so these alerts will take over.

The last 3 lines are 1 each for my hyperion installs. You will need to change the names or replace them with IP addresses, dependent upon your network configuration.

Don’t forget to make the script executable, and you can test it with ‘./notify-hyperion.sh OK’

With the above tested and working, I’ve added the following to my nagios command.cfg

# 'notify-host-by-hyperion' command definition
define command{
        command_name    notify-host-by-hyperion
        command_line    /usr/local/nagios/libexec/notify-hyperion.sh "$HOSTSTATE$"
}

# 'notify-service-by-hyperion' command definition
define command{
        command_name    notify-service-by-hyperion
        command_line    /usr/local/nagios/libexec/notify-hyperion.sh "$SERVICESTATE$"
}

Then added the following to contacts.cfg

define contact{
        contact_name                    nagios-hyperion
        alias                           Nagios Hyperion
        service_notification_period     24x7
        host_notification_period        24x7
        service_notification_options    w,u,c,r,f
        host_notification_options       d,u,r,f,s
        service_notification_commands   notify-service-by-hyperion
        host_notification_commands      notify-host-by-hyperion
        }

define contactgroup{
        contactgroup_name       nagioshyperion
        alias                   Nagios Hyperion Notifications
        members                 nagios-hyperion
        }

For my installation I’ve then added

contact_groups                  nagioshyperion

To my templates. You could instead add this to each service/host.

Within my setup I’ve stopped using email alerts, so changing the contacts to hyperion was fine. Within the templates I then have the notification_interval set to 15 minutes. This means it will fire an alert to hyperion every 15 minutes. If you use email on your system too, you may not want to do this; an alternative could be changing the duration above, so that the red and yellow alerts are constant and the green runs for a limited time before clearing down.

I did contemplate using event filters instead of contacts, so I could have the emails turned back on at some point, but decided against it as I would have to check before sending a green alert that it’s not already in green or I’d just end up with green all the time.

After all of the above make sure you restart nagios for the new config to take effect.

As a side note, I was sat watching TV this evening when all of a sudden my room was yellow, and I thought WTF. I do have hyperion set up in my room to start the LEDs at sunset, but it was still light out and shouldn’t have fired. Then it clicked: NAGIOS. And yes, hey presto, nagios had thrown this site into warning status as there were updates available. I can see it getting annoying if my ISP drops out and I end up with alerts for hours, but on the whole I’m really happy it works. All I need now is a red alert klaxon 🙂

If you’re interested in setting up hyperion at sunrise/sunset, I’ll be writing that one up separately.

Hyperion LED’s & Nagios (Part 2)

Maybe you read Part 1, maybe you skipped it 🙁

Either way, Part 2 is going to cover creating a new effect for nagios. I won’t say it’s the best bit of python programming I’ve done (yes, I copied and changed another effect as a starting point). The main thing I wanted was an alert status (yes, Star Trek does come into it).

There are 4 files to this effect:

  1. alert.py – The actual python program
  2. alert-green.json – setting a green colour (yes it’s spelt COLOUR not color!!!!!)
  3. alert-yellow.json
  4. alert-red.json

My hyperion is installed to ‘/opt/hyperion’ so first thing is to go to the effects directory

cd /opt/hyperion/effects

Then create the alert.py file

nano alert.py

Paste the following into it and save the file

import hyperion
import time
import colorsys
import numpy as np

# Get the rotation time
colorfrom =     hyperion.args.get('colorfrom', (0,0,0))
colorto =       hyperion.args.get('colorto', (0,255,0))
speed   =       float(hyperion.args.get('speed', 0.05))
freq    =       float(hyperion.args.get('frequency', 2.5))
step    =       float(hyperion.args.get('step', 30.0))
ledCount =      hyperion.ledCount

# Setup the colors
colorfromnp     =       np.array(colorfrom)
colortonp       =       np.array(colorto)

#define the lerp calc
def lerp(a, b, t):
        x = a*(1 - t) + b*t
        x = np.around(x)
        x = x.tolist()
        return x

# Start the write data loop
while not hyperion.abort():
        for i in range(0,int(step)):
                n = i / step
                thiscolor = lerp(colorfromnp, colortonp, n)
                colorLedsData = ledCount * bytearray((int(thiscolor[0]), int(thiscolor[1]), int(thiscolor[2])))
                hyperion.setColor(colorLedsData)
                time.sleep(speed)
        time.sleep(speed*10)
        for i in range(1,int(step)):
                n = i / step
                thiscolor = lerp(colortonp, colorfromnp, n)
                colorLedsData = ledCount * bytearray((int(thiscolor[0]), int(thiscolor[1]), int(thiscolor[2])))
                hyperion.setColor(colorLedsData)
                time.sleep(speed)
        colorLedsData = ledCount * bytearray(colorfrom)
        hyperion.setColor(colorLedsData)
        time.sleep(freq)

I do use a numpy array, I can’t remember installing it but may have done. You can test if you have it installed

python
import numpy as np

Then we create the alert-green.json

nano alert-green.json

And Paste

{
        "name" : "Green Alert",
        "script" : "alert.py",
        "args" :
        {
                "colorfrom" : [0,0,0],
                "colorto" : [0,255,0],
                "speed" : 0.05,
                "freq" : 2.5,
                "step" : 30.0
        }
}

alert-yellow.json

{
        "name" : "Yellow Alert",
        "script" : "alert.py",
        "args" :
        {
                "colorfrom" : [0,0,0],
                "colorto" : [255,255,0],
                "speed" : 0.05,
                "freq" : 2.5,
                "step" : 30.0
        }
}

alert-red.json

{
        "name" : "Red Alert",
        "script" : "alert.py",
        "args" :
        {
                "colorfrom" : [0,0,0],
                "colorto" : [255,0,0],
                "speed" : 0.05,
                "freq" : 2.5,
                "step" : 30.0
        }
}

Finally restart hyperion to pickup the new effects

service hyperion restart

That’s it, you should now be able to run these effects. From the terminal:

hyperion-remote -e "Red Alert"

Should start the LEDs glowing red. If not, I’d start with ‘hyperion-remote -l’ to check the effects are listed, then move on to working out what’s missing (see above about numpy).

A few notes on this effect:

  • Originally I programmed it forced to black, but it made sense to change this to a colorfrom value for the future (if you want it green and fading to red, for example).
  • The speed is how quickly to progress through the fade.
  • The step is how many colours it goes through to get from the colorfrom to the colorto.
  • Working out the speed and the step together is important for a nice fade (not jerky); see the rough sum below.
  • The freq is how long it stays on the colorfrom (black by default) before running the fade again.
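
The rough sum for the default args, just to show where the duration in Part 3 comes from (my own back-of-envelope reading of alert.py):

# one cycle = fade up (step*speed) + hold (speed*10) + fade down (~step*speed) + freq
echo "30*0.05 + 0.05*10 + 29*0.05 + 2.5" | bc    # = 5.95 seconds, so -d 23000 covers roughly 4 fades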

This completes the hyperion side of the setup. I now have this effect installed and running on 3 Pis, and yes, when there’s a problem they ALL go to yellow/red alert.

See Part 3 for the nagios setup

Hyperion LED’s & Nagios (Part 1)

Part 1 is more of a background story on my use of WS2801/WS2812b LEDs and Hyperion with the Raspberry Pi. Skip to Part 2 for the techy bit.

I’ve been using Hyperion for a while. I set up lights behind the TV first off (using sticky tape), WS2801 and RaspBMC. This worked brilliantly and I loved it. I spent hours tweaking the config so the LEDs were picking up the correct colour from the screen.

With all that working and a length of LEDs left over, I decided to run some up the stairs. They sit just under the banister lip so you can’t see them, just the light on the stairs. I set these to Rainbow swirl and left it. They’ve been running for months, with the effect occasionally changed to show off what they can do.

Then disaster struck: the power adapter stopped working. I have to be honest, I didn’t really notice until I was going to bed at 2am and almost fell over. They’d been there giving off light (possibly a bit bright if anything) and I’d just got used to being able to see in the middle of the night without any other lights.

Anyway, I digress. Ordering a new power adapter, I went searching for more LEDs (yes, you can’t have enough of them once you’ve been playing). I decided that I’d really like to run some in my bedroom; the effects are cool and there would be plenty of light, so I won’t need to use the main light with them on.

So I looked at where I originally bought my WS2801s and... nothing 🙁 So off to google. The obvious conclusion was that I was going to have a hard time sourcing them in the UK, but why? They’re great. So off I went to the hyperion git site for info and found there are newer versions, the WS2811 and WS2812b. Ah, that may help. Another search and I found someone selling a load on ebay, so I bought all he had: 3 reels of WS2812b’s.

They turned up and I connected them up to try them as directed by hyperion. It was at this point I read the important bits: the RPi2 isn’t working yet, and there may be a problem with the Pi communicating with them due to the voltage. I really thought I was going to have to make another little circuit to (buffer?) get them working. As a last-ditch attempt, it was mentioned to try removing the resistor and running them directly from python. I did both at the same time (not the best decision for troubleshooting), but to my surprise they worked.

So I killed the python program and restarted hyperion: yep, they’re working.

So off I went to stick them to the ceiling (they have sticky tape on the back). Done. If only I’d thought about connecting them before I stuck them up; I now had to work up in the air joining the cables. Not to worry, I’ve done worse.

So I went to get what I needed; by the time I got back up, they’d come down 🙁 Bloody gravity! Now you’d think at this point I’d connect them up and sort out attaching them later. Nope! (It didn’t even enter my head.) I was now on track to get them to stay up. Enter ‘SuperGlue’: I applied little dots along the strip and stuck them up (yes, I glued my fingers to the ceiling too). Finally they’re up and staying there. Oh, I should have connected them when they were down!

‘Bugger it, where’s my screwdriver?’ I connected them up, put a power connector on the end and powered them and the Pi.

Then I installed Hyperion on yet another Pi, and it all worked like magic.

Have a look at the video: there's no light other than the TV and it's dark outside, but the room is really bright.

[youtube=https://www.youtube.com/watch?v=khfJW3vXcCE]

Click here for Part 2

Weewx+Raspberry PI+HDMI+PyGame

I’ve been using wview with my WH1080 weather station for some time (actually with 2 of them). My main setup has been using my server, and every now and again the WH1080 would seem to lock up and nothing could get data out of it. The solution was to drop its power; on reboot it would all start working again.

However, wview also seemed to introduce lockups of its own, and the only solution there was to reboot the server (not ideal). So when it came to setting up a second weather station (in a remote location) I needed something a bit more stable and started looking at alternatives. I was doing this on a Raspberry Pi and found weewx. After installing it sometime last year it has seemed pretty stable (although the WH1080 still manages to lock up).

Back to my house, and I'd finally had enough of missing weather data. One thing I really liked with wview was the ability to pull the archive data if the weather station had been running while the machine hadn't (providing the USB hadn't locked up); I really recommend looking at wview if you're starting out.

I’ve already covered setting up weewx on a Raspberry Pi, and I'm not going to post about my exact configuration here. Instead I'm going to share my Python code for displaying the various graphs and gauges straight to the TV. At the moment I have my weather Pi connected to the TV via HDMI. This may change later, and then I'll have to adjust the code to pull the images to another Pi before displaying them (I have a similar project for displaying webcams already).

So a few things before the code.

  1. It’s my first real attempt at using classes, so my code will be more than a little scattered.
  2. You need to have python, pygame, mysql, and weewx configured for mysql (and for this exact code the Bootstrap skin for weewx, but you could just change the file paths to the standard gauges and graphs).
  3. I have this started using an init script (added below).
  4. This runs the python program as root; I need to find a way to run it as a normal user (but that will affect points 5 & 6).
  5. This program checks that MySQL can be connected to and restarts MySQL if not.
  6. It also checks the freshness of the index.html file (not the best way, but a quick way) to make sure weewx is running and keeping the files up to date. If not, it reboots the Pi; this causes the weather station to reboot too, so if the USB locks up the whole system resets, fixing it.

So now onto the python code

#!/usr/bin/python
import os
import time
import pygame
import MySQLdb as mdb
import signal
import sys

imglocation = "/var/www/weewx/Bootstrap"

class pyscreen :
   screen = None;

   def __init__(self):
       "Initializes a new pygame screen using the framebuffer"
       disp_no = os.getenv("DISPLAY")
       if disp_no:
           print "I'm running under X display = {0}".format(disp_no)

       # Check which framebuffer drivers are available.
       drivers = ['fbcon', 'directfb', 'svgalib']
       found = False
       for driver in drivers:
           # Make sure that SDL_VIDEODRIVER is set
           if not os.getenv('SDL_VIDEODRIVER'):
               os.putenv('SDL_VIDEODRIVER', driver)
           try:
               pygame.display.init()
           except pygame.error:
               print 'Driver: {0} failed.'.format(driver)
               continue
           found = True
           break

       if not found:
           raise Exception('No suitable video driver found!')

       size = (pygame.display.Info().current_w, pygame.display.Info().current_h)
       print "Framebuffer size: %d x %d" % (size[0], size[1])
       self.screen = pygame.display.set_mode(size, pygame.FULLSCREEN)
       pygame.mouse.set_visible(False)
       # Clear the screen to start
       self.screen.fill((0, 0, 0))
       # Initialise font support
       pygame.font.init()
       # Render the screen
       pygame.display.update()

   def __del__(self):
       "Destructor to make sure pygame shuts down, etc."

   def size(self):
       size = (pygame.display.Info().current_w, pygame.display.Info().current_h)
       return size

   def fill(self, colour):
       if self.screen.get_at((0,0)) != colour:
           self.screen.fill((0, 0, 0))
           self.screen.fill(colour)
           pygame.display.update()

   def image(self, img, locX, locY, sizX, sizY):
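       # Blit the image scaled to (sizX, sizY) if the file is fresh enough; otherwise draw a red rectangle in its place.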
       try:
           if (( "week" not in img) and (os.stat(img).st_mtime > time.time() - 600)):
               # 600 = 10 mins.
               image = pygame.image.load(img)
               image = pygame.transform.scale(image, (sizX, sizY))
               self.screen.blit(image, (locX, locY))
           elif(( "week" in img) and (os.stat(img).st_mtime > time.time() - 7200)):
               # 7200 = 2 hours.
               image = pygame.image.load(img)
               image = pygame.transform.scale(image, (sizX, sizY))
               self.screen.blit(image, (locX, locY))
           else:
               pygame.draw.rect(self.screen, (255, 0, 0), (locX, locY, sizX, sizY), 0)
       except pygame.error, message:
           pygame.draw.rect(self.screen, (255, 0, 0), (locX, locY, sizX, sizY), 0)
       pygame.display.update()

   def text_object(self, msg, font):
       black = (0, 0, 0)
       textSurface = font.render(msg, True, black)
       return textSurface, textSurface.get_rect()

   def error(self, msg):
       largeText = pygame.font.Font('freesansbold.ttf', 115)
       TextSurf, TextRect = self.text_object(msg, largeText)
       size = self.size()
       TextRect.center = ((pygame.display.Info().current_w/2),(pygame.display.Info().current_h/2))
       self.screen.blit(TextSurf, TextRect)

       pygame.display.update()

class fileman:
   global imglocation

   def __init__(self):
       "Init for fileman. Nothing to do atm."

   def __del__(self):
       "Destructor for fileman. Nothing to do again."

   def total_files(self):
       count=0
       for file in os.listdir(imglocation):
           if file.endswith(".png"):
               count=count+1
       return count

class sqly:
   def __init__(self):
       "do nothing"

   def test(self):
       try:
           con = mdb.connect('localhost', 'weewx', 'weewx', 'weather')

           cur = con.cursor()
           cur.execute("SELECT VERSION()")
           ver = cur.fetchone()
           return 0
       except mdb.Error, e:
           return 1

       finally:
            "do nothing"
   def restart(self):
       #wait 30 secs and try the connection again.
       time.sleep(30)
       if not mysql_con.test():
           try:
               os.system("service mysql start")
           except:
               "do nothing"

def get_now():
    "Return the current time as a string for log messages (helper the handler below assumes)"
    return time.strftime("%Y-%m-%d %H:%M:%S")

def sigterm_handler(_signo, _stack_frame):
    "When sysvinit sends the TERM signal, clean up before exiting"
    print("[" + get_now() + "] received signal {}, exiting...".format(_signo))
    sys.exit(0)

signal.signal(signal.SIGTERM, sigterm_handler)

def reboot():
    "check if we've been rebooted in the last 30 mins"
    uptimef = open("/proc/uptime", "r")
    uptimestr = uptimef.read()
    uptimelst = uptimestr.split()
    uptimef.close()

    if float(uptimelst[0]) < 1800:
        "We've rebooted in the last 30 mins, ignoring"
    else:
        try:
            os.system("reboot")
        except:
            "do nothing"


if __name__ == "__main__":
    try:
        screeny = pyscreen()
        filey = fileman()
        screeny.fill((0, 0, 255))
        size = screeny.size()
        border = 7
        sizewquarter = ((size[0]-(border*5))/4)
        sizehthird = ((size[1]-(border*4))/3)

        print("Total files : %d" % (filey.total_files()))
        while 1:
            mysql_con = sqly()
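            # Point 5 from the list above: if MySQL can't be reached, show an error and try to restart it.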
            if mysql_con.test():
                screeny.fill((250, 0, 0))
                screeny.error("Database Offline. Restarting!")
                mysql_con.restart()
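            # Point 6: if index.html hasn't been updated in 10 minutes, weewx has stopped, so reboot the Pi.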
            elif (os.stat("/var/www/weewx/Bootstrap/index.html").st_mtime < time.time() - 600):
                screeny.fill((254, 0, 0))
                screeny.error("NOT Updating. Reboot Imminent!")
                reboot()
            else:
                screeny.fill((0, 0, 255))
                screeny.image("/var/www/weewx/Bootstrap/barometerGauge.png", (border*1), (border*1), sizewquarter, sizehthird)
                screeny.image("/var/www/weewx/Bootstrap/outTempGauge.png", ((border*2)+(sizewquarter*1)), (border*1), sizewquarter, sizehthird)
                screeny.image("/var/www/weewx/Bootstrap/windDirGauge.png", ((border*3)+(sizewquarter*2)), (border*1), sizewquarter, sizehthird)
                screeny.image("/var/www/weewx/Bootstrap/windSpeedGauge.png", ((border*4)+(sizewquarter*3)), (border*1), sizewquarter, sizehthird)
                screeny.image("/var/www/weewx/Bootstrap/big_images/weekbarometer-Bootstrap.png", (border*1), ((border*2)+(sizehthird*1)), ((sizewquarter*2)+(border*1)), sizehthird)
                screeny.image("/var/www/weewx/Bootstrap/big_images/weektempchill-Bootstrap.png", (border*1), ((border*3)+(sizehthird*2)), ((sizewquarter*2)+(border*1)), sizehthird)
                screeny.image("/var/www/weewx/Bootstrap/big_images/weekwinddir-Bootstrap.png", ((border*3)+(sizewquarter*2)), ((border*2)+(sizehthird*1)), ((sizewquarter*2)+(border*1)), sizehthird)
                screeny.image("/var/www/weewx/Bootstrap/big_images/weekwind-Bootstrap.png", ((border*3)+(sizewquarter*2)), ((border*3)+(sizehthird*2)), ((sizewquarter*2)+(border*1)), sizehthird)
            time.sleep(1)
    except KeyboardInterrupt:
        "We've got an interupt"

And now the init.d code:

#!/bin/sh
#
# init script for displayweewx
#

### BEGIN INIT INFO
# Provides:          displayweewx
# Required-Start:    $syslog $network
# Required-Stop:     $syslog $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: init script to display weewx charts via HDMI output
# Description:       The python script queries mysql and file ages, so does not rely on mysql as a backup way to kick it.
### END INIT INFO

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
NAME=displayweewx
DAEMON=/root/displayweewx/main.py
DAEMONARGS=""
PIDFILE=/var/run/$NAME.pid
LOGFILE=/var/log/$NAME.log

. /lib/lsb/init-functions

test -f $DAEMON || exit 0

case "$1" in
    start)
        start-stop-daemon --start --background \
            --pidfile $PIDFILE --make-pidfile --startas /bin/bash \
            -- -c "exec stdbuf -oL -eL $DAEMON $DAEMONARGS > $LOGFILE 2>&1"
        log_end_msg $?
        ;;
    stop)
        start-stop-daemon --stop --pidfile $PIDFILE
        log_end_msg $?
        rm -f $PIDFILE
        ;;
    restart)
        $0 stop
        $0 start
        ;;
    status)
        start-stop-daemon --status --pidfile $PIDFILE
        log_end_msg $?
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
        exit 2
        ;;
esac

exit 0
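
Assuming the script is saved as /etc/init.d/displayweewx and made executable, a quick update-rc.d displayweewx defaults should register it to start at boot.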

After all of this we get the following on the TV

From far far away.