Raspberry PI + GlusterFS (Part 2)

IMPORTANT: When running through the steps in Part 2 I encountered errors. Thanks to Ashley's comment I've created Part 3. I've decided to leave Part 2 intact for anyone searching on the errors etc.

Hopefully you’ve read Part 1 and understood what I’m trying to do and why.

Here’s Part 2 attempting the install.
Part 3 actually does the installation now.
Part 4 will cover the encryption and setup.
First things first, I (being naughty) use root far too much in testing, but do not recommend it all the way through on production servers.
So let's get into root:
sudo su -
This should place you in root's home directory.

You can skip down to get past the errors I encountered, but it's possibly still worth reading the steps below, even though they ended up not working.

Now we’re going to grab gluster 3.5.0

wget http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.0/glusterfs-3.5.0.tar.gz
tar xzvf glusterfs-3.5.0.tar.gz
cd glusterfs-3.5.0/
DON’T do this step yet. At this point I jumped straight into a configure attempt
./configure
This threw an error that flex or lex is missing:
configure: error: Flex or lex required to build glusterfs.
Clearly I'm going to need to install a few dependencies before going further.

apt-get update
apt-get install make automake autoconf libtool flex bison pkg-config libssl-dev libxml2-dev python-dev libaio-dev libibverbs-dev librdmacm-dev libreadline-dev liblvm2-dev libglib2.0-dev
I’m not sure if all these are needed, but after digging around that’s the list that’s used elsewhere.
Actually, reading the INSTALL file says to start with ./autogen.sh, so this time we will:
./autogen.sh
This took about 5 mins, but didn’t throw errors. Next onto
./configure
This eventually spits out
GlusterFS configure summary
===========================
FUSE client          : yes
Infiniband verbs     : yes
epoll IO multiplex   : yes
argp-standalone      : no
fusermount           : yes
readline             : yes
georeplication       : yes
Linux-AIO            : yes
Enable Debug         : no
systemtap            : no
Block Device xlator  : yes
glupy                : yes
Use syslog           : yes
XML output           : yes
QEMU Block formats   : yes
Encryption xlator    : yes
Now we can get on with the actual compiling.

make
This threw a number of warnings for me:
warning: function declaration isn't a prototype [-Wstrict-prototypes]
but they didn't seem to be of great concern.
After about an hour it was ready to continue (I wrote Part 4 from memory while waiting).
We install with


make install

Here’s where I’m getting an error.


../../py-compile: Missing argument to --destdir.

So it's on hold until I can figure out how to resolve it. One suggestion from version 3.4.x was to add a prefix path to the configure, but this didn't do anything for me.
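For reference, the suggestion amounts to something like this (a sketch only; the prefix value is a guess at what was meant, and as noted it made no difference to the py-compile error for me):

./configure --prefix=/usr
make
make install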

Move on to Part 3

Raspberry PI + GlusterFS (Part 1)

Here’s Part 1 which is really background information.
Part 2 will be actually doing stuff; if you don't want to read how/why I ended up here, skip to Part 2.

A few days ago I yet again ran out of space on my server. Normally this would just mean deleting a load of junk files, but I've been doing that for months and I'm now at the point where there are no junk files left to delete. So it's time to increase the storage. Unfortunately, problem #2: I currently have 4 SATA drives in the server taking up all the connections. Expanding wouldn't have been a problem, as I originally set up the drives with LVM to handle large storage requirements, but now there's nowhere to turn to increase the capacity in this server.

So instead I thought I'd have a look at the alternatives. I'd been hoping to move some of the server services over to PI's since I first heard about the Raspberry PI project (long before they were released); I knew I could make good use of lots of them.

After a little searching I found Gluster, and it seems to be ideal for what I need. At this point I should say that I clearly expect any USB drive connected to the PI to have slower access than SATA on my server, but for my use slower doesn't matter. I doubt this would suit everyone, but I think Gluster is a good option even on beefy servers/PCs.

I have an idea that needs testing, so I set up 2 PI's with 2 USB sticks for storage. A simple apt-get install glusterfs-server got me moving quickly, and while reading 2 guides (http://www.linuxjournal.com/content/two-pi-r?page=0,0 and http://www.linuxuser.co.uk/tutorials/create-your-own-high-performance-nas-using-glusterfs) I quickly had a working system. I'm not going to go into the steps, as both links give details, but here's my layout:-
2 PI’s
2 8Gb SD Cards
2 4Gb USB Sticks
2 512Mb USB Sticks.
So each PI has an SD card, a 4gb USB Stick and a 512Mb USB Stick.

As I prefer to encrypt all data on my drives, I installed cryptsetup and got the USB sticks ready using luksFormat (more on this later). I unlocked the USB sticks on both PI's, which opens the drives as /dev/mapper/USB1_Crypt and /dev/mapper/USB2_Crypt.
I then mount each to /mnt/USB1 and /mnt/USB2 (both were already ext4 filesystems), with USB1 being the 4Gb.
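Roughly, the per-stick commands look like this (a sketch from memory rather than my exact history; the /dev/sda device name is an assumption, and luksFormat destroys whatever is on the stick):

cryptsetup luksFormat /dev/sda                 # one-time setup, wipes the stick
cryptsetup luksOpen /dev/sda USB1_Crypt        # unlock, appears as /dev/mapper/USB1_Crypt
mkfs.ext4 /dev/mapper/USB1_Crypt               # only the first time, mine were already ext4
mkdir -p /mnt/USB1
mount /dev/mapper/USB1_Crypt /mnt/USB1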
The PI's are called Gluster-1 and Gluster-2. I know my DHCP and DNS are working, so I can use the names instead of IP addresses when setting up the gluster peers and volumes.
Once I'd created the Gluster volume (I tested both stripe and replica) I mounted testvol (the Gluster volume) onto /media/testvol.
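For anyone who wants the short version, the peer and volume setup goes roughly like this (a sketch rather than my exact commands; swap replica 2 for stripe 2 to test striping):

gluster peer probe Gluster-2                   # run on Gluster-1
gluster volume create testvol replica 2 Gluster-1:/mnt/USB1 Gluster-2:/mnt/USB1
gluster volume start testvol
mkdir -p /media/testvol
mount -t glusterfs Gluster-1:/testvol /media/testvol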
I made a quick file and could see it from both servers. It's also interesting looking in the /mnt/USB1/ folder (but do NOT change anything in here): while using a stripe volume the new file only appears on one of the PI's, and creating more files puts some on Gluster-1 and some on Gluster-2. So to see what happens I rebooted Gluster-1 and watched the files in /media/testvol: yep, all the files that are in testvol and actually stored on Gluster-1 disappeared from the file list. As soon as Gluster-1 was back (decrypted and /mnt/USB1 mounted) the files were back.

So this is looking pretty good, and adding more storage was easy (I didn't go into rebalancing the files across new bricks, but I don't see this being a huge problem). Now I moved onto using replica instead of stripe (my ultimate intention will be to use both). Here's where the fun begins. I actually didn't delete the test files in /mnt/USB1 from either server, just stopped the gluster volume and deleted it. Creating a new gluster volume in replica mode was easy, and a few seconds after I started it gluster synced up the files already in /mnt/USB1 on both PI's, so in /media/testvol I can still see all the files and access them fine. Now I rebooted Gluster-2 and checked the files in /media/testvol on Gluster-1: yep, all the files are there (there was a 2-3 second delay on ls; I assume I was previously connected to Gluster-2 and it had to work out it wasn't there anymore, but from then on it was fine).

I have a working replica system 🙂 and it's taken me no time at all to get running. I brought Gluster-2 back online, then thought: what if I change files while one of them is offline? Reboot Gluster-1, change some of the text in a few of the test files in /media/testvol and create a new testfile. Then I bring Gluster-1 back online, and here's where things fall apart!!!
The new testfile is on both and fine, and the existing files I hadn't changed are all fine too. But the files I changed I can't access anymore; I'm getting cat: testy1: Input/output error. This isn't good, so I checked the testy1 file in /mnt/USB1 on both PI's: Gluster-2 has the changes (as expected, it was online when I made them) and Gluster-1 has the original.

So head over to google to find out how I'd fix such a scenario (a friend told me years ago "you're never the first person to have a problem", so google every error message). Yep, there's information on it: it's called split-brain. The first solution I found was to delete the incorrect copy of the file from /mnt/USB1. This didn't solve my problem; testy1 was recreated but was still giving an I/O error. A little more reading says that gluster creates hard links and you need to go and delete these too, apparently in a .glusterfs folder, but I couldn't find it. There was also a heal function that would show me which files are affected, but not on my system: gluster doesn't know heal. WHY????
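(For reference, on a newer Gluster this is roughly what the heal side of things looks like, plus how you'd track down the hidden hard link on the bad brick. A sketch only, using my testvol volume and /mnt/USB1 brick from above.)

gluster volume heal testvol info               # list files needing heal
gluster volume heal testvol info split-brain   # list files stuck in split-brain
find /mnt/USB1/.glusterfs -samefile /mnt/USB1/testy1   # locate the .glusterfs hard link for the bad copy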
gluster -V tells me I'm running an old version from 2013 (I think 3.2.8, but that's from memory); the latest version is 3.5.0.

I also came across some information saying that volumes with 3 or more copies can use quorum to resolve issues with file differences (I think this has to be set up though, and I can see situations where doing so automatically could cause data loss, such as log files being written with different data, not just one having stopped being written to).
Anyway, it looks like having 3.5.0 with a heal function would be beneficial, and I always prefer for things to be up to date where possible anyway.
Looking at the Gluster download page there's a simple way to add gluster to the Debian apt system. So I followed the steps to add it and ran apt-get update. Now another problem: it's not able to download the gluster package info, but the link it's using I can access fine. Then something jumps out at me: "Architectures: amd64". The Raspberry Pi uses ARM, so this repo isn't built for it. Now I have no idea if this is right or wrong, but it makes sense to me.

Like many other things, it’s time to manually compile and install.
So there's the background (sorry it took so long); Part 2 is going to run through manually installing gluster on the PI.

Error ‘glibc detected *** /usr/bin/python: double free or corruption (fasttop)’ on Raspberry Pi using Python

Just a very quick post.
I'm working on a small project I've had in mind for a few months: basically pull an image and display it on the TV (there's more to it, or I'd just settle for RasBMC).

So I’ve been coding it all up using python with pygame to display the image, all fine.

Then I introduced loops to refresh the images and update the display. I had issues around other code and couldn't keep looking at the TV while the code was running, so I introduced a few mp3's to play as the images were being refreshed and as the display was updating. All worked well; it sounded like I was on the Enterprise.

Further into coding up different functions, and after some more headaches, I'd cleared up a lot of the minor errors I had been getting and was now ready to push the system to speed up the refreshes.

I’d managed to put a few images to refresh every second, and it appeared to be going well.
Then it stopped making a sound; I checked and python had crashed with the error:
glibc detected *** /usr/bin/python: double free or corruption (fasttop)

So I changed the code I'd been working on (clearly that's the problem, I didn't have this earlier), but nope, that didn't solve it.
Onto google, but this brought up a lot of bug reports about other things. I checked a few and made a few changes, but I'm still getting the problem. Troubling though: if this is going to come down to the fact I'm refreshing a lot and it's getting intensive, it's going to be a show stopper for this project.

Thankfully I did a little more searching and came across
http://www.raspberrypi.org/forums/viewtopic.php?t=36878&p=308220

Now I don't have the same problem (playing wav's), but this drew me to the fact I'm playing mp3's, I've increased how often I'm playing them, and they now overlap a lot more than earlier.

So I decided to drop them out. Eureka! It's been running about 15 minutes and hasn't died (previously it lasted a few minutes); the sad thing is I'm now kinda missing my beeps.
I had hoped to be able to play an alert sound if certain events happened, so I may have to rethink how I can do so without causing crashes.

Anyways, there it is in case it's of help to someone else. I'll be posting about the project at a later date, once I've done a little more with the code and tested it a bit more. It's my first serious attempt in python, and I can make programmers cry at the best of times, so I'm expecting people to be able to rip this apart (but I'm actually very interested in getting it stable, so will welcome the criticism/ideas).

Nagios Twitter Alerts

I’ve had nagios running for years, so decided to play around with the alerts.
Twitter seemed the obvious choice: it's easy for people to follow the twitter account that's publishing the alerts, and great if you actively use twitter (I don't, this was more of a 'how would you' than a need).
First thing is to register a twitter account that nagios will publish as. I setup https://twitter.com/NagiosStarB
Once you’ve registered you need to edit your profile and add a mobile phone number (This is needed before you can change the app permissions later. Once you’ve done that you can delete the mobile number).
Now head over to https://apps.twitter.com/ and create a new app
Fill in Name, Description and Website (This isn’t particularly important, as we’re not pushing this app out to users).
You’ll be taken straight into the new app (if not simply click on it).

We need to change the Access Level, Click on ‘modify app permissions’

I chose 'Read, Write and Access direct messages', although 'Read and Write' would be fine. Click 'Update settings' (If you didn't add your mobile number to your account earlier, you'll get an error).
Now click ‘API Keys’

You need to copy the API key and API secret (Please don't try to use mine).
Now click ‘create my access token’ close to the bottom of the page.

You also need to copy your 'Access token' and 'Access token secret'.

Now we move onto the notification script.
Login to your Nagios server via SSH.
You need to ensure you have python-dev & python-pip installed.

apt-get install python-dev python-pip
pip install tweepy

Then cd into your nagios libexec folder (mine's at /usr/local/nagios/libexec)

cd /usr/local/nagios/libexec/

We now add a new file called twitternagiosstatus.py

nano -w twitternagiosstatus.py

Copy and paste the following code into the file

#!/usr/bin/env python2.7
# tweet.py by Alex Eames http://raspi.tv/?p=5908
import tweepy
import sys
import logging

# Setup Debug Logging
logging.basicConfig(filename='/tmp/twitternagios.log',level=logging.DEBUG)
logging.debug('Starting Debug Log')

# Consumer keys and access tokens, used for OAuth
consumer_key = 'jNgRhCGx7NzZn1Cr01mucA'
consumer_secret = 'nTUDfUo0jH2oYyG8i6qdyrQXfwQ6QXT7dwjVykrWho'
access_token = '2360118330-HP5bbGQgTw5F1UIN3qOjdtvqp1ZkhxlHroiETIQ'
access_token_secret = 'rXjXwfoGGNKibKfXHw9YYL927kCBQiQL58Br0qMdaI5tB'

# OAuth process, using the keys and tokens
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)

# Creation of the actual interface, using authentication
api = tweepy.API(auth)

if len(sys.argv) >= 2:
    tweet_text = sys.argv[1]
    logging.debug('Argument #1 ' + tweet_text)

    if len(tweet_text) <= 140:
        logging.debug('Tweeting: ' + tweet_text)
        api.update_status(tweet_text)
    else:
        print "tweet sent truncated. Too long. 140 chars Max."
        logging.debug('Too Long. Tweet sent truncated.')
        api.update_status(tweet_text[0:140])

Replace consumer_key with your API key, consumer_secret with your API secret, access_token with your access token and access_token_secret with your Access token secret.
Now save and exit the editor.

CTRL+x then Y then Enter.

With the file saved, we need to make it executable.

chmod +x twitternagiosstatus.py

You can now test that the script works by typing

./twitternagiosstatus.py "testy testy"

You should now be able to see the Tweet on your new account (you may need to refresh the page).
If all has gone well so far, you can now add your Nagios Configuration.
Change Directory into your nagios etc

cd /usr/local/nagios/etc/

Edit your commands.cfg (mine is inside objects)

nano -w objects/commands.cfg

Where you choose to place the new configurations doesn't really matter, but to keep things in order I chose just below the email commands.
Copy and paste the following

# 'notify-host-by-twitter' command definition
define command{
        command_name    notify-host-by-twitter
        command_line    /usr/local/nagios/libexec/twitternagiosstatus.py "$NOTIFICATIONTYPE$: $HOSTALIAS$ is $HOSTSTATE$"
}

# 'notify-service-by-twitter' command definition
define command{
        command_name    notify-service-by-twitter
        command_line    /usr/local/nagios/libexec/twitternagiosstatus.py "$NOTIFICATIONTYPE$: $SERVICEDESC$ ON $HOSTALIAS$ is $SERVICESTATE$"
}

You can adjust the specifics by adding other $...$ macros (use the email notification commands as an example; there's also a sketch after the save step below). Save and exit

CTRL+x, then Y, then ENTER
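As an example of adjusting the specifics, here's a more verbose service command using a couple of extra standard Nagios macros (a sketch only, not something from my running config; note the script above will truncate anything over 140 characters):

# 'notify-service-by-twitter-verbose' command definition (example)
define command{
        command_name    notify-service-by-twitter-verbose
        command_line    /usr/local/nagios/libexec/twitternagiosstatus.py "$NOTIFICATIONTYPE$: $SERVICEDESC$ ON $HOSTALIAS$ ($HOSTADDRESS$) is $SERVICESTATE$ at $SHORTDATETIME$"
}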

Now we add a new contact. Edit contacts.cfg

nano -w objects/contacts.cfg

Copy and Paste the following

define contact{
        contact_name                    nagios-twitter
        alias                           Nagios Twitter

        service_notification_period     24x7
        host_notification_period        24x7
        service_notification_options    w,u,c,r,f
        host_notification_options       d,u,r,f,s
        service_notification_commands   notify-service-by-twitter
        host_notification_commands      notify-host-by-twitter
        }

define contactgroup{
        contactgroup_name       nagiostwitter
        alias                   Nagios Twitter Notifications
        members                 nagios-twitter
        }

I decided to create a specific contact and contact-group for this, but you can adjust as you wish and add the contact to other contact-groups.
Now the last bit:
Add the new contact group to the hosts & services, templates or host-groups and service-groups.
How you decide to do this will depend on how you've set out your hosts, services, templates and contacts. For me, I edit each of the host files and add contact_groups  nagiostwitter to each host and service.
(IMPORTANT: this will override settings that are inherited from templates, so if you already have email notifications active you'll either have to add nagiostwitter to the template or list the existing contact groups here as well.) Don't forget the list is comma-delimited.
An example host of mine

define host{
        use                     linux-server            ; Name of host template to use
        host_name               excalibur
        alias                   Excalibur
        address                 192.168.1.27
        parents                 switch-netgear8
        hostgroups              linux-servers
        statusmap_image         linux40.gd2
        contact_groups          nagiostwitter,sysadm
        }

An example service on this host

define service{
        use                             generic-service         ; Name of service template to use
        host_name                       excalibur
        service_description             PING
        check_command                   check_ping!100.0,20%!500.0,60%
        contact_groups                  nagiostwitter,sysadm
        }

That’s it, hopefully if all’s done right you can restart the nagios service.

/etc/init.d/nagios restart

Now your twitter feed will start to be populated with each alert. I can't emphasise enough that if the nagios configuration is done wrong you may break other alerts that are already set up.
I really need to thank http://raspi.tv/2013/how-to-create-a-twitter-app-on-the-raspberry-pi-with-python-tweepy-part-1#install here as I used this as a starting point.

UPDATE:
A few weeks ago I received an email from twitter telling me my application had been blocked for write operations. It also said to check the Twitter API Terms of Service. I didn’t think this would cause a problem, I’m not spamming anyone other than myself or users I’ve asked to follow the alerts. So I read the Terms of Service, and it’s all fine. I raised a support request with Twitter and had a very quick response saying “Twitter has automated systems that find and disable abusive API keys in bulk. Unfortunately, it looks like your application got caught up in one of these spam groups by mistake. We have reactivated the API key and we apologize for the inconvenience.”
This did stop my alerts for a few days though, so just be aware of this.

UPDATE 2:
Thanks to a comment from Claudio suggesting truncating messages over 140 characters; I've incorporated this recommendation into the code above.

Raspberry PI + RTorrent + Apache2 + RUTorrent

I've been using one of my PI's as a torrent server for some time. Recently I decided to refresh the entire system. This will NOT go into the legalities of downloading anything; I expect everyone to only be using this for downloading raspberry images 🙂

Version Info:
2014-01-07-wheezy-raspbian.img
libtorrent 0.13.2
rtorrent 0.9.2
rutorrent 3.6

I’m going to assume you can SSH to your PI, and recommend you get all the latest updates before you start. I’m also going to be naughty and be running all the commands as root.
sudo su -

Then we’ll get the stuff needed to compile rtorrent and a few things needed for the plugins

apt-get install subversion build-essential automake libtool libcppunit-dev libcurl3-dev libsigc++-2.0-dev libxmlrpc-c-dev unzip unrar-free curl libncurses-dev apache2 php5 php5-cli php5-curl libapache2-mod-scgi mediainfo ffmpeg screen

While you're waiting you may as well get a coffee. With that all finished, we're going to grab the rtorrent packages.

mkdir /root/rtorrent
cd /root/rtorrent

wget http://libtorrent.rakshasa.no/downloads/libtorrent-0.13.2.tar.gz
wget http://libtorrent.rakshasa.no/downloads/rtorrent-0.9.2.tar.gz
wget http://dl.bintray.com/novik65/generic/rutorrent-3.6.tar.gz
wget http://dl.bintray.com/novik65/generic/plugins-3.6.tar.gz

tar xvf libtorrent-0.13.2.tar.gz
tar xvf rtorrent-0.9.2.tar.gz
tar xvf rutorrent-3.6.tar.gz
tar xvf plugins-3.6.tar.gz

Now that you’ve got everything extracted it’s time to compile and install libtorrent

cd /root/rtorrent/libtorrent-0.13.2
./autogen.sh
./configure
make
make install

With libtorrent installed it’s time to compile and install rtorrent

cd /root/rtorrent/rtorrent-0.9.2
./autogen.sh
./configure --with-xmlrpc-c
make
make install
ldconfig

Once you've reached this bit we're finished with the hanging around. We'll now install rutorrent and its plugins.

cd /root/rtorrent
rm /var/www/index.html
cp -r rutorrent/* /var/www/
cp -r plugins/* /var/www/plugins
chown -R www-data:www-data /var/www
a2enmod scgi
service apache2 restart

We’ll next create a new user account for rtorrent to run as

adduser -m -r rtorrent

Now we switch to the new user account to add the required rtorrent directories and config

su rtorrent
mkdir .sessions
mkdir complete
mkdir torrents
mkdir watch
nano -w .rtorrent.rc

Copy and Paste the following:-

# This is an example resource file for rTorrent. Copy to
# ~/.rtorrent.rc and enable/modify the options as needed. Remember to
# uncomment the options you wish to enable.

# Maximum and minimum number of peers to connect to per torrent.
#min_peers = 40
#max_peers = 100

# Same as above but for seeding completed torrents (-1 = same as downloading)
#min_peers_seed = 10
#max_peers_seed = 50

# Maximum number of simultaneous uploads per torrent.
#max_uploads = 15

# Global upload and download rate in KiB. "0" for unlimited.
download_rate = 0
upload_rate = 100

# Default directory to save the downloaded torrents.
directory = ~/torrents

# Default session directory. Make sure you don't run multiple instance
# of rtorrent using the same session directory. Perhaps using a
# relative path?
session = ~/.sessions

# Watch a directory for new torrents, and stop those that have been
# deleted.
schedule = watch_directory,5,5,load_start=~/watch/*.torrent
schedule = untied_directory,5,5,stop_untied=~/watch/*.torrent

# Close torrents when diskspace is low.
schedule = low_diskspace,5,10,close_low_diskspace=200M

# Stop torrents when reaching upload ratio in percent,
# when also reaching total upload in bytes, or when
# reaching final upload ratio in percent.
# example: stop at ratio 2.0 with at least 200 MB uploaded, or else ratio 20.0
#schedule = ratio,60,60,"stop_on_ratio=200,200M,2000"
#schedule = ratio,5,5,"stop_on_ratio=1,1M,10"
ratio.enable=
ratio.min.set=1
ratio.max.set=2
ratio.upload.set=1K
system.method.set = group.seeding.ratio.command, d.close=, d.stop=

# Set Schedules
#schedule = throttle_1,00:10:00,24:00:00,download_rate=0
#schedule = throttle_2,07:50:00,24:00:00,download_rate=200

# Stop Seeding When complete
#system.method.set_key = event.download.finished,1close_seeding,d.close=
#system.method.set_key = event.download.finished,2stop_seeding,d.stop=

# The ip address reported to the tracker.
#ip = 127.0.0.1
#ip = rakshasa.no

# The ip address the listening socket and outgoing connections is
# bound to.
##bind = 127.0.0.1
#bind = rakshasa.no

# Port range to use for listening.
port_range = 51515-51520

# Start opening ports at a random position within the port range.
#port_random = no

# Check hash for finished torrents. Might be useful until the bug is
# fixed that causes lack of diskspace not to be properly reported.
#check_hash = no

# Set whether the client should try to connect to UDP trackers.
use_udp_trackers = yes

# Alternative calls to bind and ip that should handle dynamic ip's.
#schedule = ip_tick,0,1800,ip=rakshasa
#schedule = bind_tick,0,1800,bind=rakshasa

# Encryption options, set to none (default) or any combination of the following:
# allow_incoming, try_outgoing, require, require_RC4, enable_retry, prefer_plaintext
#
# The example value allows incoming encrypted connections, starts unencrypted
# outgoing connections but retries with encryption if they fail, preferring
# plaintext to RC4 encryption after the encrypted handshake
#
# encryption = allow_incoming,enable_retry,prefer_plaintext
#encryption = allow_incoming,try_outgoing

# Enable DHT support for trackerless torrents or when all trackers are down.
# May be set to "disable" (completely disable DHT), "off" (do not start DHT),
# "auto" (start and stop DHT as needed), or "on" (start DHT immediately).
# The default is "off". For DHT to work, a session directory must be defined.
#
# dht = auto
dht = off

# UDP port to use for DHT.
#
# dht_port = 6881

# Enable peer exchange (for torrents not marked private)
#
# peer_exchange = yes
peer_exchange = no

#
# Do not modify the following parameters unless you know what you're doing.
#

# Hash read-ahead controls how many MB to request the kernel to read
# ahead. If the value is too low the disk may not be fully utilized,
# while if too high the kernel might not be able to keep the read
# pages in memory thus end up trashing.
#hash_read_ahead = 10

# Interval between attempts to check the hash, in milliseconds.
#hash_interval = 100

# Number of attempts to check the hash while using the mincore status,
# before forcing. Overworked systems might need lower values to get a
# decent hash checking rate.
#hash_max_tries = 10

#Added for rutorrent stuff
encoding_list = UTF-8
#scgi_local = /tmp/rpc.socket
#schedule = chmod,0,0,"execute=chmod,777,/tmp/rpc.socket"
scgi_port = localhost:5000

# Start the plugins when rtorrent starts, not when the page is first opened. If the apache service is restarted separately the plugins are likely to be stopped. Only really needed for RSS feeds.
execute = {sh,-c,/usr/bin/php /var/www/php/initplugins.php &}

Save and Exit (ctrl+x then y then enter)
We now need to perform a test run of rtorrent.

rtorrent

It should start without any problems. You may get a few warnings inside rtorrent, but it should still be running. To Exit press ctrl+q.
You should now exit the rtorrent user.

exit

Finally we’re going to setup rtorrent to automatically start when the PI is powered up.

nano -w /etc/init.d/rtorrent

Copy and Paste the Following:-

#!/bin/bash

# To start the script automatically at bootup type the following command
# update-rc.d rtorrent defaults

RTUSER=rtorrent
TORRENT=/usr/local/bin/rtorrent

case $1 in
start)
#display to user that what is being started
echo "Starting rtorrent..."
sleep 4
#start the process and record its pid
rm -f /home/rtorrent/.sessions/rtorrent.lock
start-stop-daemon --start --background --pidfile /var/run/rtorrent.pid --make-pidfile --exec /bin/su -- -c "/usr/bin/screen -dmUS torrent $TORRENT" $RTUSER
## start-stop-daemon --start --background --exec /usr/bin/screen -- -dmUS torrent $TORRENT
#output failure or success (check start-stop-daemon's exit status straight away)
if [[ $? -eq 0 ]]; then
echo "The process started successfully"
else
echo "The process failed to start"
fi
#info on how to interact with the torrent
echo "To interact with the torrent client, you will need to reattach the screen session with the following command"
echo "screen -r torrent"
;;

stop)
#display that we are stopping the process
echo "Stopping rtorrent"
#stop the process using pid from start()
start-stop-daemon --stop --name rtorrent
#output success or failure
if [[ $? -eq 0 ]]; then
echo "The process stopped successfully"
else
echo "The process failed to stop"
fi
;;

*)
# show the options
echo "Usage: {start|stop}"
;;
esac

Save and Exit (ctrl+x then y then enter)
Then run


chmod +x /etc/init.d/rtorrent

update-rc.d rtorrent defaults

And that's it. You could now start rtorrent using "/etc/init.d/rtorrent start", but it's just as easy to reboot and test that the startup script runs. Once you've rebooted or started rtorrent you can access the web page at http://{ip-address or name}
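If you want to check it's actually running before opening the web page, a couple of quick checks (standard tools, nothing specific to this setup):

su rtorrent -c "screen -ls"        # should list a detached session called "torrent"
ps aux | grep [r]torrent           # the rtorrent process itself
netstat -lnp | grep :5000          # the SCGI port that rutorrent talks to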

Notes:-

This setup is meant to run internally, as such there is no security on the apache setup.

Personally I forward ports 51515-51520 on the router onto the PI. This makes a difference in download speed (much quicker), but as it's opening ports it's a security risk, so you'll have to decide whether or not to.

I run this setup behind a vpn using ipredator.se, if there's any demand I'll write up another guide on how to configure that and ensure your traffic is locked to only go over the vpn.

Raspberry PI – LDAP Auth

Using Raspbian 20-12-2013 with updates

Install libnss-ldap

apt-get install libnss-ldap

Once complete you’ll be prompted for ldap details

ldap server e.g ldap://192.168.1.3/ ldap://192.168.1.2/
base dn e.g dc=system,dc=local
ldap version e.g 3
Does LDAP require login e.g No
Special LDAP privileges for Root e.g No
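Those answers end up in /etc/libnss-ldap.conf (you can re-run the wizard later with dpkg-reconfigure libnss-ldap). The relevant lines look roughly like this for the example values above; option names are from memory, so double-check against your own file:

uri ldap://192.168.1.3/ ldap://192.168.1.2/
base dc=system,dc=local
ldap_version 3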

Once you’ve given the ldap details you need to update nsswitch.conf

nano -w /etc/nsswitch.conf

Previous config:

passwd:         compat
group:          compat
shadow:         compat

hosts:          files dns

networks:       files

protocols:      db files
services:       db files
ethers:         db files
rpc:            db files

netgroup:       nis

New config:

#passwd:         compat
passwd:         files ldap
#group:          compat
group:          files ldap
#shadow:         compat
shadow:         files ldap

hosts:          files dns

networks:       files

protocols:      db files
services:       db files
ethers:         db files
rpc:            db files

netgroup:       nis

Then we add the following so that home directories are automatically created

nano -w /usr/share/pam-configs/my_mkhomedir

Name: activate mkhomedir

Default: yes
Priority: 900
Session-Type: Additional
Session:
required                        pam_mkhomedir.so umask=0022 skel=/etc/skel

Apply the above using

pam-auth-update

To make sure everything is applied and the cache daemon doesn't screw about, I reboot. Once rebooted, login worked fine. A few commands that can help you see what's happening:

getent passwd
getent group
tail /var/log/auth.log

Raspberry Pi Scanner using HP PSC 2170

A few days ago I decided to have a good sort out of all my old documents, statements and piles of paper that I've kept for years just in case I need them (I doubt I'm ever going to need my mobile phone statement from 2008).
I've been looking on ebay for a few months for a networked duplex document scanner, but these seem to easily break £70 and I don't think I'd get a lot of use out of one beyond initially scanning everything.
I already have a HP PSC 2170 Printer/scanner (not duplex), but the windows software is pretty crap. I use irfanview (which is a great bit of software), but the HP scanning software insists on loading for each scan, then warming up the lamp and doing a prescan before allowing me to actually scan. It's always put me off using it.

So while thinking I've got papers everywhere, a scanner, and a few free PI's, a little light went off in my head: surely I can use a PI. I'm not expecting wonders with the scanner, but it's something to play with. So, google being your friend, I went looking and came across:-
http://eduardoluis.com/raspberry-pi-and-usb-network-scanner/

Ok, so I could just take what he's done and use it. It looks like a good idea, and he/she has updated the process to include some form of web interface. Anyway, it's not quite what I need; I just want to be able to scan to a device that's on my network.

I put a Raspbian image onto the PI, expanded the 16Gb SD card, named the PI and ran updates.
Then I installed ‘sane’ via ‘apt-get install sane’
{note this is from memory, my history doesn’t contain all the commands I’ve used, I’ll need to recheck this on a fresh install}
Once sane was installed I used 'scanimage -L' to check my device was being detected. Nope, nothing gets listed. A quick google search for 'sane HP PSC 2170' found a page listing the HP PSC 2170 as having good support, so I thought it should just be working. After a bit of scrolling around I noticed the table had a final column listing the driver as hpaio; this wasn't apparent against my printer, but never mind, it's another clue. A bit more poking around to find out how to install the driver led me to 'apt-get install libsane-hpaio'. Now when running 'scanimage -L' I get: device `hpaio:/usb/PSC_2170_Series?serial=XXXXXXXXX' is a Hewlett-Packard PSC_2170_Series all-in-one.

With all this up and running I'm now onto getting an image. Running 'scanimage -p --format=tiff --resolution=150 > test.tiff' gives me a nice image within a few seconds. I was surprised at just how quickly the scanner sprang into action: no scanning the same piece of paper twice, no warming up the lamp.

Now that I have this, I'd much rather have a jpg than a tiff, so I use 'convert temp.pnm temp.jpg'. Oh look, another error: convert isn't a valid command. Of course it's not, I forgot to install anything. 'apt-get install imagemagick', rerun the convert, and now I have a much smaller jpg file. (I'm going to skip the steps I used to install and configure samba to be able to pick up my images, but take it that I did. I could have just as easily used WinSCP to access the files.)

With all the tests done, and a lot of {press up, press enter}, I decided to write a quick bash script. First it just captured an image and converted it using a filename provided, then I improved it to name the file based on the unix time, and next I improved it to group multiple pages into a single folder. Below is the script. When run, it gives an option of single scan page or multi scan page, goes off and runs the scan and convert, and returns either to the initial prompt or, in multi mode, asks if there's another page.

cat /usr/sbin/scan_it.sh
#!/bin/bash
scan()
{
image_date=`date +%s`
pwd=`pwd`
echo "  :  Scanning Image $image_date into $pwd"
scanimage --resolution=200 > temp.pnm
convert temp.pnm $image_date.jpg
rm temp.pnm
}
while true
do
  read -n 1 -p "Single, Multiple, Quit? " INPUT
  case "$INPUT" in
  s)
    cd /var/scans/
    scan
    ;;
  m)
    folder_date=`date +%s`
    mkdir /var/scans/$folder_date
    cd /var/scans/$folder_date
    YN=y
    while [ "$YN" == "y" ]
    do
      scan
      echo "Another Page? (Y/N) "
      read -n 1 YN
    done
    ;;
  q)
    echo " Quitting... Goodbye..."
    exit 0
    ;;
  *)
    echo " Not an Option! "
    ;;
  esac
done
This script presumes you have already made a folder ‘/var/scans’ using ‘mkdir /var/scans’
I did also set up sane to run as a network service, making the scanner available from other machines, but I haven't tested this yet. For my needs I decided to keep the images as jpgs rather than make PDFs. I still may make PDFs out of all my documents, especially the multi-paged ones, but I like the idea that I can access jpgs on pretty much anything (including my recent experiments with python and pygame to display graphics on my TV via HDMI; I think PDFs are a little more restrictive).
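If I do go the PDF route, ImageMagick (installed above for convert) can stitch the pages together in one line. A quick sketch, assuming one of the multi-page folders created by the script (the folder name here is just an example):

cd /var/scans/1396212345
convert *.jpg statements.pdf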
When doing the initial google search I also came across another project, which looks like a really promising idea: moving away from needing terminal access to the PI to scan stuff.
The main reason I've decided not to go any further at the moment is that my printer/scanner, I think, is starting to die (not related to this setup). On a few occasions it's refused to scan and needed a reboot, it's also locked the scanner at the bottom of the page instead of returning after a scan, and it went through a lot of clunking (which just wouldn't stop) when it was initialising after one reboot.
While it has the possibility of failing to complete a scan I’d rather be able to see any errors on screen.
I've spent tonight scanning over 400 pages, at the end of which I copied all the files to my server and ran a backup, then happily set fire to each piece of paper I've been keeping for years. It's been something of a therapeutic exercise. I have no doubt I'll be returning to this little project in time to actually incorporate sending the files directly to the server, having the option to email them, converting to PDF on request, and god knows what else. For the time being this script is doing me just fine. The only option I may incorporate is a selection of a 'scan to' folder before selecting single/multiple; after scanning a lot of bank statements, phone statements, payslips, etc I know I'm going to be sorting them all into subfolders, and doing this at the time of scan just makes more sense.
Anyway that’s my rambling for today. Hopefully it will give someone some ideas.

Twonky Server Slow Scanning

Ok so first a little about my setup.
I have twonky running on a Raspberry PI along with OpenVPN. The whole point is so that I can play my files when away from home. This worked great a few months ago: simply plug in to the internet, the VPN connects, the shares are connected and twonky scans the folders. It's not perfect, in that twonky can scan empty folders and remove stuff from its database, so it kinda screws with the playlists and I can never remember what episode I got up to, but it still works. Then I upgraded to 6.0.39 and things went a little weird. It used to complete a scan within a few minutes, but now it was taking over an hour. For the most part it didn't really bother me: plug it in and leave it to do its thing. But if the VPN ever went a bit weird it could cause a full rescan, and it also seemed to use more data; previously a few Mb, now it could be a few hundred Mb.

It was more of an annoyance than anything. I did have a search around but couldn’t find anything that would have caused it in the version changes that jumped out at me.

That is until today.
Today I found an article on Series and Movie thumbnails http://server.vijge.net/tw-video-scraper/ so I grabbed the files. I'm not sure if they work 100% yet; I'm getting a symlink error when I run it manually, but it does pull and save a thumbnail. As I'm watching something I don't want to restart twonky to test it.
But I noticed a few other {scripts} in the cgi-bin folder, in particular ffmpeg-video-thumb.desc.
Now the stuff in the link does say to disable this, but it got me thinking: is this running on the PI? So I decided to have a look. There are a lot more files in the cgi-bin for 6.0.39 than previous versions, and this one will try to make a thumbnail for each video file. So I disabled it by putting a # at the start of each line. It may not be the cleanest approach, but I want to be able to put it back if it breaks something.
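If you want to do the same without opening an editor, sed can comment out every line in one go (the path is a placeholder, adjust it to wherever your Twonky install keeps its cgi-bin; the .bak copy makes it easy to put back):

sed -i.bak 's/^/#/' /path/to/twonky/cgi-bin/ffmpeg-video-thumb.desc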

Restarted twonky on the PI, and watched the status page. It managed to scan everything across the VPN within a few minutes again. And looking at the network stats probably pulled around 10Mb of data.
So I think that’s solved this little problem of slow scanning in Twonky for me.

Raspberry PI

After waiting all of Feb for news of when the Raspberry PI was going to be released, we finally got told that an announcement would be made at 6am on Wednesday 29th.
As I was still awake at 3am, I thought there was no point in setting the alarm; I'll just stay awake, hoping that this would indeed be the launch day.
(As it was very early in the morning and I've definitely slept since, the following times are approx.)
Sometime after 5am raspberrypi.com was replaced with a down for maintenance page.
Then at about 5:45am, raspberrypi.org seemed to disappear off the net, then came back with a nice server error, but this only lasted a few mins.
As the time got closer to 6am, I could feel the pain of the PI's servers. As I had at least 10 mins to go, I wasn't hitting refresh 60 times per min (I'm sure enough people were). Every now and again the server wouldn't respond, so I'd give it 30 secs and try again.

All of a sudden (I'd say at about 5:58am), the raspberrypi.org page loaded up and was completely different. This is it, the announcement we've all been waiting for.
As I scanned through quickly, I noticed them saying something about licensing 2 resellers. I kept looking for the 'where to buy', then I got it: 2 links. So I quickly right clicked on each and opened them in new tabs.

Both loaded fairly quickly. Once into the Farnell site I searched raspberry; as the results came back it said 2 products but didn't show them. So I clicked on the word products and it reloaded and displayed. Clicking the add to cart option threw up a nice little dialogue under the cart saying adding, then that was replaced with an 'unable to find price in database. Item not added to cart' error.
SHIT!! was the exact thought. Never mind, try the 2nd; obviously Farnell has an issue.
As I got to the RS site, again there wasn't anything on the homepage like I'd expect for a pretty big launch. So again type raspberry into the search and hit enter. It comes straight back with a specific RaspberryPI page. I scrolled quickly down looking for the add to cart or buy button, but nothing, only a form to fill out with my details to register my interest.
'Surely that's not right, I'm on the wrong page,' I thought. Again I typed raspberry into the search box and back I came to the same page. Bugger it; at least I found it on the Farnell site, so I went back to that tab, reloaded the page and searched again, expanded the products and clicked the add to cart button. Again I got the nice error!! I'm seriously not impressed now: I've hit both sites very early on but it's not getting me any closer to having a nice order number.
Back I went to the RS site and thought, well, I'll fill in the details and maybe it'll then direct me to another page. Filled them all in and submitted. Great, but still no page where I can order it. So I decided to go back to the start page for RS and look at the categories in case it's not coming up on a search. It was at this point that the RS site started to load slowly for me. So back I jump to Farnell, and refreshed back to the homepage.
Now the fun begins: Farnell just completely stopped loading anything for me. Give it a couple of secs and refresh; oh, I'm pushed to a page telling me their site is overloaded but I can phone an order through.
Great, stick the number into my mobile and press dial. Nope, it didn't dial, instead I'm being sent to a page. Thinking to myself wtf as the page sprang up with a list of alternative numbers, I quickly realised I'd installed the No to 08 app ages ago (I don't ring 08 numbers very often and forgot I did it). Never mind though, it's given me an alternative, so I clicked it and the dial commences. Ring-ring, and I'm answered. Oh NO, it's an IVR happily telling me their lines open at 8am but I can place an order or enquiry at their website. Not F###ing likely, I thought while hanging up.
Refreshed Farnell, dead. Back I go to RS. It's loading, but the search is still giving me the same register my interest page.

All this has taken place within about 5-10 mins.

So thinking quite calmly, ‘Maybe they haven’t released yet, I should really read through the raspberry announcement page’. So back to that tab and scrolled to the top. This time paying attention, I take in all the info and yep they are on sale from today.
Back to Farnell tab and refresh = nothing.
Over to the RS tab and refresh = oooohh it’s slow, and now it’s gone.
After about 5-10 mins of jumping back and forth between tabs, I have to admit I'd also opened additional tabs: since I'd been refreshing the Product Search on Farnell, I'd opened a new one for their front page, and similar for RS.
As I'm getting nowhere fast, I went back to the RPI page. Then I noticed their twitter link. So I opened this (in a new tab).
Oh, there's lots of people shouting on here that they can't get anywhere. All of a sudden I noticed a posting from Liz saying if you're getting the register your interest page on RS you're on the wrong page.
'F#ck It!!' I thought, back to the RS tabs and refresh. Luckily one did load and I did another search; still it took me back to the register interest page. Another search gave no results.
Back to Twitter I go; in the 20 or so seconds I'd been away twitter was very happy to tell me there were 80+ new posts.
Now I have to admit, I’ve gone through product launches from a tech side and it’s always worse than what the marketing people tell ya. I knew that the R-PI was going to be big and that the first 10,000 was going to sell out fast. I’d only judged that on the number of downloads that the (pre) software image had a few weeks back and the number of people talking in the forums. Having never looked at the number of followers the R-PI had on twitter, I didn’t expect the overwhelming chatter I now threw myself into.
I'm not a big fan of twitter personally, it just doesn't appeal or click with me. But at 6:30 in the morning I have to say I was enjoying managing to read 2 or 3 comments before it popped up another 10+ new comments, clicking that, reading a few and repeating.
While doing this I was jumping to other tabs and trying to refresh; unlike earlier though, where the pages did load but slowly, I was now hit with no response across all the Farnell & RS refreshes.

After another 20 mins of reading tweets and trying to refresh pages, I had a call from my cousin: did I want to go for coffee, as he's now on the way home.
'Yep, fine', I'm not getting anywhere with this now. 2 sites down, lots of angry tweets, and my hopes of getting a R-PI early were pretty much wiped out.

After an eventful trip to the coffee place, which consisted of the coffee girl in tears because a boss she doesn't like is coming back to the store and she now has to transfer stores, we're back in the car. Oh, the time's 7:59am. Well, their lines open at 8am; I doubt very much that they'll have any left now, but don't ask, don't get, so I pressed redial on my phone. It rang and rang, then an announcement: 'higher call volumes than expected, please hold'. At least they're open. Then I get a nice double beep off my phone: 8% battery remaining.

Now I've been in much worse positions with little battery life, but the timing just sucks. Never mind, I'll keep on the line until the battery is dead. I have to say a big thumbs up to HTC: every other phone I've had has pretty much died within seconds of telling me it's low when on a call, but this one managed a 10 min journey home while still on the call (in a queue I should add), and my 5 mins of fumbling around single handed trying to find my keys while not dropping either my phone or tablet.
Alas, after 15 mins of holding I decided there's little point: if they had any left at 8am, 15 mins worth of calls would definitely have wiped them out. So I hung up.
Went back to my laptop and refreshed. Both pages still out, so back to twitter I go.

Plenty more chatter to catch up on, but still there’s one hell of a constant flow of new stuff coming through.
All of a sudden a refresh of Farnell kicked in. It was my search one, so I quickly clicked add to cart. Oh, it's added!!!!!
Checkout!! – it’s only gone straight through and given me the checkout.
So in I filled the address info and proceeded. It's trying to load an invoice page, but nope, timed out. I can see in the address bar I have a session ID, so maybe, just maybe, it will remember me if I refresh and it won't kick me back to the start.
5 refreshes later and I have an invoice page loaded. Quickly filled in my details and onto the next page.
All in all it took me about 5 mins to get all the way through, and a hell of a lot of refreshes along the way, but finally my order is complete.

I received an email confirming my new account, and a few hours later I had some mails confirming my order and giving an estimated delivery of the end of March (I'd be quite happy with that, I thought).

Today I received another confirmation of my order, but now pushing estimated delivery back to the end of April. I’m kind of expecting that to get further back as time progresses, but we’ll see what happens.

This evening I received a mail from RS confirming my interest and saying they will be mailing out to take orders later in the week. They also put that the orders will be sent out on a first come, first served basis.
Now I’m not entirely sure if they mean the register your interest order, or a new order starting with another gold rush when they mail people saying you can buy them from them.
If it's the former, I'm hopeful that my submission of interest was early enough to be amongst the first while everyone else was still looking to buy it. It was, after all, done while their site was quite alive.

Time will tell. I’m still really excited with all this, and think that the enormous following it’s gathered has to be a good thing. I’ve got so many little projects that I want to do with it, if I could get a few by the summer-time it would definitely keep me busy all summer.

Raspberry PI

Not sure if anyone reads these at all, but thought I’d make a comment as I go.
I'm waiting on news from Raspberry PI to know when they're being launched. At first they're going to have a run of just 1000 units. I have a load of ideas I'd like to try and do on it, from some home automation and a security system, all the way to maybe getting zoneminder running on it with a webcam.
It looks a fantastic little board, and I think the idea of putting them into schools and letting kids play, program and create is a great one. I wish they'd done that kind of thing when I was in school. I wanted to do the computer control part of CDT, but the teacher wouldn't run it, or even let me do it as an extra after school in my own time.

Still I’ve done little bits and pieces as I go, but I’m really looking forward to the Raspberry PI coming out and getting to just play and experiment again. After looking on the forums yesterday I doubt very much that I’d be in with a chance of getting one of the 1st 1k, and it’s looking like a good couple of runs of 1k each are going to shift like hot cakes, but I’ll be looking out hoping the site doesn’t crash as soon as I know they’re out.