Mining DarkCoin on Linux using Digital Ocean

A friend recently asked me about DarkCoin, and I had to admit I hadn’t heard anything about it. Sure, I have some bitcoins, but only really to say I have them, so when he asked I just thought `ah, there’s another one`. He then asked how someone would mine these things, so I showed him a few YouTube videos of pretty impressive setups.

Anyway, to the point: I ran through a quick demonstration of how the miner would work and set one up on a Virtual Server (Droplet). So for anyone that’s interested, here goes (p.s. I can’t say you’ll make a fortune from this method, it’s more a proof of concept).

We’ll be using DigitalOcean for our server, so you’ll need an account. Sign up here. There are always some promo codes for credit; at present SSDMAY10 gives you an extra $10. I personally use DigitalOcean to host a few of my projects and they’re great for being able to test ideas.

Once you have your account set up, create a new Droplet. I normally use the latest Ubuntu.
So the specs I’m testing this on are:-

Hostname : Testy
Size : 512MB, 1CPU, 20GB SSD, 1TB XFER $5pm
Region : Amsterdam 2
Image : Ubuntu 14.04 x64
SSH Key : testy (optional, you may not have added any ssh keys so you can skip this selection)
Settings : Enable VirtIO - Checked, Private Networking - Unchecked, Enable Backups - Unchecked.

Once created you will be emailed your root password (unless you selected SSH keys).
More CPUs would be better, but this one’s just for testing it out.

Using PuTTY, log in to your new Droplet.
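
If you’re on Linux or a Mac you can skip PuTTY and just SSH straight in, substituting your Droplet’s IP from the email or the control panel:

ssh root@<droplet-ip>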

Now we’ll install some dependencies and clone the miner’s git repository.

apt-get update
apt-get install build-essential m4 libssl-dev libdb++-dev libboost-all-dev libminiupnpc-dev git automake libcurl4-openssl-dev screen

git clone https://github.com/ig0tik3d/darkcoin-cpuminer-1.2c.git

cd darkcoin-cpuminer-1.2c
chmod a+x autogen.sh
./autogen.sh
./configure
make
make install
cd ~

If all has gone well you can run the miner using

minerd -a X11 -o stratum+tcp://drkpool.com:3333 -u bighippo999.testy -p password

This should start producing output showing your hash rate and any shares found.

One thing to note: the above command will mine under MY pool test login. You will not receive any coins doing this, so it should only be used as a test. You can stop it with CTRL+C.

As long as that all seems to be OK, we should now join a pool. I’m currently using the DarkCoin Official Pool. I have no idea how ‘official’ it is, but I was drawn to the graphs 🙂 I did briefly run a miner on Windows using this guide and received coins directly into my wallet, whereas using the pool I’ll have to cash anything out to my wallet, and that will incur a charge. I would have also shown the servers from the other guide, but at present they seem to be down; I suppose that’s why this pool does charge a small admin % on each transaction. I can’t say I mind for good service.

Once you’ve signed up and have your details, just run the miner again

e.g. minerd -a X11 -o server:port -u user.worker -p password

Replacing server:port, user.worker and password with your own details.

If you’re concerned about your pool failing, here’s a handy script I came across

#!/bin/bash
## Miner Failover Script

## Will continuously try each pool until one responds, ordered by priority.

#GLOBAL
## Set options
RETRIES="3"
RETRY_PAUSE="5"   # renamed from SECONDS, which is a bash builtin that counts elapsed time
MINERD="minerd"

#POOL1
## Set userpass information for first pool
USERPASS1="name.worker:password"
URL1="stratum+tcp://lotterymining.com"

#POOL2
## Set userpass information for second pool
USERPASS2="name.worker:password"
URL2="stratum+tcp://www.drkpool.com:3333"

while :
do
    $MINERD -a X11 --retries=$RETRIES --retry-pause=$RETRY_PAUSE --userpass=$USERPASS1 --url=$URL1
    $MINERD -a X11 --retries=$RETRIES --retry-pause=$RETRY_PAUSE --userpass=$USERPASS2 --url=$URL2
done

You can copy and paste this into a new document

e.g. nano -w startmining.sh

Paste the code.
Press CTRL+X, then Y, then Enter to exit, saving the file.
Then issue

chmod +x startmining.sh

This makes the file executable. Lastly, it’s always better to run processes you want to keep running (but also be able to check on) in a screen.
The command

screen -d -m -S MINER minerd -a X11 -o stratum+tcp://drkpool.com:3333 -u bighippo999.testy -p password

Will start the miner in a screen, and you can view this with

screen -r

Then either disconnect with CTRL+A, then D or stop the miner using CTRL+C.
If you used the above script you can start that in a screen with

screen -d -m -S MINERPOOL ./startmining.sh

Again to view it

screen -r

Then either disconnect with CTRL+A, then D or stop the miner using CTRL+C.
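
If you end up with more than one detached session (say both MINER and MINERPOOL), a bare screen -r will refuse to guess. List the sessions and attach by name instead:

screen -ls
screen -r MINERPOOL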

EDIT:
The other pool I’m using came back online; check it out here. This would be started with the following

minerd -a X11 -o http://q30.qhor.net:7903 -u Xn7JauXvQmorx82yEN2EMvbt3dX45uoiCh -p password

I hope you’ve found some of this useful. If you feel like sending me a few DarkCoins, my address is Xn7JauXvQmorx82yEN2EMvbt3dX45uoiCh, and my Bitcoin address is 1DpfEhiVjNM4WZT49X18m3vXyUvGpuvz9i

Raspberry PI + GlusterFS (Part 3)

After hitting errors when installing GlusterFS in Part 2, I decided to split out the solution.
Ashley saw Part 2 and had already run into the same problem (see the comment); his comment gave me a huge help on what to do next.
I’ve started with a fresh Raspberry Pi image so that nothing conflicts. Again, get the latest updates

apt-get update
apt-get upgrade

Then download the needed files with the following commands

wget http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/Debian/apt/pool/main/g/glusterfs/glusterfs_3.5.0.orig.tar.gz
wget http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/Debian/apt/pool/main/g/glusterfs/glusterfs_3.5.0-1.dsc
wget http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/Debian/apt/pool/main/g/glusterfs/glusterfs_3.5.0-1.debian.tar.gz

Now extract the archives

tar xzvf glusterfs_3.5.0.orig.tar.gz
tar xzvf glusterfs_3.5.0-1.debian.tar.gz

We need some build tools, so

apt-get install devscripts

Then we move the debian folder into the glusterfs folder and change into the glusterfs folder

mv debian glusterfs-3.5.0/
cd glusterfs-3.5.0

Next run

debuild -us -uc

This will start but will throw dependency errors.
The important line is

Unmet build dependencies: dh-autoreconf libfuse-dev (>= 2.6.5) libibverbs-dev (>= 1.0.4) libdb-dev attr flex bison libreadline-dev libncurses5-dev libssl-dev libxml2-dev python-all-dev (>= 2.6.6-3~) liblvm2-dev libaio-dev librdmacm-dev chrpath hardening-wrapper

Which I resolved with

apt-get install dh-autoreconf libfuse-dev libibverbs-dev libdb-dev attr flex bison libreadline-dev libncurses5-dev libssl-dev libxml2-dev python-all-dev liblvm2-dev libaio-dev librdmacm-dev chrpath hardening-wrapper

With the dependencies installed I ran

debuild -us -uc

This may output some warnings. On my system I had a few warnings and 2 errors “N: 24 tags overridden (2 errors, 18 warnings, 4 info)”, but it didn’t seem to affect anything.
Now we’ll wrap up with

make
make install

The make step probably isn’t necessary. Once installed, we need to start the service

/etc/init.d/glusterd start

You can check everything is working ok with

gluster peer status

This should return

Number of Peers: 0

The only thing left to do is ensure glusterd starts with the system

update-rc.d glusterd defaults
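
As a quick sanity check that the build you’ve just installed is the one being picked up, you can query the version (assuming the binaries landed in your PATH):

gluster --version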

And we’re all set. Now you can take a look at Part 4

Error ‘glibc detected *** /usr/bin/python: double free or corruption (fasttop)’ on Raspberry Pi using Python

Just a very quick post.
I’m working on a small project I’ve had in mind for a few months: basically pull an image and display it on the TV (there’s more to it, or I’d just settle for Raspbmc).

So I’ve been coding it all up using Python with pygame to display the image, all fine.

Then I introduced loops to refresh the images and update the display. I had issues around other code and couldn’t keep looking at the TV while the code was running, so I introduced a few MP3s to play as the images were being refreshed and as the display was updating. All worked well; it sounded like I was on the Enterprise.

Further into coding up different functions, and some more headaches, I’d cleared up a lot of the minor errors I had been getting, and was now ready to push the system to speed up the refreshes.

I’d set a few images to refresh every second, and it appeared to be going well.
Then it stopped making a sound; I checked, and Python had crashed with the error:
glibc detected *** /usr/bin/python: double free or corruption (fasttop)

So I changed the code I’d been working on (clearly that’s the problem, I didn’t have this earlier), but nope, that didn’t solve it.
Onto Google, but this brought up a lot of bug reports about other things. I checked a few, made a few changes, but I was still getting the problem. Troubling, though: if this was going to come down to the fact I’m refreshing a lot and it’s getting intensive, it was going to be a show stopper for this project.

Thankfully I did a little more searching and came across
http://www.raspberrypi.org/forums/viewtopic.php?t=36878&p=308220

Now I don’t have the same problem (playing WAVs), but this drew me to the fact I’m playing MP3s, and I’ve increased how much I’m playing them, so they now overlap a lot more than earlier.

So I decided to drop them out. Eureka: it’s been running about 15 minutes and not died (previously a few minutes). The sad thing is I’m now kinda missing my beeps.
I had hoped to be able to play an alert sound if certain events happened, so I may have to rethink how I can do so without causing crashes.

Anyway, there it is in case it’s of help to someone else. I’ll be posting about the project at a later date, once I’ve done a little more with the code and tested it a bit more. It’s my first serious attempt in Python; I can make programmers cry at the best of times, so I’m expecting people to be able to rip this apart (but I’m actually very interested in getting it stable, so I will welcome the criticism/ideas).

Ubuntu LDAP Authentication (You are required to change your password immediately (password aged)) Part 2

In an earlier post I was encountering password problems when authenticating via OpenLDAP. This was prompting me to change my password while logging onto certain servers, but not all. The change prompt would then disappear after typing the current password and close the PuTTY session.

Having resolved that particular problem I’m left with another. Although the password change is successful I now have to change the password on each login.

When I encountered the first problem a few months back I thought it was to do with the LDAP ACL. I think I was partly right as this is a continuation of that problem and it does look like this will be ACL related.

So, pulling together what information I can, starting with the shadow information:-

root@Exxxxxxxx:~# getent shadow
root:*:::45::::
nobody:*:::::::
{username}:*:::365:::16177:

Using slapcat to pull all the information off LDAP, below are the relevant bits:-

shadowMax: 365
shadowExpire: 16177
shadowLastChange: 15921

So it looks like the shadowLastChange isn’t allowed to be viewed. I found someone else recommending that you make shadowLastChange readable by all. Below is the current ACL:-

dn: olcDatabase={1}hdb,cn=config
olcAccess: {0}to attrs=userPassword,shadowLastChange by self write by anonymous auth by dn="cn=admin,dc=domain,dc=local" write by * none
olcAccess: {1}to dn.base="" by * read
olcAccess: {2}to * by self write by dn="cn=admin,dc=domain,dc=local" write by * read

And here is the configuration that’s supposed to work (I say ‘supposed to’ as I’m writing this while doing it):-

dn: olcDatabase={1}hdb,cn=config
olcAccess: {0}to attrs=userPassword by self write by anonymous auth by dn="cn=admin,dc=domain,dc=local" write by * none
olcAccess: {1}to attrs=shadowLastChange by self write by dn="cn=admin,dc=domain,dc=local" write by * read
olcAccess: {2}to dn.base="" by * read
olcAccess: {3}to * by self write by dn="cn=admin,dc=domain,dc=local" write by * read

I’m not going to address any security concerns about making this field readable; for me the risk is minimal.
So how do I change the ACL from the 1st to the 2nd? Make a new text file:-

nano -w auth_new.ldif
dn: olcDatabase={1}hdb,cn=config
changetype: modify
replace: olcAccess
olcAccess: {0}to attrs=userPassword by self write by anonymous auth by dn="cn=admin,dc=domain,dc=local" write by * none
olcAccess: {1}to attrs=shadowLastChange by self write by dn="cn=admin,dc=domain,dc=local" write by * read
olcAccess: {2}to dn.base="" by * read
olcAccess: {3}to * by self write by dn="cn=admin,dc=domain,dc=local" write by * read

Make sure to change the dn to your specific setup. Failure to do so may result in you losing admin access. Useful command:-

ldapsearch -Q -LLL -Y EXTERNAL -H ldapi:/// -b cn=config '(olcAccess=*)' olcAccess

Next you modify the LDAP using:-

ldapmodify -Q -Y EXTERNAL -H ldapi:/// -f auth_new.ldif
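
It’s worth re-running the earlier search after the modify, to confirm all four olcAccess rules are in place:

ldapsearch -Q -LLL -Y EXTERNAL -H ldapi:/// -b cn=config '(olcAccess=*)' olcAccess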

Now when I check out the shadow information I get:-

root@Exxxxxxxx:~/ldap# getent shadow
root:*:15797::45::::
nobody:*:::::::
{username}:*:15921::365:::16177:

Now when I login I’m not being prompted to change my password. I’m not entirely sure if this is right or wrong anymore as I’ve been changing my password all night, so I guess I’ll just wait for a few user accounts to expire and check that it does all work.

Update: It does work. I tested it with a user’s account that was having problems logging into one of the servers; they were still prompted for their LDAP password and told they must change it. They did that, then closed PuTTY and tried again, logged in with the new password, and weren’t re-prompted to change it again.

Ubuntu LDAP Authentication (You are required to change your password immediately (password aged))

I’ve been hitting a problem on one of my servers for a while: when trying to log in, users keep getting prompted to change their password, but PuTTY just closes after they retype their password.

I thought I’d narrowed it down to an LDAP option, rootbinddn, which I use on some of my servers (those I consider secure). The servers I don’t have rootbinddn set up for receive the password change prompt; those that have it set just allow login.
I looked at it a few months back but never had the time to really investigate and resolve it. I thought it had something to do with the LDAP ACL permissions, in that the user doesn’t have access to the password fields for their own account. However, looking at it today, I think I may be only partly correct.

If I run login {username} I get the below:-

root@Exxxxxxxxx:~# login {username}
Password:
You are required to change your password immediately (password aged)
Enter login(LDAP) password:

Authentication information cannot be recovered

I hadn’t seen the ‘Authentication information cannot be recovered’ error before, as PuTTY always closes. Checking out this error (I google every error) I found the solution was installing libpam-cracklib:-

apt-get install libpam-cracklib 

So now when I run login {username} I get:-

root@Exxxxxxxxx:~# login {username}
Password:
You are required to change your password immediately (password aged)
Enter login(LDAP) password:
New password:
Retype new password:
LDAP password information changed for {username}
Last login: Sun Aug 4 04:48:45 BST 2013 on pts/1

And a nice bash prompt.
Now onto problem #2: although I can now log in after changing the password, I get the password change prompt on each login. Changing the password does take, as logging in the second time uses the new password. So I think it’s now down to the LDAP ACL for shadowLastChange; I’m going to investigate that, and will put anything that corrects it in another post.

Quagga Automatic Restart/Recovery

As I’m sure I’ve posted before, I use OpenVPN and Quagga to build up my network. After recently updating all my Ubuntu servers something strange happened: quagga, which had been pretty rock solid, started screwing up.

Previously I’ve had the odd problem where a VPN would drop out and somehow block coming back up, so I scripted some VPN checks to confirm each link was up; if not, the script restarts the VPN link that’s down. This has been working fine on each server, and with quagga running, routes around the entire network just keep working. Until, of course, the recent updates on each system, which seem to have introduced a fault with quagga. Although quagga remains running, all the routes disappear and just won’t come back. An error does get logged in one of the logs (I’ll try to find what the error was and update here), but the quagga watchdog doesn’t see a problem since everything is still running.

So I’ve put together a little script below that checks the routing table; if there are no entries relating to other networks (not local), quagga is considered faulty and gets restarted.

nano -w /usr/sbin/check_quagga_routes.sh
#!/bin/bash
checktime=`date`
echo $checktime : Checking Routing... >> /var/log/connection.info
# Look for /24 routes on anything other than the local interface (eth0)
routing=`route -n | grep -i 255.255.255.0 | grep -vi eth0`
if [ -z "$routing" ]
then
    # No routes to VPNs detected. Restart Quagga.
    /etc/init.d/quagga restart
    # Log the restart
    echo $checktime : VPN Routes NOT Detected. Restarting Quagga! >> /var/log/connection.info
fi
echo $checktime : Routing Check Complete. >> /var/log/connection.info
exit 0
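
One step that’s easy to miss: make the script executable, or cron won’t be able to run it:

chmod +x /usr/sbin/check_quagga_routes.sh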

The crontab entry is:-

*/1 * * * *   root     /usr/sbin/check_quagga_routes.sh > /dev/null 2>&1

This should mean that I won’t have to manually restart quagga again if the fault occurs. Hopefully whatever has happened in the update will be fixed, but there’s no harm in leaving this in place as far as I can see.

Normally I’d opt for Nagios to run a check and on failure run a handler script, but since all the Nagios checks and handlers get run across the VPN, as soon as the routes go down Nagios is pretty useless. So this has to be run on each of the servers.
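
For completeness, the VPN checks I mentioned at the top are along these lines. This is just a rough sketch of the idea; the address, link name and log file are placeholders for whatever your own setup uses:

#!/bin/bash
## VPN Link Check (sketch) - one block like this per link.
# A host on the far side of the link, and the name of the matching
# OpenVPN config (e.g. /etc/openvpn/site1.conf) - both placeholders.
REMOTE_IP="10.8.0.1"
VPN_NAME="site1"

checktime=`date`
# Ping the far side; if it doesn't answer, bounce that OpenVPN instance
if ! ping -c 3 -W 5 $REMOTE_IP > /dev/null 2>&1
then
    echo $checktime : Link $VPN_NAME down. Restarting! >> /var/log/connection.info
    /etc/init.d/openvpn restart $VPN_NAME
fi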

Twonky Server Slow Scanning

Ok so first a little about my setup.
I have Twonky running on a Raspberry Pi along with OpenVPN. The whole point is so that I can play my files when away from home. This worked great a few months ago: simply plug in to the internet, the VPN connects, the shares are connected, and Twonky scans the folders. It’s not perfect, in that Twonky can scan empty folders and remove stuff from its database, so it kinda screws with the playlists and I can never remember what episode I got up to. But it still works.

Then I upgraded to 6.0.39 and things went a little weird. It used to complete a scan within a few minutes, but now it was taking over an hour. For the most part it didn’t really bother me: plug it in and leave it to do its thing. But if the VPN ever went a bit weird it could cause a full rescan; it also seemed to use more data, previously a few MB, now it could be a few hundred MB.

It was more of an annoyance than anything. I did have a search around, but nothing in the version changes jumped out at me as a likely cause.

That is until today.
Today I found an article on series and movie thumbnails http://server.vijge.net/tw-video-scraper/ so I grabbed the files. I’m not sure if they work 100% yet; I’m getting a symlink error when I run it manually, but it does pull and save a thumbnail. As I’m watching something, I don’t want to restart Twonky to test it.
But I noticed a few other scripts in the cgi-bin folder, in particular ffmpeg-video-thumb.desc.
Now the article does say to disable this, but it got me thinking: is this running on the Pi? So I decided to have a look. There are a lot more files in the cgi-bin for 6.0.39 than previous versions, and this one will try to make a thumbnail for each video file. So I disabled the code by putting # at the start of each line. It may not be the cleanest approach, but I want to be able to put it back if it breaks something.
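
If you’d rather not comment the lines out by hand, sed can do it in one go and keep a backup to restore from (the cgi-bin path below is a placeholder; it depends where your Twonky install lives):

cd /path/to/twonky/cgi-bin
sed -i.bak 's/^/#/' ffmpeg-video-thumb.desc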

Restarted Twonky on the Pi, and watched the status page. It managed to scan everything across the VPN within a few minutes again, and looking at the network stats it probably pulled around 10MB of data.
So I think that’s solved this little problem of slow scanning in Twonky for me.

Asterisk UK Caller ID

I’m sure I’ve got an older post giving details of a patch to get caller ID working. I’d used the same patch for about 5 years over the different versions of Asterisk and it always worked, until recently.

A fresh install on one of the servers: I applied the patch and made a bunch of test calls; half caught the Caller ID, half missed it. So I worked on it for a while and could not get it to reliably detect the Caller ID. I posted on the forums in case anyone else had used the patch; nothing.
What made it worse was this server was one of three running Asterisk with a Digium card on a UK phone line; the other two were still working. So in an effort to eliminate possible causes I ended up installing the same version of Asterisk and DAHDI that was on each of the other two servers in turn and running tests, both with the patch and without. Sometimes it looked better than others, catching 8 out of 10 CLIs, but that wasn’t the 10/10 I was getting on this server before a fresh install of the OS.

After a while I gave up; it wasn’t an important server. I’d only put the Asterisk card in to track calls coming into the home line, and it didn’t do anything more than log the date and CLI. Everything else was done over SIP on this server, so I left it kind of working.
Over time the logs seemed to fill with unknown CLIs more often.

Anyway, onto this month. Earlier in the month I rebooted my server remotely, and when it didn’t come back online I went to investigate why. It was at this point I was almost in tears. I’d stupidly left an SD card in the server that I was imaging, and the server had booted from it. What made this a really, really bad move was that I’d imaged the SD card with the Raspberry Pi OS, and it decided to go off and format sda (I presume hard coded), which unfortunately was my HDD, NOT the SD card. To make things worse, the OS on this server was encrypted, so there was little chance of recovering anything at all.
As the server runs a lot of different stuff, I had to rebuild it as quickly as I possibly could.
A few days ago I got onto the Asterisk installation, so I ran through installing DAHDI and Asterisk, and changed the configuration based on memory and copying from the other two servers. Then I hit the Caller ID problem: out of 8 test calls, 3 CLIs. Now I know this one was working 100%, so it had to be something obvious.
Then I remembered seeing something on the “working” server as I was checking its configuration to copy and paste.

nano -w /etc/modprobe.d/dahdi.conf
# You should place any module parameters for your DAHDI modules here
# Example:
#
# options wctdm24xxp latency=6
options wctdm opermode=UK fwringdetect=1 battthresh=4

Not something I would have paid attention to; I only ever remember editing a dahdi.blacklist.conf to stop the card being detected as a NetJet. Anyway, I added the above file and contents, then rebooted. (It should be noted at this point that I hadn’t applied the patch and was almost about to.)
After the reboot I started Asterisk and ran through test calls. 9 out of 9 calls, all CLI detected.

So the following day I went and checked the server I was originally having problems with after a fresh install. That didn’t have the line

options wctdm opermode=UK fwringdetect=1 battthresh=4

But it did have the blacklist set up. So I added the above and rebooted. Ten test calls later, all CLIs detected.
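
If you want to confirm the options were actually picked up after a reboot, modinfo will list the parameters the driver accepts, and where the driver exposes them the live values show up under /sys (not every parameter is made visible there, so treat this as a best-effort check):

modinfo -p wctdm
cat /sys/module/wctdm/parameters/opermode 2>/dev/null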

I’ve no idea where I got the above from, but I must have stumbled upon it as a replacement fix instead of the patch I’d been using. I say that because the patch files weren’t on the server that had this configuration, and I always leave the patch files in root in case I need them again.

So if you’re having a problem with UK Caller ID it may be worth adding the above.
For info I’m running:-
Server 3:
dahdi-linux-complete-2.6.0+2.6.0
Asterisk 1.8.6.0

Server 2:
dahdi-linux-complete-2.6.1+2.6.1
Asterisk 1.8.6.0

Server 1:
dahdi-linux-complete-2.6.2+2.6.2
asterisk-11.2.1

Server 1 being the newest install, and Server 2 being the one I left with the problem for months.

Hope this helps someone, as UK Caller ID on Asterisk used to be a huge pain with little support, and not something people seemed to cover, because businesses tend to go SIP, IAX or ISDN, eliminating the need to fix missing CLIs.

NRPE compile error Cannot find ssl libraries

For years I’ve avoided SSL with NRPE because it just never seemed to work for me, and on an internal network is it really needed?

I’m now doing a fresh install of Nagios on a Raspberry Pi and decided, after recently setting up SSL certificates on all my sites, to see if I can get this working with NRPE.

First things first, I’ve made sure the following are installed

apt-get install openssl libssl-dev build-essential

It’s also presumed you have already compiled and installed the Nagios plugins and have a nagios user and group. Download and unpack NRPE 2.13

mkdir nagios
cd nagios
wget http://prdownloads.sourceforge.net/sourceforge/nagios/nrpe-2.13.tar.gz
tar xzvf nrpe-2.13.tar.gz

Now onto the actual work:-

cd nrpe-2.13/
./configure

After a little time I got the nice error I’m used to seeing

checking for SSL libraries... configure: error: Cannot find ssl libraries

So after a little searching I hit upon libssl.so missing. Well, not so much missing; it just doesn’t have a link where it’s expected. The answer is to create a new symlink to it

ln -s /usr/lib/arm-linux-gnueabihf/libssl.so /usr/lib/libssl.so

UPDATE: I noticed on another system this didn’t work; that was because /usr/lib/libssl.so was already present but pointing to the wrong place, which stopped the link being created. So I ran rm /usr/lib/libssl.so, then reran the above; this created the link properly and ./configure then ran as normal.

On my system it’s in the /usr/lib/arm-linux-gnueabihf folder, but on an x86 system this could be something like /usr/lib/x86_64-linux-gnu/
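
If you’re not sure where your distro has put the library, a quick search will turn it up so you can point the symlink at the right place:

find /usr/lib -name 'libssl.so*'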

So now I rerun the configure step

./configure

At the point where I see

*** Generating DH Parameters for SSL/TLS ***
Generating DH parameters, 512 bit long safe prime, generator 2
This is going to take a long time
...................+...................+............+.......................++*++*++*++*++*++*

I know it’s taken the SSL stuff well. When the configure completes without errors, you can continue to the make stage

make all
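
The xinetd config below points at /usr/local/nagios/bin/nrpe, so the daemon needs installing there too. From memory the relevant targets are the ones below, though names vary a little between NRPE versions, so check the Makefile if one is missing:

make install-plugin
make install-daemon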

Update /etc/services, inserting "nrpe 5666/tcp # NRPE"

nano -w /etc/services
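
If you’d rather not open an editor for a one-line change, appending the entry works too (the grep guards against adding it twice):

grep -q '^nrpe' /etc/services || echo 'nrpe            5666/tcp        # NRPE' >> /etc/services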

I’ve always run NRPE under xinetd for all my installs, so make sure xinetd is installed

apt-get install xinetd

Once it’s installed you need to add a new service, by editing the file /etc/xinetd.d/nrpe

nano -w /etc/xinetd.d/nrpe

My sample configuration:-

# default: on
# description: NRPE (Nagios Remote Plugin Executor)
service nrpe
{
    flags           = REUSE
    socket_type     = stream
    port            = 5666
    wait            = no
    user            = nagios
    group           = nagios
    server          = /usr/local/nagios/bin/nrpe
    server_args     = -c /usr/local/nagios/etc/nrpe.cfg --inetd
    log_on_failure  += USERID
    disable         = no
    only_from       = 127.0.0.1 192.168.0.0/16
}

You will probably need to change the user, group and only_from fields to suit your installation.
Next you’ll need a configuration. I’m just going to copy the sample for now

mkdir /usr/local/nagios/etc
cp sample-config/nrpe.cfg /usr/local/nagios/etc/nrpe.cfg
chown -R nagios:nagios /usr/local/nagios/etc/

Lastly, restart the xinetd service

/etc/init.d/xinetd restart

You can test your NRPE installation from your Nagios server with the check_nrpe command. It’s probably worth also checking the syslog or messages on the system after restarting xinetd, as any errors regarding startup will be reported.
I’ve tested from my server using

/usr/local/nagios/libexec/check_nrpe -H titan

titan being the name of the server I have just completed an installation of plugins+nrpe on (I’ve already run through the above on titan itself). It responds with

NRPE v2.13

So all working.

UPDATE #2
I skipped through the steps above and was trying to just return the NRPE version, and I kept hitting the error: “CHECK_NRPE: Error – Could not complete SSL handshake.”
Thinking something was wrong with the SSL, I went back through everything, but got the same error. I then realised I hadn’t yet put a config in place for NRPE (as I was only trying to return the version number, I didn’t think it was too important); a silly mistake, but it produces an off-putting error.

Nagios Plugins fail to Compile

Getting the error

check_http.c:150:12: warning: ignoring return value of ‘asprintf’, declared with attribute warn_unused_result [-Wunused-result]

When trying to compile the Nagios plugins 1.4.16 on Ubuntu 12.04, it comes down to SSL.
Running: apt-get install libssl-dev
Then running ./configure and make again fixed the problem.

As a side note, I was also having problems with NRPE not installing. This one was down to not reading the error: the nagios user and group don’t get created by the NRPE install script, so I just had to add the user and group and then the install was fine.
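
For reference, creating them is just the following (one way of doing it; the plugins don’t need the nagios user to have a login shell or home directory):

groupadd nagios
useradd -g nagios nagios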