Mining DarkCoin on Linux using Digital Ocean

A friend recently asked me about DarkCoin, and I had to admit I hadn't heard anything about it. Sure, I have some bitcoins, but only really to say I have them. So when he asked I just thought `ah, there's another one`. He then asked how someone would mine these things, so I showed him a few YouTube videos of pretty impressive setups.

Anyway, to the point: I ran through a quick demonstration of how the miner would work and set one up on a Virtual Server (Droplet). So for anyone that's interested, here goes (P.S. I can't say you'll make a fortune from this method; it's more a proof of concept).

We'll be using DigitalOcean for our server, so you'll need an account. Sign up here. There are always some promo codes for credit; at present SSDMAY10 gives you an extra $10. I personally use DigitalOcean to host a few of my projects and they're great for being able to test ideas.

Once you have your account set up, create a new Droplet. I normally use the latest Ubuntu.
The specs I'm testing this on are:-

Hostname : Testy
Size : 512MB, 1CPU, 20GB SSD, 1TB XFER $5pm
Region : Amsterdam 2
Image : Ubuntu 14.04 x64
SSH Key : testy (optional, you may not have added any ssh keys so you can skip this selection)
Settings : Enable VirtIO - Checked, Private Networking - Unchecked, Enable Backups - Unchecked.

Once created you will be emailed your root password (unless you selected SSH Keys).
More CPU’s would be better, but this one’s just for testing it out.

Using PuTTY, log in to your new Droplet.

Now we'll install some dependencies and clone the miner's git repository.

apt-get update
apt-get install build-essential m4 libssl-dev libdb++-dev libboost-all-dev libminiupnpc-dev git automake libcurl4-openssl-dev screen

git clone https://github.com/ig0tik3d/darkcoin-cpuminer-1.2c.git

cd darkcoin-cpuminer-1.2c
chmod a+x autogen.sh
./autogen.sh
./configure
make
make install
cd ~
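If you expect to rebuild this droplet (or set up more than one), the steps above can be rolled into a single script. This is just a sketch with my own naming, and to be safe it defaults to a dry run that only prints each command; nothing is installed until you run it with DRYRUN=0 (as root).

```shell
#!/bin/bash
# provision-miner.sh (assumed name): the dependency install, clone and build
# steps from above. Defaults to a dry run that just echoes each command;
# run with DRYRUN=0 to execute them for real (apt-get needs root).
set -u
DRYRUN="${DRYRUN:-1}"

run() {
    if [ "$DRYRUN" = "1" ]; then
        echo "+ $*"            # dry run: show the command only
    else
        "$@"                   # real run: execute it
    fi
}

run apt-get update
run apt-get install -y build-essential m4 libssl-dev libdb++-dev \
    libboost-all-dev libminiupnpc-dev git automake libcurl4-openssl-dev screen
run git clone https://github.com/ig0tik3d/darkcoin-cpuminer-1.2c.git

# The build has to happen inside the source directory.
if [ "$DRYRUN" = "1" ]; then
    echo "+ (cd darkcoin-cpuminer-1.2c && ./autogen.sh && ./configure && make && make install)"
else
    cd darkcoin-cpuminer-1.2c
    chmod a+x autogen.sh
    ./autogen.sh && ./configure && make && make install
fi
```

Run it once with the default dry run to sanity-check the commands, then again with `DRYRUN=0` to do the actual install.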

If all has gone well you can run the miner using

minerd -a X11 -o stratum+tcp://drkpool.com:3333 -u bighippo999.testy -p password

If it's running correctly you should see the miner's hash-rate output in the terminal.

One thing to note: the above command uses MY pool test login. You won't receive any coins running it, so treat it only as a test. You can stop it with CTRL+C.

As long as that all seems to be OK, we should now join a pool. I'm currently using the DarkCoin Official Pool. I have no idea how 'official' it is, but I was drawn to the graphs 🙂 I did briefly run a miner on Windows using this guide and received coins directly into my wallet, whereas using the pool I'll have to cash anything out to my wallet and that will incur a charge. I would also have shown the servers from the other guide, but at present they seem to be down. I suppose that's why this pool does charge a small admin % on each transaction; I can't say I mind for good service.

Once you've signed up and have your details, just run the miner again

e.g. minerd -a X11 -o server:port -u user.worker -p password

Replacing server:port, user.worker and password with your own details.

If you're concerned about your pool failing, here's a handy script I came across:

#!/bin/bash
## Miner Failover Script

## Will continuously try each pool until one responds, ordered by priority.

#GLOBAL
## Set options
RETRIES="3"
SECONDS="5"
MINERD="minerd"

#POOL1
## Set userpass information for first pool
USERPASS1="name.worker:password"
URL1="stratum+tcp://lotterymining.com"

#POOL2
## Set userpass information for second pool
USERPASS2="name.worker:password"
URL2="stratum+tcp://www.drkpool.com:3333"

while :
do
$MINERD -a X11 --retries=$RETRIES --retry-pause=$SECONDS --userpass=$USERPASS1 --url=$URL1
$MINERD -a X11 --retries=$RETRIES --retry-pause=$SECONDS --userpass=$USERPASS2 --url=$URL2
done

You can copy and paste this into a new file

e.g. nano -w startmining.sh

Paste the code.
Press CTRL+X, then Y, then Enter to save the file and exit.
Then issue

chmod +x startmining.sh

To make the file executable. Lastly, it's always better to run processes you want to keep running (but also be able to check on) inside a screen session.
The command

screen -d -m -S MINER minerd -a X11 -o stratum+tcp://drkpool.com:3333 -u bighippo999.testy -p password

Will start the miner in a screen, and you can view this with

screen -r

Then either disconnect with CTRL+A, then D or stop the miner using CTRL+C.
If you used the above script you can start that in a screen with

screen -d -m -S MINERPOOL ./startmining.sh

Again to view it

screen -r

Then either disconnect with CTRL+A, then D or stop the miner using CTRL+C.

EDIT:
The other pool I'm using came back online; check it out here. It can be started with the following

minerd -a X11 -o http://q30.qhor.net:7903 -u Xn7JauXvQmorx82yEN2EMvbt3dX45uoiCh -p password

I hope you've found some of this useful. If you feel like sending me a few DarkCoins, my address is Xn7JauXvQmorx82yEN2EMvbt3dX45uoiCh, and my Bitcoin address is 1DpfEhiVjNM4WZT49X18m3vXyUvGpuvz9i

Raspberry PI + GlusterFS (Part 4)

In Part 1 I mentioned encrypting my disks but didn't go into it, so here I'm going to run through encrypting, decrypting, and using the drives with GlusterFS.
Part 2 was an attempted but failed install of the latest GlusterFS (3.5.0) server.
Part 3 covered installing the GlusterFS server with the new information from Ashley.

To recap I’m using the following:-
2 PI's
2 8GB SD Cards
2 4GB USB Sticks
2 512MB USB Sticks.

As yet we haven’t setup any Gluster Volumes and this is all on a pretty fresh system.

First we need to install some tools we’ll be using.

apt-get install cryptsetup pv

I know my 4GB USB stick is on /dev/sda and the 512MB one is on /dev/sdb. I'll only be concentrating on the 4GB stick here, but if you're following along, make sure you're using the correct paths. Using the wrong paths can wipe your data.

I don't want any partitions on the stick (I'll be encrypting the whole drive).

fdisk -l

Shows me I’ve got a few partitions on the stick:-

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   ?   778135908  1919645538   570754815+  5b  Unknown
/dev/sda2   ?   168689522  2104717761   968014120   65  Novell Netware 386
/dev/sda3   ?  1869881465  3805909656   968014096   79  Unknown
/dev/sda4   ?  2885681152  2885736650       27749+   d  Unknown

I can’t remember what this stick was used for (to my knowledge I’ve never used Novell partitions), but we’ll delete them all.

fdisk /dev/sda
d
1
d
2
d
3
d
wq
My partitions were listed 1-4 so it was nice and easy. You can rerun the fdisk -l command to check they've all gone.
This step wasn't strictly necessary, but I always like to make sure I'm working with the correct drives.
With the drive empty of partitions I like to unplug it and plug it back in (keeps everything fresh). Note: if you do reconnect the drive, make sure you're still working with the correct /dev/sd* path; sometimes this can change.
Now run
cryptsetup -y -v luksFormat /dev/sda
This creates a new encryption key for the drive (note: this is not how you add extra keys to a drive, only do this once!)
Then we need to unlock the drive for use
cryptsetup luksOpen /dev/sda USB1_Crypt
/dev/sda is the drive path.
USB1_Crypt is the label we're giving the decrypted drive.
You'll be prompted for the drive passphrase you just created. If it succeeds it doesn't actually tell you, it just drops you back to a prompt. From here on we won't be doing any drive work on /dev/sda, as that sits outside the encrypted layer; we'll be using /dev/mapper/USB1_Crypt
We can check it’s unlocked with
ls -l /dev/mapper/
You should see something similar to
lrwxrwxrwx 1 root root       8 May 13 18:39 USB1_Crypt -> ../dm-1
You can also check the status using
cryptsetup -v status USB1_Crypt
Now that we have the drive with an encryption key and unlocked we’ll write a bunch of data across the drive
pv -tpreb /dev/zero | dd of=/dev/mapper/USB1_Crypt bs=128M
Writing zeros to a drive is generally considered bad for data security, but we're writing them through the encryption layer rather than to the stick itself, so what actually lands on the stick is encrypted data.
Once the data has finished writing we’ll create a new filesystem on the encrypted disk
mkfs.ext4 /dev/mapper/USB1_Crypt
You don’t have to use ext4, but I generally do.
That’s the USB Stick encrypted
We close the encrypted drive and remove it from /dev/mapper/ with

cryptsetup luksClose USB1_Crypt

If all you wanted was an encrypted Drive that’s it, and you can unlock the drive on systems with cryptsetup installed and then mount away.

So far we've encrypted the entire USB stick, written encrypted data across the whole of it, created a new filesystem, and closed the stick.
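That whole preparation sequence can be captured in one script. This is a sketch with my own naming, and since luksFormat destroys whatever is on the device it defaults to a dry run that only prints the commands; nothing touches the stick until you run it with DRYRUN=0.

```shell
#!/bin/bash
# prepare-stick.sh (assumed name): luksFormat, unlock, fill, mkfs and close
# one USB stick, as in the steps above. DESTRUCTIVE to the target device, so
# it defaults to a dry run; run as DRYRUN=0 ./prepare-stick.sh /dev/sdX for real.
set -u
DRYRUN="${DRYRUN:-1}"
DEVICE="${1:-/dev/sda}"       # the raw stick
NAME="USB1_Crypt"             # label for the decrypted mapping

run() { if [ "$DRYRUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run cryptsetup -y -v luksFormat "$DEVICE"   # create the encryption key (once only!)
run cryptsetup luksOpen "$DEVICE" "$NAME"   # unlock to /dev/mapper/$NAME

# Fill the mapped device so the raw stick ends up covered in encrypted data.
if [ "$DRYRUN" = "1" ]; then
    echo "+ pv -tpreb /dev/zero | dd of=/dev/mapper/$NAME bs=128M"
else
    pv -tpreb /dev/zero | dd of="/dev/mapper/$NAME" bs=128M
fi

run mkfs.ext4 "/dev/mapper/$NAME"           # filesystem goes inside the mapping
run cryptsetup luksClose "$NAME"            # lock the stick again
```

On a real run you'll be prompted for the passphrase twice (once to create it, once to unlock), and the zero-fill will take a long while on a USB stick.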
Now we’re ready to mount the Stick ready for Gluster to use.
We’re going to create a folder to mount the Drive into

mkdir /mnt/USB1
We’ll open the encrypted Drive again using

cryptsetup luksOpen /dev/sda USB1_Crypt
Then mount the decrypted drive

mount /dev/mapper/USB1_Crypt /mnt/USB1
If you run

ls -l /mnt/USB1
you should see the lost+found directory on the filesystem.
I should mention again that I've been running through this process on 2 PI's, and to keep things simple I'm keeping the same names on both systems (/mnt/USB1).
Now it’s time to get GlusterFS running with these drives.
So while on Gluster-1(PI) issue the command

gluster peer probe Gluster-2
This should find and add the peer Gluster-2 and you can check with

gluster peer list
and

gluster peer status
Now, because I always want each Gluster system working by name, from Gluster-2 I issue

gluster peer probe Gluster-1
This updates the Gluster-1 peer to its name rather than its IP address. There's nothing wrong with using IP addresses if you're using statically assigned IPs on your PI's, but I wouldn't recommend it if your IP addresses come from DHCP.
With glusterfs knowing about both Gluster-1 and Gluster-2 we can create a new volume (it's important that /mnt/USB1 has been mounted on both systems before proceeding).
On either PI you can create a new replica volume with 

gluster volume create testvol replica 2 Gluster-1:/mnt/USB1 Gluster-2:/mnt/USB1
This will create a new volume called testvol using /mnt/USB1 on both PI's. The folder /mnt/USB1 is now referred to as a brick, and volumes consist of bricks.
Now we start the volume

gluster volume start testvol
Finally we need somewhere to mount the gluster filesystem

mkdir /media/testvol
Then we mount it

mount.glusterfs Gluster-1:/testvol /media/testvol
It doesn't matter which host we use in this command; apparently it's only used to pull the list of bricks for the volume, and the client will then balance the load.
Now you can write data to /media/testvol. If you’ve mounted the volume on both PI’s you will see the files on both.
You can also

ls -l /mnt/USB1
To see the actual files on the stick (do NOT do anything more than read the files in /mnt/USB1; playing in this folder can cause issues, and you should only be using /media/testvol from now on).
If instead of replica you used stripe, you'll be able to see all the files in /media/testvol but only some of them in /mnt/USB1 on each PI.
Shutting down one of the PI's in a replica-mode volume won't show any difference in /media/testvol (and hopefully the new 3.5.0 version won't cause you as much of a headache if files get updated while one PI is offline, though it's likely to need manual intervention to fix; maybe a later part 🙂 when I get that far), but in striped mode with one of the PI's offline you'll notice files in /media/testvol have gone missing. For this reason I'm hoping to use both stripe and replica, to keep files available across multiple PI's and allow me to increase the storage space easily.
Replicating across 2 drives will mean I will need to add new storage 2 drives at a time.
Replicating across 3 drives would mean I need to add 3 new drives each time.
Just to make things easy, here are the commands to decrypt and mount after the PI has been rebooted

cryptsetup luksOpen /dev/sda USB1_Crypt

mount /dev/mapper/USB1_Crypt /mnt/USB1

mount.glusterfs Gluster-1:/testvol /media/testvol
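Those three commands can live in a small helper script (the name is my own; same hedges as before: it defaults to a dry run, and on a real run with DRYRUN=0 it will prompt for the LUKS passphrase).

```shell
#!/bin/bash
# remount.sh (assumed name): reassemble the encrypted brick and the Gluster
# mount after a reboot. Dry run by default; run with DRYRUN=0 to do it for real.
set -u
DRYRUN="${DRYRUN:-1}"
run() { if [ "$DRYRUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run cryptsetup luksOpen /dev/sda USB1_Crypt              # unlock (asks for the passphrase)
run mount /dev/mapper/USB1_Crypt /mnt/USB1               # mount the brick
run mount.glusterfs Gluster-1:/testvol /media/testvol    # mount the volume
```

Remember this needs running on each PI (adjusting /dev/sda if the device path changed across the reboot).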

Raspberry PI + GlusterFS (Part 3)

After hitting errors when installing in Part 2, I decided to split out the solution.
Ashley saw Part 2 and had already run into the same problem (see the comment); thanks to that comment I had a huge head start on what to do next.
I've started with a fresh Raspberry Pi image so that nothing conflicts. Again, get the latest updates

apt-get update
apt-get upgrade

Then download the needed files with the following commands

wget http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/Debian/apt/pool/main/g/glusterfs/glusterfs_3.5.0.orig.tar.gz
wget http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/Debian/apt/pool/main/g/glusterfs/glusterfs_3.5.0-1.dsc
wget http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/Debian/apt/pool/main/g/glusterfs/glusterfs_3.5.0-1.debian.tar.gz

Now extract the archives

tar xzvf glusterfs_3.5.0.orig.tar.gz
tar xzvf glusterfs_3.5.0-1.debian.tar.gz

We need some tools so

apt-get install devscripts

Then we move the debian folder into the glusterfs folder and change into it

mv debian glusterfs-3.5.0/
cd glusterfs-3.5.0

Next run

debuild -us -uc

This will start but will throw dependency errors.
The important line is

Unmet build dependencies: dh-autoreconf libfuse-dev (>= 2.6.5) libibverbs-dev (>= 1.0.4) libdb-dev attr flex bison libreadline-dev libncurses5-dev libssl-dev libxml2-dev python-all-dev (>= 2.6.6-3~) liblvm2-dev libaio-dev librdmacm-dev chrpath hardening-wrapper

Which I resolved with

apt-get install dh-autoreconf libfuse-dev libibverbs-dev libdb-dev attr flex bison libreadline-dev libncurses5-dev libssl-dev libxml2-dev python-all-dev liblvm2-dev libaio-dev librdmacm-dev chrpath hardening-wrapper

With the dependencies installed I ran

debuild -us -uc

This may output some warnings. On my system I had a few warnings and 2 errors “N: 24 tags overridden (2 errors, 18 warnings, 4 info)”, but it didn’t seem to affect anything.
Now we’ll wrap up with

make
make install

The make step probably isn't necessary. Once installed, we need to start the service

/etc/init.d/glusterd start

You can check everything is working ok with

gluster peer status

This should return

Number of Peers: 0

The only thing left to do is ensure glusterd starts with the system

update-rc.d glusterd defaults

And we’re all set. Now you can take a look at Part 4
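For a second PI (or a rebuild), the whole of the above condenses into one script. As ever this is a sketch with assumed names; it defaults to a dry run that just prints the commands, and on a real run (DRYRUN=0, as root) the debuild step will take a long time on a PI.

```shell
#!/bin/bash
# build-gluster.sh (assumed name): the Part 3 steps in one place.
# Dry run by default; run with DRYRUN=0 to actually download and build.
set -u
DRYRUN="${DRYRUN:-1}"
run() { if [ "$DRYRUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

BASE="http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/Debian/apt/pool/main/g/glusterfs"
run wget "$BASE/glusterfs_3.5.0.orig.tar.gz"
run wget "$BASE/glusterfs_3.5.0-1.dsc"
run wget "$BASE/glusterfs_3.5.0-1.debian.tar.gz"
run tar xzvf glusterfs_3.5.0.orig.tar.gz
run tar xzvf glusterfs_3.5.0-1.debian.tar.gz

# Everything debuild needs, including the unmet dependencies it reported.
run apt-get install -y devscripts dh-autoreconf libfuse-dev libibverbs-dev \
    libdb-dev attr flex bison libreadline-dev libncurses5-dev libssl-dev \
    libxml2-dev python-all-dev liblvm2-dev libaio-dev librdmacm-dev \
    chrpath hardening-wrapper

run mv debian glusterfs-3.5.0/
if [ "$DRYRUN" = "1" ]; then
    echo "+ (cd glusterfs-3.5.0 && debuild -us -uc && make install)"
else
    cd glusterfs-3.5.0 && debuild -us -uc && make install
fi

run /etc/init.d/glusterd start
run update-rc.d glusterd defaults
```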

Raspberry PI + GlusterFS (Part 2)

IMPORTANT: When running through the steps in Part 2 I encountered errors. Thanks to Ashley commenting, I've created Part 3. I've decided to leave Part 2 intact for anyone searching on the errors etc.

Hopefully you’ve read Part 1 and understood what I’m trying to do and why.

Here’s Part 2 attempting the install.
Part 3 actually does the installation now.
Part 4 will cover the encryption and setup.
First things first: I (being naughty) use root far too much in testing, but I don't recommend it all the way through on production servers.
So let's get into root
sudo su -
This should place you in root's home directory.

You can skip down to get past the errors I encountered, but it's possibly still worth reading the following, even though it ended up not working.

Now we’re going to grab gluster 3.5.0

wget http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.0/glusterfs-3.5.0.tar.gz
tar xzvf glusterfs-3.5.0.tar.gz
cd glusterfs-3.5.0/
DON’T do this step yet. At this point I jumped straight into a configure attempt
./configure
This threw errors that I’m missing flex or lex
configure: error: Flex or lex required to build glusterfs.
Clearly I’m going to need to install a few dependencies before going further

apt-get update
apt-get install make automake autoconf libtool flex bison pkg-config libssl-dev libxml2-dev python-dev libaio-dev libibverbs-dev librdmacm-dev libreadline-dev liblvm2-dev libglib2.0-dev
I'm not sure if all of these are needed, but after digging around that's the list used elsewhere.
Actually, reading the INSTALL file says to start with ./autogen.sh, so this time we will
./autogen.sh
This took about 5 minutes but didn't throw errors. Next, on to
./configure
This eventually spits out
GlusterFS configure summary
===========================
FUSE client          : yes
Infiniband verbs     : yes
epoll IO multiplex   : yes
argp-standalone      : no
fusermount           : yes
readline             : yes
georeplication       : yes
Linux-AIO            : yes
Enable Debug         : no
systemtap            : no
Block Device xlator  : yes
glupy                : yes
Use syslog           : yes
XML output           : yes
QEMU Block formats   : yes
Encryption xlator    : yes
Now we can get on with the actual compiling

make
This threw a number of warnings for me
warning: function declaration isn’t a prototype [-Wstrict-prototypes]
But they didn't seem to be of great concern.
After about an hour it was ready to continue (I wrote Part 4 from memory while waiting).
We install with

make install

Here's where I'm getting an error.

../../py-compile: Missing argument to --destdir.

So it's on hold until I can figure out how to resolve it. One suggestion from version 3.4.x was to add the prefix path to configure, but this didn't do anything for me.

Move on to Part 3

Raspberry PI + GlusterFS (Part 1)

Here's Part 1, which is really background information.
Part 2 will be actually doing stuff; if you don't want to read how/why I ended up here, skip to Part 2.

A few days ago I yet again ran out of space on my server. Normally this would just mean deleting a load of junk files, but I've been doing that for months and I'm now at the point where there are no junk files left to delete. So, time to increase the storage. Unfortunately, problem #2: I currently have 4 SATA drives in the server taking up all the connections. Expanding wouldn't be a problem, as I originally set up the drives with LVM to handle large storage requirements, but now there's nowhere to turn to increase the capacity in this server.

So instead I thought I'd have a look at the alternatives. I'd been hoping to move some of the server services over to PI's since I first heard about the Raspberry PI project (long before they were released); I knew I could make good use of lots of them.

After a little searching I found Gluster, and it seems ideal for what I need. At this point I should say that I clearly expect any USB drive connected to a PI to be slower than SATA in my server, but for my use slower doesn't matter. I doubt this would suit everyone, but I think Gluster is a good option even on beefy servers/PCs.

I have an idea that needs testing, so I set up 2 PI's with 2 USB sticks each for storage. A simple apt-get install glusterfs-server got me moving quickly, and after reading 2 guides, http://www.linuxjournal.com/content/two-pi-r?page=0,0 and http://www.linuxuser.co.uk/tutorials/create-your-own-high-performance-nas-using-glusterfs, I quickly had a working system. I'm not going to go into the steps (both links give the details), but here's my layout:-
2 PI’s
2 8GB SD Cards
2 4GB USB Sticks
2 512MB USB Sticks.
So each PI has an SD card, a 4GB USB stick and a 512MB USB stick.

As I prefer to encrypt all the data on my drives, I installed cryptsetup and used luksFormat to get the USB sticks ready (more on this later). I unlocked the USB sticks on both PI's, which opens the drives at /dev/mapper/USB1_Crypt and /dev/mapper/USB2_Crypt.
I then mounted each to /mnt/USB1 and /mnt/USB2 (both were already ext4 filesystems), with USB1 being the 4GB stick.
The PI’s are called Gluster-1 and Gluster-2, I know my DHCP and DNS are working so I can use the names when setting up the gluster peers and volumes instead of IP addresses.
Once I'd created the Gluster volume (I tested both stripe and replica) I mounted testvol (the Gluster volume) at /media/testvol.
I made a quick file, and could see it from both servers. It's also interesting looking in the /mnt/USB1/ folder (but do NOT change anything in here): while using a stripe volume the new file only appears on one of the PI's, and creating more files puts some on Gluster-1 and some on Gluster-2. So, to see what happens, I rebooted Gluster-1 and watched the files in /media/testvol: yep, all the files that actually live on Gluster-1 disappeared from the file list. As soon as Gluster-1 was back (with /mnt/USB1 decrypted and mounted) the files were back.

So this is looking pretty good, and adding more storage was easy (I didn't go into rebalancing the files across new bricks, but I don't see this being a huge problem). Next I moved on to using replica instead of stripe (my ultimate intention is to use both). Here's where the fun begins. I actually didn't delete the test files in /mnt/USB1 on either server, just stopped the gluster volume and deleted it. Creating a new gluster volume in replica mode was easy, and a few seconds after I started it gluster synced up the files already in /mnt/USB1 on both PI's, so in /media/testvol I could still see all the files and access them fine. Then I rebooted Gluster-2 and checked the files in /media/testvol on Gluster-1: yep, all the files were there (there was a 2-3 second delay on ls; I assume I was previously connected to Gluster-2 and it had to work out it wasn't there anymore, but from then on it was fine).

I have a working replica system 🙂 and it's taken me no time at all to get running. I brought Gluster-2 back online, then thought: what if I change files while one of them is offline? I rebooted Gluster-1, changed some of the text in a few of the test files in /media/testvol and created a new test file. Then I brought Gluster-1 back online, and here's where things fall apart!!!
The new test file is on both and fine, and the existing files I hadn't changed are all fine too. But the files I changed I can't access anymore; I'm getting cat: testy1: Input/output error. This isn't good, so I checked the testy1 file in /mnt/USB1 on both PI's: Gluster-2 has the changes (as expected, it was online when I made them), Gluster-1 has the original.

So, head over to Google to find out how I'd fix such a scenario (a friend told me years ago "you're never the first person to have a problem"; Google every error message) and yep, there's information on it: it's called split-brain. The first solution I found was to delete the incorrect copy of the file from /mnt/USB1. This didn't solve my problem; testy1 was recreated but was still giving an I/O error. A little more reading says that gluster creates hard links and you need to go and delete those too, apparently in a .glusterfs folder, but I couldn't find it. There was also supposed to be a heal function that would show me which files are affected, but not on my system; gluster doesn't know heal. WHY????
gluster -V tells me I'm running an old version from 2013 (I think 3.2.8, but that's from memory); the latest version is 3.5.0.

I also came across some information saying that setups using 3 or more copies can quorum changes to resolve issues with file differences (I think this has to be set up, though; I can see situations where doing it automatically could cause data loss, such as log files being written with different data, not just one having stopped being written to).
Anyway, it looks like having 3.5.0 with a heal function would be beneficial, and I always prefer things to be up to date where possible anyway.
Looking at the Gluster download page, there's a simple way to add gluster to the Debian apt system. So I followed the steps to add it and ran apt-get update. Now, another problem: it's not able to download the gluster info, yet I can access the link it's using fine. Then something jumps out at me: "Architectures: amd64". The Raspberry PI uses ARM, so this isn't built for it. I have no idea if this is right or wrong, but it makes sense to me.

Like many other things, it’s time to manually compile and install.
So there's the background (sorry it took so long); Part 2 runs through manually installing gluster on the PI.