Ansible Part 1

I already have a Droplet for management, so I'm going to use that for Ansible, along with a new Droplet to test some deployments against. I've done the following on the server:

add-apt-repository ppa:rquillo/ansible
apt-get update
apt-get install ansible
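
A quick check that the install worked:

ansible --version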

I have no doubt I'll learn I've done bits wrong as I get further in, but I'm going to start with the following hosts config:

[initial]
C3PO-1

[balancers]

[backends]

[databases]

[fileservers]
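
With that in place, a quick connectivity test against the initial group looks something like this (assuming C3PO-1 resolves and root SSH is still enabled at this point):

ansible initial -m ping -u root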

So the first thing I'm going to tackle is setting up new users. I've been playing with it for about an hour, and although I got the user created very quickly, I hit a problem with my setup: I need to push multiple SSH keys for some users (we use different keys on PCs, laptops and mobiles). Every example I found seemed to either a) pull the key from a file, or b) use just one key.

After quite a bit of playing and trying different things, I found a way. This in turn meant I had to slightly change the user-creation part of the playbook.

Here’s the users.yml playbook so far

---
- hosts: all

  tasks:
    - name: Add Users from group_vars file
      action: user name={{ item.name }} password={{ item.password }} shell={{ item.shell }} state={{ item.state }} update_password=always
      with_items: users

    - name: Add SSH User Keys from group_vars files
      authorized_key: user={{ item.0.name }} key='{{ item.1 }}'
      with_subelements:
        - users
        - authorized

and uses a group_vars file (group_vars/initial)

---

users:
  - name: NAME
    password: HASHED_PASSWD
    authorized:
      - ssh-rsa SSH_KEY_1
      - ssh-rsa SSH_KEY_2
      - ssh-rsa SSH_KEY_3
      - ssh-rsa SSH_KEY_4
    shell: /bin/bash
    state: present
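
One thing worth noting: password has to be a crypted hash (hence HASHED_PASSWD above), not plain text. On Ubuntu, something like this will generate one (mkpasswd comes from the whois package, if I remember right):

mkpasswd --method=sha-512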

I did think about pulling the keys in from the authorized_keys files, but not all users are allowed on the management box, so I'd have to keep separate key files for them, and if I'm going that far I may as well just keep them in group_vars. It doesn't look as nice if you cat the file, but it's structured and makes sense.

The last thing I want to do is add the users to some groups.

This was nice and easy: add the groups to the group_vars file

    groups: sudo,www-data

Then change the users.yml to add the groups

    action: user name={{ item.name }} password={{ item.password }} shell={{ item.shell }} state={{ item.state }} groups={{ item.groups }} update_password=always

I was worried that it might mess up the user's own group, as the manual says it removes the user from all groups except the primary. Since I haven't told it a primary using 'group', I thought it might be a problem, but thankfully it's not; this just worked.
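
As an aside, if you ever want to add groups without stripping existing memberships, the user module has an append parameter; the task line would look something like this:

    action: user name={{ item.name }} password={{ item.password }} shell={{ item.shell }} state={{ item.state }} groups={{ item.groups }} append=yes update_password=always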

Well, I thought at this point I was pretty much finished. Ha, not a chance. I added a user for Ansible to connect as, since I'll be removing root SSH access. This means I'm going to need to let the Ansible user sudo, so I'd best sort that now. Here's a bit of code I found and changed slightly. Make sure you change USERNAME in the sudoers.d/ansible task (it appears in both the regexp and the line).

Appended to the end of users.yml

  - name: Ensure /etc/sudoers.d directory is present
    file: path=/etc/sudoers.d state=directory

  - name: Ensure /etc/sudoers.d is scanned by sudo
    action: lineinfile dest=/etc/sudoers regexp="#includedir\s+/etc/sudoers.d" line="#includedir /etc/sudoers.d"

  - name: Add ansible user to the sudoers
    action: 'lineinfile dest=/etc/sudoers.d/ansible state=present create=yes regexp="^USERNAME .*" line="USERNAME ALL=(ALL) NOPASSWD: ALL"'

  - name: Ensure /etc/sudoers.d/ansible file has correct permissions
    action: file path=/etc/sudoers.d/ansible mode=0440 state=file owner=root group=root
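
A broken sudoers file can lock you out of sudo entirely, so it's probably worth using lineinfile's validate option, which checks the file before saving it. Same task as above, just with the check added:

  - name: Add ansible user to the sudoers
    action: 'lineinfile dest=/etc/sudoers.d/ansible state=present create=yes regexp="^USERNAME .*" line="USERNAME ALL=(ALL) NOPASSWD: ALL" validate="visudo -cf %s"'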

On the first run you'd connect as root. After that, you would use

ansible-playbook users.yml -u USERNAME -s
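
For reference, that first run as root would be something like this (-k prompts for the SSH password; drop it if you're using keys):

ansible-playbook users.yml -u root -k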

All working. It's taken about two hours to get to the point of deploying a couple of users automatically. I'm not so sure this has saved me time in the long run 🙂 but it's the first step in a much bigger project. I'm kind of glad it wasn't just copying and pasting other people's code; stuff broke, and it gave me a chance to understand a bit more.

Part 2 will be coming soon. There we’ll lock down SSHd and apply the default firewall rules.