Ansible for mounting NFS shares
Using Ansible to locally mount NFS shares served remotely from a NAS.
A role called admin is set up for the following example.
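The examples below assume the standard role skeleton generated by ansible-galaxy init admin; only the files covered in this post are listed here, the other generated directories are omitted:
roles/admin/
  vars/main.yml       # Vars section below
  tasks/main.yml      # Tasks section below
  handlers/main.yml   # Handlers section below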
Vars
---
# vars file for admin
vol_mount_ext1: /mnt/ext1
nas_host: diskstation
df_tmp: /tmp/.df
volume: volume1
mount_points:
  - /mnt/ext1
user: username
group: groupname
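These variables live in roles/admin/vars/main.yml. If a value needs to differ on a particular machine, it can be overridden at run time with extra vars, which take precedence over role vars; for example (diskstation2 is a hypothetical host name, not from my setup):
ansible-playbook -i ansible-hosts mount-playbook.yml -e nas_host=diskstation2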
Tasks
---
# tasks file for admin
# set up the mount directory if it does not exist
- name: Get hostname
  command: hostname
  register: hostcheck

- debug: msg="Hostname = {{ hostcheck.stdout }}"

- name: Get id
  command: whoami
  register: whoamicheck

- debug: msg="id = {{ whoamicheck.stdout }}"

- name: Ping NAS
  command: ping -c1 {{ nas_host }}
  register: ping_ret
  # keep going on failure so the "ping_ret is success" conditions below simply skip their tasks
  ignore_errors: yes
- name: Check mount directory for ext1 exists
  stat:
    path: "{{ vol_mount_ext1 }}"
  register: path_vol_ext1

- name: Create mount directory for ext1 if missing
  file:
    path: "{{ vol_mount_ext1 }}"
    state: directory
    mode: '0755'
  when: path_vol_ext1.stat.isdir is not defined and ping_ret is success
# get df -kh
- name: Get mounted disks
  shell: df -kh > {{ df_tmp }}

- name: Process {{ df_tmp }} -- verify volume ext1 mounted
  shell: grep -w {{ vol_mount_ext1 }} {{ df_tmp }} | awk '{print $6}'
  register: df_vol_ext1
- name: Setup nfs mount point for ext1
  mount:
    src: "{{ nas_host }}:/volumeUSB1/usbshare"
    path: "{{ vol_mount_ext1 }}"
    fstype: nfs
    state: mounted
  when: df_vol_ext1.stdout != vol_mount_ext1 and ping_ret is success
- name: Last steps to apply file ownership and cleanup
  command: df -kh
  register: out
  notify: cleanup

- debug:
    msg: "{{ out.stdout_lines }}"
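Two notes on the mount task above: with state: mounted it also writes a matching entry to /etc/fstab, and it only succeeds if the NFS client utilities are installed on the host. A minimal sketch of an extra task for that prerequisite, assuming a Debian/Ubuntu target; this task is not part of the original role (on RHEL/Fedora the package is nfs-utils and the yum/dnf module applies):
# Assumption: Debian/Ubuntu host -- not part of the original role
- name: Ensure the NFS client utilities are installed
  apt:
    name: nfs-common
    state: present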
Handlers
---
# handlers file for admin
- name: Check file ownership
  stat:
    path: "{{ item }}"
  register: fw
  loop: "{{ mount_points }}"
  listen: cleanup
- name: Set file ownership
  file:
    path: "{{ item.stat.path }}"
    state: directory
    recurse: yes
    owner: "{{ user }}"
    group: "{{ group }}"
  loop: "{{ fw.results }}"
  when: >
    item.stat.pw_name is defined and item.stat.gr_name is defined and
    item.stat.pw_name != user and item.stat.gr_name != group
  listen: cleanup
- name: Set file permissions
  file:
    path: "{{ item.stat.path }}"
    state: directory
    recurse: yes
    mode: "0777"
  loop: "{{ fw.results }}"
  when: item.stat.mode is defined and item.stat.mode != '0777'
  listen: cleanup
- name: Check r/w access to mounts
  shell: echo
  loop: "{{ fw.results }}"
  loop_control:
    label: "{{ item.stat.path }} --> [Read]: {{ item.stat.rusr }} [Write]: {{ item.stat.wusr }}"
  listen: cleanup
- name: Remove temp file
  file:
    path: "{{ df_tmp }}"
    state: absent
  listen: cleanup
Playbook
- hosts: local
  become: yes
  become_user: root
  vars_files:
    - ~/myvault.yml
  roles:
    - admin
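To run the playbook by hand before automating it (see the .bashrc notes further down), prompting for the vault password; adjust the inventory path to wherever the ansible-hosts file lives:
ansible-playbook -i ansible-hosts mount-playbook.yml --ask-vault-pass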
Inventory
[local]
localhost ansible_connection=local ansible_become=yes ansible_become_method=sudo ansible_become_pass="{{ host_root_pass }}"
Ansible vault file
The playbook requires sudo access to mount the NFS mount points.
To do this without any password prompt, I set up an Ansible vault file that contains an encrypted version of my sudo password.
In my ~/.bashrc file I invoke the playbook using:
ansible-playbook -i ../ansible-hosts mount-playbook.yml
This executes the Ansible playbook at login.
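Because the vault itself is encrypted, the command above still needs the vault password. A hedged variant of the .bashrc line, assuming the vault password is stored in a file such as ~/.vault_pass with mode 600 (that file name is a placeholder, not part of the original setup):
# Assumption: vault password kept in ~/.vault_pass (chmod 600)
ansible-playbook -i ../ansible-hosts mount-playbook.yml --vault-password-file ~/.vault_pass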
The {{ host_root_pass }} variable is the key set up in the Ansible vault file. The vault file contents are in YAML key/value format; in this case it is host_root_pass: <root_password_is_here>
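If the vault file does not exist yet, it can be created and encrypted in one step; you are prompted to choose the vault password:
ansible-vault create ~/myvault.yml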
In my playbook I’ve added the location of the vault file.
vars_files:
  - ~/myvault.yml
View Ansible vault contents
ansible-vault view ~/myvault.yml
This will prompt for the password you used to create the initial vault file.
Edit Ansible vault contents
ansible-vault edit ~/myvault.yml
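Change the Ansible vault password
ansible-vault rekey ~/myvault.yml
This prompts for the current vault password and then re-encrypts the file with a new one.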