Python + Ansible Automation
Objective
Build a multi-host configuration management system combining Python and Ansible. A Python orchestrator script generates a dynamic Ansible inventory from a YAML host database, validates host connectivity, renders Jinja2 templates for per-host configuration files, and invokes Ansible playbooks via the subprocess module. The Ansible playbooks handle idempotent package installation, service configuration, and file deployment across three Ubuntu server nodes. The result is a fully automated infrastructure provisioning pipeline runnable from a single Python command.
Tools & Technologies
- Python 3.10+ — orchestrator and inventory generation
- Ansible 2.15+ — idempotent configuration management
- Jinja2 — template rendering for config files
- PyYAML — YAML parsing for host database
- subprocess — Ansible execution from Python
- paramiko — SSH connectivity pre-validation
- ansible-lint — playbook static analysis
- Vagrant + VirtualBox — multi-VM lab environment
- SSH agent / key management — passwordless authentication
Architecture Overview
Step-by-Step Process
Defined the host database in YAML and wrote a Python script to convert it into an Ansible-compatible JSON dynamic inventory.
# hosts.yaml
hosts:
  web-01:
    ip: "10.0.0.11"
    role: webserver
    packages: [apache2, php, libapache2-mod-php]
    user: labadmin
  web-02:
    ip: "10.0.0.12"
    role: webserver
    packages: [apache2, php, libapache2-mod-php]
    user: labadmin
  db-01:
    ip: "10.0.0.13"
    role: dbserver
    packages: [mariadb-server, python3-pymysql]
    user: labadmin
#!/usr/bin/env python3
# generate_inventory.py — executable dynamic inventory (Ansible invokes it with --list)
import json

import yaml


def generate_inventory(hosts_file: str = 'hosts.yaml') -> dict:
    with open(hosts_file) as f:
        data = yaml.safe_load(f)

    # Including _meta.hostvars up front means Ansible never needs
    # to call the script again with --host for each host
    inventory = {'_meta': {'hostvars': {}}}
    groups = {}
    for hostname, attrs in data['hosts'].items():
        role = attrs['role']
        groups.setdefault(role, {'hosts': [], 'vars': {}})
        groups[role]['hosts'].append(hostname)
        inventory['_meta']['hostvars'][hostname] = {
            'ansible_host': attrs['ip'],
            'ansible_user': attrs['user'],
            'ansible_ssh_private_key_file': '~/.ssh/lab_key',
            'host_packages': attrs.get('packages', []),
        }
    inventory.update(groups)
    return inventory


if __name__ == '__main__':
    inv = generate_inventory()
    print(json.dumps(inv, indent=2))
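The grouping logic can be sanity-checked in isolation. A minimal sketch with inline host data standing in for hosts.yaml (illustrative values only), mirroring the role-to-group mapping above:

```python
import json

# Inline stand-in for hosts.yaml (illustrative data only)
hosts = {
    'web-01': {'ip': '10.0.0.11', 'role': 'webserver', 'user': 'labadmin'},
    'db-01': {'ip': '10.0.0.13', 'role': 'dbserver', 'user': 'labadmin'},
}

# Mirror the role-to-group mapping used by generate_inventory()
inventory = {'_meta': {'hostvars': {}}}
for name, attrs in hosts.items():
    inventory.setdefault(attrs['role'], {'hosts': [], 'vars': {}})['hosts'].append(name)
    inventory['_meta']['hostvars'][name] = {
        'ansible_host': attrs['ip'],
        'ansible_user': attrs['user'],
    }

print(json.dumps(inventory, indent=2))
```

Each role becomes a top-level group, so playbooks can target `hosts: webserver` without any static inventory file.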
Created three playbooks: base hardening (applied to all hosts), web server deployment, and database setup. All tasks use Ansible modules for idempotency.
# site.yml — Base hardening for all hosts
---
- name: Base server hardening
  hosts: all
  become: true
  tasks:
    - name: Update package cache and upgrade
      apt:
        update_cache: yes
        upgrade: dist
        cache_valid_time: 3600

    - name: Install base packages
      apt:
        name: [ufw, fail2ban, unattended-upgrades, curl, vim]
        state: present

    - name: Configure UFW defaults
      ufw:
        direction: "{{ item.direction }}"
        policy: "{{ item.policy }}"
      loop:
        - { direction: incoming, policy: deny }
        - { direction: outgoing, policy: allow }

    - name: Allow SSH through UFW
      ufw:
        rule: allow
        port: '22'
        proto: tcp

    - name: Enable UFW
      ufw:
        state: enabled

    - name: Start and enable fail2ban
      service:
        name: fail2ban
        state: started
        enabled: yes
# webservers.yml
---
- name: Deploy Apache web servers
  hosts: webserver
  become: true
  tasks:
    - name: Install per-host packages
      apt:
        name: "{{ host_packages }}"
        state: present

    - name: Deploy virtual host config from template
      template:
        src: templates/vhost.conf.j2
        dest: "/etc/apache2/sites-available/{{ inventory_hostname }}.conf"
      notify: Reload Apache

    - name: Enable site
      command: "a2ensite {{ inventory_hostname }}.conf"
      args:
        # Skip when the symlink already exists, keeping this task idempotent
        creates: "/etc/apache2/sites-enabled/{{ inventory_hostname }}.conf"
      notify: Reload Apache

    - name: Allow HTTP/HTTPS through UFW
      ufw:
        rule: allow
        port: "{{ item }}"
        proto: tcp
      loop: ['80', '443']

  handlers:
    - name: Reload Apache
      service:
        name: apache2
        state: reloaded
Created a Jinja2 template for Apache virtual hosts that renders per-host configuration from Ansible variables.
# templates/vhost.conf.j2
<VirtualHost *:80>
    ServerName {{ inventory_hostname }}.lab.local
    DocumentRoot /var/www/{{ inventory_hostname }}

    <Directory /var/www/{{ inventory_hostname }}>
        Options -Indexes +FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>

    ErrorLog /var/log/apache2/{{ inventory_hostname }}-error.log
    CustomLog /var/log/apache2/{{ inventory_hostname }}-access.log combined

    # Security headers (require mod_headers: a2enmod headers)
    Header always set X-Content-Type-Options "nosniff"
    Header always set X-Frame-Options "SAMEORIGIN"
    Header always set X-XSS-Protection "1; mode=block"
</VirtualHost>
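The same Jinja2 engine Ansible uses can preview a template outside a playbook. A minimal sketch, assuming the jinja2 package is installed, with a sample hostname supplied in place of Ansible's `inventory_hostname` variable:

```python
from jinja2 import Template

# Excerpt of vhost.conf.j2, rendered with a sample hostname
snippet = Template(
    "ServerName {{ inventory_hostname }}.lab.local\n"
    "DocumentRoot /var/www/{{ inventory_hostname }}"
)
print(snippet.render(inventory_hostname='web-01'))
```

This is handy for eyeballing rendered output before a deploy; Ansible's template module performs the equivalent rendering with the full set of host variables.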
The main orchestrator script ties everything together: SSH preflight checks, inventory generation, and sequential playbook execution with error handling and logging.
#!/usr/bin/env python3
# orchestrate.py
import logging
import subprocess
import sys
from pathlib import Path

import paramiko
import yaml

logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s %(levelname)s %(message)s')
log = logging.getLogger('orchestrator')

PLAYBOOKS = ['site.yml', 'webservers.yml', 'dbservers.yml']


def check_ssh_connectivity(hosts_file: str = 'hosts.yaml') -> bool:
    """Verify SSH connectivity to all hosts before running Ansible."""
    with open(hosts_file) as f:
        data = yaml.safe_load(f)
    all_ok = True
    for hostname, attrs in data['hosts'].items():
        ip = attrs['ip']
        try:
            client = paramiko.SSHClient()
            # AutoAddPolicy is acceptable in a lab; pre-populate known_hosts in production
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(ip, username=attrs['user'],
                           key_filename=str(Path.home() / '.ssh/lab_key'),
                           timeout=5)
            client.close()
            log.info(f"SSH OK: {hostname} ({ip})")
        except Exception as e:
            log.error(f"SSH FAILED: {hostname} ({ip}): {e}")
            all_ok = False
    return all_ok


def run_playbook(playbook: str, inventory_script: str,
                 extra_args: list | None = None) -> int:
    """Run an Ansible playbook and return the exit code."""
    cmd = ['ansible-playbook', '-i', inventory_script, playbook, '-v']
    if extra_args:
        cmd.extend(extra_args)
    log.info(f"Running: {' '.join(cmd)}")
    result = subprocess.run(cmd)
    return result.returncode


def main():
    log.info("Starting infrastructure automation pipeline")

    # Preflight SSH check: fail fast on unreachable hosts
    if not check_ssh_connectivity():
        log.error("SSH preflight failed — aborting")
        sys.exit(1)

    # Lint playbooks (warnings only; does not block the run)
    for pb in PLAYBOOKS:
        result = subprocess.run(['ansible-lint', pb], capture_output=True)
        if result.returncode != 0:
            log.warning(f"Lint warnings in {pb}: {result.stdout.decode()}")

    # Run playbooks in order
    for pb in PLAYBOOKS:
        rc = run_playbook(pb, './generate_inventory.py')
        if rc != 0:
            log.error(f"Playbook {pb} failed with exit code {rc}")
            sys.exit(rc)
        log.info(f"Playbook {pb} completed successfully")

    log.info("All playbooks completed. Infrastructure is configured.")


if __name__ == '__main__':
    main()
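For safer rollouts, ansible-playbook's check mode can preview changes before a real run. A small helper sketch (hypothetical, not part of orchestrate.py) that builds such a command without executing it:

```python
def dry_run_cmd(playbook: str, inventory: str) -> list:
    """Build an ansible-playbook command in check mode: --check predicts
    changes without applying them, --diff shows file-level deltas."""
    return ['ansible-playbook', '-i', inventory, playbook, '--check', '--diff']

print(' '.join(dry_run_cmd('site.yml', './generate_inventory.py')))
```

Passing `['--check', '--diff']` as `extra_args` to `run_playbook` achieves the same thing within the orchestrator.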
Ran all playbooks twice in succession to verify idempotency — the second run should show 0 changes and 0 failures.
# First run — shows changes
python3 orchestrate.py
# Expected output:
# PLAY RECAP *****
# web-01 : ok=12 changed=8 unreachable=0 failed=0
# Second run — idempotent, no changes
python3 orchestrate.py
# Expected output:
# PLAY RECAP *****
# web-01 : ok=12 changed=0 unreachable=0 failed=0
# web-02 : ok=12 changed=0 unreachable=0 failed=0
# db-01 : ok=10 changed=0 unreachable=0 failed=0
# Verify the Apache service on the web nodes
# (service_facts takes no arguments; filter its JSON output instead)
ansible webserver -i ./generate_inventory.py -m service_facts \
  | grep -A2 '"apache2.service"'
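Idempotency can also be checked programmatically by parsing the PLAY RECAP. A sketch with a hard-coded sample recap (a real check would capture ansible-playbook's stdout instead):

```python
import re

# Sample recap lines from a second (idempotent) run
recap = """\
web-01 : ok=12 changed=0 unreachable=0 failed=0
web-02 : ok=12 changed=0 unreachable=0 failed=0
db-01 : ok=10 changed=0 unreachable=0 failed=0
"""

# Extract the changed= count per host; a second run should show all zeros
changed = {m.group(1): int(m.group(2))
           for m in re.finditer(r'(\S+)\s*:\s*ok=\d+\s+changed=(\d+)', recap)}
print(changed)
```

Wiring this into the orchestrator would turn "run it twice and eyeball the recap" into an automated regression check.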
Complete Workflow
Challenges & Solutions
- Dynamic inventory script not marked executable — Ansible expects a dynamic inventory script to be executable and to output JSON to stdout. Had to chmod +x generate_inventory.py and add a shebang line.
- Ansible template module vs copy module — Initially used the copy module, which doesn't process Jinja2 variables. Switched to the template module, which renders Jinja2 before deploying.
- paramiko rejecting host keys on fresh VMs — Using AutoAddPolicy() accepts keys without verification, which is acceptable in a lab environment. For production, keys should be pre-populated in known_hosts.
- Playbook not idempotent for the a2ensite command — The raw command: a2ensite ... task always reported "changed" even when the site was already enabled. Fixed by adding creates: /etc/apache2/sites-enabled/... as a condition, or alternatively by creating the sites-enabled symlink with the file module.
Key Takeaways
- Ansible's idempotency guarantee depends on using proper modules — raw command and shell tasks always report "changed"; use purpose-built modules wherever possible.
- Dynamic inventory generation is a powerful pattern that allows Ansible to consume host data from any source (CMDB, cloud API, YAML file) without maintaining static inventory files.
- Combining Python orchestration with Ansible is the right tool split: Python handles dynamic logic, API calls, and pre/post-processing; Ansible handles idempotent system state declaration.
- SSH connectivity pre-validation before Ansible runs saves significant time during long playbooks — failing fast on unreachable hosts prevents partial deployments that are harder to debug.