Sunday, April 26, 2026

Ansible_Facts

 Facts


Ansible implements fact collection through a module called setup.
It collects detailed information (called facts) from managed nodes.
By default, Ansible automatically runs setup at the start of every play.


$ ansible ubuntu -m setup


- name: Gather facts manually
  hosts: all
  tasks:
    - name: Run setup module
      ansible.builtin.setup:
    - debug:
        msg: "Host {{ ansible_hostname }} has {{ ansible_memtotal_mb }} MB RAM"


Filtering Facts

Collect only specific facts
- name: Get only network-related facts
  ansible.builtin.setup:
    filter: ansible_default_ipv4

- name: Get only facts that start with 'ansible_processor'
  ansible.builtin.setup:
    filter: 'ansible_processor*'

- name: Get only facts that start with 'ansible_processor' and 'ansible_mem'
  ansible.builtin.setup:
    filter:
      - 'ansible_processor*'
      - 'ansible_mem*'

$ ansible all -m setup -a 'filter=ansible_all_ipv6_addresses'

Gather_Subset
Control what facts to collect
- name: Gather only minimal facts
  ansible.builtin.setup:
    gather_subset:
      - min

Other options:
all (default)
hardware
network
virtual
ohai
facter


gather_subset: "all,!hardware" # Gather all facts except hardware facts
gather_subset: "all,!network" # Gather all facts except network facts
gather_subset: "all,!virtual" # Gather all facts except virtual facts
gather_subset: "all,!ohai" # Gather all facts except ohai facts
gather_subset: "all,!facter" # Gather all facts except facter facts
(The quotes are needed so YAML does not interpret the leading ! as a tag.)

Set timeout for fact gathering
- ansible.builtin.setup:
    gather_timeout: 10

set_fact → create custom facts
set_fact is a module that lets you create or update variables (custom facts) during playbook execution.

- name: Set a variable
  set_fact:
    my_var: "hello"

- name: Set multiple variables
  set_fact:
    env: "prod"
    version: "1.2.3"

Append to a list variable
- name: Append to a list variable
  set_fact:
    my_list: "{{ my_list | default([]) + ['new_item'] }}"

Update a dictionary variable
- name: Update a dictionary variable
  set_fact:
    my_dict: "{{ my_dict | default({}) | combine({'key': 'value'}) }}"
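The two expressions above are ordinary list concatenation and dictionary merging; a plain-Python sketch of the same semantics (the variable names and values are illustrative):

```python
# my_list | default([]) + ['new_item'] -> start from [] when undefined, then append
my_list = None  # pretend the fact is not defined yet
my_list = (my_list if my_list is not None else []) + ["new_item"]
print(my_list)  # ['new_item']

# my_dict | default({}) | combine({'key': 'value'}) -> merge; the right side wins
my_dict = {"existing": "kept"}
my_dict = {**my_dict, **{"key": "value"}}
print(my_dict)  # {'existing': 'kept', 'key': 'value'}
```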

Loop with set_fact
- name: Create a list of hostnames
  set_fact:
    hostnames: "{{ hostnames | default([]) + [item] }}"
  loop: "{{ ansible_play_hosts }}"

- name: Build list dynamically
  set_fact:
    servers: "{{ servers | default([]) + [item] }}"
  loop:
    - web1
    - web2


hostvars → access facts of other hosts
hostvars is a powerful Ansible variable that allows you to access the facts and variables of other hosts in your inventory.
This can be particularly useful when you need to reference information about other hosts during playbook execution.

HOSTVARS VERSUS HOST_VARS
Please be warned that hostvars is computed when you run Ansible, while host_vars is a directory that you can use to define variables for a particular system.

What is hostvars?
👉 It is a dictionary of all hosts and their variables.

hostvars['hostname']['variable_name']


- name: Access hostvars
  hosts: all
  tasks:
    - name: Show IP address of another host
      debug:
        msg: "The IP address of {{ item }} is {{ hostvars[item]['ansible_default_ipv4']['address'] }}"
      loop: "{{ ansible_play_hosts }}"

Local facts
Local facts are custom facts that you can create on the managed nodes themselves.
They are stored in the /etc/ansible/facts.d/ directory on the managed nodes.
Local facts are useful for storing information that is specific to a particular host and may not be easily gathered through the setup module.
You can place one or more files in the /etc/ansible/facts.d directory on the remote host. These files can be in JSON or INI format (or be executables that print JSON) and must have a .fact extension. When Ansible runs the setup module, it automatically reads these files and includes the custom facts in the gathered facts for that host.

These facts are available as keys of a special variable named ansible_local.

To create a local fact, create a JSON or INI file in the /etc/ansible/facts.d/ directory on the managed node. For example, you could create a file called /etc/ansible/facts.d/custom_facts.fact with the following content:

{
  "custom_fact": "This is a custom fact"
}
Once you have created the local fact file, you can access the custom fact in your playbooks using the ansible_local variable. For example:
- name: Access local fact
  hosts: all
  tasks:
    - name: Show custom fact
      debug:
        msg: "The custom fact is: {{ ansible_local.custom_facts.custom_fact }}"
Dynamic Inventory Script
An Ansible dynamic inventory script must support two command-line flags:
--list for printing the entire inventory (groups, hosts, and variables) as JSON
--host=<hostname> for printing the variables of a single host as JSON
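A minimal sketch of such a script using only the Python standard library — the group name, hosts, and variables below are made-up examples, and a real script would query a cloud API or CMDB instead of a static dict:

```python
#!/usr/bin/env python3
"""Minimal Ansible dynamic inventory supporting --list and --host <hostname>."""
import json
import sys

# Static data for illustration only.
INVENTORY = {
    "web": {"hosts": ["web1", "web2"], "vars": {"app_port": 8080}},
    # The _meta key lets Ansible skip calling --host once per host.
    "_meta": {
        "hostvars": {
            "web1": {"ansible_host": "192.168.1.10"},
            "web2": {"ansible_host": "192.168.1.11"},
        }
    },
}

def list_groups():
    """Return the whole inventory structure (for --list)."""
    return INVENTORY

def host_vars(name):
    """Return the variables of a single host (for --host)."""
    return INVENTORY["_meta"]["hostvars"].get(name, {})

if __name__ == "__main__":
    args = sys.argv[1:]
    if args[:1] == ["--list"]:
        print(json.dumps(list_groups()))
    elif args[:1] == ["--host"] and len(args) > 1:
        print(json.dumps(host_vars(args[1])))
```

Make the file executable and point Ansible at it with `-i ./inventory.py`; Ansible runs it with `--list` first, then uses `_meta` instead of per-host calls.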


Magic Variables
Ansible provides several magic variables that are automatically available in your playbooks. These variables provide information about the playbook execution context, such as the current host, group, and task. Some commonly used magic variables include:

hostvars A dict whose keys are Ansible hostnames and values are dicts that map variable names to values for that host. This variable is useful for accessing variables of other hosts in the inventory.

inventory_hostname The name of the current host as known in the Ansible inventory; it may include the domain name
inventory_hostname_short The name of the current host without the domain name (e.g., myhost)
group_names A list of all groups that the current host is a member of

- ansible_play_hosts: A list of the hosts in the current play that have not failed or been marked unreachable
- ansible_play_batch: A list of hosts in the current batch (when using serial)
- ansible_play_name: The name of the current play
- ansible_play_role_names: A list of roles imported into the current play
- ansible_play_hosts_all: A list of all hosts targeted by the current play, including failed ones
- groups: A dictionary of all groups in the inventory and the hosts in each group
- ansible_version: A dictionary describing the Ansible version running the playbook
- playbook_dir: The directory of the current playbook
- inventory_dir: The directory of the inventory source for the current host



Extra variables with the command-line option -e var=value
$ ansible-playbook playbook.yml -e "my_var=value"
Values passed as key=value are always strings; use JSON syntax to preserve lists and dictionaries:
$ ansible-playbook playbook.yml -e '{"my_list": ["item1", "item2"]}'
$ ansible-playbook playbook.yml -e '{"my_dict": {"key1": "value1", "key2": "value2"}}'


Ansible Variables

 Variables


Variables can be used in tasks, as well as in template files.
You reference a variable by writing {{ variable }}. Ansible replaces {{ variable }} with the variable's value.

eg
vars:
  conf_file: /etc/nginx/sites/default
Ansible will substitute "{{ conf_file }}" with /etc/nginx/sites/default when it executes this task.


Ansible uses the Jinja2 template engine to implement templating.
We use the .j2 extension to indicate that a file is a Jinja2 template, although any extension works.
Ansible also uses the Jinja2 template engine to evaluate variables in playbooks.


Loop
When you want to run a task with each item from a list, you can use loop.
A loop executes the task multiple times, each time replacing item with different values from the specified list.

Handlers
Handlers are one of Ansible's conditional forms. A handler is similar to a task, but it runs only if it has been notified by a task. A task notifies its handlers only when Ansible detects that the task changed something.

handlers:
  - name: Restart nginx
    service:
      name: nginx
      state: restarted

tasks:
  - name: Manage nginx
    template:
      src:
      dest:
    notify: Restart nginx

Handlers usually run at the end of the play after all of the tasks have been run.

To force notified handlers to run in the middle of a play, use flush_handlers:
- name: Run notified handlers now
  meta: flush_handlers

If a play contains multiple handlers, the handlers always run in the order that they are defined in the handlers section, not the notification order.
They run only once, even if they are notified multiple times.



Variables in Separate Files
vars_files:
  - nginx.yml

viewing the value of variables
To view the values of variables, you can use the debug module. The debug module allows you to print the value of a variable to the console during playbook execution.

- name: Print the value of a variable
  debug:
    var: variable_name

- debug: var=myvarname

Variable Interpolation
- name: Display the variable
  debug:
    msg: "The file used was {{ conf_file }}"

Variables can be concatenated between the double braces by using the tilde operator ~, as shown here:
- name: Concatenate variables
  debug:
    msg: "The URL is https://{{ server_name ~ '.' ~ domain_name }}/"
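Jinja2's ~ operator converts each operand to a string before joining, which makes it safer than + when a variable might be a number. A plain-Python sketch of the same behavior (the variable values are illustrative):

```python
# Jinja2's ~ coerces operands to strings; '+' on a str and an int would raise
# TypeError, so each operand is wrapped in str() here to mirror '~'.
server_name = "www"
domain_name = "example.com"
port = 8443  # an int, not a string

url = "https://" + str(server_name) + "." + str(domain_name) + ":" + str(port) + "/"
print(url)  # https://www.example.com:8443/
```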

Registering Variables
The register keyword allows you to capture the output of a task and store it in a variable for later use.
- name: Capture output of whoami command
  command: whoami
  register: login

Example of using register to capture the output of a command and then display it using the debug module:
---
- name: Show return value of command module
  hosts: fedora
  gather_facts: false
  tasks:
    - name: Capture output of id command
      command: id -un
      register: login
    - debug: var=login
    - debug: msg="Logged in as user {{ login.stdout }}"
...

Output of the above playbook:
TASK [Capture output of id command] ******************************************************************************************
changed: [localhost]

TASK [debug] ****************************************************************************************************************
ok: [localhost] => {
    "login": {
        "changed": true,
        "cmd": ["id", "-un"],
        "delta": "0:00:00.003123",
        "end": "2024-06-01 12:00:00.000000",
        "rc": 0,
        "start": "2024-06-01 12:00:00.000000",
        "stderr": "",
        "stdout": "user"
    }
}
TASK [debug] ****************************************************************************************************************
ok: [localhost] => {
    "msg": "Logged in as user user"
}

The changed key is present in the return value of all Ansible modules, and Ansible uses it to determine whether a state change has occurred. For the command and shell modules, it is always true unless overridden with the changed_when clause:
- name: Capture output of whoami command
  command: whoami
  register: login
  changed_when: false

The cmd key contains the invoked command as a list of strings.
The rc key contains the return code. If it is nonzero, Ansible will assume the task failed to execute successfully.
The stderr key contains any text written to standard error, as a single string.
The stdout key contains any text written to standard out, as a single string.
The stdout_lines key contains the standard output split by newlines, as a list of strings.
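These keys map directly onto what a process returns; a Python subprocess sketch shows the same data the command module captures (the command run here is just a portable illustration):

```python
import subprocess
import sys

# Run the Python interpreter itself so the example works everywhere.
result = subprocess.run(
    [sys.executable, "-c", "print('line1'); print('line2')"],
    capture_output=True,
    text=True,
)

rc = result.returncode              # the command module's 'rc'
stdout = result.stdout              # 'stdout': one string, trailing newline included
stdout_lines = stdout.splitlines()  # 'stdout_lines': list of strings
stderr = result.stderr              # 'stderr'

print(rc, stdout_lines)  # 0 ['line1', 'line2']
```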

ACCESSING DICTIONARY KEYS IN A VARIABLE
If a variable contains a dictionary, you can access the keys of the dictionary by using either a dot (.) or a subscript ([])
{{ result.stat }}
{{ result['stat'] }}
result['stat']['mode']
result['stat'].mode
result.stat['mode']
result.stat.mode

- name: Display result.stat detail
  debug: var=result['stat'][stat_key]

- name: Access dictionary keys using dot notation
  debug:
    msg: "The command was {{ login.cmd }} and the return code was {{ login.rc }}"

- name: Access dictionary keys using subscript notation
  debug:
    msg: "The command was {{ login['cmd'] }} and the return code was {{ login['rc'] }}"
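Both spellings work because Jinja2's dot first tries attribute lookup and then falls back to the dictionary key. A small Python helper can mimic that lookup order (jinja_get is a hypothetical name for illustration, not an Ansible or Jinja2 API):

```python
result = {"stat": {"mode": "0644", "exists": True}}

def jinja_get(obj, name):
    """Mimic Jinja2's dot: try an attribute first, then fall back to a dict key."""
    try:
        return getattr(obj, name)
    except AttributeError:
        return obj[name]

# Dot-style and subscript-style lookups resolve to the same value.
assert jinja_get(jinja_get(result, "stat"), "mode") == result["stat"]["mode"]
print(result["stat"]["mode"])  # 0644
```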


Ansible_inventory Parameters

 Inventory Parameters

An inventory is a collection of managed hosts.
Inventory parameters are variables or settings you define in your inventory file to control how hosts and groups behave during playbook execution.

ansible_host # Hostname or IP address to SSH to
ansible_port # Port to SSH to
ansible_user # User to SSH as
ansible_password # Password to use for SSH authentication
ansible_ssh_private_key_file # SSH private key to use for SSH authentication
ansible_become=true # Escalate privileges
ansible_become_user=root # User to become
ansible_become_method=sudo # Escalation method (sudo or su)
localhost ansible_connection=local # Run against the control node without SSH

Custom variable
web1 app_port=8080 env=prod

ansible_connection
Ansible supports multiple transports for connecting to hosts.
The default transport is smart.

Ansible will check whether the locally installed SSH client supports a feature called ControlPersist. If the SSH client supports ControlPersist, Ansible will use the local SSH client.
If not, the smart transport will fall back to using a Python-based SSH client library called Paramiko.

ansible_shell_type
Ansible works by making SSH connections to remote machines and then invoking scripts. By default, Ansible assumes that the remote shell is the Bourne shell located at /bin/sh, and will generate the appropriate command-line parameters that work with that. It creates temporary directories to store these scripts.

Ansible also accepts csh, fish, and (on Windows) powershell as valid values for this parameter. Ansible doesn’t work with restricted shells.

ansible_python_interpreter
Ansible needs to know the location of the Python interpreter on the remote machine
ansible_python_interpreter="/usr/bin/env python3"

ansible_*_interpreter
If you are using a custom module that is not written in Python, you can use this parameter to specify the location of the interpreter

eg
[web]
web1 ansible_host=192.168.1.10 ansible_user=ec2-user ansible_port=22

[web]
web1
web2

[web:vars]
ansible_user=ec2-user
ansible_ssh_private_key_file=/home/user/key.pem

eg
web1 app_port=8080 env=prod


Pattern Matching Inventory
$ ansible web -m ping
$ ansible 'web:&prod' -m ping
$ ansible 'web:!db' -m ping

Operators
: → OR
& → AND
! → NOT

$ ansible-inventory --list
$ ansible-inventory --graph

Ansible automatically defines a group called all (or *)
$ ansible all -a "date"
or
$ ansible '*' -a "date"


Bill Baker of Microsoft came up with the distinction between treating servers as pets versus treating them like cattle.
The “cattle” approach to servers is much more scalable
20 servers are named web1.example.com, web2.example.com ... and so on

[web]
web[1:20].example.com

web-a.example.com, web-b.example.com, and so on....
[web]
web-[a:t].example.com
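The bracket ranges expand to plain host lists; the equivalent expansion written in Python:

```python
import string

# web[1:20].example.com -> web1.example.com .. web20.example.com
numeric = [f"web{i}.example.com" for i in range(1, 21)]

# web-[a:t].example.com -> web-a.example.com .. web-t.example.com
alpha = [f"web-{c}.example.com" for c in string.ascii_lowercase[:20]]

print(numeric[0], numeric[-1])  # web1.example.com web20.example.com
print(alpha[0], alpha[-1])      # web-a.example.com web-t.example.com
```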



Ansible will let you add hosts and groups to the inventory during the execution of a playbook.
Adding Entries at Runtime with add_host and group_by
This is useful when managing dynamic clusters, such as Redis Sentinel.

add_host
The add_host module adds a host to the inventory; this is useful if you’re using Ansible to provision new virtual machine instances

- name: Add the host
  add_host:
    name: hostname
    groups: web,staging
    myvar: myval

group_by
Ansible’s group_by module allows you to create new groups while a playbook is executing.
Any group you create will be based on the value of a variable that has been set on each host, which Ansible refers to as a fact.

- name: Create groups based on Linux distribution
  group_by:
    key: "{{ ansible_facts.distribution }}"


Friday, April 24, 2026

yaml_Ansible_6

 yaml tags

In YAML, tags are a way to explicitly define the data type of a value. 

Normally YAML auto-detects types (like string, int, boolean), but tags let you override or clarify that behavior.



Basic Syntax of YAML Tags

Tags are written using !! before a value:


Integer → count: !!int 10

Float → price: !!float 10.5

Boolean → enabled: !!bool yes

Null → value: !!null null

String → name: !!str 12345

yaml_Ansible_5

 Date and Timestamps in yaml


YYYY-MM-DD #standard date format 

YYYY-MM-DDTHH:MM:SSZ  # Full Timestamps

T = separates the date from the time

Z = for UTC Timestamps


ISO 8601 format

YYYY-MM-DDTHH:MM:SS±HH:MM # timestamp with a timezone offset (e.g., -05:00)

Ansible_9 Automation Execution Environment (EE)

 An Automation Execution Environment (EE) is a portable, consistent runtime for running Ansible automation. Think of it as a pre-built container image (like Docker/Podman) that already has everything your automation needs.


🔹 Simple Idea
Instead of installing Python, Ansible, collections, and dependencies on every control node, you package everything into one environment and run it anywhere.


Ansible execution environments (EE) were introduced in Ansible Automation Platform 2 to provide a defined, consistent and portable environment for executing automation jobs.
Execution environments are basically Linux container images that help execute Ansible playbooks.

The container images for the execution environments contain the necessary components to execute Ansible automation jobs. These include Python, Ansible (ansible-core), Ansible Runner, required Python libraries, and dependencies.

When you install Ansible Automation Platform, the installer deploys the following container images whether you're in a connected or an unconnected installation:

* The ee-29-rhel8 image contains Ansible 2.9 to use with older Ansible playbooks.
* ee-minimal-rhel8 is the minimal container image with ansible-core and basic collections.
* ee-supported-rhel8 is the container image with ansible-core and automation content collections supported by Red Hat.

Ansible Automation Platform's default container images let you start doing automation without any additional configurations.

You can follow the standard container image build process for building execution environment container images, but Ansible Automation Platform also includes a command-line utility called ansible-builder to build container images for custom execution environments.

The ansible-builder tool can be installed from the upstream Python repository or the Red Hat RPM repository:

## Install ansible-builder utility
$ pip3 install ansible-builder

## Ansible Automation Platform repository subscription is required
$ sudo dnf install ansible-builder

The ansible-builder helps you build container images with the definition file execution-environment.yml.

A typical execution-environment.yml contains the base container image (EE_BASE_IMAGE), ansible.cfg, and other dependency file details:
---
version: 1
build_arg_defaults:
  EE_BASE_IMAGE: 'registry.redhat.io/ee-minimal-rhel8:latest'
ansible_config: 'ansible.cfg'
dependencies:
  galaxy: requirements.yml
  python: requirements.txt
additional_build_steps:
  append:
    - RUN microdnf install which


Once you've prepared the execution-environment.yml, execute the ansible-builder build command to create a build context that includes the Containerfile.
$ ansible-builder build --tag my_custom_ee
Running command:
podman build -f context/Containerfile -t my_custom_ee context
Complete! The build context can be found at: /home/ralagarasan/ansible-aap-demo/context

There are two options for building and using custom execution environments with Ansible Automation Platform: building and transferring the container image, or creating a custom execution environment in an unconnected environment.

1. Build and transfer a container image
2. Create a custom execution environment in an unconnected environment

1. Build and transfer a container image
You can create a container image from a connected machine (for example, a developer workstation) with all the dependencies inside and transfer it to the private automation hub (or another supported registry).

Step 1. Create and archive the container image from a connected machine:

## build the container image
$ ansible-builder build --tag my_custom_ee
## Save the container image as archive file
$ podman save --quiet -o my_custom_ee-1.0.tar localhost/my_custom_ee:1.0
Step 2. Copy the archived container image (for example, my_custom_ee-1.0.tar) to the target unconnected machine.
Step 3. Load the container image from the TAR file on the unconnected machine:
$ podman load -i my_custom_ee-1.0.tar
Step 4. Follow the tag and push process for private automation hub.
$ podman login automationhub22-1.lab.local

Tag the local container image with the private automation hub path:
$ podman tag localhost/network-ee:1.0 automationhub22-1.lab.local/network-ee:1.0

Push the image to the private automation hub (registry):
$ podman push automationhub22-1.lab.local/network-ee:1.0



2. Create a custom execution environment in an unconnected environment
Step 1. Transfer the dependencies to the target unconnected system
Step 2. Prepare the Containerfile with instructions to build the container image for the execution environment:
## Containerfile for custom execution environment
ARG EE_BASE_IMAGE=registry.redhat.io/ansible-automation-platform-22/ee-minimal-rhel8:latest
ARG EE_BUILDER_IMAGE=registry.redhat.io/ansible-automation-platform-22/ansible-builder-rhel8

FROM $EE_BASE_IMAGE

ADD ansible.cfg ansible.cfg
ADD python-packages.tar python
RUN python3 -m pip install -r python/python-packages/requirements.txt --find-links=python/python-packages/ --no-index
Step 3. Build the container image using Podman:
$ podman build -f Containerfile -t localhost/network-ee:1.0
[...]

Looking in links: python/python-packages/
Processing ./python/python-packages/pan_os_python-1.7.3-py2.py3-none-any.whl
Processing ./python/python-packages/pan_python-0.17.0-py2.py3-none-any.whl
Installing collected packages: pan-python, pan-os-python
[...]

Successfully tagged localhost/network-ee:1.0
01e210e05a60dcf49c1b4a2b1bf1e58c49a487823b585233a15d1ecd66910bab
The TAR file is copied, extracted, and the content is installed inside the image.




[Thanks Redhat](https://www.redhat.com/en/blog/ansible-execution-environment-unconnected)

Thursday, April 23, 2026

Ansible_8


Use Python virtual environments for Ansible


A virtual environment avoids version conflicts (Ansible, Python libraries, collections) and allows safe upgrades and testing without breaking the system Python.


Install required packages
# dnf install -y python3 python3-pip python3-virtualenv
# optional build dependencies: gcc python3-devel libffi-devel openssl-devel
Create a virtual environment
# python3 -m venv ansible-env # creates a folder ansible-env/ with an isolated Python
If venv is missing: # virtualenv ansible-env

Activate the environment
# source ansible-env/bin/activate
prompt will change like below
(ansible-env) user@host:~$

Install Ansible inside venv
$ pip install --upgrade pip
$ pip install ansible
$ ansible --version
$ ansible-galaxy collection install community.general
$ ansible-galaxy collection install ansible.posix
$ ansible-galaxy collection install community.vmware
$ ansible-playbook -i inventory.ini playbook.yml

Deactivate
$ deactivate

eg
python3 -m venv /home/ralagarasan/uc-01
pip freeze > requirements.txt
pip install -r requirements.txt

Saturday, April 18, 2026

yaml_Ansible_4_map

 Map Structure


Map refers to a dictionary or key-value pair structure — structured data.

A YAML map (mapping) is basically a key → value structure (like a dictionary in Python or JSON object).

name: Alagarasan
role: DevOps Consultant
experience: 8

👉 Here:
name, role, experience → keys
Values → strings / numbers

- name: Create VM
  hosts: localhost
  vars:
    vm_name: test-vm
    vm_cpu: 2
👉 Here:
vars is a map
Inside it → vm_name, vm_cpu are keys

Types of Maps
Inline Map
Block Map

Inline Map
person: {name: Alagarasan, role: DevOps Consultant, experience: 8}

Block Map
person:
  name: Alagarasan
  role: DevOps Consultant
  experience: 8

Access full map
- debug:
    var: person

Access individual values
- debug:
    msg: "{{ person.name }}"

- debug:
    msg: "{{ person.role }}"


🔹 Nested Map (Map inside Map)
A map always contains key-value pairs, and it can contain several of them. A value can itself be another map; this is called a nested map.

employee:
  name: Alagarasan
  role: DevOps Consultant
  skills:
    primary: Ansible
    secondary: AWS

Access nested value
- debug:
    msg: "{{ employee.skills.primary }}"


👉 Structure:
employee → parent key
Inside it → another map


Map with list
person:
  name: Alagarasan
  skills:
    - Ansible
    - Docker
    - Kubernetes



Nested Map + List (Real-world 🔥🔥)
servers:
  - name: web01
    config:
      ip: 192.168.1.10
      ports:
        - 80
        - 443

  - name: web02
    config:
      ip: 192.168.1.11
      ports:
        - 8080
👉 Access:
{{ servers[0].config.ip }}
{{ servers[0].config.ports[1] }}
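The same lookups expressed in Python over the equivalent data structure — a list of maps, each holding a nested map with a list inside:

```python
servers = [
    {"name": "web01", "config": {"ip": "192.168.1.10", "ports": [80, 443]}},
    {"name": "web02", "config": {"ip": "192.168.1.11", "ports": [8080]}},
]

# servers[0].config.ip -> index into the list, then key into the nested maps
print(servers[0]["config"]["ip"])        # 192.168.1.10
# servers[0].config.ports[1] -> one more index into the inner list
print(servers[0]["config"]["ports"][1])  # 443
```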


Yaml_Ansible_3_list

 List (Array) # Lists start with - (dash + space), Indentation matters

A sequence of items, represented as a list (array) in YAML.
Lists can be defined using either block style (with dashes) or inline style (with square brackets).
List Multiple-items

Simple List (Basic)
servers:
  - web1
  - web2
  - db1

Inline List (Flow Style)
servers: [web1, web2, db1]

Nested List
matrix:
  - - 1
    - 2
  - - 3
    - 4



List of Maps (Very Important in Ansible 🚀)
users:
  - name: John
    age: 25
  - name: Alice
    age: 30

🔥 How to "call" (access) list of maps
Access full list {{ users }}
Access first item {{ users[0] }}
Access specific value {{ users[0].name }} # 👉 Output: John
Loop through list (Most important 🚀)
- name: Print users
  debug:
    msg: "{{ item.name }} is {{ item.age }} years old"
  loop: "{{ users }}"


List inside Map
employee:
  name: John
  skills:
    - Python
    - Ansible
    - Docker
👉 Here:
employee → map
skills → list inside the map
Access full list {{ employee.skills }}
Access specific item (index) {{ employee.skills[0] }} # 👉 Output: Python
Loop through the list
- name: Print skills
  debug:
    msg: "{{ item }}"
  loop: "{{ employee.skills }}"

Use inside message
- name: Print employee info
  debug:
    msg: "{{ employee.name }} knows {{ employee.skills[1] }}"
👉 Output: John knows Ansible

Structure How to Access
Map → key map.key
List → index list[0]
Map → List map.list_key[0]

Friday, April 17, 2026

YAML_Ansible_2

 


Types of YAML Strings
YAML supports 4 main styles:
🟢 1. Plain Strings (No quotes)
🔵 2. Double-Quoted Strings " "
🟡 3. Single-Quoted Strings ' '
🟣 4. Multi-line Strings (VERY IMPORTANT 🔥)

🟢 1. Plain Strings (No quotes)
name: Alagarasan
role: DevOps Engineer
company: TCS

🔵 2. Double-Quoted Strings
message: "Start\nEnd" # \n is treated as a newline character
👉 Output
Start
End

quote: "He said, \"YAML is easy\"" # \" escapes a literal double quote
👉 Output
He said, "YAML is easy"

tab: "Hello\tDevOps" # \t is treated as a tab character
quote: "He said \"Hi\"" # \" is treated as a literal double quote


path: "C:\\Program Files\\App"
👉 Output
C:\Program Files\App
💡 Why?
\ is a special character → must escape as \\


Prevent Boolean Conversion
value: "yes"
👉 Output
yes (string)

Tabs / Formatting # \t = tab spacing
text: "Name:\tAlagarasan"
👉 Output
Name: Alagarasan

Multi-line Formatting (Inline)
msg: "Line1\nLine2\nLine3"
👉 Output
Line1
Line2
Line3


🟡 3. Single-Quoted Strings # Everything is treated literally, no special character processing
In YAML, single-quoted strings (' ') are used when you want the content to be taken literally — no escape sequences, no special processing.
name: 'Alagarasan'
city: 'Chennai'
✔ Everything inside ' ' is treated as plain text.

Special Characters (No escaping needed)
path: '/usr/local/bin'
message: 'Hello: World!'
✔ Characters like :, /, ! are safe inside single quotes.

Quotes Inside String
To include a single quote inside, double it ('')
text: 'It''s DevOps'
👉 Output: It's DevOps # the doubled quote is the only escape single-quoted strings support

No Escape Sequences
newline: 'Line1\nLine2'
👉 Output (literal, NOT new line):
Line1\nLine2

❌ \n is NOT interpreted
✔ It stays as plain text

admin: 'true'
🔍 What’s happening here?
In YAML, certain words are reserved keywords (called boolean values):
true, false
yes, no
on, off

If you write:
admin: true
👉 YAML interprets it as a boolean, not a string.

🧠 Why use single quotes?
When you write:
admin: 'true'
👉 Now YAML treats it as a string, not a boolean.

Value written YAML interprets as
true Boolean true
'true' String "true"
"true" String "true"


admin: 'true' # string
enabled: true # boolean


Leading / Trailing Spaces Preserved
value: ' hello '
👉 Output:
" hello "

When to Use Single Quotes
Use ' ' when:

You want literal values
You don’t need escape sequences
String contains special characters like :, #, !
You want safe, predictable parsing




🟣 4. Multi-line Strings (VERY IMPORTANT 🔥)
Literal block (|) → preserves format
message: |
  Hello
  DevOps
  Team

👉 Output:
Hello
DevOps
Team

Folded block (>) → merges lines
message: >
  Hello
  DevOps
  Team

👉 Output:
Hello DevOps Team
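A plain-Python sketch of the difference: the literal block (|) keeps each newline, while the folded block (>) joins the lines with single spaces (both end with a trailing newline):

```python
lines = ["Hello", "DevOps", "Team"]

literal = "\n".join(lines) + "\n"  # what `message: |` produces
folded = " ".join(lines) + "\n"    # what `message: >` produces

print(literal)  # three lines
print(folded)   # Hello DevOps Team
```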