<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Dinesh's Blog]]></title><description><![CDATA[Dinesh's Blog]]></description><link>https://blog.dineshcloud.in</link><generator>RSS for Node</generator><lastBuildDate>Sat, 11 Apr 2026 19:55:32 GMT</lastBuildDate><atom:link href="https://blog.dineshcloud.in/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Day 32 - Kubernetes Interview Q&A]]></title><description><![CDATA[1. Difference between Docker and Kubernetes
Docker → Builds and runs containers. Kubernetes → Orchestrates containers across multiple nodes.
Key points:

Docker = container runtime.

Kubernetes = container orchestration tool.

Kubernetes provides auto...]]></description><link>https://blog.dineshcloud.in/day-32-kubernetes-interview-qanda</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-32-kubernetes-interview-qanda</guid><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[Devops]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 13:20:52 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-1-difference-between-docker-and-kubernetes"><strong>1. Difference between Docker and Kubernetes</strong></h2>
<p><strong>Docker</strong> → Builds and runs containers.<br /><strong>Kubernetes</strong> → Orchestrates containers across multiple nodes.</p>
<p>Key points:</p>
<ul>
<li><p>Docker = container runtime.</p>
</li>
<li><p>Kubernetes = container orchestration tool.</p>
</li>
<li><p>Kubernetes provides auto-healing, auto-scaling, load balancing.</p>
</li>
<li><p>Kubernetes runs on a cluster → if one node fails, workloads shift automatically.</p>
</li>
</ul>
<hr />
<h2 id="heading-2-main-components-of-kubernetes-architecture"><strong>2. Main Components of Kubernetes Architecture</strong></h2>
<h3 id="heading-control-plane"><strong>Control Plane</strong></h3>
<ul>
<li><p><strong>API Server</strong> → Entry point for all commands.</p>
</li>
<li><p><strong>Scheduler</strong> → Decides which node runs a pod.</p>
</li>
<li><p><strong>etcd</strong> → Key-value store for cluster state.</p>
</li>
<li><p><strong>Controller Manager</strong> → Runs controllers like ReplicaSet, Node Controller, Job Controller.</p>
</li>
<li><p><strong>Cloud Controller Manager</strong> → Integrates with cloud providers (e.g., creates LoadBalancer IPs).</p>
</li>
</ul>
<h3 id="heading-worker-node"><strong>Worker Node</strong></h3>
<ul>
<li><p><strong>kubelet</strong> → Ensures pods run and are healthy.</p>
</li>
<li><p><strong>kube-proxy</strong> → Manages networking rules and service routing.</p>
</li>
<li><p><strong>Container Runtime</strong> → Docker, containerd, CRI-O, etc.</p>
</li>
</ul>
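<p>On a running cluster, most of these components can be seen as Pods in the <code>kube-system</code> namespace (exact component names vary by distribution):</p>
<pre><code class="lang-plaintext">kubectl get pods -n kube-system
# typically lists kube-apiserver, kube-scheduler, etcd,
# kube-controller-manager, kube-proxy, coredns, ...
</code></pre>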
<hr />
<h2 id="heading-3-docker-swarm-vs-kubernetes"><strong>3. Docker Swarm vs Kubernetes</strong></h2>
<p><strong>Docker Swarm</strong></p>
<ul>
<li><p>Simple and easy.</p>
</li>
<li><p>Limited networking and scaling.</p>
</li>
<li><p>Good for small workloads.</p>
</li>
</ul>
<p><strong>Kubernetes</strong></p>
<ul>
<li><p>Highly scalable, flexible.</p>
</li>
<li><p>Advanced networking (CNI).</p>
</li>
<li><p>Large ecosystem &amp; community.</p>
</li>
<li><p>Industry standard for production.</p>
</li>
</ul>
<hr />
<h2 id="heading-4-docker-container-vs-kubernetes-pod"><strong>4. Docker Container vs Kubernetes Pod</strong></h2>
<ul>
<li><p><strong>Container</strong> → Single isolated runtime.</p>
</li>
<li><p><strong>Pod</strong> → Kubernetes unit that can contain <strong>one or more containers</strong>.</p>
</li>
<li><p>Containers in a pod share:</p>
<ul>
<li><p>Network namespace</p>
</li>
<li><p>Storage volumes</p>
</li>
<li><p>Lifecycle</p>
</li>
</ul>
</li>
</ul>
<p>Pod = wrapper around one or more containers.</p>
<hr />
<h2 id="heading-5-what-is-a-namespace"><strong>5. What is a Namespace?</strong></h2>
<p>Namespace provides <strong>logical isolation</strong> within a Kubernetes cluster.</p>
<p>Use cases:</p>
<ul>
<li><p>Multi-project isolation</p>
</li>
<li><p>Resource separation</p>
</li>
<li><p>Independent RBAC policies</p>
</li>
<li><p>Isolated services, configs, secrets</p>
</li>
</ul>
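<p>As a quick sketch, a namespace can be created imperatively or from a minimal manifest (the name <code>team-a</code> is just an example):</p>
<pre><code class="lang-plaintext">kubectl create namespace team-a

# or declaratively:
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
</code></pre>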
<hr />
<h2 id="heading-6-role-of-kube-proxy"><strong>6. Role of kube-proxy</strong></h2>
<p>kube-proxy:</p>
<ul>
<li><p>Manages network rules on nodes</p>
</li>
<li><p>Updates <strong>iptables/ipvs</strong></p>
</li>
<li><p>Routes service traffic to appropriate pods</p>
</li>
<li><p>Enables ClusterIP, NodePort, LoadBalancer traffic flow</p>
</li>
</ul>
<hr />
<h2 id="heading-7-types-of-kubernetes-services"><strong>7. Types of Kubernetes Services</strong></h2>
<h3 id="heading-clusterip"><strong>ClusterIP</strong></h3>
<ul>
<li><p>Default type</p>
</li>
<li><p>Internal access only</p>
</li>
<li><p>Used for service-to-service communication</p>
</li>
</ul>
<h3 id="heading-nodeport"><strong>NodePort</strong></h3>
<ul>
<li><p>Opens a port on each node</p>
</li>
<li><p>External access via <code>NodeIP:Port</code></p>
</li>
</ul>
<h3 id="heading-loadbalancer"><strong>LoadBalancer</strong></h3>
<ul>
<li><p>Creates cloud load balancer</p>
</li>
<li><p>Exposes app to the internet</p>
</li>
</ul>
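<p>A minimal Service manifest looks like this (the app name and ports are placeholders; change <code>type</code> to <code>NodePort</code> or <code>LoadBalancer</code> as needed):</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Service
metadata:
  name: payments
spec:
  type: ClusterIP        # default; internal access only
  selector:
    app: payments        # routes to Pods with this label
  ports:
    - port: 80           # Service port
      targetPort: 8080   # container port
</code></pre>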
<hr />
<h2 id="heading-8-difference-between-nodeport-and-loadbalancer"><strong>8. Difference Between NodePort and LoadBalancer</strong></h2>
<p><strong>NodePort</strong></p>
<ul>
<li><p>Access via <code>NodeIP:NodePort</code></p>
</li>
<li><p>Limited to your cluster/node network</p>
</li>
<li><p>No external load balancing</p>
</li>
</ul>
<p><strong>LoadBalancer</strong></p>
<ul>
<li><p>Cloud provider allocates a public IP</p>
</li>
<li><p>Global reach over the internet</p>
</li>
<li><p>Adds external LB + NodePort behind the scenes</p>
</li>
</ul>
<hr />
<h2 id="heading-9-role-of-kubelet"><strong>9. Role of kubelet</strong></h2>
<p>kubelet:</p>
<ul>
<li><p>Ensures pods are running</p>
</li>
<li><p>Reports pod/node status to API server</p>
</li>
<li><p>Restarts containers if required</p>
</li>
<li><p>Handles pod lifecycle management</p>
</li>
</ul>
<hr />
<h2 id="heading-10-day-to-day-kubernetes-activities-devops-engineer"><strong>10. Day-to-Day Kubernetes Activities (DevOps Engineer)</strong></h2>
<p>A strong interview-ready answer:</p>
<ul>
<li><p>Deploy and manage applications on Kubernetes</p>
</li>
<li><p>Monitor cluster health and workloads</p>
</li>
<li><p>Troubleshoot pod failures, service issues, networking problems</p>
</li>
<li><p>Handle upgrades and maintenance of master/worker nodes</p>
</li>
<li><p>Manage RBAC, namespaces, resource quotas</p>
</li>
<li><p>Support developers with deployment issues</p>
</li>
<li><p>Manage CI/CD pipeline deployments to Kubernetes</p>
</li>
<li><p>Handle cluster security, patching, vulnerabilities</p>
</li>
<li><p>Operate logging and monitoring (Prometheus, Grafana, Loki, EFK)</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Day 31 - KUBERNETES MONITORING USING PROMETHEUS & GRAFANA]]></title><description><![CDATA[In this session, we learn how to monitor a Kubernetes cluster using Prometheus and Grafana.This is not just theory — there is a GitHub repository containing all installation commands and demo steps.The repo will also be enhanced later with advanced K...]]></description><link>https://blog.dineshcloud.in/day-31-kubernetes-monitoring-using-prometheus-and-grafana</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-31-kubernetes-monitoring-using-prometheus-and-grafana</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 13:19:52 GMT</pubDate><content:encoded><![CDATA[<p>In this session, we learn how to monitor a Kubernetes cluster using <strong>Prometheus</strong> and <strong>Grafana</strong>.<br />This is not just theory — there is a GitHub repository containing all installation commands and demo steps.<br />The repo will also be enhanced later with <em>advanced Kubernetes monitoring</em> and <em>custom metric server</em> topics, so you can star it and follow future updates.</p>
<hr />
<h2 id="heading-why-monitoring">✅ <strong>WHY MONITORING?</strong></h2>
<p>If you have only <strong>one Kubernetes cluster</strong>, monitoring is easy.<br />But in a real company:</p>
<ul>
<li><p>Multiple teams use the same cluster.</p>
</li>
<li><p>Teams complain: <em>“My deployment is not receiving requests”,</em> or <em>“Service was down for some time.”</em></p>
</li>
<li><p>You may have multiple clusters: <strong>dev</strong>, <strong>staging</strong>, <strong>prod</strong>.</p>
</li>
</ul>
<p>As clusters increase, <strong>you need a monitoring solution</strong> to understand:</p>
<ul>
<li><p>What is happening inside your clusters?</p>
</li>
<li><p>Which deployment is down?</p>
</li>
<li><p>Is API server healthy?</p>
</li>
<li><p>Are replica counts matching?</p>
</li>
<li><p>Are nodes running or not?</p>
</li>
</ul>
<hr />
<h2 id="heading-why-prometheus">✅ <strong>WHY PROMETHEUS?</strong></h2>
<p>Prometheus was created at SoundCloud and is now a fully open-source, CNCF-graduated project.</p>
<p>Prometheus:</p>
<ul>
<li><p>Scrapes metrics from Kubernetes.</p>
</li>
<li><p>Stores metrics in a <strong>Time Series Database</strong>.</p>
</li>
<li><p>Can trigger alerts using <strong>Alertmanager</strong>.</p>
</li>
<li><p>Provides a UI for running <strong>PromQL</strong> queries.</p>
</li>
</ul>
<hr />
<h2 id="heading-why-grafana">✅ <strong>WHY GRAFANA?</strong></h2>
<p>Prometheus returns the data you query, but its built-in visualization is minimal and not visually appealing.</p>
<p>Grafana:</p>
<ul>
<li><p>Connects to Prometheus as a <strong>data source</strong>.</p>
</li>
<li><p>Visualizes data using dashboards.</p>
</li>
<li><p>Makes metrics easy to understand.</p>
</li>
</ul>
<hr />
<h1 id="heading-prometheus-architecture-simple-explanation">✅ <strong>PROMETHEUS ARCHITECTURE (Simple Explanation)</strong></h1>
<p>Prometheus includes:</p>
<h3 id="heading-1-prometheus-server"><strong>1️⃣ Prometheus Server</strong></h3>
<ul>
<li><p>Scrapes metrics from Kubernetes API Server.</p>
</li>
<li><p>Stores metrics in time-series format on disk.</p>
</li>
</ul>
<h3 id="heading-2-kubernetes-api-server"><strong>2️⃣ Kubernetes API Server</strong></h3>
<ul>
<li><p>Exposes built-in metrics at:<br />  <code>/metrics</code></p>
</li>
<li><p>Shows default cluster metrics.</p>
</li>
</ul>
<h3 id="heading-3-alertmanager"><strong>3️⃣ Alertmanager</strong></h3>
<ul>
<li><p>Prometheus pushes alerts to it.</p>
</li>
<li><p>Alertmanager sends notifications (Slack, Email, etc.).</p>
</li>
</ul>
<h3 id="heading-4-promql-interface"><strong>4️⃣ PromQL Interface</strong></h3>
<ul>
<li>Used in Prometheus UI or Grafana to run queries.</li>
</ul>
<h3 id="heading-5-external-access"><strong>5️⃣ External Access</strong></h3>
<ul>
<li><p>Grafana pulls data from Prometheus.</p>
</li>
<li><p>Tools like curl or Postman can also query Prometheus APIs.</p>
</li>
</ul>
<hr />
<h1 id="heading-grafana-use">✅ <strong>GRAFANA USE</strong></h1>
<p>Grafana helps visualize Prometheus data through graphs and dashboards.</p>
<hr />
<h1 id="heading-demo-install-prometheus-grafana-on-minikube">✅ <strong>DEMO: INSTALL PROMETHEUS + GRAFANA ON MINIKUBE</strong></h1>
<p>We create a Kubernetes cluster using Minikube:</p>
<pre><code class="lang-plaintext">minikube start --memory=4096 --driver=hyperkit
</code></pre>
<p>(Use hyperkit on Mac, VirtualBox or Docker on other systems.)</p>
<hr />
<h1 id="heading-install-prometheus-using-helm">🔥 <strong>INSTALL PROMETHEUS USING HELM</strong></h1>
<ol>
<li>Add the Prometheus Helm repo:</li>
</ol>
<pre><code class="lang-plaintext">helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
</code></pre>
<ol start="2">
<li>Install Prometheus:</li>
</ol>
<pre><code class="lang-plaintext">helm install prometheus prometheus-community/prometheus
</code></pre>
<ol start="3">
<li>Check pods:</li>
</ol>
<pre><code class="lang-plaintext">kubectl get pods -A
</code></pre>
<p>You will see:</p>
<ul>
<li><p>Prometheus server</p>
</li>
<li><p>Prometheus alertmanager</p>
</li>
<li><p>kube-state-metrics</p>
</li>
<li><p>node-exporter</p>
</li>
</ul>
<h3 id="heading-kube-state-metrics">❗ kube-state-metrics</h3>
<p>This component provides <strong>extra Kubernetes metrics</strong> NOT available in the Kubernetes API server.<br />It exposes metrics for:</p>
<ul>
<li><p>Deployments</p>
</li>
<li><p>Daemonsets</p>
</li>
<li><p>Pods</p>
</li>
<li><p>ReplicaSets</p>
</li>
<li><p>Services</p>
</li>
<li><p>Replica count</p>
</li>
<li><p>Desired vs Actual state</p>
</li>
</ul>
<p>Without kube-state-metrics you only get basic Kubernetes metrics.</p>
<hr />
<h1 id="heading-expose-prometheus-server-using-nodeport">🔥 <strong>EXPOSE PROMETHEUS SERVER USING NODEPORT</strong></h1>
<p>Default service is ClusterIP, so expose it:</p>
<pre><code class="lang-plaintext">kubectl expose service prometheus-server --type=NodePort --name=prometheus-server-ext
</code></pre>
<p>Get service:</p>
<pre><code class="lang-plaintext">kubectl get svc
</code></pre>
<p>Use Minikube IP:</p>
<pre><code class="lang-plaintext">minikube ip
</code></pre>
<p>Open Prometheus:</p>
<pre><code class="lang-plaintext">http://&lt;minikube-ip&gt;:&lt;node-port&gt;
</code></pre>
<p>You can now run PromQL queries.</p>
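<p>For example, two simple queries to try in the Prometheus UI (the second assumes kube-state-metrics is installed, which this Helm chart includes):</p>
<pre><code class="lang-plaintext">up                      # which scrape targets are healthy (1 = up)
kube_pod_status_phase   # pod phases, exposed by kube-state-metrics
</code></pre>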
<hr />
<h1 id="heading-install-grafana-using-helm">🔥 <strong>INSTALL GRAFANA USING HELM</strong></h1>
<ol>
<li>Add Grafana repo:</li>
</ol>
<pre><code class="lang-plaintext">helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
</code></pre>
<ol start="2">
<li>Install Grafana:</li>
</ol>
<pre><code class="lang-plaintext">helm install grafana grafana/grafana
</code></pre>
<ol start="3">
<li>Get admin password:</li>
</ol>
<pre><code class="lang-plaintext">kubectl get secret -n default grafana -o jsonpath="{.data.admin-password}" | base64 --decode
</code></pre>
<ol start="4">
<li>Expose Grafana:</li>
</ol>
<pre><code class="lang-plaintext">kubectl expose service grafana --type=NodePort --name=grafana-ext
</code></pre>
<ol start="5">
<li>Access Grafana:</li>
</ol>
<pre><code class="lang-plaintext">http://&lt;minikube-ip&gt;:&lt;node-port&gt;
</code></pre>
<p>Login using:</p>
<ul>
<li><p><strong>User</strong>: admin</p>
</li>
<li><p><strong>Password</strong>: (from command above)</p>
</li>
</ul>
<hr />
<h1 id="heading-add-prometheus-as-a-datasource-in-grafana">🔥 <strong>ADD PROMETHEUS AS A DATASOURCE IN GRAFANA</strong></h1>
<p>Grafana → <em>Data Sources</em> → Add → Select <strong>Prometheus</strong></p>
<p>Enter URL:</p>
<pre><code class="lang-plaintext">http://&lt;minikube-ip&gt;:&lt;prometheus-nodeport&gt;
</code></pre>
<p>Save &amp; Test → should show <strong>Data source is working</strong>.</p>
<hr />
<h1 id="heading-import-kubernetes-dashboard-id-3662">🔥 <strong>IMPORT KUBERNETES DASHBOARD (ID: 3662)</strong></h1>
<p>Grafana → Dashboards → Import → Enter ID:</p>
<pre><code class="lang-plaintext">3662
</code></pre>
<p>Select data source = Prometheus → Import.</p>
<p>A beautiful Kubernetes dashboard appears showing:</p>
<ul>
<li><p>API server health</p>
</li>
<li><p>Nodes</p>
</li>
<li><p>CPU &amp; Memory usage</p>
</li>
<li><p>Cluster info</p>
</li>
<li><p>Pod counts</p>
</li>
<li><p>Node uptime</p>
</li>
</ul>
<hr />
<h1 id="heading-enable-kube-state-metrics-for-deployment-level-metrics">🔥 <strong>ENABLE kube-state-metrics FOR DEPLOYMENT-LEVEL METRICS</strong></h1>
<p>Expose kube-state-metrics:</p>
<pre><code class="lang-plaintext">kubectl expose service prometheus-kube-state-metrics --type=NodePort --name=kube-state-metrics-ext --target-port=8080
</code></pre>
<p>Get NodePort:</p>
<pre><code class="lang-plaintext">kubectl get svc
</code></pre>
<p>Open:</p>
<pre><code class="lang-plaintext">http://&lt;minikube-ip&gt;:&lt;nodeport&gt;/metrics
</code></pre>
<p>Now you will see <strong>deployment-level, pod-level, and service-level metrics</strong>.</p>
<p>These metrics appear inside your Grafana dashboards.</p>
<hr />
<h1 id="heading-end-result">🎉 <strong>END RESULT</strong></h1>
<p>You successfully installed:</p>
<p>✔️ Kubernetes Cluster (Minikube)<br />✔️ Prometheus<br />✔️ Grafana<br />✔️ kube-state-metrics<br />✔️ Kubernetes monitoring dashboard (ID: 3662)</p>
<p>Now your Grafana shows:</p>
<ul>
<li><p>Nodes</p>
</li>
<li><p>API Server</p>
</li>
<li><p>Pods</p>
</li>
<li><p>Deployments</p>
</li>
<li><p>Resource usage</p>
</li>
<li><p>Replica counts</p>
</li>
<li><p>Cluster uptime</p>
</li>
<li><p>Realtime metrics</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Day 30 - Kubernetes ConfigMaps & Secrets]]></title><description><![CDATA[1. What is a ConfigMap in Kubernetes?
A ConfigMap is used to store non-sensitive configuration data that your application needs — such as:

Database port

Connection type

Any general configuration values


In normal applications (non-Kubernetes), de...]]></description><link>https://blog.dineshcloud.in/day-30-kubernetes-configmaps-and-secrets</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-30-kubernetes-configmaps-and-secrets</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 13:18:54 GMT</pubDate><content:encoded><![CDATA[<h1 id="heading-1-what-is-a-configmap-in-kubernetes"><strong>1. What is a ConfigMap in Kubernetes?</strong></h1>
<p>A <strong>ConfigMap</strong> is used to store <strong>non-sensitive configuration data</strong> that your application needs — such as:</p>
<ul>
<li><p>Database port</p>
</li>
<li><p>Connection type</p>
</li>
<li><p>Any general configuration values</p>
</li>
</ul>
<p>In non-Kubernetes applications, developers typically store such information using:</p>
<ul>
<li><p>Environment variables</p>
</li>
<li><p>Configuration files</p>
</li>
<li><p>OS environment values</p>
</li>
</ul>
<p><strong>Why?</strong> Because you should <em>never hardcode</em> values inside an application, especially values that might change in the future.</p>
<h3 id="heading-in-kubernetes-a-configmap-helps-you-store-such-non-sensitive-values-and-inject-them-into-pods">⭐ In Kubernetes, a ConfigMap helps you store such non-sensitive values and inject them into Pods:</h3>
<p>You can inject ConfigMap data into a Pod as:</p>
<ol>
<li><p><strong>Environment Variables</strong>, or</p>
</li>
<li><p><strong>Files using Volume Mounts</strong></p>
</li>
</ol>
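<p>A minimal sketch of the first pattern (names and values here are illustrative):</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: ConfigMap
metadata:
  name: db-config
data:
  DB_PORT: "3306"
---
# inside a Pod/Deployment container spec, injected as an env variable:
env:
  - name: DB_PORT
    valueFrom:
      configMapKeyRef:
        name: db-config
        key: DB_PORT
</code></pre>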
<hr />
<h1 id="heading-2-why-do-secrets-exist-if-configmaps-already-store-data"><strong>2. Why do Secrets exist if ConfigMaps already store data?</strong></h1>
<p>Because ConfigMaps store data <strong>in plain text</strong>.</p>
<p>In Kubernetes:</p>
<ul>
<li><p>All resources (Pods, Deployments, ConfigMaps, etc.) are stored in <strong>etcd</strong>.</p>
</li>
<li><p>ConfigMap values are <strong>NOT encrypted</strong> in etcd.</p>
</li>
</ul>
<p>If a hacker gains access to etcd, they can read ConfigMap data easily.</p>
<p>But for sensitive data like:</p>
<ul>
<li><p>DB Password</p>
</li>
<li><p>API Keys</p>
</li>
<li><p>Token</p>
</li>
<li><p>Certificates</p>
</li>
</ul>
<p><strong>Kubernetes provides Secrets.</strong></p>
<hr />
<h1 id="heading-3-what-is-a-secret-why-is-it-different"><strong>3. What is a Secret? Why is it different?</strong></h1>
<p>A <strong>Secret</strong> is used to store <strong>sensitive</strong> information.</p>
<p>Kubernetes Secrets provide security in two major ways:</p>
<h3 id="heading-1-secrets-are-encrypted-at-rest-in-etcd"><strong>1. Secrets can be encrypted at rest (in etcd)</strong></h3>
<p>By default, Secret values are only <strong>base64-encoded</strong> in etcd, but Kubernetes lets cluster administrators enable encryption at rest for Secrets (via an EncryptionConfiguration).<br />With encryption enabled, even someone with etcd access cannot read the contents without the decryption key.</p>
<h3 id="heading-2-you-can-enforce-strong-rbac-on-secrets"><strong>2. You can enforce strong RBAC on Secrets</strong></h3>
<p>You can configure RBAC so that:</p>
<ul>
<li><p>Developers can access Pods, Deployments, ConfigMaps</p>
</li>
<li><p>But <em>cannot</em> access Secrets</p>
</li>
</ul>
<p>This protects sensitive data even more.</p>
<hr />
<h1 id="heading-4-configmap-vs-secret-interview-style-answer"><strong>4. ConfigMap vs Secret — Interview Style Answer</strong></h1>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>ConfigMap</td><td>Secret</td></tr>
</thead>
<tbody>
<tr>
<td>Purpose</td><td>Store non-sensitive config</td><td>Store sensitive data</td></tr>
<tr>
<td>Encryption at rest</td><td>❌ No</td><td>✅ Yes (when enabled)</td></tr>
<tr>
<td>RBAC use</td><td>Normal</td><td>Strongly recommended</td></tr>
<tr>
<td>Storage in etcd</td><td>Plain text</td><td>Base64 (+ optional encryption)</td></tr>
<tr>
<td>Use case</td><td>Ports, URLs, Settings</td><td>Passwords, keys, tokens</td></tr>
</tbody>
</table>
</div><p><strong>Both</strong> are used to pass data into Pods, but:</p>
<ul>
<li><p><strong>ConfigMap = non-sensitive</strong></p>
</li>
<li><p><strong>Secret = sensitive</strong></p>
</li>
</ul>
<hr />
<h1 id="heading-5-how-kubernetes-handles-configmap-amp-secret-creation"><strong>5. How Kubernetes handles ConfigMap &amp; Secret creation</strong></h1>
<p>When you create:</p>
<ul>
<li><p>ConfigMap or</p>
</li>
<li><p>Secret</p>
</li>
</ul>
<p>using <code>kubectl apply -f file.yaml</code>:</p>
<p>✔ The API server validates the YAML<br />✔ Stores it inside etcd<br />✔ Makes it available to Pods</p>
<hr />
<h1 id="heading-6-problem-with-environment-variable-updates"><strong>6. Problem with environment variable updates</strong></h1>
<p>If you use <strong>ConfigMap as environment variables</strong>, there is a limitation:</p>
<p>❗ <strong>If ConfigMap value changes → existing Pods will NOT update automatically.</strong></p>
<p>Why?</p>
<p>Because container environment variables <strong>cannot be changed</strong> without restarting the container.</p>
<p>This is a well-known container limitation.</p>
<h3 id="heading-solution">Solution</h3>
<p>Use <strong>volume mounts</strong> instead of environment variables.</p>
<hr />
<h1 id="heading-7-using-configmap-as-a-volume-mount"><strong>7. Using ConfigMap as a Volume Mount</strong></h1>
<p>When you mount a ConfigMap as a volume:</p>
<ul>
<li><p>Kubernetes maps each key-value pair as a separate file.</p>
</li>
<li><p>When the ConfigMap updates:</p>
<ul>
<li><p>Kubernetes updates the file contents inside the container automatically</p>
</li>
<li><p><strong>Without restarting the Pod</strong></p>
</li>
</ul>
</li>
</ul>
<p>This is extremely useful for dynamic configuration.</p>
<p>Example:</p>
<pre><code class="lang-plaintext">/opt/DBPort
</code></pre>
<p>contains:</p>
<pre><code class="lang-plaintext">3306
</code></pre>
<p>If the ConfigMap changes to <code>3307</code>, the mounted file updates automatically after a short delay (tied to the kubelet sync period, typically within a minute).</p>
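<p>A sketch of the volume-mount pattern (assuming the ConfigMap is named <code>db-config</code>; each key becomes a file under the mount path):</p>
<pre><code class="lang-plaintext"># inside the Pod spec:
volumes:
  - name: db-config-vol
    configMap:
      name: db-config

# inside the container spec:
volumeMounts:
  - name: db-config-vol
    mountPath: /opt/db   # e.g. key DB_PORT appears as /opt/db/DB_PORT
</code></pre>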
<hr />
<h1 id="heading-8-why-volume-mount-method-is-more-dynamic"><strong>8. Why volume mount method is more dynamic</strong></h1>
<p>✔ Pod does NOT restart<br />✔ Container does NOT restart<br />✔ File content updates automatically<br />✔ Application can read updated values</p>
<p>If your app watches the file, it can adjust configuration dynamically.</p>
<hr />
<h1 id="heading-9-secrets-work-the-exact-same-way"><strong>9. Secrets work the exact same way</strong></h1>
<p>Just like ConfigMaps:</p>
<p>✔ Secrets can be injected</p>
<ul>
<li><p>As environment variables, or</p>
</li>
<li><p>As volume mounts</p>
</li>
</ul>
<p>✔ Updating the Secret updates the mounted file automatically.</p>
<p>✔ But environment variables still require Pod restart.</p>
<hr />
<h1 id="heading-10-bonus-types-of-secrets"><strong>10. Bonus: Types of Secrets</strong></h1>
<p>Kubernetes supports:</p>
<ul>
<li><p><code>Opaque</code> (the default; created with <code>kubectl create secret generic</code>)</p>
</li>
<li><p><code>kubernetes.io/dockerconfigjson</code> (created with <code>kubectl create secret docker-registry</code>, for pulling from private registries)</p>
</li>
<li><p><code>kubernetes.io/tls</code> (for certificates)</p>
</li>
<li><p><code>kubernetes.io/service-account-token</code></p>
</li>
</ul>
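<p>For example, a generic Secret can be created from literals (the name and values below are placeholders):</p>
<pre><code class="lang-plaintext">kubectl create secret generic db-creds \
  --from-literal=DB_USER=admin \
  --from-literal=DB_PASSWORD=changeme

kubectl get secret db-creds -o yaml   # values appear base64-encoded
</code></pre>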
<hr />
<h1 id="heading-conclusion"><strong>Conclusion</strong></h1>
<p>The full explanation boils down to this:</p>
<h3 id="heading-configmap">⭐ ConfigMap</h3>
<p>Used to store <strong>non-sensitive</strong> configuration.<br />Can be used as environment variables or mounted as files.</p>
<h3 id="heading-secret">⭐ Secret</h3>
<p>Used to store <strong>sensitive</strong> configuration.<br />Encrypted in etcd and protected by RBAC.</p>
<h3 id="heading-environment-variable-method">⭐ Environment variable method</h3>
<p>Dynamic updates <strong>do not</strong> reflect without Pod restart.</p>
<h3 id="heading-volume-mount-method">⭐ Volume mount method</h3>
<p>Dynamic updates reflect <strong>automatically</strong> inside the container.</p>
]]></content:encoded></item><item><title><![CDATA[Day 29 - Kubernetes Custom Resources]]></title><description><![CDATA[Kubernetes normally supports built-in resources like:

Deployment

Service

Pod

ConfigMap

Secret

Ingress


These are called native resources.
Sometimes companies (Istio, ArgoCD, Prometheus Operator, Kyverno, etc.) want to add new features that Kub...]]></description><link>https://blog.dineshcloud.in/day-29-kubernetes-custom-resources</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-29-kubernetes-custom-resources</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:37:20 GMT</pubDate><content:encoded><![CDATA[<p>Kubernetes normally supports built-in resources like:</p>
<ul>
<li><p><strong>Deployment</strong></p>
</li>
<li><p><strong>Service</strong></p>
</li>
<li><p><strong>Pod</strong></p>
</li>
<li><p><strong>ConfigMap</strong></p>
</li>
<li><p><strong>Secret</strong></p>
</li>
<li><p><strong>Ingress</strong></p>
</li>
</ul>
<p>These are called <strong>native resources</strong>.</p>
<p>Sometimes companies (Istio, ArgoCD, Prometheus Operator, Kyverno, etc.) want to add <strong>new features</strong> that Kubernetes does not support by default.</p>
<p>To do this, Kubernetes allows you to:</p>
<p>👉 <strong>Extend the Kubernetes API</strong><br />This extension is done using:</p>
<ol>
<li><p><strong>CRD — Custom Resource Definition</strong></p>
</li>
<li><p><strong>CR — Custom Resource</strong></p>
</li>
<li><p><strong>Custom Controller</strong></p>
</li>
</ol>
<hr />
<h1 id="heading-1-crd-custom-resource-definition">🟦 <strong>1. CRD — Custom Resource Definition</strong></h1>
<p><strong>CRD = Definition / Schema of a new Kubernetes API.</strong></p>
<p>It tells Kubernetes:</p>
<ul>
<li><p>What is the <strong>name</strong> of the new resource?</p>
</li>
<li><p>What are the <strong>fields</strong> allowed?</p>
</li>
<li><p>What is the <strong>API version</strong> and <strong>kind</strong>?</p>
</li>
<li><p>What does the YAML structure look like?</p>
</li>
</ul>
<p>Example: Istio defines a new resource called <strong>VirtualService</strong>.</p>
<p>Istio provides a <strong>CRD</strong> so Kubernetes understands:</p>
<pre><code class="lang-plaintext">apiVersion: networking.istio.io/v1beta1
kind: VirtualService
</code></pre>
<p>CRD is installed <strong>once</strong> by DevOps engineers (usually via Helm or operator).</p>
<h3 id="heading-purpose-of-crd">Purpose of CRD:</h3>
<p>✔ Introduces a <strong>new type of resource</strong> into Kubernetes<br />✔ Validates all CR created by users<br />✔ Extends Kubernetes API</p>
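<p>A heavily trimmed sketch of what a CRD manifest looks like (the group, names, and schema here are illustrative, not Istio's actual definition):</p>
<pre><code class="lang-plaintext">apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: virtualservices.networking.example.com   # must be &lt;plural&gt;.&lt;group&gt;
spec:
  group: networking.example.com
  scope: Namespaced
  names:
    kind: VirtualService
    plural: virtualservices
    singular: virtualservice
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object      # real CRDs define the allowed fields here
</code></pre>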
<hr />
<h1 id="heading-2-cr-custom-resource">🟦 <strong>2. CR — Custom Resource</strong></h1>
<p><strong>CR = User-created object based on the CRD.</strong></p>
<p>Example: After installing the Istio VirtualService CRD, a user can create:</p>
<pre><code class="lang-plaintext">apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  ...
</code></pre>
<p>This YAML is the <strong>Custom Resource</strong>.</p>
<h3 id="heading-summary">Summary:</h3>
<ul>
<li><p>CRD = Template / Model / Schema</p>
</li>
<li><p>CR = Actual object created by the user</p>
</li>
</ul>
<p>Same logic as:</p>
<ul>
<li><p>Deployment resource definition (built-in)</p>
</li>
<li><p>Deployment YAML (your object)</p>
</li>
</ul>
<hr />
<h1 id="heading-3-custom-controller">🟦 <strong>3. Custom Controller</strong></h1>
<p>CRD + CR <strong>alone do nothing</strong>.<br />Like Ingress without an Ingress controller → <strong>useless</strong>.</p>
<p>A <strong>Custom Controller</strong> is required to <em>watch</em> the CR and take action.</p>
<p>For example:</p>
<h3 id="heading-in-istio">In Istio:</h3>
<ul>
<li><p>VirtualService CRDs define the API</p>
</li>
<li><p>VirtualService CRs define the config</p>
</li>
<li><p>Istio Controller watches those CRs</p>
</li>
<li><p>Then configures Envoy proxy accordingly</p>
</li>
</ul>
<h3 id="heading-controller-responsibilities">Controller Responsibilities:</h3>
<p>✔ Watches for <strong>create / update / delete</strong> of CR<br />✔ Performs actions on cluster<br />✔ Maintains desired state</p>
<hr />
<h1 id="heading-flow-diagram-short-amp-clear">⭐ <strong>Flow Diagram (Short &amp; Clear)</strong></h1>
<pre><code class="lang-plaintext">DevOps Engineer:
    1. Install CRD → Adds new API to Kubernetes
    2. Install Controller → Logic to handle CRs

User / Developer:
    3. Creates Custom Resource (CR)

Controller:
    4. Watches CR
    5. Performs required actions
</code></pre>
<hr />
<h1 id="heading-simple-example-using-istio">⭐ <strong>Simple Example Using Istio</strong></h1>
<h3 id="heading-step-1-devops-installs-crds">Step 1: DevOps installs CRDs</h3>
<p>Istio CRDs include:</p>
<ul>
<li><p>VirtualService</p>
</li>
<li><p>DestinationRule</p>
</li>
<li><p>Gateway</p>
</li>
<li><p>PeerAuthentication<br />  …</p>
</li>
</ul>
<h3 id="heading-step-2-devops-installs-istio-controller">Step 2: DevOps installs Istio Controller</h3>
<p>This controller will watch all Istio CRs.</p>
<h3 id="heading-step-3-user-creates-cr">Step 3: User creates CR</h3>
<pre><code class="lang-plaintext">kind: VirtualService
...
</code></pre>
<h3 id="heading-step-4-istio-controller-sees-it-and-configures-envoy-proxies">Step 4: Istio Controller sees it and configures Envoy proxies.</h3>
<hr />
<h1 id="heading-how-custom-controllers-are-written">⭐ <strong>How Custom Controllers Are Written</strong></h1>
<p>Usually written in <strong>Go</strong> because:</p>
<ul>
<li><p>Kubernetes itself is written in Go</p>
</li>
<li><p>Official client library <strong>client-go</strong></p>
</li>
<li><p>Best support + ecosystem</p>
</li>
</ul>
<p>Process (high-level):</p>
<ol>
<li><p>Set watchers for CR events (Add/Update/Delete)</p>
</li>
<li><p>Add events to a workqueue</p>
</li>
<li><p>Process each event</p>
</li>
<li><p>Take action → create/update Kubernetes resources or external systems</p>
</li>
</ol>
<p>Frameworks used:</p>
<ul>
<li><p><strong>controller-runtime</strong></p>
</li>
<li><p><strong>operator-sdk</strong> (for building operators)</p>
</li>
</ul>
<hr />
<h1 id="heading-key-points-to-remember-interview-friendly">⭐ <strong>Key Points to Remember (Interview-Friendly)</strong></h1>
<ul>
<li><p><strong>CRD</strong> adds a new resource type to Kubernetes.</p>
</li>
<li><p><strong>CR</strong> is an instance of that resource.</p>
</li>
<li><p><strong>Controller</strong> makes the CR actually do something.</p>
</li>
<li><p><strong>CRD = schema</strong>, <strong>CR = object</strong>, <strong>Controller = brain/logic</strong>.</p>
</li>
<li><p>Without the controller, a CR does nothing.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Day 28 - Kubernetes Service, Ingress, TLS & Ingress Controllers]]></title><description><![CDATA[1. Why Kubernetes Services Are Needed
When a Pod is created in Kubernetes, it receives a dynamic IP address.If the Pod dies and restarts, its IP changes.So other Pods (like checkout → payments) cannot rely on Pod IP because it changes, creating issue...]]></description><link>https://blog.dineshcloud.in/day-28-kubernetes-service-ingress-tls-and-ingress-controllers</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-28-kubernetes-service-ingress-tls-and-ingress-controllers</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:35:45 GMT</pubDate><content:encoded><![CDATA[<h1 id="heading-1-why-kubernetes-services-are-needed">1. Why Kubernetes Services Are Needed</h1>
<p>When a Pod is created in Kubernetes, it receives a <strong>dynamic IP address</strong>.<br />If the Pod dies and restarts, its IP changes.<br />So other Pods (like <em>checkout → payments</em>) cannot rely on a Pod IP: calls to the old IP start failing (connection refused or timeouts), effectively breaking the application.</p>
<h3 id="heading-solution-kubernetes-service">Solution → <strong>Kubernetes Service</strong></h3>
<p>A Service gives a <strong>stable virtual IP</strong> (ClusterIP) that stays constant even if Pods change.</p>
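<p>A minimal ClusterIP Service sketch (the service and label names are illustrative; <code>type</code> defaults to ClusterIP when omitted):</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Service
metadata:
  name: payments
spec:
  selector:
    app: payments
  ports:
    - port: 80
      targetPort: 8000
</code></pre>
<p>Other Pods can now call <code>payments.default.svc</code> regardless of which Pod IPs currently back it.</p>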
<hr />
<h1 id="heading-2-types-of-kubernetes-services">2. Types of Kubernetes Services</h1>
<h3 id="heading-1-clusterip-default"><strong>(1) ClusterIP (default)</strong></h3>
<ul>
<li><p>Only accessible <strong>inside the cluster</strong></p>
</li>
<li><p>Used for internal communication (e.g., checkout → payments)</p>
</li>
</ul>
<h3 id="heading-2-nodeport"><strong>(2) NodePort</strong></h3>
<ul>
<li><p>Exposes service on each node via a port between <strong>30000–32767</strong></p>
</li>
<li><p>Access: <code>NodeIP:NodePort</code></p>
</li>
<li><p>Problems:</p>
<ul>
<li><p>Random high port → cannot open all ports in firewall</p>
</li>
<li><p>Nodes might not be accessible from outside</p>
</li>
<li><p>Not secure for production</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-3-loadbalancer"><strong>(3) LoadBalancer</strong></h3>
<ul>
<li><p>Cloud provider provisions an <strong>external IP</strong></p>
</li>
<li><p>Works well on AWS, Azure, GCP</p>
</li>
<li><p>Drawbacks:</p>
<ul>
<li><p>One LoadBalancer = one external IP</p>
</li>
<li><p>Expensive when you have many services (100+ services = 100+ LoadBalancers)</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-loadbalancer-on-bare-metal">LoadBalancer on bare metal?</h3>
<p>Yes. Use <strong>MetalLB</strong> (CNCF project) to simulate LoadBalancer in on-prem or home labs.</p>
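<p>With recent MetalLB versions (v0.13+), configuration is done through CRs; a rough sketch, with an address range you would adapt to your own network:</p>
<pre><code class="lang-plaintext">apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.64.100-192.168.64.110
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
</code></pre>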
<hr />
<h1 id="heading-3-why-we-need-ingress">3. Why We Need Ingress</h1>
<p>LoadBalancer works but becomes <strong>expensive</strong> and <strong>hard to manage</strong> if you have many services.</p>
<h3 id="heading-ingress-solves-2-big-problems">Ingress solves 2 big problems:</h3>
<h3 id="heading-1-reduce-cost"><strong>1. Reduce cost</strong></h3>
<p>One public IP → route traffic to many services:<br /><code>example.com/login</code><br /><code>example.com/checkout</code><br /><code>example.com/payments</code></p>
<h3 id="heading-2-advanced-routing"><strong>2. Advanced routing</strong></h3>
<p>Ingress supports:</p>
<ul>
<li><p>Host-based routing (<a target="_blank" href="http://foo.example.com">foo.example.com</a>)</p>
</li>
<li><p>Path-based routing (/checkout, /pay)</p>
</li>
<li><p>Wildcards (<code>*.</code><a target="_blank" href="http://example.com"><code>example.com</code></a>)</p>
</li>
<li><p>Authentication (BasicAuth)</p>
</li>
<li><p>Web Application Firewall (in some controllers)</p>
</li>
<li><p>TLS/SSL termination</p>
</li>
</ul>
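<p>For example, path-based routing for two of these services can be expressed roughly like this (the service names and ports are assumptions for illustration):</p>
<pre><code class="lang-plaintext">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-routing
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /checkout
            pathType: Prefix
            backend:
              service:
                name: checkout-service
                port:
                  number: 80
          - path: /payments
            pathType: Prefix
            backend:
              service:
                name: payments-service
                port:
                  number: 80
</code></pre>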
<hr />
<h1 id="heading-4-what-is-an-ingress">4. What Is an Ingress?</h1>
<p>An <strong>Ingress</strong> is a Kubernetes object containing traffic-routing rules:</p>
<ul>
<li><p>Which host goes to which service</p>
</li>
<li><p>Which path goes where</p>
</li>
<li><p>TLS settings</p>
</li>
</ul>
<p>But <strong>Ingress alone does nothing</strong>.</p>
<hr />
<h1 id="heading-5-what-is-an-ingress-controller">5. What Is an Ingress Controller?</h1>
<p>It is a <strong>software component</strong> that:</p>
<ol>
<li><p>Watches Ingress resources</p>
</li>
<li><p>Reads routing rules</p>
</li>
<li><p>Updates its internal load balancer (e.g., nginx.conf)</p>
</li>
</ol>
<p>Examples:</p>
<ul>
<li><p><strong>NGINX Ingress Controller</strong></p>
</li>
<li><p><strong>HAProxy Ingress</strong></p>
</li>
<li><p><strong>Traefik</strong></p>
</li>
<li><p><strong>Istio Gateway</strong></p>
</li>
<li><p><strong>F5</strong></p>
</li>
<li><p><strong>ALB Ingress (AWS)</strong></p>
</li>
<li><p><strong>Contour</strong></p>
</li>
<li><p>~30+ others</p>
</li>
</ul>
<h3 id="heading-how-it-works">How it works:</h3>
<ul>
<li><p>You install the ingress controller</p>
</li>
<li><p>It runs inside the cluster (except big enterprise LBs)</p>
</li>
<li><p>It watches all Ingress objects</p>
</li>
<li><p>It writes config (e.g., <code>/etc/nginx/nginx.conf</code>)</p>
</li>
<li><p>It handles client traffic and routes correctly</p>
</li>
</ul>
<hr />
<h1 id="heading-6-path-based-amp-host-based-routing-concept">6. Path-Based &amp; Host-Based Routing (Concept)</h1>
<h3 id="heading-host-based-routing"><strong>Host-based routing</strong></h3>
<pre><code class="lang-plaintext">foo.example.com → service A
bar.example.com → service B
</code></pre>
<h3 id="heading-path-based-routing"><strong>Path-based routing</strong></h3>
<pre><code class="lang-plaintext">example.com/checkout → checkout-service
example.com/payments → payments-service
</code></pre>
<h3 id="heading-wildcard-host">Wildcard host (*)</h3>
<pre><code class="lang-plaintext">*.bar.com → same service
</code></pre>
<p>Wildcards are commonly used with TLS (wildcard certificates).</p>
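<p>A TLS section referencing a wildcard certificate might look like this (the secret name is a placeholder; the secret would be created separately, e.g. with <code>kubectl create secret tls</code>):</p>
<pre><code class="lang-plaintext">spec:
  tls:
    - hosts:
        - "*.bar.com"
      secretName: wildcard-bar-com-tls
</code></pre>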
<hr />
<h1 id="heading-7-tls-ssl-in-ingress">7. TLS / SSL in Ingress</h1>
<p>There are <strong>3 ways</strong> to handle HTTPS in Ingress:</p>
<hr />
<h1 id="heading-71-ssl-passthrough">7.1 SSL Passthrough</h1>
<h3 id="heading-how-it-works-1">How it works:</h3>
<ul>
<li><p>Load balancer <strong>does not decrypt</strong></p>
</li>
<li><p>Traffic passes encrypted directly to the backend</p>
</li>
<li><p>Backend Pod decrypts the request</p>
</li>
</ul>
<h3 id="heading-pros">Pros:</h3>
<ul>
<li><p>End-to-end encryption</p>
</li>
<li><p>Maximum privacy (LB can’t see traffic)</p>
</li>
</ul>
<h3 id="heading-cons">Cons:</h3>
<ul>
<li><p>Load balancer cannot:</p>
<ul>
<li><p>inspect packets</p>
</li>
<li><p>block attacks</p>
</li>
<li><p>do routing based on URL path</p>
</li>
</ul>
</li>
<li><p>Backend service handles expensive SSL decryption → <strong>higher CPU usage</strong></p>
</li>
<li><p>LB acts only as TCP forwarder → fewer features</p>
</li>
</ul>
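<p>With the NGINX Ingress Controller specifically, passthrough is typically enabled per-Ingress via an annotation (and the controller itself must be started with the <code>--enable-ssl-passthrough</code> flag):</p>
<pre><code class="lang-plaintext">metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
</code></pre>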
<hr />
<h1 id="heading-72-ssl-offloading-ssl-termination">7.2 SSL Offloading (SSL Termination)</h1>
<h3 id="heading-how-it-works-2">How it works:</h3>
<ul>
<li><p>Load balancer <strong>decrypts traffic</strong></p>
</li>
<li><p>Sends <strong>HTTP (unencrypted)</strong> traffic to backend services</p>
</li>
</ul>
<h3 id="heading-pros-1">Pros:</h3>
<ul>
<li><p>Fastest (backend does not decrypt)</p>
</li>
<li><p>LB can inspect, filter, apply WAF rules, routing, etc.</p>
</li>
<li><p>Good for high traffic loads</p>
</li>
</ul>
<h3 id="heading-cons-1">Cons:</h3>
<ul>
<li><p>Traffic between LB → Pod is <strong>not encrypted</strong></p>
</li>
<li><p>Vulnerable to man-in-the-middle inside the cluster</p>
</li>
<li><p>Not ideal for high-security environments</p>
</li>
</ul>
<hr />
<h1 id="heading-73-ssl-bridging-re-encryption">7.3 SSL Bridging (Re-Encryption)</h1>
<p>Also called <strong>Re-Encrypt</strong> in OpenShift.</p>
<h3 id="heading-how-it-works-3">How it works:</h3>
<ol>
<li><p>Load balancer <strong>decrypts</strong> request</p>
</li>
<li><p>Inspects / applies routing</p>
</li>
<li><p>Load balancer <strong>re-encrypts</strong></p>
</li>
<li><p>Sends encrypted traffic to backend Pod</p>
</li>
</ol>
<h3 id="heading-pros-2">Pros:</h3>
<ul>
<li><p>LB can inspect traffic</p>
</li>
<li><p>Keeps encryption between LB ↔ Pod</p>
</li>
<li><p>Most secure option with advanced LB features</p>
</li>
</ul>
<h3 id="heading-cons-2">Cons:</h3>
<ul>
<li><p>Backend still decrypts → same CPU cost as passthrough</p>
</li>
<li><p>Load balancer also decrypts → more LB CPU usage</p>
</li>
</ul>
<hr />
<h1 id="heading-8-comparison-summary">8. Comparison Summary</h1>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Passthrough</td><td>Offloading</td><td>Bridging</td></tr>
</thead>
<tbody>
<tr>
<td>LB decrypts traffic</td><td>❌ No</td><td>✅ Yes</td><td>✅ Yes</td></tr>
<tr>
<td>LB inspects packets</td><td>❌ No</td><td>✅ Yes</td><td>✅ Yes</td></tr>
<tr>
<td>LB→backend encrypted</td><td>❌ No</td><td>❌ No</td><td>✅ Yes</td></tr>
<tr>
<td>Backend decrypts</td><td>✅ Yes</td><td>❌ No</td><td>✅ Yes</td></tr>
<tr>
<td>Best performance</td><td>❌</td><td>⭐ <strong>Best</strong></td><td>❌</td></tr>
<tr>
<td>Best security</td><td>❌</td><td>❌</td><td>⭐ <strong>Best</strong></td></tr>
<tr>
<td>Best LB features</td><td>❌</td><td>⭐</td><td>⭐</td></tr>
</tbody>
</table>
</div><hr />
<h1 id="heading-9-which-one-should-you-use">9. Which One Should You Use?</h1>
<h3 id="heading-if-you-want-maximum-performance-ssl-offloading">If you want <strong>maximum performance</strong> → <strong>SSL Offloading</strong></h3>
<p>Backend stays free from decryption load.</p>
<h3 id="heading-if-you-want-maximum-security-ssl-bridging">If you want <strong>maximum security</strong> → <strong>SSL Bridging</strong></h3>
<p>Everything encrypted end-to-end + LB security features.</p>
<h3 id="heading-if-you-want-zero-lb-involvement-ssl-passthrough">If you want <strong>zero LB involvement</strong> → <strong>SSL Passthrough</strong></h3>
<p>Not recommended unless required.</p>
]]></content:encoded></item><item><title><![CDATA[Day 27 - Kubernetes Ingress]]></title><description><![CDATA[1. Why People Find Kubernetes Ingress Difficult
Two reasons:

They don’t understand why Ingress is required.

Practical setup fails on Minikube or local clusters because Ingress controller is missing.



2. Before Ingress (Before Kubernetes v1.1)
Peo...]]></description><link>https://blog.dineshcloud.in/day-27-kubernetes-ingress</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-27-kubernetes-ingress</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:34:37 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-1-why-people-find-kubernetes-ingress-difficult"><strong>1. Why People Find Kubernetes Ingress Difficult</strong></h2>
<p>Two reasons:</p>
<ol>
<li><p>They don’t understand <strong>why Ingress is required</strong>.</p>
</li>
<li><p>Practical setup fails on Minikube or local clusters because <strong>Ingress controller is missing</strong>.</p>
</li>
</ol>
<hr />
<h2 id="heading-2-before-ingress-before-kubernetes-v11"><strong>2. Before Ingress (Before Kubernetes v1.1)</strong></h2>
<p>People used Kubernetes with:</p>
<ul>
<li><p><strong>Deployment</strong> → creates Pods</p>
</li>
<li><p><strong>Service</strong> → exposes Pods, provides internal/external LB using kube-proxy</p>
</li>
</ul>
<p>Everything worked fine <strong>until real production needs appeared</strong>.</p>
<hr />
<h2 id="heading-3-problems-people-faced-with-only-services"><strong>3. Problems People Faced With ONLY Services</strong></h2>
<h3 id="heading-problem-1-missing-enterprise-load-balancing-features"><strong>Problem 1 — Missing Enterprise Load Balancing Features</strong></h3>
<p>Traditional VM-based environments had powerful load balancers like:</p>
<ul>
<li><p><strong>Nginx (as LB)</strong></p>
</li>
<li><p><strong>F5</strong></p>
</li>
<li><p><strong>HAProxy</strong></p>
</li>
<li><p><strong>Traefik</strong></p>
</li>
</ul>
<p>These offered advanced features:</p>
<ul>
<li><p>Path-based routing <code>/a → service1</code>, <code>/b → service2</code></p>
</li>
<li><p>Host-based routing <a target="_blank" href="http://app.example.com"><code>app.example.com</code></a> <code>→ service1</code></p>
</li>
<li><p>Sticky sessions</p>
</li>
<li><p>Weighted/ratio-based load balancing (e.g., 70/30 traffic splits)</p>
</li>
<li><p>Whitelisting / blacklisting</p>
</li>
<li><p>TLS/HTTPS termination</p>
</li>
<li><p>WAF<br />  ...and many more</p>
</li>
</ul>
<p>Kubernetes <strong>Services</strong> DID NOT support these.<br />Service only does <strong>simple round-robin load balancing</strong> using <code>kube-proxy</code>.</p>
<p>This made enterprises <strong>unhappy</strong>.</p>
<hr />
<h3 id="heading-problem-2-huge-cost-with-loadbalancer-services"><strong>Problem 2 — Huge Cost with LoadBalancer Services</strong></h3>
<p>When using:</p>
<pre><code class="lang-plaintext">type: LoadBalancer
</code></pre>
<p>Every Service gets a <strong>public static IP</strong>.</p>
<p>In large companies:</p>
<ul>
<li>1000 services = 1000 load balancers = huge cloud cost<br />  Cloud providers charge for every static LB IP.</li>
</ul>
<p>In VMs earlier:</p>
<ul>
<li><p>They used <strong>one load balancer</strong> for all apps.</p>
</li>
<li><p>Routing done by path/host rules.</p>
</li>
</ul>
<p>So Kubernetes LoadBalancer type became <strong>too expensive</strong>.</p>
<hr />
<h2 id="heading-4-kubernetes-adds-a-new-concept-ingress"><strong>4. Kubernetes Adds a New Concept → INGRESS</strong></h2>
<p>To solve above two problems.</p>
<p>Kubernetes said:</p>
<ul>
<li><p>“We will let users create an <strong>Ingress resource</strong>.”</p>
</li>
<li><p>But Kubernetes will <strong>not implement LB logic</strong> for all vendors.</p>
</li>
<li><p>Instead, LB companies must create <strong>Ingress Controllers</strong>.</p>
</li>
</ul>
<hr />
<h2 id="heading-5-what-is-an-ingress-controller"><strong>5. What is an Ingress Controller?</strong></h2>
<p>A <strong>load balancer implementation</strong> inside Kubernetes.</p>
<p>Examples:</p>
<ul>
<li><p>NGINX Ingress Controller</p>
</li>
<li><p>HAProxy Ingress Controller</p>
</li>
<li><p>Traefik</p>
</li>
<li><p>F5 BIG-IP Ingress</p>
</li>
<li><p>Ambassador</p>
</li>
<li><p>Apache APISIX</p>
</li>
</ul>
<p>Workflow:</p>
<ol>
<li><p><strong>User creates Ingress resource</strong> (rules like path/host/TLS).</p>
</li>
<li><p><strong>Ingress controller reads those rules</strong>.</p>
</li>
<li><p><strong>Ingress controller configures the load balancer</strong> inside the cluster.</p>
</li>
</ol>
<p>Without an Ingress controller:</p>
<ul>
<li>Ingress <strong>does nothing</strong>.</li>
</ul>
<hr />
<h2 id="heading-6-practical-understanding-extremely-important"><strong>6. Practical Understanding (Extremely Important)</strong></h2>
<p>Just like:</p>
<ul>
<li><p>Pods → handled by Kubelet</p>
</li>
<li><p>Services → handled by kube-proxy</p>
</li>
<li><p>Deployments → handled by deployment-controller</p>
</li>
</ul>
<p>Ingress → MUST be handled by <strong>Ingress Controller</strong>.</p>
<p>So:<br />✔ You can create 100 Ingress YAML files<br />✖ Nothing will work<br />❗ Unless the Ingress controller is installed.</p>
<hr />
<h2 id="heading-7-common-ingress-controller-installation-minikube-example"><strong>7. Common Ingress Controller Installation (Minikube Example)</strong></h2>
<p>On Minikube, installation is simple:</p>
<pre><code class="lang-plaintext">minikube addons enable ingress
</code></pre>
<p>This deploys:</p>
<ul>
<li><p><code>nginx-ingress-controller</code> Pod</p>
</li>
<li><p>In namespace <code>ingress-nginx</code></p>
</li>
</ul>
<p>After this:</p>
<ul>
<li><p>Ingress controller starts watching your Ingress rules</p>
</li>
<li><p>Updates routing internally</p>
</li>
</ul>
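<p>You can verify both the controller and your rules, for example:</p>
<pre><code class="lang-plaintext">kubectl get pods -n ingress-nginx
kubectl get ingress
</code></pre>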
<hr />
<h2 id="heading-8-example-of-the-ingress-resource-in-your-content"><strong>8. Example of the Ingress Resource in Your Content</strong></h2>
<p>You created:</p>
<ul>
<li><p>Deployment → Pod</p>
</li>
<li><p>Service → NodePort</p>
</li>
<li><p>Ingress → Host-based routing (<a target="_blank" href="http://foo.bar.com"><code>foo.bar.com</code></a>)</p>
</li>
</ul>
<p>Ingress YAML points to:</p>
<pre><code class="lang-plaintext">serviceName: my-service
servicePort: 80
path: /bar
host: foo.bar.com
</code></pre>
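<p>Note that <code>serviceName</code>/<code>servicePort</code> come from the old <code>extensions/v1beta1</code> Ingress API, which was removed in Kubernetes 1.22; the same rule in the current <code>networking.k8s.io/v1</code> API would look roughly like:</p>
<pre><code class="lang-plaintext">apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /bar
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
</code></pre>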
<hr />
<h2 id="heading-9-after-ingress-controller-installs"><strong>9. After Ingress Controller Installs</strong></h2>
<ul>
<li><p>Ingress now gets an <strong>Address (IP)</strong>.</p>
</li>
<li><p>Ingress controller logs show:</p>
<pre><code class="lang-plaintext">  Successfully synced ingress: Ingress example
</code></pre>
</li>
</ul>
<p>This means:</p>
<ul>
<li><p>Controller picked up your Ingress rule</p>
</li>
<li><p>Updated its internal Nginx config</p>
</li>
</ul>
<hr />
<h2 id="heading-10-accessing-ingress-on-local-machine"><strong>10. Accessing Ingress on Local Machine</strong></h2>
<p>Because DNS is not real on local environment:</p>
<ul>
<li><p>You MUST update <code>/etc/hosts</code> file</p>
</li>
<li><p>Map your domain (<a target="_blank" href="http://foo.bar.com"><code>foo.bar.com</code></a>) to Ingress IP</p>
</li>
</ul>
<p>Example:</p>
<pre><code class="lang-plaintext">192.168.64.11 foo.bar.com
</code></pre>
<p>Then you can access:</p>
<pre><code class="lang-plaintext">http://foo.bar.com/bar
</code></pre>
<hr />
<h2 id="heading-11-summary-what-you-should-remember"><strong>11. Summary — What You Should Remember</strong></h2>
<h3 id="heading-ingress-solves-two-major-problems"><strong>Ingress solves two major problems</strong></h3>
<ol>
<li><p>Missing enterprise-grade LB features</p>
</li>
<li><p>High cost of cloud LB for each service</p>
</li>
</ol>
<h3 id="heading-ingress-requires-ingress-controller"><strong>Ingress requires Ingress Controller</strong></h3>
<p>Without it → Ingress does nothing.</p>
<h3 id="heading-ingress-controller-load-balancer-api-gateway-inside-kubernetes"><strong>Ingress Controller = Load Balancer + API Gateway inside Kubernetes</strong></h3>
<h3 id="heading-one-ingress-can-route-traffic-to-many-services"><strong>One Ingress can route traffic to MANY services</strong></h3>
<p>Using host/path rules.</p>
]]></content:encoded></item><item><title><![CDATA[Day 26 - Kubernetes Services Deep Dive]]></title><description><![CDATA[1. What this session is about
You gave a long demo where you:

Create a Kubernetes deployment

Create different types of services

Test traffic flow

Use Kubeshark to see how traffic flows inside the cluster


The goal:Understand Service Discovery, L...]]></description><link>https://blog.dineshcloud.in/day-26-kubernetes-services-deep-dive</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-26-kubernetes-services-deep-dive</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:32:57 GMT</pubDate><content:encoded><![CDATA[<h3 id="heading-1-what-this-session-is-about"><strong>1. What this session is about</strong></h3>
<p>You gave a long demo where you:</p>
<ul>
<li><p>Create a Kubernetes <strong>deployment</strong></p>
</li>
<li><p>Create different types of <strong>services</strong></p>
</li>
<li><p>Test <strong>traffic flow</strong></p>
</li>
<li><p>Use <strong>Kubeshark</strong> to see how traffic flows inside the cluster</p>
</li>
</ul>
<p>The goal:<br /><strong>Understand Service Discovery, Load Balancing &amp; Exposing Apps inside/outside Kubernetes.</strong></p>
<hr />
<h1 id="heading-2-kubernetes-cluster-setup-minikube">✅ <strong>2. Kubernetes Cluster Setup (Minikube)</strong></h1>
<p>You use:</p>
<pre><code class="lang-plaintext">minikube status
</code></pre>
<p>to confirm the cluster is running.</p>
<p>Then you clean the namespace:</p>
<pre><code class="lang-plaintext">kubectl get all
kubectl delete deploy &lt;name&gt;
kubectl delete svc &lt;name&gt;
</code></pre>
<p>Only the default Kubernetes service remains.</p>
<hr />
<h1 id="heading-3-build-the-application-image">✅ <strong>3. Build the Application Image</strong></h1>
<p>You go to a GitHub repo (<code>docker-zero-to-hero</code>), use its Django app, and build your image:</p>
<pre><code class="lang-plaintext">docker build -t python-sample-app-demo:v1 .
</code></pre>
<hr />
<h1 id="heading-4-create-deployment-deploymentyaml">✅ <strong>4. Create Deployment (deployment.yaml)</strong></h1>
<p>You take a template from Kubernetes docs and modify:</p>
<ul>
<li><p><code>replicas: 2</code></p>
</li>
<li><p>Add labels:</p>
<pre><code class="lang-plaintext">  app: sample-python-app
</code></pre>
</li>
<li><p>Add container image:</p>
<pre><code class="lang-plaintext">  image: python-sample-app-demo:v1
</code></pre>
</li>
<li><p>Add containerPort:</p>
<pre><code class="lang-plaintext">  containerPort: 8000
</code></pre>
</li>
</ul>
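<p>Putting those modifications together, the resulting <code>deployment.yaml</code> would look roughly like this (the Deployment and container names are illustrative):</p>
<pre><code class="lang-plaintext">apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-python-app
  labels:
    app: sample-python-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-python-app
  template:
    metadata:
      labels:
        app: sample-python-app
    spec:
      containers:
        - name: python-app
          image: python-sample-app-demo:v1
          ports:
            - containerPort: 8000
</code></pre>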
<p>Apply it:</p>
<pre><code class="lang-plaintext">kubectl apply -f deployment.yaml
</code></pre>
<p>Check:</p>
<pre><code class="lang-plaintext">kubectl get deploy
kubectl get pods -o wide
</code></pre>
<p>⚠️ You highlight that <strong>Pod IPs keep changing</strong>, so you <strong>cannot</strong> rely on Pod IPs directly.</p>
<hr />
<h1 id="heading-5-why-services-are-needed">✅ <strong>5. Why Services Are Needed</strong></h1>
<p>Because:</p>
<ul>
<li><p>Pod IPs change</p>
</li>
<li><p>Pods scale up/down</p>
</li>
<li><p>Pods restart</p>
</li>
</ul>
<p>So you need <strong>stable networking</strong> via:</p>
<ol>
<li><p><strong>Labels</strong></p>
</li>
<li><p><strong>Selectors</strong></p>
</li>
<li><p><strong>Services</strong></p>
</li>
</ol>
<p>Services always look for pods <strong>by label</strong>, not by IP.</p>
<hr />
<h1 id="heading-6-create-the-service-serviceyaml">✅ <strong>6. Create the Service (service.yaml)</strong></h1>
<p>You create a <strong>NodePort</strong> service:</p>
<pre><code class="lang-plaintext">type: NodePort
selector:
  app: sample-python-app
ports:
  - port: 80
    targetPort: 8000
    nodePort: 30007
</code></pre>
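<p>The fragment above omits the boilerplate; a complete manifest would look roughly like this (the Service name is a placeholder):</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Service
metadata:
  name: python-app-service
spec:
  type: NodePort
  selector:
    app: sample-python-app
  ports:
    - port: 80
      targetPort: 8000
      nodePort: 30007
</code></pre>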
<p>Apply:</p>
<pre><code class="lang-plaintext">kubectl apply -f service.yaml
</code></pre>
<p>Check:</p>
<pre><code class="lang-plaintext">kubectl get svc
</code></pre>
<hr />
<h1 id="heading-7-how-nodeport-works">✅ <strong>7. How NodePort Works</strong></h1>
<p>NodePort exposes the app on:</p>
<pre><code class="lang-plaintext">&lt;NodeIP&gt;:&lt;NodePort&gt;
</code></pre>
<p>You get Minikube IP:</p>
<pre><code class="lang-plaintext">minikube ip
</code></pre>
<p>Then access:</p>
<pre><code class="lang-plaintext">curl http://&lt;minikube-ip&gt;:30007/demo
</code></pre>
<p>Or via browser:</p>
<pre><code class="lang-plaintext">http://&lt;minikube-ip&gt;:30007/demo
</code></pre>
<p>This works only <strong>inside the same network</strong> (internal users).</p>
<hr />
<h1 id="heading-8-loadbalancer-mode">✅ <strong>8. LoadBalancer Mode</strong></h1>
<p>You edit the service:</p>
<pre><code class="lang-plaintext">kubectl edit svc &lt;name&gt;
</code></pre>
<p>Change:</p>
<pre><code class="lang-plaintext">type: LoadBalancer
</code></pre>
<p>In cloud providers (AWS/GCP/Azure) you get:</p>
<pre><code class="lang-plaintext">EXTERNAL-IP = Public IP
</code></pre>
<p>But in Minikube this stays:</p>
<pre><code class="lang-plaintext">pending
</code></pre>
<p>(You mention <strong>MetalLB</strong> as an alternative.)</p>
<hr />
<h1 id="heading-9-service-discovery-test">✅ <strong>9. Service Discovery Test</strong></h1>
<p>You break the selector on purpose:</p>
<pre><code class="lang-plaintext">selector:
  app: sample-python
</code></pre>
<p>This no longer matches:</p>
<pre><code class="lang-plaintext">app: sample-python-app
</code></pre>
<p>Result:</p>
<ul>
<li><p>Service can’t find pods</p>
</li>
<li><p>No traffic</p>
</li>
<li><p>Proves <strong>services depend 100% on correct labels/selectors</strong></p>
</li>
</ul>
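<p>A quick way to confirm this failure mode is to check the Endpoints object backing the Service; with a mismatched selector it lists no addresses:</p>
<pre><code class="lang-plaintext">kubectl get endpoints &lt;service-name&gt;
</code></pre>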
<hr />
<h1 id="heading-10-load-balancing-concept">✅ <strong>10. Load Balancing Concept</strong></h1>
<p>Since you created <strong>2 pods</strong>, NodePort or ClusterIP service will:</p>
<ul>
<li><p>Split traffic between pods</p>
</li>
<li><p>Do round-robin</p>
</li>
<li><p>Always discover new pods automatically (because labels match)</p>
</li>
</ul>
<hr />
<h1 id="heading-11-kubeshark">✅ <strong>11. Kubeshark</strong></h1>
<p>Used at the end (not included in detail in your text) to:</p>
<ul>
<li><p>Capture real traffic</p>
</li>
<li><p>Show how service selects pods</p>
</li>
<li><p>Show internal communication inside the cluster</p>
</li>
</ul>
<hr />
<h1 id="heading-summary-of-your-entire-content-in-10-sentences">🎉 <strong>Summary of Your Entire Content in 10 Sentences</strong></h1>
<ol>
<li><p>You start a Minikube cluster and clean the namespace.</p>
</li>
<li><p>You build a Python/Django Docker image.</p>
</li>
<li><p>You create a Kubernetes deployment with 2 replicas.</p>
</li>
<li><p>You explain Pod IPs change, so they cannot be used directly.</p>
</li>
<li><p>You create a NodePort service with proper labels/selectors.</p>
</li>
<li><p>You show how NodePort exposes the app on <code>&lt;NodeIP&gt;:&lt;NodePort&gt;</code>.</p>
</li>
<li><p>You demonstrate using the app from browser and using curl.</p>
</li>
<li><p>You convert service type to LoadBalancer for external traffic (cloud only).</p>
</li>
<li><p>You intentionally break selectors to show service discovery failure.</p>
</li>
<li><p>You use Kubeshark to visualize pod-to-service traffic inside Kubernetes.</p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Day 25 - Kubernetes Services]]></title><description><![CDATA[🚀 Why Do We Need a Service in Kubernetes?
In real production environments:

We don’t deploy pods directly.

We deploy deployments, which internally create a ReplicaSet, which finally creates pods.


Assume we have a deployment with 3 pods (replicas)...]]></description><link>https://blog.dineshcloud.in/day-25-kubernetes-services</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-25-kubernetes-services</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:31:19 GMT</pubDate><content:encoded><![CDATA[<h1 id="heading-why-do-we-need-a-service-in-kubernetes">🚀 <strong>Why Do We Need a Service in Kubernetes?</strong></h1>
<p>In real production environments:</p>
<ul>
<li><p>We don’t deploy <strong>pods</strong> directly.</p>
</li>
<li><p>We deploy <strong>deployments</strong>, which internally create a <strong>ReplicaSet</strong>, which finally creates <strong>pods</strong>.</p>
</li>
</ul>
<p>Assume we have a deployment with <strong>3 pods (replicas)</strong>.</p>
<p>These replicas are required because:</p>
<ul>
<li><p>One pod cannot handle many concurrent users.</p>
</li>
<li><p>For example: if 1 pod handles 10 users and you have 100 users → you need ~10 pods.</p>
</li>
</ul>
<hr />
<h1 id="heading-problem-what-if-there-is-no-service-in-kubernetes">⚠️ <strong>Problem: What if There Is <em>No Service</em> in Kubernetes?</strong></h1>
<p>Pods get <strong>dynamic IP addresses</strong>.</p>
<p>Example:</p>
<ul>
<li><p>Pod 1 → 172.16.3.4</p>
</li>
<li><p>Pod 2 → 172.16.3.5</p>
</li>
<li><p>Pod 3 → 172.16.3.6</p>
</li>
</ul>
<p>If a pod crashes, Kubernetes auto-heals and creates a new pod → <strong>new IP address</strong>.</p>
<p>Example:</p>
<ul>
<li><p>Pod 1 dies</p>
</li>
<li><p>New Pod 1 → <strong>172.16.3.8</strong> (IP changed)</p>
</li>
</ul>
<p>Now the users/testers/other internal teams who were using <strong>172.16.3.4</strong> cannot access the application anymore.</p>
<p>So even though Kubernetes healed the pod correctly → <strong>application becomes unreachable</strong> due to changed IP.</p>
<p>This is the <strong>core reason</strong> Kubernetes Services are required.</p>
<hr />
<h1 id="heading-how-a-service-solves-this-problem">🟦 <strong>How a Service Solves This Problem</strong></h1>
<p>Instead of giving direct pod IPs to users, DevOps creates a <strong>Service</strong> on top of the Deployment.</p>
<p>Users will now access the application using:</p>
<pre><code class="lang-plaintext">payment.default.svc
</code></pre>
<p>The service:</p>
<ol>
<li><p>Acts as a <strong>load balancer</strong></p>
</li>
<li><p>Performs <strong>service discovery</strong></p>
</li>
<li><p>Can <strong>expose applications externally</strong></p>
</li>
</ol>
<hr />
<h1 id="heading-1-load-balancing">1️⃣ <strong>Load Balancing</strong></h1>
<p>The service receives traffic and distributes it across all pods.</p>
<p>Example:</p>
<ul>
<li><p>10 requests → Pod1</p>
</li>
<li><p>10 requests → Pod2</p>
</li>
<li><p>10 requests → Pod3</p>
</li>
</ul>
<p>So all pods handle load evenly.</p>
<hr />
<h1 id="heading-2-service-discovery-very-important">2️⃣ <strong>Service Discovery (VERY IMPORTANT)</strong></h1>
<p>A service <strong>does NOT track pod IPs</strong>.</p>
<p>Instead, it uses:</p>
<h3 id="heading-labels">✔️ Labels</h3>
<h3 id="heading-selectors">✔️ Selectors</h3>
<p>Example label on pods:</p>
<pre><code class="lang-plaintext">app: payment
</code></pre>
<p>If pods die and new ones are created:</p>
<ul>
<li><p>IP address changes</p>
</li>
<li><p><strong>Label remains the same</strong></p>
</li>
<li><p>Service always finds the correct pods because it filters by labels.</p>
</li>
</ul>
<p>This solves the problem of constantly changing IPs.</p>
<hr />
<h1 id="heading-3-exposing-applications-externally">3️⃣ <strong>Exposing Applications Externally</strong></h1>
<p>Services can expose your app inside or outside the cluster depending on the <strong>service type</strong>.</p>
<p>There are 3 main service types:</p>
<hr />
<h2 id="heading-type-1-clusterip-default">🔹 <strong>Type 1 — ClusterIP (default)</strong></h2>
<ul>
<li><p>Application accessible <strong>only inside the Kubernetes cluster</strong></p>
</li>
<li><p>Provides <strong>load balancing</strong> + <strong>service discovery</strong></p>
</li>
<li><p>Not accessible from outside</p>
</li>
</ul>
<hr />
<h2 id="heading-type-2-nodeport">🔹 <strong>Type 2 — NodePort</strong></h2>
<ul>
<li><p>Application accessible <strong>inside your organization / VPC</strong></p>
</li>
<li><p>Anyone who can reach Worker Node IP can access the app</p>
</li>
<li><p>Useful for internal company apps</p>
</li>
</ul>
<hr />
<h2 id="heading-type-3-loadbalancer">🔹 <strong>Type 3 — LoadBalancer</strong></h2>
<ul>
<li><p>Application accessible <strong>from the internet</strong> (publicly)</p>
</li>
<li><p>Creates a Cloud Load Balancer (ELB on AWS)</p>
</li>
<li><p>Best for apps like:</p>
<ul>
<li><p><a target="_blank" href="http://amazon.com"><code>amazon.com</code></a></p>
</li>
<li><p>Any public website</p>
</li>
</ul>
</li>
</ul>
<p>Note:</p>
<ul>
<li><p>LoadBalancer works only on cloud providers (AWS, GCP, Azure)</p>
</li>
<li><p>Does NOT work by default on Minikube (needs <code>minikube tunnel</code> or an add-on like MetalLB)</p>
</li>
</ul>
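<p>On Minikube, one workaround is <code>minikube tunnel</code>, which assigns an external IP to LoadBalancer Services for local testing:</p>
<pre><code class="lang-plaintext">minikube tunnel
kubectl get svc
</code></pre>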
<hr />
<h1 id="heading-example-summary-from-your-content">🌐 Example Summary (From Your Content)</h1>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Service Type</td><td>Who Can Access</td><td>Purpose</td></tr>
</thead>
<tbody>
<tr>
<td>ClusterIP</td><td>Only inside Kubernetes</td><td>Internal services</td></tr>
<tr>
<td>NodePort</td><td>Anyone with access to Node IP</td><td>Internal org access</td></tr>
<tr>
<td>LoadBalancer</td><td>Anyone on the internet</td><td>Public access</td></tr>
</tbody>
</table>
</div><hr />
<h1 id="heading-final-summary-of-your-content">🧠 Final Summary</h1>
<p>Kubernetes Services provide:</p>
<h3 id="heading-load-balancing">✔️ Load Balancing</h3>
<h3 id="heading-service-discovery-through-labels-amp-selectors">✔️ Service Discovery (through labels &amp; selectors)</h3>
<h3 id="heading-external-exposure-depending-on-type">✔️ External Exposure (depending on type)</h3>
<p>Without a Service:</p>
<ul>
<li><p>Pods get new IPs when recreated</p>
</li>
<li><p>Users cannot access the application</p>
</li>
<li><p>Auto-healing breaks connectivity</p>
</li>
</ul>
<p>With a Service:</p>
<ul>
<li><p>Users access via a single stable address</p>
</li>
<li><p>Traffic is distributed</p>
</li>
<li><p>IP changes of pods don’t matter</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Day 24 - Kubernetes – Deployments, and ReplicaSets]]></title><description><![CDATA[🧱 1. From Docker Containers to Kubernetes
You can create containers using any platform — for example, Docker.
In Docker:
You usually run a container using commands like:
docker run -it image_name
docker run -d -p 8080:80 -v /data:/app --network myne...]]></description><link>https://blog.dineshcloud.in/day-24-kubernetes-deployments-and-replicasets</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-24-kubernetes-deployments-and-replicasets</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:30:28 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-1-from-docker-containers-to-kubernetes">🧱 1. From Docker Containers to Kubernetes</h2>
<p>You can create containers using any platform — for example, <strong>Docker</strong>.</p>
<h3 id="heading-in-docker">In Docker:</h3>
<p>You usually run a container using commands like:</p>
<pre><code class="lang-plaintext">docker run -it image_name
docker run -d -p 8080:80 -v /data:/app --network mynet nginx
</code></pre>
<p>Here you specify options such as:</p>
<ul>
<li><p><code>-p</code> → port mapping</p>
</li>
<li><p><code>-v</code> → volumes</p>
</li>
<li><p><code>--network</code> → network type</p>
</li>
</ul>
<p>This is the <strong>command-line specification</strong> for running a container.</p>
<hr />
<h2 id="heading-2-kubernetes-simplifies-this-yaml-manifest">⚙️ 2. Kubernetes Simplifies This – YAML Manifest</h2>
<p>Kubernetes introduced an <strong>enterprise-level approach</strong> to manage containers.</p>
<p>Instead of writing long commands, Kubernetes allows you to define everything in a <strong>YAML manifest file</strong> (for example, <code>pod.yaml</code>).</p>
<p>In the YAML file, you define:</p>
<ul>
<li><p>Container image name</p>
</li>
<li><p>Ports to expose</p>
</li>
<li><p>Volumes</p>
</li>
<li><p>Network details</p>
</li>
</ul>
<p>This YAML file acts as a <strong>running specification</strong> for your container.</p>
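<p>For example (a sketch — the image, port, and volume path are illustrative), the Docker flags above translate into YAML fields:</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx            # image name
      ports:
        - containerPort: 80   # replaces -p
      volumeMounts:
        - name: data
          mountPath: /app     # replaces -v
  volumes:
    - name: data
      emptyDir: {}
</code></pre>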
<p>So:</p>
<blockquote>
<p>🟩 <strong>Pod YAML = Declarative specification to run your container in Kubernetes</strong></p>
</blockquote>
<hr />
<h2 id="heading-3-what-is-a-pod">📦 3. What is a Pod?</h2>
<p>A <strong>Pod</strong> is the <strong>smallest deployable unit</strong> in Kubernetes.<br />It’s a wrapper around one or more containers.</p>
<p>A Pod defines <em>how the container should run</em> — image, ports, volumes, and other configurations.</p>
<hr />
<h3 id="heading-single-vs-multiple-containers-in-a-pod">🧩 Single vs Multiple Containers in a Pod</h3>
<ul>
<li><p>A <strong>Pod can contain a single container</strong> (most common).</p>
</li>
<li><p>It can also contain <strong>multiple containers</strong> if they are tightly coupled — for example:</p>
<ul>
<li><p>One main application container</p>
</li>
<li><p>One sidecar container (for logging, proxy, API gateway, etc.)</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-why-multiple-containers-in-a-pod">🕸️ Why multiple containers in a Pod?</h3>
<p>Because:</p>
<ul>
<li><p>They <strong>share the same network</strong> (can communicate using <a target="_blank" href="http://localhost"><code>localhost</code></a>)</p>
</li>
<li><p>They <strong>share the same storage volumes</strong></p>
</li>
</ul>
<p>📘 Example use case:<br /><strong>Service Mesh</strong> — one main app container and one sidecar container for service routing.</p>
<hr />
<h2 id="heading-4-why-do-we-need-deployments">⚡ 4. Why Do We Need Deployments?</h2>
<p>Now that you can deploy an app using a Pod, the next question is:</p>
<blockquote>
<p>Why move from Pod to Deployment?</p>
</blockquote>
<p>Because <strong>Pods cannot auto-heal or auto-scale</strong>.</p>
<p>Kubernetes provides features like:</p>
<ul>
<li><p><strong>Auto-healing</strong> → restart failed Pods automatically</p>
</li>
<li><p><strong>Auto-scaling</strong> → increase/decrease Pods based on load</p>
</li>
</ul>
<p>Pods alone <strong>cannot</strong> do this.<br />To achieve these behaviors, Kubernetes uses a <strong>Deployment</strong>.</p>
<hr />
<h2 id="heading-5-what-is-a-deployment">🚀 5. What is a Deployment?</h2>
<p>A <strong>Deployment</strong> is a higher-level resource that manages Pods.</p>
<p>When you create a Deployment, it:</p>
<ol>
<li><p>Creates an internal resource called a <strong>ReplicaSet</strong></p>
</li>
<li><p>The ReplicaSet then creates and manages <strong>Pods</strong></p>
</li>
</ol>
<p>🧠 <strong>Flow:</strong><br /><code>Deployment → ReplicaSet → Pod</code></p>
<hr />
<h3 id="heading-deployment-yaml-simplified-example">📄 Deployment YAML (simplified example)</h3>
<pre><code class="lang-plaintext">apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
</code></pre>
<p>Here:</p>
<ul>
<li><p><code>replicas: 2</code> → desired number of Pods</p>
</li>
<li><p><code>template:</code> → defines the Pod specification</p>
</li>
</ul>
<hr />
<h2 id="heading-6-what-is-a-replicaset">🧩 6. What is a ReplicaSet?</h2>
<p>A <strong>ReplicaSet</strong> is a <strong>Kubernetes controller</strong> that ensures the desired number of Pods are always running.</p>
<p>If you delete or lose a Pod:</p>
<ul>
<li>The ReplicaSet will <strong>immediately recreate</strong> it.</li>
</ul>
<h3 id="heading-example">Example:</h3>
<p>If Deployment says <code>replicas: 2</code>:</p>
<ul>
<li><p>ReplicaSet ensures 2 Pods are <strong>always running</strong>.</p>
</li>
<li><p>If one is deleted, a new one is automatically created.</p>
</li>
</ul>
<p>→ This is called <strong>auto-healing</strong>.</p>
<hr />
<h2 id="heading-7-how-deployments-enable-zero-downtime">🔁 7. How Deployments Enable Zero Downtime</h2>
<p>If you update the <code>replicas</code> value from 2 → 3 in your YAML and apply it again:</p>
<pre><code class="lang-plaintext">kubectl apply -f deployment.yaml
</code></pre>
<p>Kubernetes:</p>
<ul>
<li><p>Notices the change</p>
</li>
<li><p>Automatically creates one new Pod (via ReplicaSet)</p>
</li>
<li><p>Keeps existing Pods running</p>
</li>
</ul>
<p>✅ This ensures <strong>zero downtime</strong> and <strong>smooth scaling</strong>.</p>
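<p>You can also scale without editing the YAML — <code>kubectl scale</code> changes the replica count directly (the Deployment name here is from the example above):</p>
<pre><code class="lang-plaintext">kubectl scale deployment nginx-deployment --replicas=3
kubectl get pods   # one extra Pod appears; the existing Pods keep serving traffic
</code></pre>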
<hr />
<h2 id="heading-8-controller-concept-in-kubernetes">🧠 8. Controller Concept in Kubernetes</h2>
<p>A <strong>Controller</strong> in Kubernetes maintains the <strong>desired state</strong> of the cluster.</p>
<p>For example:</p>
<ul>
<li>If Deployment YAML says 3 replicas → the controller ensures 3 Pods are always running.</li>
</ul>
<p>Controllers constantly <strong>watch and reconcile</strong>:</p>
<blockquote>
<p>Desired State (YAML) = Actual State (cluster)</p>
</blockquote>
<p>Examples of controllers:</p>
<ul>
<li><p><strong>ReplicaSet Controller</strong></p>
</li>
<li><p><strong>Deployment Controller</strong></p>
</li>
<li><p><strong>DaemonSet Controller</strong></p>
</li>
<li><p><strong>Custom Controllers</strong> (like ArgoCD, Admission Controllers)</p>
</li>
</ul>
<hr />
<h2 id="heading-9-pod-vs-container-vs-deployment-interview-key-question">🧩 9. Pod vs Container vs Deployment (Interview Key Question)</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Concept</td><td>Description</td><td>Key Feature</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Container</strong></td><td>Basic runtime unit (e.g., Docker)</td><td>Runs app processes</td></tr>
<tr>
<td><strong>Pod</strong></td><td>Wrapper around one or more containers</td><td>Adds networking &amp; storage sharing</td></tr>
<tr>
<td><strong>Deployment</strong></td><td>Manages Pods via ReplicaSet</td><td>Adds auto-healing &amp; scaling</td></tr>
</tbody>
</table>
</div><p><strong>Summary:</strong></p>
<blockquote>
<p>Container → Pod → Deployment<br />Each layer adds management capability on top of the previous one.</p>
</blockquote>
<hr />
<h2 id="heading-10-deployment-vs-replicaset">🧩 10. Deployment vs ReplicaSet</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Concept</td><td>Description</td><td>Role</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Deployment</strong></td><td>High-level abstraction</td><td>Defines desired replicas &amp; updates</td></tr>
<tr>
<td><strong>ReplicaSet</strong></td><td>Kubernetes Controller</td><td>Actually creates &amp; maintains Pods</td></tr>
</tbody>
</table>
</div><p>📘 Think of it like this:<br />Deployment = Manager<br />ReplicaSet = Worker ensuring Pods match Deployment instructions</p>
<hr />
<h2 id="heading-11-real-time-demo-behavior-as-explained">🧩 11. Real-Time Demo Behavior (as explained)</h2>
<ul>
<li><p><code>kubectl apply -f pod.yaml</code> → creates one Pod</p>
</li>
<li><p>If you <code>kubectl delete pod &lt;name&gt;</code> → Pod is gone; app stops</p>
</li>
</ul>
<p>❌ No auto-healing in a Pod.</p>
<hr />
<h3 id="heading-with-deployment">✅ With Deployment:</h3>
<ul>
<li><p><code>kubectl apply -f deployment.yaml</code> → creates Deployment → ReplicaSet → Pod</p>
</li>
<li><p>If you delete one Pod:</p>
<pre><code class="lang-plaintext">  kubectl delete pod &lt;name&gt;
</code></pre>
<p>  Immediately, ReplicaSet recreates a new Pod (auto-healing)</p>
</li>
<li><p>If you increase replicas from 1 → 3:</p>
<pre><code class="lang-plaintext">  kubectl apply -f deployment.yaml
</code></pre>
<p>  ReplicaSet creates two new Pods (scaling)</p>
</li>
</ul>
<p>→ All without downtime.</p>
<hr />
<h2 id="heading-12-common-kubectl-commands">⚙️ 12. Common kubectl Commands</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Action</td><td>Command</td></tr>
</thead>
<tbody>
<tr>
<td>Create Pod/Deployment</td><td><code>kubectl apply -f &lt;file&gt;.yaml</code></td></tr>
<tr>
<td>List Pods</td><td><code>kubectl get pods</code></td></tr>
<tr>
<td>List Deployments</td><td><code>kubectl get deploy</code></td></tr>
<tr>
<td>List ReplicaSets</td><td><code>kubectl get rs</code></td></tr>
<tr>
<td>Delete Pod</td><td><code>kubectl delete pod &lt;name&gt;</code></td></tr>
<tr>
<td>Watch live events</td><td><code>kubectl get pods -w</code></td></tr>
<tr>
<td>Describe a resource</td><td><code>kubectl describe pod &lt;name&gt;</code></td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-13-zero-downtime-example-summary">📈 13. Zero Downtime Example Summary</h2>
<ul>
<li><p>Deleted Pod → instantly recreated (auto-healing)</p>
</li>
<li><p>Increased replicas → new Pods created automatically (scaling)</p>
</li>
<li><p>All without affecting users → <strong>zero downtime</strong></p>
</li>
</ul>
<hr />
<h2 id="heading-14-practical-assignment">🧩 14. Practical Assignment</h2>
<ul>
<li><p>Create a Deployment YAML with your own image</p>
</li>
<li><p>Apply it to cluster</p>
</li>
<li><p>Delete a Pod → observe auto-healing</p>
</li>
<li><p>Increase replicas → observe scaling</p>
</li>
<li><p>Use <code>kubectl get rs</code> to verify ReplicaSet creation</p>
</li>
</ul>
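<p>A possible command sequence for this assignment (the file name and Pod name are placeholders):</p>
<pre><code class="lang-plaintext">kubectl apply -f deployment.yaml   # create the Deployment
kubectl get rs                     # verify the ReplicaSet was created
kubectl get pods -w                # watch events in a second terminal
kubectl delete pod &lt;pod-name&gt;      # auto-healing: a replacement Pod appears
kubectl apply -f deployment.yaml   # after raising replicas: observe scaling
</code></pre>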
<hr />
<h2 id="heading-15-key-interview-questions">🧠 15. Key Interview Questions</h2>
<ol>
<li><p>Difference between <strong>Container</strong>, <strong>Pod</strong>, and <strong>Deployment</strong></p>
</li>
<li><p>Difference between <strong>Deployment</strong> and <strong>ReplicaSet</strong></p>
</li>
<li><p>What is a <strong>Controller</strong> in Kubernetes?</p>
</li>
<li><p>How does <strong>auto-healing</strong> work in Kubernetes?</p>
</li>
<li><p>What ensures <strong>desired vs actual state</strong>?</p>
</li>
</ol>
<hr />
<h2 id="heading-16-summary-of-key-concepts">✅ 16. Summary of Key Concepts</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Component</td><td>Function</td><td>Example Resource</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Container</strong></td><td>Runs app process</td><td>Docker</td></tr>
<tr>
<td><strong>Pod</strong></td><td>Runs 1 or more containers</td><td>pod.yaml</td></tr>
<tr>
<td><strong>ReplicaSet</strong></td><td>Ensures desired Pod count</td><td>rs</td></tr>
<tr>
<td><strong>Deployment</strong></td><td>Manages Pods via ReplicaSet</td><td>deployment.yaml</td></tr>
<tr>
<td><strong>Controller</strong></td><td>Maintains desired state</td><td>ReplicaSet Controller</td></tr>
</tbody>
</table>
</div><p>🟢 <strong>Final Flow:</strong><br /><code>Deployment → ReplicaSet → Pod → Container</code></p>
]]></content:encoded></item><item><title><![CDATA[Day 23 - Kubernetes Pods]]></title><description><![CDATA[🧠 1. Transition from Docker to Kubernetes
We are moving from Docker (container) to Kubernetes (container orchestration).In Docker, you directly deploy containers using commands like:
docker run -d -p 8080:80 nginx

But in Kubernetes, you cannot dire...]]></description><link>https://blog.dineshcloud.in/day-23-kubernetes-pods</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-23-kubernetes-pods</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:24:33 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-1-transition-from-docker-to-kubernetes">🧠 1. Transition from Docker to Kubernetes</h2>
<p>We are moving from <strong>Docker (container)</strong> to <strong>Kubernetes (container orchestration)</strong>.<br />In Docker, you directly deploy <strong>containers</strong> using commands like:</p>
<pre><code class="lang-plaintext">docker run -d -p 8080:80 nginx
</code></pre>
<p>But in <strong>Kubernetes</strong>, you <strong>cannot directly deploy a container</strong>.<br />Instead, you deploy your application as a <strong>Pod</strong> — which is the <strong>smallest deployable unit in Kubernetes</strong>.</p>
<hr />
<h2 id="heading-2-what-is-a-pod">📦 2. What is a Pod?</h2>
<p>A <strong>Pod</strong> is the <strong>basic unit of deployment in Kubernetes</strong>.</p>
<blockquote>
<p>📘 Definition:<br />A Pod is a <strong>wrapper around one or more containers</strong> that defines <strong>how a container should run</strong> in Kubernetes.</p>
</blockquote>
<hr />
<h3 id="heading-why-not-deploy-containers-directly">🧩 Why not deploy containers directly?</h3>
<p>Because Kubernetes is a <strong>declarative system</strong> — it doesn’t expect you to type imperative commands for every container the way Docker does.<br />Instead, Kubernetes expects you to <strong>declare the desired state</strong> in a file (usually YAML).</p>
<p>Example:</p>
<ul>
<li><p>In Docker → you write:<br />  <code>docker run -d -p 80:80 nginx</code></p>
</li>
<li><p>In Kubernetes → you define a <code>pod.yaml</code> with all specifications.</p>
</li>
</ul>
<p>This allows:</p>
<ul>
<li><p>Standardization across environments</p>
</li>
<li><p>Easier automation</p>
</li>
<li><p>Consistency in deployment</p>
</li>
</ul>
<hr />
<h2 id="heading-3-pod-yaml-file-specification">🧾 3. Pod YAML File (Specification)</h2>
<p>A Pod is defined using a YAML file.</p>
<p>Example:</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.14.2
      ports:
        - containerPort: 80
</code></pre>
<p>🟢 This YAML replaces all the Docker CLI flags like:<br /><code>-p</code>, <code>-v</code>, <code>--network</code>, etc.</p>
<hr />
<h3 id="heading-equivalent-docker-command">🔄 Equivalent Docker Command</h3>
<p>The above YAML is equivalent to:</p>
<pre><code class="lang-plaintext">docker run -d --name nginx -p 80:80 nginx:1.14.2
</code></pre>
<p>So, a <strong>Pod</strong> = Docker container + YAML configuration wrapper.</p>
<hr />
<h2 id="heading-4-single-vs-multiple-containers-in-a-pod">👥 4. Single vs Multiple Containers in a Pod</h2>
<p>A Pod can have:</p>
<ul>
<li><p><strong>One container</strong> (most common)</p>
</li>
<li><p><strong>Multiple containers</strong> (special use-cases)</p>
</li>
</ul>
<h3 id="heading-why-multiple-containers">Why multiple containers?</h3>
<p>Some applications need <strong>helper containers</strong> (sidecars) for:</p>
<ul>
<li><p>Logging</p>
</li>
<li><p>Config loading</p>
</li>
<li><p>Proxying traffic</p>
</li>
</ul>
<p>If you place containers inside one Pod:</p>
<ul>
<li><p>They share the <strong>same network</strong> (<code>localhost</code>)</p>
</li>
<li><p>They share the <strong>same storage volume</strong></p>
</li>
</ul>
<p>Example:</p>
<ul>
<li><p><code>containerA</code> can talk to <code>containerB</code> using <code>localhost:port</code></p>
</li>
<li><p>Both can read/write shared files</p>
</li>
</ul>
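<p>A sketch of such a Pod (the container names, images, and paths are illustrative) — two containers sharing an <code>emptyDir</code> volume:</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: shared
          mountPath: /usr/share/nginx/html
    - name: sidecar
      image: busybox
      command: ["sh", "-c", "while true; do date &gt; /data/index.html; sleep 5; done"]
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      emptyDir: {}
</code></pre>
<p>The sidecar writes to the shared volume, the main container serves the same files — and either container could also reach the other over <code>localhost</code>.</p>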
<hr />
<h2 id="heading-5-networking-amp-ips-in-pods">🌐 5. Networking &amp; IPs in Pods</h2>
<ul>
<li><p>Each Pod gets a <strong>unique cluster-internal IP</strong> (assigned by the cluster’s network plugin, i.e. the CNI plugin)</p>
</li>
<li><p>Containers inside the same Pod share the <strong>same IP address</strong></p>
</li>
<li><p>Other Pods in the cluster can communicate using this <strong>Pod IP</strong></p>
</li>
</ul>
<hr />
<h2 id="heading-6-understanding-kubectl-kubernetes-cli">⚙️ 6. Understanding kubectl (Kubernetes CLI)</h2>
<p>Like Docker uses <code>docker</code> commands, Kubernetes uses <code>kubectl</code> (pronounced <em>cube control</em>).</p>
<h3 id="heading-common-commands">Common Commands</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Task</td><td>Command</td></tr>
</thead>
<tbody>
<tr>
<td>List all nodes</td><td><code>kubectl get nodes</code></td></tr>
<tr>
<td>List pods</td><td><code>kubectl get pods</code></td></tr>
<tr>
<td>Create a resource</td><td><code>kubectl create -f &lt;filename&gt;.yaml</code></td></tr>
<tr>
<td>Delete a resource</td><td><code>kubectl delete pod &lt;name&gt;</code></td></tr>
<tr>
<td>View logs</td><td><code>kubectl logs &lt;pod-name&gt;</code></td></tr>
<tr>
<td>Describe a pod (troubleshoot)</td><td><code>kubectl describe pod &lt;pod-name&gt;</code></td></tr>
</tbody>
</table>
</div><p>👉 All Kubernetes interaction happens through <code>kubectl</code>.</p>
<hr />
<h2 id="heading-7-setting-up-kubernetes-locally">💻 7. Setting up Kubernetes Locally</h2>
<p>For local practice, we use <strong>Minikube</strong> (lightweight Kubernetes cluster).<br />Other options include <strong>kind</strong>, <strong>k3s</strong>, <strong>microk8s</strong>.</p>
<h3 id="heading-steps">Steps:</h3>
<h4 id="heading-step-1-install-kubectl">🧩 Step 1: Install kubectl</h4>
<p>Search → “install kubectl”<br />Follow instructions for your OS (Linux/Mac/Windows).</p>
<p>Example for macOS:</p>
<pre><code class="lang-plaintext">brew install kubectl
kubectl version
</code></pre>
<h4 id="heading-step-2-install-minikube">🧩 Step 2: Install Minikube</h4>
<p>Search → “install Minikube”<br />Then follow commands based on your OS and CPU architecture (Intel/ARM).</p>
<p>Example:</p>
<pre><code class="lang-plaintext">brew install minikube
minikube version
</code></pre>
<h4 id="heading-step-3-start-the-cluster">🧩 Step 3: Start the Cluster</h4>
<pre><code class="lang-plaintext">minikube start
</code></pre>
<p>🖥️ This creates a <strong>single-node Kubernetes cluster</strong> (1 virtual machine).</p>
<blockquote>
<p>In production, you’d have multiple master and worker nodes.<br />But for learning, 1 node (control plane + worker) is enough.</p>
</blockquote>
<hr />
<h2 id="heading-8-deploying-your-first-pod">🚀 8. Deploying Your First Pod</h2>
<h3 id="heading-create-a-pod-yaml-file">Create a pod YAML file:</h3>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
</code></pre>
<h3 id="heading-deploy-it">Deploy it:</h3>
<pre><code class="lang-plaintext">kubectl create -f pod.yaml
</code></pre>
<h3 id="heading-verify-it">Verify it:</h3>
<pre><code class="lang-plaintext">kubectl get pods
kubectl get pods -o wide
</code></pre>
<h3 id="heading-access-it-inside-cluster">Access it inside cluster:</h3>
<pre><code class="lang-plaintext">minikube ssh
curl &lt;pod-cluster-ip&gt;
</code></pre>
<p>It should return → “Thank you for using nginx”.</p>
<hr />
<h2 id="heading-9-debugging-pods">🔍 9. Debugging Pods</h2>
<p>If your pod has issues, use:</p>
<pre><code class="lang-plaintext">kubectl describe pod &lt;pod-name&gt;
</code></pre>
<p>→ Shows Pod events, errors, status, etc.</p>
<p>To view logs of the application:</p>
<pre><code class="lang-plaintext">kubectl logs &lt;pod-name&gt;
</code></pre>
<hr />
<h2 id="heading-10-cleanup">💡 10. Cleanup</h2>
<p>To delete your Pod:</p>
<pre><code class="lang-plaintext">kubectl delete pod &lt;pod-name&gt;
</code></pre>
<hr />
<h2 id="heading-11-reference-kubectl-cheat-sheet">📘 11. Reference – kubectl Cheat Sheet</h2>
<p>For all kubectl commands:<br />🔗 Kubernetes Official kubectl Cheat Sheet</p>
<p>Keep this handy — even experienced DevOps engineers refer to it.</p>
<hr />
<h2 id="heading-12-from-pod-deployment">⚙️ 12. From Pod → Deployment</h2>
<p>You learned:</p>
<ul>
<li><p>Pod = single or multi-container unit</p>
</li>
<li><p>Defined using YAML</p>
</li>
</ul>
<p>Next step:</p>
<ul>
<li>To enable <strong>Auto-healing</strong> &amp; <strong>Auto-scaling</strong>,<br />  you use <strong>Deployments</strong> — which are wrappers around Pods.</li>
</ul>
<hr />
<h2 id="heading-summary">✅ Summary</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Concept</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Pod</strong></td><td>Smallest deployable unit in Kubernetes</td></tr>
<tr>
<td><strong>Purpose</strong></td><td>Runs one or more containers</td></tr>
<tr>
<td><strong>Definition</strong></td><td>Written in a YAML file</td></tr>
<tr>
<td><strong>Command-line Tool</strong></td><td>kubectl</td></tr>
<tr>
<td><strong>Local Cluster</strong></td><td>Minikube</td></tr>
<tr>
<td><strong>Debug Commands</strong></td><td><code>kubectl describe pod</code>, <code>kubectl logs</code></td></tr>
<tr>
<td><strong>Next Topic</strong></td><td>Deployment (adds auto-scaling, self-healing)</td></tr>
</tbody>
</table>
</div>]]></content:encoded></item><item><title><![CDATA[Day 22 - How to Manage Hundreds of Kubernetes Clusters — Using KOPS]]></title><description><![CDATA[🎯 1. What Is the Problem?
In real-world production environments, DevOps engineers must:

Create

Upgrade

Configure

Delete  Kubernetes clusters — across multiple environments (dev, staging, prod).


Managing these life cycles manually (especially a...]]></description><link>https://blog.dineshcloud.in/day-22-how-to-manage-hundreds-of-kubernetes-clusters-using-kops</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-22-how-to-manage-hundreds-of-kubernetes-clusters-using-kops</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:22:09 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-1-what-is-the-problem">🎯 <strong>1. What Is the Problem?</strong></h2>
<p>In real-world <strong>production environments</strong>, DevOps engineers must:</p>
<ul>
<li><p><strong>Create</strong></p>
</li>
<li><p><strong>Upgrade</strong></p>
</li>
<li><p><strong>Configure</strong></p>
</li>
<li><p><strong>Delete</strong><br />  Kubernetes clusters — across <strong>multiple environments</strong> (dev, staging, prod).</p>
</li>
</ul>
<p>Managing these lifecycles manually (especially at scale) is complex.<br />Hence, automation tools like <strong>KOPS</strong> are used.</p>
<hr />
<h2 id="heading-2-why-not-minikube-kind-k3s-or-microk8s">🧩 <strong>2. Why Not Minikube, Kind, K3s, or MicroK8s?</strong></h2>
<ul>
<li><p>These are <strong>lightweight, single-node setups</strong> meant for <strong>learning and development only</strong>.</p>
</li>
<li><p>They lack:</p>
<ul>
<li><p><strong>High availability</strong></p>
</li>
<li><p><strong>Multi-node support</strong></p>
</li>
<li><p><strong>Production-grade fault tolerance</strong></p>
</li>
<li><p><strong>Scalability and security controls</strong></p>
</li>
</ul>
</li>
</ul>
<p>📌 <strong>In short:</strong></p>
<blockquote>
<p>Minikube / K3s / Kind = Local dev use only<br />KOPS / EKS / OpenShift / Rancher = Production-ready systems</p>
</blockquote>
<hr />
<h2 id="heading-3-kubernetes-in-production-distributions">🏗️ <strong>3. Kubernetes in Production (Distributions)</strong></h2>
<p>Just like <strong>Linux</strong> has distributions (Ubuntu, Red Hat, Amazon Linux),<br /><strong>Kubernetes</strong> also has multiple <strong>distributions</strong>.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Type</td><td>Example</td><td>Managed by</td><td>Support</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Open Source (DIY)</strong></td><td>Kubernetes (k8s)</td><td>Community</td><td>Limited</td></tr>
<tr>
<td><strong>Enterprise / Managed</strong></td><td>EKS (AWS), AKS (Azure), GKE (Google), OpenShift (Red Hat), Tanzu (VMware), Rancher (SUSE)</td><td>Vendors</td><td>24×7 Vendor support</td></tr>
</tbody>
</table>
</div><p>💡 <strong>Why use distributions?</strong></p>
<ul>
<li><p>Provide <strong>enterprise support</strong></p>
</li>
<li><p>Manage <strong>security patches</strong></p>
</li>
<li><p>Simplify <strong>setup and upgrades</strong></p>
</li>
<li><p>Offer <strong>ready-to-use integrations</strong></p>
</li>
</ul>
<hr />
<h2 id="heading-4-common-production-scenarios">🧠 <strong>4. Common Production Scenarios</strong></h2>
<ul>
<li><p>Organizations may have:</p>
<ul>
<li><p>Hundreds of Kubernetes clusters</p>
</li>
<li><p>Or one large cluster with thousands of nodes</p>
</li>
</ul>
</li>
<li><p>Managed solutions (EKS, GKE, AKS) cost a lot when scaled.</p>
</li>
<li><p>Hence, many companies use <strong>open-source Kubernetes</strong> with tools like <strong>KOPS</strong> to manage lifecycle operations.</p>
</li>
</ul>
<hr />
<h2 id="heading-5-kubernetes-vs-eks">⚖️ <strong>5. Kubernetes vs EKS</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Aspect</td><td>Kubernetes (Self-managed)</td><td>EKS (Managed by AWS)</td></tr>
</thead>
<tbody>
<tr>
<td>Installation</td><td>You install it manually (e.g., KOPS, Kubeadm)</td><td>AWS handles installation</td></tr>
<tr>
<td>Maintenance</td><td>You manage upgrades, HA, scaling</td><td>AWS manages control plane</td></tr>
<tr>
<td>Cost</td><td>Cheaper, but you manage</td><td>More expensive, managed</td></tr>
<tr>
<td>Support</td><td>Community / self-managed</td><td>AWS support</td></tr>
<tr>
<td>Flexibility</td><td>Full control</td><td>Limited (AWS integrated only)</td></tr>
</tbody>
</table>
</div><p>🟩 <strong>Key Point:</strong><br />EKS = Kubernetes + AWS management + Paid support<br />KOPS = Kubernetes + Full control + Open source management</p>
<hr />
<h2 id="heading-6-what-is-kops">⚙️ <strong>6. What Is KOPS?</strong></h2>
<p><strong>KOPS = Kubernetes Operations</strong></p>
<blockquote>
<p>A CLI tool that automates the <strong>creation, management, and lifecycle</strong> of Kubernetes clusters on AWS and other clouds.</p>
</blockquote>
<h3 id="heading-kops-manages">✳️ KOPS manages:</h3>
<ul>
<li><p>Cluster creation</p>
</li>
<li><p>Configuration changes</p>
</li>
<li><p>Upgrades</p>
</li>
<li><p>Node scaling</p>
</li>
<li><p>Cluster deletion</p>
</li>
</ul>
<p>KOPS stores all cluster configuration in an <strong>S3 bucket</strong>, which acts as the <strong>cluster state store</strong>.</p>
<hr />
<h2 id="heading-7-why-kops-is-popular">🧰 <strong>7. Why KOPS Is Popular</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td>Lifecycle Management</td><td>Handles create, update, delete easily</td></tr>
<tr>
<td>Automation</td><td>Minimal manual configuration</td></tr>
<tr>
<td>Multi-cluster support</td><td>Manage 100s of clusters centrally</td></tr>
<tr>
<td>Cloud integration</td><td>AWS, GCP, DigitalOcean supported</td></tr>
<tr>
<td>Open-source</td><td>No licensing fees</td></tr>
<tr>
<td>Infrastructure as Code</td><td>Configurations stored in YAML, reusable</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-8-pre-requisites-before-using-kops">🪜 <strong>8. Pre-requisites Before Using KOPS</strong></h2>
<p>Before creating a cluster with KOPS, ensure you have:</p>
<h3 id="heading-software-requirements">🔧 Software Requirements:</h3>
<ol>
<li><p><strong>Python 3</strong></p>
</li>
<li><p><strong>AWS CLI</strong></p>
</li>
<li><p><strong>kubectl</strong></p>
</li>
<li><p><strong>KOPS</strong></p>
</li>
</ol>
<h3 id="heading-aws-requirements">☁️ AWS Requirements:</h3>
<ul>
<li><p>AWS account access</p>
</li>
<li><p>IAM user (Admin or with following policies):</p>
<ul>
<li><p><code>AmazonEC2FullAccess</code></p>
</li>
<li><p><code>AmazonS3FullAccess</code></p>
</li>
<li><p><code>IAMFullAccess</code></p>
</li>
<li><p><code>AmazonVPCFullAccess</code></p>
</li>
</ul>
</li>
<li><p>AWS CLI configured via:</p>
<pre><code class="lang-plaintext">  aws configure
</code></pre>
</li>
</ul>
<hr />
<h2 id="heading-9-step-by-step-setup-with-kops">📦 <strong>9. Step-by-Step Setup with KOPS</strong></h2>
<h3 id="heading-step-1-create-an-s3-bucket-for-cluster-state">Step 1️⃣: Create an S3 Bucket for Cluster State</h3>
<p>KOPS stores cluster metadata in S3.</p>
<pre><code class="lang-plaintext">aws s3 mb s3://kops-state-store-1
</code></pre>
<h3 id="heading-step-2-export-the-s3-bucket-path">Step 2️⃣: Export the S3 Bucket Path</h3>
<pre><code class="lang-plaintext">export KOPS_STATE_STORE=s3://kops-state-store-1
</code></pre>
<h3 id="heading-step-3-create-the-kubernetes-cluster-definition">Step 3️⃣: Create the Kubernetes Cluster Definition</h3>
<pre><code class="lang-plaintext">kops create cluster \
--name=demo.k8s.local \
--zones=us-east-1a \
--node-count=2 \
--node-size=t2.micro \
--master-size=t2.micro \
--state=s3://kops-state-store-1
</code></pre>
<p>Note: a gossip-based cluster name must <em>end with</em> <code>.k8s.local</code> (e.g. <code>demo.k8s.local</code>); otherwise KOPS expects a real DNS domain in Route 53.</p>
<h3 id="heading-step-4-build-launch-the-cluster">Step 4️⃣: Build (Launch) the Cluster</h3>
<pre><code class="lang-plaintext">kops update cluster demo.k8s.local --yes
</code></pre>
<blockquote>
<p>⏱️ This process takes several minutes — KOPS provisions EC2 instances, networking, security groups, IAM roles, etc.</p>
</blockquote>
<hr />
<h2 id="heading-10-domain-considerations">🧩 <strong>10. Domain Considerations</strong></h2>
<p>KOPS requires a <strong>domain name</strong> (for the cluster API endpoint).<br />You can use:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Environment</td><td>Example Domain</td></tr>
</thead>
<tbody>
<tr>
<td>Local / Demo</td><td><code>k8s.local</code></td></tr>
<tr>
<td>Production</td><td><a target="_blank" href="http://prod.example.com"><code>prod.example.com</code></a> or <a target="_blank" href="http://company.com"><code>company.com</code></a></td></tr>
</tbody>
</table>
</div><p>If using a real domain:</p>
<ul>
<li><p>Purchase it (e.g., GoDaddy)</p>
</li>
<li><p>Configure DNS in <strong>AWS Route 53</strong></p>
</li>
<li><p>Create a <strong>hosted zone</strong>:</p>
<pre><code class="lang-plaintext">  aws route53 create-hosted-zone --name dev.example.com --caller-reference $(date +%s)
</code></pre>
</li>
</ul>
<hr />
<h2 id="heading-11-cost-caution">💰 <strong>11. Cost Caution</strong></h2>
<p>⚠️ <strong>KOPS uses AWS resources</strong>:</p>
<ul>
<li><p>EC2 instances</p>
</li>
<li><p>EBS volumes</p>
</li>
<li><p>S3 buckets</p>
</li>
<li><p>Route 53 entries</p>
</li>
</ul>
<p>🧾 These all <strong>incur AWS billing</strong>, even in free-tier accounts.</p>
<p><strong>Tip:</strong><br />If you only want to learn, stop after the “create cluster” step — do not run the final “update cluster” command.</p>
<hr />
<h2 id="heading-12-kops-in-the-real-world">🧱 <strong>12. KOPS in the Real World</strong></h2>
<ul>
<li><p>Used by DevOps teams for <strong>multi-environment orchestration</strong>.</p>
</li>
<li><p>Commonly manages:</p>
<ul>
<li><p>Dev, QA, Staging, and Production clusters</p>
</li>
<li><p>Clusters across multiple AWS accounts or regions</p>
</li>
</ul>
</li>
<li><p>Supports upgrades via:</p>
<pre><code class="lang-plaintext">  kops upgrade cluster
  kops rolling-update cluster
</code></pre>
</li>
<li><p>Supports deletion via:</p>
<pre><code class="lang-plaintext">  kops delete cluster --name=&lt;cluster-name&gt; --yes
</code></pre>
</li>
</ul>
<hr />
<h2 id="heading-13-comparison-other-installation-tools">🧭 <strong>13. Comparison: Other Installation Tools</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Tool</td><td>Purpose</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Kubeadm</strong></td><td>Manual cluster setup, great for learning</td></tr>
<tr>
<td><strong>KOPS</strong></td><td>Automated production setup &amp; management</td></tr>
<tr>
<td><strong>OpenShift (Ansible)</strong></td><td>Enterprise-grade Red Hat distro</td></tr>
<tr>
<td><strong>Rancher</strong></td><td>UI-based multi-cluster management</td></tr>
<tr>
<td><strong>Tanzu</strong></td><td>VMware enterprise platform</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-14-interview-tip">🧩 <strong>14. Interview Tip</strong></h2>
<p>When asked about Kubernetes setup in production, say:</p>
<blockquote>
<p>“In our organization, we manage multiple Kubernetes clusters using <strong>KOPS</strong> on AWS.<br />KOPS handles the full lifecycle — creation, configuration, upgrades, and deletion.<br />For staging and testing, we use <code>.k8s.local</code> domains, and for production we use Route 53 hosted domains like <a target="_blank" href="http://prod.company.com"><code>prod.company.com</code></a>.”</p>
</blockquote>
<hr />
<h2 id="heading-15-in-summary">🧠 <strong>15. In Summary</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><strong>KOPS Full Form</strong></td><td>Kubernetes Operations</td></tr>
<tr>
<td><strong>Purpose</strong></td><td>Automates lifecycle management of Kubernetes clusters</td></tr>
<tr>
<td><strong>Primary Use</strong></td><td>Managing 100s of clusters in production</td></tr>
<tr>
<td><strong>Where Used</strong></td><td>AWS (mainly), GCP, DigitalOcean</td></tr>
<tr>
<td><strong>Alternatives</strong></td><td>Kubeadm, Rancher, OpenShift, Tanzu</td></tr>
<tr>
<td><strong>State Storage</strong></td><td>S3 bucket</td></tr>
<tr>
<td><strong>Domain Management</strong></td><td>Route 53 or local DNS</td></tr>
<tr>
<td><strong>Key Advantage</strong></td><td>Simple automation for complex cluster management</td></tr>
</tbody>
</table>
</div><hr />
<p>✅ <strong>Final One-Liner Summary:</strong></p>
<blockquote>
<p><strong>KOPS</strong> is a powerful open-source tool used by DevOps engineers to <strong>create, manage, and scale hundreds of production-grade Kubernetes clusters</strong> — providing automation, versioning, and reliability without managed-service costs.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Day 21 - Kubernetes Architecture]]></title><description><![CDATA[🧩 1. Why “K8s”?

The word “Kubernetes” has 10 letters.

To shorten it, the middle 8 letters (“ubernete”) are replaced with the number 8, forming K8s.



Kubernetes → K + 8 letters + s → K8s


🐳 2. Docker vs Kubernetes (Before Architecture)
Before u...]]></description><link>https://blog.dineshcloud.in/day-21-kubernetes-architecture</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-21-kubernetes-architecture</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:18:39 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-1-why-k8s">🧩 1. Why “K8s”?</h2>
<ul>
<li><p>The word <strong>“Kubernetes”</strong> has <strong>10 letters</strong>.</p>
</li>
<li><p>To shorten it, the middle 8 letters (“ubernete”) are replaced with the number <strong>8</strong>, forming <strong>K8s</strong>.</p>
</li>
</ul>
<blockquote>
<p>Kubernetes → K + 8 letters + s → <strong>K8s</strong></p>
</blockquote>
<hr />
<h2 id="heading-2-docker-vs-kubernetes-before-architecture">🐳 2. Docker vs Kubernetes (Before Architecture)</h2>
<p>Before understanding Kubernetes architecture, you must know how it differs from Docker.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Docker</td><td>Kubernetes</td></tr>
</thead>
<tbody>
<tr>
<td>Nature</td><td>Single host container platform</td><td>Cluster-based orchestration platform</td></tr>
<tr>
<td>Self-healing</td><td>Manual restart needed</td><td>Auto-healing built-in</td></tr>
<tr>
<td>Scaling</td><td>Manual</td><td>Auto-scaling</td></tr>
<tr>
<td>Load balancing</td><td>Basic</td><td>Advanced load balancing</td></tr>
<tr>
<td>Enterprise support</td><td>Limited</td><td>Strong networking, security, scheduling, and monitoring</td></tr>
</tbody>
</table>
</div><p>👉 Hence, <strong>Kubernetes = Enterprise-level orchestration</strong> of containers across clusters.</p>
<hr />
<h2 id="heading-3-basic-idea">🏗️ 3. Basic Idea</h2>
<p>Kubernetes architecture has two main layers:</p>
<pre><code class="lang-plaintext">+----------------------------------------+
|        Kubernetes Architecture         |
|----------------------------------------|
| Control Plane (Master Components)      |
| Data Plane (Worker Node Components)    |
+----------------------------------------+
</code></pre>
<hr />
<h2 id="heading-4-core-concepts-container-vs-pod">⚙️ 4. Core Concepts: Container vs Pod</h2>
<ul>
<li><p><strong>In Docker:</strong> smallest unit = <strong>Container</strong></p>
</li>
<li><p><strong>In Kubernetes:</strong> smallest unit = <strong>Pod</strong></p>
</li>
</ul>
<p>A <strong>Pod</strong> is like a wrapper around one or more containers — with added features like networking, restart policies, and scaling.</p>
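<p>A minimal Pod manifest looks like this (the <code>nginx</code> image and all names here are purely illustrative):</p>
<pre><code class="lang-plaintext">apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: web
      image: nginx
</code></pre>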
<hr />
<h2 id="heading-5-kubernetes-node-structure">🧱 5. Kubernetes Node Structure</h2>
<h3 id="heading-kubernetes-cluster">Kubernetes Cluster =</h3>
<ul>
<li><p>1 or more <strong>Master Nodes</strong> (Control Plane)</p>
</li>
<li><p>1 or more <strong>Worker Nodes</strong> (Data Plane)</p>
</li>
</ul>
<p>For simplicity:</p>
<pre><code class="lang-plaintext">1 Master Node + 1 Worker Node
</code></pre>
<hr />
<h2 id="heading-6-data-plane-worker-node-components">🧮 6. Data Plane (Worker Node) Components</h2>
<p>Each <strong>Worker Node</strong> contains components responsible for <strong>running the actual application Pods</strong>.</p>
<h3 id="heading-components">🧩 Components:</h3>
<ol>
<li><p><strong>Kubelet</strong></p>
<ul>
<li><p>Ensures Pods are running as expected.</p>
</li>
<li><p>Talks to the Control Plane.</p>
</li>
<li><p>Reports pod status.</p>
</li>
<li><p>Performs “auto-healing”: restarts pods if they crash.</p>
</li>
</ul>
</li>
<li><p><strong>Kube-proxy</strong></p>
<ul>
<li><p>Manages <strong>networking and communication</strong> between Pods and nodes.</p>
</li>
<li><p>Works with the networking (CNI) layer that assigns <strong>IP addresses</strong> to Pods.</p>
</li>
<li><p>Handles <strong>service load balancing</strong>.</p>
</li>
<li><p>Uses Linux <strong>iptables</strong> for routing.</p>
</li>
</ul>
</li>
<li><p><strong>Container Runtime</strong></p>
<ul>
<li><p>Actually runs containers inside Pods.</p>
</li>
<li><p>Examples:</p>
<ul>
<li><p>Docker (via dockershim; removed in Kubernetes 1.24)</p>
</li>
<li><p>containerd</p>
</li>
<li><p>CRI-O</p>
</li>
</ul>
</li>
<li><p>Kubernetes supports any runtime that implements the <strong>Container Runtime Interface (CRI)</strong>.</p>
</li>
</ul>
</li>
</ol>
<blockquote>
<p><strong>Summary:</strong><br />Worker node = { Kubelet + Kube-proxy + Container Runtime }<br />→ These three together form the <strong>Data Plane</strong>.</p>
</blockquote>
<hr />
<h2 id="heading-7-control-plane-master-node-components">🧠 7. Control Plane (Master Node) Components</h2>
<p>These components <strong>control, schedule, and manage</strong> the cluster.</p>
<h3 id="heading-components-1">🧩 Components:</h3>
<ol>
<li><p><strong>API Server</strong></p>
<ul>
<li><p>The <strong>heart of Kubernetes</strong>.</p>
</li>
<li><p>All external or internal requests go through it.</p>
</li>
<li><p>Provides the Kubernetes REST API.</p>
</li>
<li><p>Validates and processes commands (e.g., from <code>kubectl</code>).</p>
</li>
</ul>
</li>
<li><p><strong>Scheduler</strong></p>
<ul>
<li><p>Decides <strong>which node</strong> should run a new Pod.</p>
</li>
<li><p>Uses resource data and rules to make the placement decision.</p>
</li>
</ul>
</li>
<li><p><strong>etcd</strong></p>
<ul>
<li><p>The <strong>database of Kubernetes</strong>.</p>
</li>
<li><p>A <strong>key-value store</strong> that saves cluster state and configuration.</p>
</li>
<li><p>Used for backup and recovery.</p>
</li>
</ul>
</li>
<li><p><strong>Controller Manager</strong></p>
<ul>
<li><p>Runs background “controller” processes that manage the cluster state.</p>
</li>
<li><p>Examples:</p>
<ul>
<li><p><strong>ReplicaSet Controller</strong> → ensures correct number of pods.</p>
</li>
<li><p><strong>Node Controller</strong>, <strong>Endpoint Controller</strong>, etc.</p>
</li>
</ul>
</li>
<li><p>Ensures the cluster stays in the <strong>desired state</strong>.</p>
</li>
</ul>
</li>
<li><p><strong>Cloud Controller Manager (CCM)</strong></p>
<ul>
<li><p>Bridges Kubernetes with <strong>cloud providers</strong> (AWS, Azure, GCP, etc.).</p>
</li>
<li><p>Handles:</p>
<ul>
<li><p>Cloud load balancers</p>
</li>
<li><p>Cloud storage</p>
</li>
<li><p>Node management</p>
</li>
</ul>
</li>
<li><p>Not required for <strong>on-premises clusters</strong>.</p>
</li>
<li><p>Open source – cloud vendors can extend it for their own integrations.</p>
</li>
</ul>
</li>
</ol>
<hr />
<h2 id="heading-8-flow-example-pod-creation">🔁 8. Flow Example — Pod Creation</h2>
<ol>
<li><p><strong>User runs command:</strong><br /> <code>kubectl apply -f pod.yaml</code></p>
</li>
<li><p><strong>API Server:</strong> receives the request.</p>
</li>
<li><p><strong>Scheduler:</strong> decides on which node to place the pod.</p>
</li>
<li><p><strong>etcd:</strong> stores pod info and cluster state.</p>
</li>
<li><p><strong>Kubelet (on worker):</strong> runs the pod using container runtime.</p>
</li>
<li><p><strong>Kube-proxy:</strong> assigns IP, configures networking, enables load balancing.</p>
</li>
<li><p><strong>Controller Manager:</strong> ensures the desired number of pods are running.</p>
</li>
</ol>
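<p>The same flow can be observed from the command line (use the pod name shown by <code>get pods</code> in the last command):</p>
<pre><code class="lang-plaintext">kubectl apply -f pod.yaml
kubectl get pods -o wide          # shows which node the Scheduler picked and the Pod IP
kubectl describe pod &lt;pod-name&gt;   # shows events from the Scheduler and Kubelet
</code></pre>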
<hr />
<h2 id="heading-9-summary-table">🧩 9. Summary Table</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Plane</td><td>Component</td><td>Role</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Control Plane (Master)</strong></td><td>API Server</td><td>Entry point; handles all requests</td></tr>
<tr>
<td></td><td>Scheduler</td><td>Decides which node runs the pod</td></tr>
<tr>
<td></td><td>etcd</td><td>Cluster data store</td></tr>
<tr>
<td></td><td>Controller Manager</td><td>Ensures desired state (e.g., replicas)</td></tr>
<tr>
<td></td><td>Cloud Controller Manager</td><td>Integrates with cloud providers</td></tr>
<tr>
<td><strong>Data Plane (Worker)</strong></td><td>Kubelet</td><td>Runs pods, reports status</td></tr>
<tr>
<td></td><td>Kube-proxy</td><td>Manages networking &amp; load balancing</td></tr>
<tr>
<td></td><td>Container Runtime</td><td>Runs containers (Docker, containerd, CRI-O)</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-10-final-concept">🧭 10. Final Concept</h2>
<ul>
<li><p><strong>Control Plane</strong> → “Brain” (decides, schedules, manages)</p>
</li>
<li><p><strong>Data Plane</strong> → “Body” (executes, runs workloads)</p>
</li>
<li><p>Together they make Kubernetes a <strong>self-healing, auto-scaling, cluster-based orchestration system.</strong></p>
</li>
</ul>
<hr />
<h2 id="heading-11-assignment-idea-optional">📝 11. Assignment Idea (Optional)</h2>
<ul>
<li><p>Draw your own Kubernetes architecture diagram.</p>
</li>
<li><p>Label control plane and worker components.</p>
</li>
<li><p>Write a short explanation and post it on LinkedIn or GitHub as a study note.</p>
</li>
</ul>
<hr />
<p>✅ <strong>In short:</strong></p>
<blockquote>
<p><strong>Kubernetes = Control Plane (manages) + Data Plane (executes)</strong><br />Each component has a clear, defined responsibility that together make K8s scalable, reliable, and automated.</p>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Day 20 - Kubernetes Introduction]]></title><description><![CDATA[🧭 1. Why Kubernetes Matters
Kubernetes is considered the future of DevOps. If you plan a long-term career (“marathon”) in DevOps — not just short CI/CD tasks — learning Kubernetes is essential.

You can get jobs doing basic DevOps tasks (CI/CD, build...]]></description><link>https://blog.dineshcloud.in/day-20-kubernetes-introduction</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-20-kubernetes-introduction</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:16:53 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-1-why-kubernetes-matters">🧭 <strong>1. Why Kubernetes Matters</strong></h2>
<p>Kubernetes is considered <strong>the future of DevOps</strong>.<br />If you plan a <strong>long-term career (“marathon”)</strong> in DevOps — not just short CI/CD tasks — learning Kubernetes is essential.</p>
<ul>
<li><p>You can get jobs doing basic DevOps tasks (CI/CD, build and release),<br />  but <strong>true DevOps engineers</strong> are expected to understand <strong>container orchestration</strong>, which means <strong>Kubernetes</strong>.</p>
</li>
<li><p>Kubernetes dominates the <strong>modern microservices and container world</strong>.</p>
</li>
</ul>
<hr />
<h2 id="heading-2-prerequisite-docker-and-containers">⚙️ <strong>2. Prerequisite — Docker and Containers</strong></h2>
<p>Before learning Kubernetes, you <strong>must understand containers and Docker</strong>.<br />Because Kubernetes works <em>on top</em> of containers.</p>
<p>You should already know:</p>
<ul>
<li><p>What containers are, and how they differ from virtual machines</p>
</li>
<li><p>Container networking and namespace isolation</p>
</li>
<li><p>Why containers are lightweight</p>
</li>
<li><p>How to secure containers</p>
</li>
<li><p>Multi-stage Docker builds and distroless images</p>
</li>
</ul>
<blockquote>
<p>📘 In short — get strong with <strong>container fundamentals</strong>, not just Docker commands.</p>
</blockquote>
<hr />
<h2 id="heading-3-docker-vs-kubernetes-the-core-difference">🐋 <strong>3. Docker vs Kubernetes — The Core Difference</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Concept</td><td>Docker</td><td>Kubernetes</td></tr>
</thead>
<tbody>
<tr>
<td>Type</td><td>Container Platform</td><td>Container Orchestration Platform</td></tr>
<tr>
<td>Purpose</td><td>Build, package, and run containers</td><td>Manage, scale, and automate containers</td></tr>
<tr>
<td>Scope</td><td>Single host</td><td>Cluster of multiple hosts</td></tr>
<tr>
<td>Auto-Healing</td><td>❌ Manual restart needed</td><td>✅ Automatic recovery</td></tr>
<tr>
<td>Auto-Scaling</td><td>❌ Manual scaling</td><td>✅ Automatic scaling (HPA)</td></tr>
<tr>
<td>Enterprise Features</td><td>❌ Limited</td><td>✅ Enterprise-grade support</td></tr>
</tbody>
</table>
</div><p>So, <strong>Kubernetes doesn’t replace Docker</strong> — it <strong>extends and manages it</strong>.</p>
<hr />
<h2 id="heading-4-problems-with-docker-alone">⚠️ <strong>4. Problems with Docker Alone</strong></h2>
<p>When you use only Docker, several issues arise in real-world production.</p>
<h3 id="heading-problem-1-single-host-limitation">🧩 Problem 1: Single Host Limitation</h3>
<ul>
<li><p>Docker runs all containers on <strong>one host</strong>.</p>
</li>
<li><p>If one container consumes too many resources (CPU/RAM), others may crash.</p>
</li>
<li><p>There’s no way for containers to move between hosts.</p>
</li>
</ul>
<p>🧠 <em>Result:</em> If one node fails, all containers on it fail too.</p>
<hr />
<h3 id="heading-problem-2-no-auto-healing">⚕️ Problem 2: No Auto-Healing</h3>
<ul>
<li><p>If a container stops, it stays stopped until manually restarted.</p>
</li>
<li><p>In large systems (10,000+ containers), manual monitoring is impossible.</p>
</li>
</ul>
<p>🧠 <em>Need:</em> A system that automatically detects and restarts failed containers.</p>
<hr />
<h3 id="heading-problem-3-no-auto-scaling">📈 Problem 3: No Auto-Scaling</h3>
<ul>
<li><p>When user load increases (e.g., from 10,000 to 1 million users),<br />  containers must scale up.</p>
</li>
<li><p>In Docker, this must be done <strong>manually</strong>.</p>
</li>
<li><p>Docker also lacks built-in <strong>load balancing</strong>.</p>
</li>
</ul>
<p>🧠 <em>Need:</em> A platform that can <strong>automatically add/remove</strong> containers as demand changes.</p>
<hr />
<h3 id="heading-problem-4-lacks-enterprise-features">🏢 Problem 4: Lacks Enterprise Features</h3>
<p>Docker by itself doesn’t provide:</p>
<ul>
<li><p>Load balancers</p>
</li>
<li><p>Firewalls</p>
</li>
<li><p>API gateways</p>
</li>
<li><p>Whitelisting / Blacklisting</p>
</li>
<li><p>Advanced networking / Security policies</p>
</li>
<li><p>Auto-healing / Auto-scaling mechanisms</p>
</li>
</ul>
<p>🧠 <em>Need:</em> A production-ready system that can integrate and automate all this.</p>
<hr />
<h2 id="heading-5-kubernetes-the-solution">🧠 <strong>5. Kubernetes — The Solution</strong></h2>
<p>Kubernetes was created by <strong>Google</strong>, inspired by their internal system <strong>Borg</strong>, and is now maintained by the <strong>CNCF (Cloud Native Computing Foundation).</strong></p>
<p>Kubernetes is designed to <strong>solve all four Docker limitations</strong>.</p>
<hr />
<h3 id="heading-problem-1-solved-cluster-architecture">🧩 Problem 1 Solved: Cluster Architecture</h3>
<ul>
<li><p>Kubernetes runs as a <strong>cluster</strong> of multiple nodes.</p>
</li>
<li><p>If one node fails or is overloaded, pods (containers) are automatically rescheduled to another node.</p>
</li>
<li><p>Supports both <strong>Master (Control Plane)</strong> and <strong>Worker Nodes</strong>.</p>
</li>
</ul>
<p>🧠 <em>Result:</em> High availability and fault tolerance.</p>
<hr />
<h3 id="heading-problem-2-solved-auto-healing">⚕️ Problem 2 Solved: Auto-Healing</h3>
<ul>
<li><p>Kubernetes monitors the health of containers (pods).</p>
</li>
<li><p>If a pod fails, it automatically creates a <strong>new pod</strong> (even before the old one is completely dead).</p>
</li>
<li><p>Managed through <strong>ReplicaSets / Deployments</strong>.</p>
</li>
</ul>
<p>🧠 <em>Result:</em> Application stays up even when individual pods fail.</p>
<hr />
<h3 id="heading-problem-3-solved-auto-scaling">📈 Problem 3 Solved: Auto-Scaling</h3>
<ul>
<li><p>Kubernetes supports <strong>Horizontal Pod Autoscaler (HPA)</strong>.</p>
</li>
<li><p>Based on CPU or memory thresholds (e.g., 80%), Kubernetes automatically creates or removes pods.</p>
</li>
<li><p>You can also manually scale by editing YAML files.</p>
</li>
</ul>
<p>🧠 <em>Result:</em> Application adjusts to user load dynamically.</p>
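<p>As a quick sketch, an HPA can be attached to an existing Deployment from the CLI (the deployment name <code>my-app</code> is illustrative, and a metrics server must be installed for the HPA to act):</p>
<pre><code class="lang-plaintext">kubectl autoscale deployment my-app --cpu-percent=80 --min=2 --max=10
kubectl get hpa   # watch current vs. target utilization
</code></pre>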
<hr />
<h3 id="heading-problem-4-solved-enterprise-level-features">🏢 Problem 4 Solved: Enterprise-Level Features</h3>
<p>Kubernetes supports or integrates with:</p>
<ul>
<li><p><strong>Load balancers</strong> (via Services, Ingress Controllers)</p>
</li>
<li><p><strong>Firewalls and Network Policies</strong></p>
</li>
<li><p><strong>API Gateways</strong></p>
</li>
<li><p><strong>Service Meshes (Istio, Linkerd)</strong></p>
</li>
<li><p><strong>Security Controls</strong> (RBAC, Admission Controllers)</p>
</li>
<li><p><strong>Monitoring tools</strong> (Prometheus, Grafana)</p>
</li>
<li><p><strong>Logging tools</strong> (ELK / EFK)</p>
</li>
</ul>
<p>🧠 <em>Result:</em> Kubernetes is production-ready and enterprise-grade.</p>
<hr />
<h2 id="heading-6-kubernetes-is-evolving">🧱 <strong>6. Kubernetes Is Evolving</strong></h2>
<ul>
<li><p>Kubernetes is not 100% perfect — it’s still <strong>evolving rapidly</strong>.</p>
</li>
<li><p>The CNCF community continuously adds new capabilities.</p>
</li>
<li><p>Many open-source tools integrate with Kubernetes:</p>
<ul>
<li><p><strong>Prometheus</strong> – Monitoring</p>
</li>
<li><p><strong>Grafana</strong> – Visualization</p>
</li>
<li><p><strong>Ingress-NGINX / Traefik</strong> – Load balancing</p>
</li>
<li><p><strong>Helm</strong> – Package management</p>
</li>
<li><p><strong>Podman / Buildpacks</strong> – Image building</p>
</li>
</ul>
</li>
</ul>
<p>Each of these tools enhances Kubernetes capabilities.</p>
<hr />
<h2 id="heading-7-extensibility-crds-and-controllers">⚙️ <strong>7. Extensibility (CRDs and Controllers)</strong></h2>
<p>Kubernetes allows <strong>Custom Resources (CRDs)</strong> and <strong>Controllers</strong>,<br />so organizations can extend its features — e.g.:</p>
<ul>
<li><p>Create custom load balancers</p>
</li>
<li><p>Add new resource types</p>
</li>
<li><p>Integrate with 3rd-party tools</p>
</li>
</ul>
<p>Example:<br />Kubernetes doesn’t provide advanced load balancing by default,<br />but Ingress Controllers (like NGINX Ingress) were built using CRDs to provide this.</p>
<hr />
<h2 id="heading-8-why-organizations-adopt-kubernetes">🌍 <strong>8. Why Organizations Adopt Kubernetes</strong></h2>
<p>Companies like <strong>Netflix, Amazon, Flipkart, PayPal</strong> use Kubernetes because:</p>
<ul>
<li><p>It automates deployment, scaling, and management.</p>
</li>
<li><p>It provides resiliency and flexibility.</p>
</li>
<li><p>It standardizes infrastructure across environments (cloud, hybrid, on-prem).</p>
</li>
</ul>
<hr />
<h2 id="heading-9-important-points-to-remember">💬 <strong>9. Important Points to Remember</strong></h2>
<ul>
<li><p>Kubernetes ≠ Docker replacement — it <strong>uses Docker or container runtimes</strong> under the hood.</p>
</li>
<li><p>Kubernetes manages containers at <strong>scale</strong>.</p>
</li>
<li><p>Kubernetes is <strong>cluster-based</strong>, not single-host.</p>
</li>
<li><p>It provides <strong>Auto-healing</strong>, <strong>Auto-scaling</strong>, <strong>Load balancing</strong>, and <strong>Enterprise-grade control</strong>.</p>
</li>
<li><p>It is <strong>open-source</strong> and backed by the <strong>CNCF</strong> community.</p>
</li>
</ul>
<hr />
<h2 id="heading-10-whats-next">🧩 <strong>10. What’s Next</strong></h2>
<p>In the upcoming topics:</p>
<ol>
<li><p><strong>Kubernetes Architecture</strong></p>
</li>
<li><p><strong>Pods</strong></p>
</li>
<li><p><strong>Deployments</strong></p>
</li>
<li><p><strong>Services</strong></p>
</li>
<li><p><strong>Ingress Controllers</strong></p>
</li>
<li><p><strong>Admission Controllers</strong></p>
</li>
</ol>
<p>Each topic will build upon today’s foundation.</p>
<hr />
<h2 id="heading-summary">✅ <strong>Summary</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Concept</td><td>Docker</td><td>Kubernetes</td></tr>
</thead>
<tbody>
<tr>
<td>Platform Type</td><td>Container Platform</td><td>Container Orchestration Platform</td></tr>
<tr>
<td>Scope</td><td>Single Host</td><td>Multi-node Cluster</td></tr>
<tr>
<td>Auto-Healing</td><td>No</td><td>Yes</td></tr>
<tr>
<td>Auto-Scaling</td><td>No</td><td>Yes (HPA)</td></tr>
<tr>
<td>Enterprise Ready</td><td>No</td><td>Yes</td></tr>
<tr>
<td>Best For</td><td>Local / Small Projects</td><td>Production / Large Systems</td></tr>
</tbody>
</table>
</div><hr />
<p><strong>In short:</strong><br />Kubernetes is the <strong>brain</strong> that manages your containers.<br />It makes container-based infrastructure <strong>scalable, resilient, and production-ready</strong>.</p>
]]></content:encoded></item><item><title><![CDATA[Day 19 - Docker Interview Q&A]]></title><description><![CDATA[1. What is Docker?
Answer:Docker is an open-source containerization platform used to build, package, and run applications inside lightweight, portable containers. It helps manage the entire lifecycle of containers — building images, running container...]]></description><link>https://blog.dineshcloud.in/day-19-docker-interview-qanda</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-19-docker-interview-qanda</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Docker]]></category><category><![CDATA[containers]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:14:36 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-1-what-is-docker"><strong>1. What is Docker?</strong></h2>
<p><strong>Answer:</strong><br />Docker is an <strong>open-source containerization platform</strong> used to build, package, and run applications inside lightweight, portable containers. It helps manage the <strong>entire lifecycle of containers</strong> — building images, running containers, pushing/pulling images from registries, etc.</p>
<p>You can add:<br />“In my projects, I use Docker to write Dockerfiles, build images, run containers, optimize image size, and push artifacts to registries like Docker Hub/ECR.”</p>
<hr />
<h2 id="heading-2-how-are-containers-different-from-virtual-machines"><strong>2. How are Containers different from Virtual Machines?</strong></h2>
<p><strong>Answer:</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Containers</strong></td><td><strong>Virtual Machines</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Lightweight</td><td>Heavyweight</td></tr>
<tr>
<td>Share the host OS kernel</td><td>Have full guest OS</td></tr>
<tr>
<td>Start in milliseconds</td><td>Start in minutes</td></tr>
<tr>
<td>Only need application + dependencies</td><td>Need OS + kernel + libraries</td></tr>
<tr>
<td>Image size is small (MBs)</td><td>Large images (GBs)</td></tr>
</tbody>
</table>
</div><blockquote>
<p>Never say containers “don’t have an OS” — correct answer is:<br />They include <strong>only minimal system libraries</strong>, not a full OS.</p>
</blockquote>
<hr />
<h2 id="heading-3-explain-the-docker-lifecycle"><strong>3. Explain the Docker Lifecycle.</strong></h2>
<p><strong>Answer:</strong><br />The Docker lifecycle includes:</p>
<ol>
<li><p><strong>Write Dockerfile</strong></p>
</li>
<li><p><strong>Build</strong> image → <code>docker build</code></p>
</li>
<li><p><strong>Run</strong> container → <code>docker run</code></p>
</li>
<li><p><strong>Tag &amp; Push</strong> image to registry (Docker Hub, ECR, GCR)</p>
</li>
<li><p><strong>Pull</strong> image on any environment</p>
</li>
<li><p><strong>Manage containers</strong> (start/stop/remove/prune)</p>
</li>
</ol>
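<p>In command form, one pass through the lifecycle might look like this (image and registry names are illustrative):</p>
<pre><code class="lang-plaintext">docker build -t myapp:v1 .          # build an image from the Dockerfile in the current directory
docker run -d -p 8080:80 myapp:v1   # run a container from the image
docker tag myapp:v1 user/myapp:v1   # tag the image for a registry
docker push user/myapp:v1           # push to Docker Hub
docker pull user/myapp:v1           # pull on any other environment
</code></pre>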
<hr />
<h2 id="heading-4-what-are-the-main-docker-components"><strong>4. What are the main Docker components?</strong></h2>
<p><strong>Answer:</strong></p>
<ol>
<li><p><strong>Docker Client (CLI)</strong> – sends commands</p>
</li>
<li><p><strong>Docker Daemon</strong> – core engine that executes actions</p>
</li>
<li><p><strong>Docker Images</strong> – read-only templates</p>
</li>
<li><p><strong>Docker Containers</strong> – running instances of images</p>
</li>
<li><p><strong>Docker Registry</strong> – stores images (Docker Hub, ECR, private registry)</p>
</li>
</ol>
<p>Daemon is the “heart” of Docker — if it stops, Docker actions cannot be executed.</p>
<hr />
<h2 id="heading-5-difference-between-copy-and-add-in-dockerfile"><strong>5. Difference between</strong> <code>COPY</code> and <code>ADD</code> in Dockerfile?</h2>
<p><strong>Answer:</strong></p>
<ul>
<li><p><strong>COPY</strong> – Copies files/folders from local machine → image (preferred)</p>
</li>
<li><p><strong>ADD</strong> – Same as COPY + supports downloading from <strong>URL</strong> or auto-extracting archives.</p>
</li>
</ul>
<p><strong>Use COPY unless you specifically need ADD’s special features.</strong></p>
<hr />
<h2 id="heading-6-difference-between-cmd-and-entrypoint"><strong>6. Difference between</strong> <code>CMD</code> and <code>ENTRYPOINT</code>?</h2>
<p><strong>Answer:</strong></p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>CMD</td><td>ENTRYPOINT</td></tr>
</thead>
<tbody>
<tr>
<td>Provides default arguments</td><td>Provides main executable</td></tr>
<tr>
<td>Can be overridden using CLI</td><td>Not overridden by default</td></tr>
<tr>
<td><code>docker run image ls</code> → ls replaces CMD</td><td><code>docker run image ls</code> → ls becomes argument</td></tr>
</tbody>
</table>
</div><p><strong>Best practice:</strong> Use ENTRYPOINT for the main command and CMD for default arguments.</p>
<p>Example:</p>
<pre><code class="lang-plaintext">ENTRYPOINT ["python", "app.py"]
CMD ["--port", "8000"]
</code></pre>
<hr />
<h2 id="heading-7-what-are-docker-networking-types-what-is-the-default"><strong>7. What are Docker networking types? What is the default?</strong></h2>
<p><strong>Answer:</strong></p>
<ol>
<li><p><strong>bridge</strong> – default network for containers</p>
</li>
<li><p><strong>host</strong> – container shares host network</p>
</li>
<li><p><strong>overlay</strong> – used in multi-host (Swarm/Kubernetes)</p>
</li>
<li><p><strong>macvlan</strong> – container appears as a physical device on network</p>
</li>
<li><p><strong>none</strong> – no network</p>
</li>
</ol>
<hr />
<h2 id="heading-8-how-do-you-isolate-networking-between-containers"><strong>8. How do you isolate networking between containers?</strong></h2>
<p><strong>Answer:</strong><br />Create a <strong>custom bridge network</strong>:</p>
<pre><code class="lang-plaintext">docker network create secure_net
docker run --network secure_net ...
</code></pre>
<p>Containers on different networks cannot talk to each other unless explicitly connected.</p>
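<p>A quick demonstration (container and image names are illustrative; here two Redis containers share the custom network):</p>
<pre><code class="lang-plaintext">docker network create secure_net
docker run -d --name db --network secure_net redis
docker run --rm --network secure_net redis redis-cli -h db ping   # user-defined bridges resolve container names via built-in DNS
</code></pre>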
<hr />
<h2 id="heading-9-what-is-a-multi-stage-docker-build"><strong>9. What is a Multi-Stage Docker Build?</strong></h2>
<p><strong>Answer:</strong><br />It allows you to <strong>use multiple FROM statements</strong> and copy only the required build artifacts into the final image.</p>
<p><strong>Why?</strong></p>
<ul>
<li><p>Reduces image size</p>
</li>
<li><p>Removes build tools from production image</p>
</li>
<li><p>Improves security</p>
</li>
</ul>
<p>Example: an image can shrink from ~800 MB to just a few MB by using <code>scratch</code> or <code>alpine</code> as the final base.</p>
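<p>A sketch of a two-stage build for a statically compiled Go binary (paths and the Go version are illustrative):</p>
<pre><code class="lang-plaintext"># Stage 1: build with the full Go toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: ship only the binary; scratch contains no OS files at all
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
</code></pre>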
<hr />
<h2 id="heading-10-what-are-distroless-images"><strong>10. What are Distroless Images?</strong></h2>
<p><strong>Answer:</strong><br />Distroless images (e.g., <a target="_blank" href="http://gcr.io/distroless/"><code>gcr.io/distroless/</code></a><code>...</code>) are <strong>minimal images</strong> that contain only:</p>
<ul>
<li><p>your application</p>
</li>
<li><p>required runtime dependencies</p>
</li>
</ul>
<p>They <strong>do not</strong> contain:<br />❌ shell (<code>sh</code>, <code>bash</code>)<br />❌ package managers (<code>apt</code>, <code>yum</code>)<br />❌ OS utilities (<code>ping</code>, <code>curl</code>)</p>
<p><strong>Benefit:</strong><br />Extremely small, fast to pull, and secure, with a minimal attack surface.</p>
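<p>A sketch of a distroless final stage for a Python app (file names are illustrative; the distroless Python image invokes the interpreter for you, and because there is no shell, <code>CMD</code> must use the exec/JSON-array form):</p>
<pre><code class="lang-plaintext">FROM gcr.io/distroless/python3
COPY app.py /app/
WORKDIR /app
CMD ["app.py"]
</code></pre>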
<hr />
<h1 id="heading-real-time-docker-challenges-must-know-for-interviews">🔥 <strong>Real-Time Docker Challenges (Must-Know for Interviews)</strong></h1>
<hr />
<h2 id="heading-1-docker-daemon-single-point-of-failure"><strong>1. Docker Daemon – Single Point of Failure</strong></h2>
<ul>
<li><p>Docker daemon is one single process</p>
</li>
<li><p>If daemon crashes → containers may stop or fail</p>
</li>
</ul>
<p><strong>Modern solution:</strong> <em>Podman</em> (daemonless, rootless).</p>
<hr />
<h2 id="heading-2-docker-daemon-runs-as-root"><strong>2. Docker Daemon Runs as Root</strong></h2>
<ul>
<li><p>By default, daemon runs with root privileges</p>
</li>
<li><p>If a container is compromised, host becomes vulnerable</p>
</li>
</ul>
<p><strong>Solution:</strong></p>
<ul>
<li><p>Use <strong>rootless Docker</strong></p>
</li>
<li><p>Use <strong>Podman</strong> (runs fully rootless)</p>
</li>
<li><p>Always set <code>USER</code> in Dockerfile</p>
</li>
</ul>
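<p>Setting a non-root <code>USER</code> is a small Dockerfile change (a sketch; the base image, user name, and file names are illustrative):</p>
<pre><code class="lang-plaintext">FROM python:3.10-slim
RUN useradd --create-home appuser
USER appuser
WORKDIR /home/appuser
COPY app.py .
CMD ["python3", "app.py"]
</code></pre>
<p>With this, a process escaping the application runs as an unprivileged user instead of root.</p>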
<hr />
<h2 id="heading-3-image-size-issues"><strong>3. Image Size Issues</strong></h2>
<ul>
<li><p>Developers often install unnecessary tools</p>
</li>
<li><p>Leads to huge (GB-sized) images</p>
</li>
<li><p>Slow deploys, security risks</p>
</li>
</ul>
<p><strong>Solutions:</strong></p>
<ul>
<li><p>Multi-stage builds</p>
</li>
<li><p>Distroless images</p>
</li>
<li><p>Base images like <code>alpine</code></p>
</li>
</ul>
<hr />
<h2 id="heading-4-networking-misconfigurations"><strong>4. Networking Misconfigurations</strong></h2>
<ul>
<li><p>Wrong port mappings</p>
</li>
<li><p>Misuse of host network</p>
</li>
<li><p>Containers unintentionally communicating</p>
</li>
</ul>
<p><strong>Solution:</strong><br />Custom networks &amp; proper isolation.</p>
<hr />
<h2 id="heading-5-security-vulnerabilities"><strong>5. Security Vulnerabilities</strong></h2>
<ul>
<li><p>Using outdated base images</p>
</li>
<li><p>Running containers as root</p>
</li>
<li><p>Storing secrets inside images</p>
</li>
</ul>
<p><strong>Solution:</strong></p>
<ul>
<li><p>Scan images (Trivy, Anchore)</p>
</li>
<li><p>Use secrets manager</p>
</li>
<li><p>Use non-root user in Dockerfile</p>
</li>
</ul>
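<p>For instance, Trivy can scan a local image from the CLI (the image name is illustrative):</p>
<pre><code class="lang-plaintext">trivy image myapp:latest
</code></pre>
<p>The report lists known CVEs in OS packages and application dependencies, grouped by severity, so vulnerable base images are caught before deployment.</p>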
]]></content:encoded></item><item><title><![CDATA[Day 18 - Docker Networking]]></title><description><![CDATA[Bridge vs Host vs Overlay Networks
Secure Containers Using Custom Bridge Network

1️⃣ Introduction — Why Docker Networking?

Docker Networking (or container networking) enables communication:

Between containers.

Between containers and the host syst...]]></description><link>https://blog.dineshcloud.in/day-18-docker-networking</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-18-docker-networking</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Docker]]></category><category><![CDATA[containers]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:13:22 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-bridge-vs-host-vs-overlay-networks">Bridge vs Host vs Overlay Networks</h2>
<h3 id="heading-secure-containers-using-custom-bridge-network">Secure Containers Using Custom Bridge Network</h3>
<hr />
<h2 id="heading-1-introduction-why-docker-networking">1️⃣ Introduction — Why Docker Networking?</h2>
<ul>
<li><p><strong>Docker Networking</strong> (or container networking) enables <strong>communication</strong>:</p>
<ul>
<li><p>Between <strong>containers</strong>.</p>
</li>
<li><p>Between <strong>containers and the host system</strong>.</p>
</li>
</ul>
</li>
<li><p>Every Docker container requires networking to send and receive data.</p>
</li>
<li><p>Networking in containers is similar in concept to traditional networking in virtual machines but lighter and more flexible.</p>
</li>
</ul>
<hr />
<h2 id="heading-2-why-do-we-need-docker-networking">2️⃣ Why Do We Need Docker Networking?</h2>
<h3 id="heading-scenario-1-containers-need-to-communicate">🧩 Scenario 1 — Containers Need to Communicate</h3>
<ul>
<li><p>Example:</p>
<ul>
<li>Frontend container ↔ Backend container.</li>
</ul>
</li>
<li><p>These containers must exchange data (e.g., API calls, responses).</p>
</li>
<li><p>Networking allows this communication using IPs or service names.</p>
</li>
</ul>
<h3 id="heading-scenario-2-containers-need-isolation">🔒 Scenario 2 — Containers Need Isolation</h3>
<ul>
<li><p>Example:</p>
<ul>
<li>A <strong>login container</strong> and a <strong>payment container</strong>.</li>
</ul>
</li>
<li><p>Payment container stores <strong>sensitive information</strong> (credit cards, user data).</p>
</li>
<li><p>We need <strong>logical isolation</strong> — login users must not access the payment container.</p>
</li>
</ul>
<p>So, Docker networking helps achieve both:</p>
<ul>
<li><p><strong>Connectivity</strong>, and</p>
</li>
<li><p><strong>Isolation</strong>.</p>
</li>
</ul>
<hr />
<h2 id="heading-3-networking-basics-containers-vs-virtual-machines">3️⃣ Networking Basics — Containers vs Virtual Machines</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Virtual Machine</td><td>Docker Container</td></tr>
</thead>
<tbody>
<tr>
<td>OS</td><td>Each VM has its own OS</td><td>Containers share the host OS</td></tr>
<tr>
<td>Subnet</td><td>Can have separate subnets</td><td>Use Docker-managed subnets</td></tr>
<tr>
<td>Isolation</td><td>Built-in via hypervisor</td><td>Achieved via Docker networks</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-4-how-containers-communicate-with-the-host">4️⃣ How Containers Communicate with the Host</h2>
<h3 id="heading-default-host-interface">🔧 Default Host Interface</h3>
<ul>
<li><p>Every host (server or laptop) has a network interface like:</p>
<pre><code class="lang-plaintext">  eth0 → 192.168.1.10
</code></pre>
</li>
<li><p>Each container also gets its own interface:</p>
<pre><code class="lang-plaintext">  eth0 → 172.17.0.2
</code></pre>
</li>
<li><p>These two belong to <strong>different subnets</strong>, so machines outside the host cannot reach the container directly.</p>
</li>
</ul>
<h3 id="heading-the-solution-virtual-ethernet-bridge">🌉 The Solution — Virtual Ethernet Bridge</h3>
<ul>
<li><p>Docker automatically creates a <strong>virtual bridge</strong> called <code>docker0</code>.</p>
</li>
<li><p>This bridge acts like a router between the host and containers.</p>
</li>
<li><p>When you create a container, Docker links its virtual ethernet (<code>veth</code>) to this bridge.</p>
</li>
</ul>
<p>✅ Result: Containers can now communicate with the host and each other.</p>
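<p>You can see this bridge and its addresses on the host (output varies per machine):</p>
<pre><code class="lang-plaintext">ip addr show docker0            # host-side bridge interface (e.g., 172.17.0.1)
docker network inspect bridge   # subnet, gateway, and attached containers
</code></pre>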
<hr />
<h2 id="heading-5-default-docker-network-the-bridge-network">5️⃣ Default Docker Network — The Bridge Network</h2>
<h3 id="heading-what-is-bridge-networking">⚙️ What Is Bridge Networking?</h3>
<ul>
<li><p>A <strong>bridge</strong> connects containers to the host through a virtual switch (<code>docker0</code>).</p>
</li>
<li><p>It provides:</p>
<ul>
<li><p>Communication between containers.</p>
</li>
<li><p>Communication between container and host.</p>
</li>
<li><p>Internet access (via NAT).</p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-example">🧩 Example:</h3>
<pre><code class="lang-plaintext">docker network ls
</code></pre>
<p>Shows:</p>
<pre><code class="lang-plaintext">NETWORK ID     NAME      DRIVER    SCOPE
abcd1234       bridge    bridge    local
</code></pre>
<h3 id="heading-behavior">📦 Behavior:</h3>
<ul>
<li><p>Containers connected to the same bridge can <strong>ping</strong> each other.</p>
</li>
<li><p>All containers share the <strong>same subnet</strong>.</p>
</li>
<li><p>This network is created automatically by Docker.</p>
</li>
</ul>
<hr />
<h2 id="heading-6-other-docker-network-types">6️⃣ Other Docker Network Types</h2>
<h3 id="heading-1-bridge-network-default">1. 🧱 Bridge Network (Default)</h3>
<ul>
<li><p><strong>Virtual bridge (docker0)</strong> created automatically.</p>
</li>
<li><p>Containers communicate using internal IPs.</p>
</li>
<li><p>Suitable for single-host setups.</p>
</li>
</ul>
<h3 id="heading-2-host-network">2. 🌐 Host Network</h3>
<ul>
<li><p>The container <strong>shares the host’s network stack</strong>.</p>
</li>
<li><p>No separate IP; it uses the host’s IP.</p>
</li>
<li><p>Example:</p>
<pre><code class="lang-plaintext">  docker run -d --network=host nginx
</code></pre>
</li>
<li><p><strong>Pros:</strong> Faster, direct access.</p>
</li>
<li><p><strong>Cons:</strong> No network isolation; every port the container listens on is exposed directly on the host.</p>
</li>
</ul>
<h3 id="heading-3-overlay-network">3. 🕸️ Overlay Network</h3>
<ul>
<li><p>Used for <strong>multi-host communication</strong> (in Docker Swarm or Kubernetes).</p>
</li>
<li><p>Creates a network that spans across multiple Docker hosts.</p>
</li>
<li><p>Allows containers on different machines to communicate securely.</p>
</li>
<li><p>Common in <strong>container orchestration platforms</strong>.</p>
</li>
</ul>
<hr />
<h2 id="heading-7-networking-deep-dive-how-communication-works">7️⃣ Networking Deep Dive — How Communication Works</h2>
<p>Example setup:</p>
<pre><code class="lang-plaintext">Host eth0: 192.168.1.5
docker0 (bridge): 172.17.0.1
Container 1 eth0: 172.17.0.2
Container 2 eth0: 172.17.0.3
</code></pre>
<ul>
<li><p>Both containers use the <strong>same bridge</strong> (<code>docker0</code>).</p>
</li>
<li><p>Hence:</p>
<ul>
<li><p>They can <strong>ping each other</strong>.</p>
</li>
<li><p>They share the same communication channel.</p>
</li>
</ul>
</li>
</ul>
<p>⚠️ Problem:</p>
<ul>
<li><p>All containers use the same bridge.</p>
</li>
<li><p>A <strong>security risk</strong> — if one container is compromised, others are accessible.</p>
</li>
</ul>
<hr />
<h2 id="heading-8-custom-bridge-networks-securing-containers">8️⃣ Custom Bridge Networks — Securing Containers</h2>
<p>To isolate sensitive containers, you can <strong>create custom bridge networks</strong>.</p>
<h3 id="heading-why-create-custom-bridges">🧱 Why Create Custom Bridges?</h3>
<ul>
<li><p>The default bridge (<code>docker0</code>) allows all containers to communicate.</p>
</li>
<li><p>A custom bridge provides:</p>
<ul>
<li><p><strong>Network segmentation.</strong></p>
</li>
<li><p><strong>Security boundaries.</strong></p>
</li>
<li><p><strong>Controlled communication.</strong></p>
</li>
</ul>
</li>
</ul>
<h3 id="heading-create-a-custom-bridge">🧰 Create a Custom Bridge</h3>
<pre><code class="lang-plaintext">docker network create secure_network
</code></pre>
<h3 id="heading-verify">🧪 Verify</h3>
<pre><code class="lang-plaintext">docker network ls
</code></pre>
<p>Output:</p>
<pre><code class="lang-plaintext">bridge
host
none
secure_network
</code></pre>
<hr />
<h2 id="heading-9-attach-containers-to-custom-bridge-networks">9️⃣ Attach Containers to Custom Bridge Networks</h2>
<h3 id="heading-example-1">Example:</h3>
<h4 id="heading-step-1-run-normal-containers">Step 1 — Run Normal Containers</h4>
<pre><code class="lang-plaintext">docker run -d --name login nginx
docker run -d --name logout nginx
</code></pre>
<ul>
<li><p>Both use default <strong>bridge</strong> network.</p>
</li>
<li><p>Can <strong>ping each other</strong>.</p>
</li>
</ul>
<h4 id="heading-step-2-create-a-secure-network">Step 2 — Create a Secure Network</h4>
<pre><code class="lang-plaintext">docker network create secure_network
</code></pre>
<h4 id="heading-step-3-run-secure-container">Step 3 — Run Secure Container</h4>
<pre><code class="lang-plaintext">docker run -d --name finance --network=secure_network nginx
</code></pre>
<h4 id="heading-step-4-verify">Step 4 — Verify</h4>
<pre><code class="lang-plaintext">docker inspect finance
</code></pre>
<p>You’ll see:</p>
<pre><code class="lang-plaintext">"Networks": {
  "secure_network": {
    "IPAddress": "172.19.0.2"
  }
}
</code></pre>
<p>✅ Result:</p>
<ul>
<li><p><code>login</code> → <code>bridge</code> → 172.17.x.x</p>
</li>
<li><p><code>finance</code> → <code>secure_network</code> → 172.19.x.x</p>
</li>
<li><p><strong>They cannot ping each other.</strong></p>
</li>
<li><p><strong>Finance container isolated successfully.</strong></p>
</li>
</ul>
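<p>You can verify the isolation with a quick ping test (container names and IPs follow the example above; note the stock <code>nginx</code> image may not ship <code>ping</code>, in which case use an image that does):</p>
<pre><code class="lang-plaintext"># Works: login and logout are both on the default bridge
docker exec login ping -c 2 172.17.0.3

# Fails: finance is on secure_network, a different bridge
docker exec login ping -c 2 172.19.0.2
</code></pre>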
<hr />
<h2 id="heading-summary-of-isolation">🔒 Summary of Isolation</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Container</td><td>Network Type</td><td>Communication</td></tr>
</thead>
<tbody>
<tr>
<td>login</td><td>default bridge</td><td>Can talk to logout</td></tr>
<tr>
<td>logout</td><td>default bridge</td><td>Can talk to login</td></tr>
<tr>
<td>finance</td><td>custom bridge</td><td>Isolated from others</td></tr>
</tbody>
</table>
</div><p>This achieves <strong>network-level security</strong> while staying within Docker itself.</p>
<hr />
<h2 id="heading-host-network-example">🔍 Host Network Example</h2>
<pre><code class="lang-plaintext">docker run -d --name host_demo --network=host nginx
</code></pre>
<ul>
<li><p>Container uses host’s IP (<code>192.168.1.5</code>).</p>
</li>
<li><p><code>docker inspect host_demo</code> shows:</p>
<ul>
<li><p><code>"NetworkMode": "host"</code></p>
</li>
<li><p>No separate IP address.</p>
</li>
</ul>
</li>
<li><p>⚠️ <strong>No isolation</strong> — directly exposed on host’s interface.</p>
</li>
</ul>
<hr />
<h2 id="heading-recap-docker-networking-summary">🔚 Recap — Docker Networking Summary</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Network Type</td><td>Description</td><td>Use Case</td><td>Security</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Bridge</strong></td><td>Default virtual network via docker0</td><td>Single-host apps</td><td>Medium</td></tr>
<tr>
<td><strong>Host</strong></td><td>Shares host network</td><td>Performance-critical or testing</td><td>Low</td></tr>
<tr>
<td><strong>Overlay</strong></td><td>Cross-host networking</td><td>Multi-node clusters (Swarm/K8s)</td><td>High</td></tr>
<tr>
<td><strong>Custom Bridge</strong></td><td>User-created network</td><td>Secure container isolation</td><td>High</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-key-takeaways">🧠 Key Takeaways</h2>
<ul>
<li><p>Docker networking lets containers <strong>communicate or isolate</strong> as needed.</p>
</li>
<li><p><strong>Bridge Network</strong> → Default communication method.</p>
</li>
<li><p><strong>Host Network</strong> → Shares host network; faster but insecure.</p>
</li>
<li><p><strong>Overlay Network</strong> → For multi-host clusters (Docker Swarm/Kubernetes).</p>
</li>
<li><p><strong>Custom Bridge Network</strong> → Best way to isolate secure containers on a single host.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Day 17 - Docker Volumes and Bind Mounts]]></title><description><![CDATA[1️⃣ The Problem — Why Persistent Storage Is Needed
Containers are ephemeral (temporary).When a container stops or crashes, all data inside it is lost.
🧩 Example 1: NGINX Logs Lost After Container Stops

Suppose you run an NGINX container that stores...]]></description><link>https://blog.dineshcloud.in/day-17-docker-volumes-and-bind-mounts</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-17-docker-volumes-and-bind-mounts</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Docker]]></category><category><![CDATA[containers]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:12:23 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-1-the-problem-why-persistent-storage-is-needed">1️⃣ The Problem — Why Persistent Storage Is Needed</h2>
<p>Containers are <strong>ephemeral</strong> (temporary).<br />When a container stops or crashes, <strong>all data inside it is lost</strong>.</p>
<h3 id="heading-example-1-nginx-logs-lost-after-container-stops">🧩 Example 1: NGINX Logs Lost After Container Stops</h3>
<ul>
<li><p>Suppose you run an NGINX container that stores <strong>user login info and IP addresses</strong> in a log file.</p>
</li>
<li><p>These logs are critical for <strong>security audits</strong> and <strong>user tracking</strong>.</p>
</li>
<li><p>If the container goes down → <strong>the log file is deleted</strong> because the container filesystem is temporary.</p>
</li>
<li><p>Result: Company loses important user and audit data.</p>
</li>
</ul>
<h3 id="heading-example-2-frontendbackend-data-sharing-problem">🧩 Example 2: Frontend–Backend Data Sharing Problem</h3>
<ul>
<li><p>A <strong>backend container</strong> continuously writes data (e.g., JSON, YAML, or HTML files).</p>
</li>
<li><p>A <strong>frontend container</strong> reads these files to display content to users.</p>
</li>
<li><p>If the backend container goes down:</p>
<ul>
<li><p>All previously written files are lost.</p>
</li>
<li><p>The frontend can’t access old records (e.g., yesterday’s data).</p>
</li>
</ul>
</li>
<li><p>Result: Broken application — only today’s data is available.</p>
</li>
</ul>
<h3 id="heading-example-3-container-needs-to-read-host-files">🧩 Example 3: Container Needs to Read Host Files</h3>
<ul>
<li><p>A <strong>cron job on the host</strong> creates files periodically.</p>
</li>
<li><p>The <strong>container</strong> needs to read those files.</p>
</li>
<li><p>By default, a container <strong>cannot access the host filesystem</strong>.</p>
</li>
<li><p>Result: Container fails to read required host files.</p>
</li>
</ul>
<hr />
<h2 id="heading-2-the-solution-persistent-storage-options">2️⃣ The Solution — Persistent Storage Options</h2>
<p>Docker introduced <strong>two methods</strong> to solve these problems:</p>
<ol>
<li><p><strong>Bind Mounts</strong></p>
</li>
<li><p><strong>Volumes</strong></p>
</li>
</ol>
<p>Both allow data to persist even if containers are deleted or recreated.</p>
<hr />
<h2 id="heading-3-bind-mounts">3️⃣ 🔗 Bind Mounts</h2>
<h3 id="heading-concept">🧠 Concept</h3>
<p>Bind mounts <strong>connect (bind)</strong> a folder inside the container to a folder on the <strong>host machine</strong>.</p>
<ul>
<li><p>Example:</p>
<pre><code class="lang-plaintext">  Host folder: /app
  Container folder: /app
</code></pre>
</li>
<li><p>Any changes made inside the container’s <code>/app</code> folder reflect on the host, and vice versa.</p>
</li>
</ul>
<h3 id="heading-how-it-works">⚙️ How It Works</h3>
<ul>
<li><p>Data is stored <strong>on the host</strong>.</p>
</li>
<li><p>If the container stops or is removed → the host folder still contains all files.</p>
</li>
<li><p>When a new container is started and the same folder is bound again, the <strong>data is retained</strong>.</p>
</li>
</ul>
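<p>A bind mount is created at <code>docker run</code> time by mapping a host path to a container path (paths are illustrative):</p>
<pre><code class="lang-plaintext"># Short form
docker run -d -v /host/app:/app nginx

# Verbose form
docker run -d --mount type=bind,source=/host/app,target=/app nginx
</code></pre>
<p>Files written to <code>/app</code> inside the container appear in <code>/host/app</code> on the host, and survive container removal.</p>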
<h3 id="heading-advantages">✅ Advantages</h3>
<ul>
<li><p>Very simple setup.</p>
</li>
<li><p>Great for <strong>development environments</strong> where you want to see changes instantly.</p>
</li>
</ul>
<h3 id="heading-limitations">⚠️ Limitations</h3>
<ul>
<li><p>Must specify <strong>exact host directory path</strong>.</p>
</li>
<li><p>Works <strong>only on that specific host</strong>.</p>
</li>
<li><p>No built-in management via Docker CLI.</p>
</li>
</ul>
<hr />
<h2 id="heading-4-docker-volumes">4️⃣ 📦 Docker Volumes</h2>
<h3 id="heading-concept-1">🧠 Concept</h3>
<p>Volumes provide <strong>Docker-managed storage</strong> that is independent of the container lifecycle.</p>
<ul>
<li><p>You <strong>create a volume</strong> using Docker commands.</p>
</li>
<li><p>Docker internally manages where and how it’s stored on the host.</p>
</li>
</ul>
<h3 id="heading-example">🧩 Example</h3>
<pre><code class="lang-plaintext">docker volume create mydata
docker run -d --mount source=mydata,target=/app nginx
</code></pre>
<p>Now <code>/app</code> inside the container is linked to the Docker volume <code>mydata</code>.</p>
<h3 id="heading-lifecycle-management">🧰 Lifecycle Management</h3>
<ul>
<li><p>Create, inspect, delete volumes easily:</p>
<pre><code class="lang-plaintext">  docker volume create &lt;name&gt;
  docker volume ls
  docker volume inspect &lt;name&gt;
  docker volume rm &lt;name&gt;
</code></pre>
</li>
<li><p>Volumes can be attached to <strong>one or multiple containers</strong>.</p>
</li>
</ul>
<h3 id="heading-features-and-benefits">⚙️ Features and Benefits</h3>
<ul>
<li><p><strong>Managed via Docker CLI</strong> (no manual path setup).</p>
</li>
<li><p><strong>Logical partitions</strong> created on the host.</p>
</li>
<li><p>Can be <strong>moved, backed up, and shared</strong> across containers.</p>
</li>
<li><p>Can use <strong>external storage</strong> like:</p>
<ul>
<li><p>AWS S3</p>
</li>
<li><p>NFS</p>
</li>
<li><p>External EC2 instance disks</p>
</li>
</ul>
</li>
<li><p>Supports <strong>high-performance storage</strong> (e.g., SSD/NVMe) for data-intensive apps.</p>
</li>
<li><p>Excellent for <strong>production environments</strong>.</p>
</li>
</ul>
<h3 id="heading-difference-from-bind-mounts">🧱 Difference from Bind Mounts</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Bind Mounts</td><td>Volumes</td></tr>
</thead>
<tbody>
<tr>
<td>Storage Location</td><td>Host-specified folder</td><td>Docker-managed folder</td></tr>
<tr>
<td>Portability</td><td>Tied to one host</td><td>Can be moved or external</td></tr>
<tr>
<td>Management</td><td>Manual</td><td>Via Docker CLI</td></tr>
<tr>
<td>Backup Support</td><td>Manual</td><td>Easy (can connect to remote storage)</td></tr>
<tr>
<td>Use Case</td><td>Local dev/test</td><td>Production workloads</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-5-volume-command-examples">5️⃣ 🧩 Volume Command Examples</h2>
<pre><code class="lang-plaintext"># Create a new volume
docker volume create myvol

# List all volumes
docker volume ls

# Inspect details
docker volume inspect myvol

# Delete a volume
docker volume rm myvol
</code></pre>
<hr />
<h2 id="heading-6-practical-example">6️⃣ 🧠 Practical Example</h2>
<h3 id="heading-step-1-create-a-volume">Step 1 — Create a Volume</h3>
<pre><code class="lang-plaintext">docker volume create myvol
</code></pre>
<h3 id="heading-step-2-run-container-using-volume">Step 2 — Run Container Using Volume</h3>
<pre><code class="lang-plaintext">docker run -d --mount source=myvol,target=/app nginx
</code></pre>
<ul>
<li><p>The container <code>nginx</code> now uses <code>/app</code> linked to <code>myvol</code>.</p>
</li>
<li><p>Any file written in <code>/app</code> is <strong>persisted</strong>.</p>
</li>
</ul>
<h3 id="heading-step-3-inspect-container-mount">Step 3 — Inspect Container Mount</h3>
<pre><code class="lang-plaintext">docker inspect &lt;container_id&gt;
</code></pre>
<p>You’ll see:</p>
<pre><code class="lang-plaintext">"Mounts": [
  {
    "Type": "volume",
    "Name": "myvol",
    "Source": "/var/lib/docker/volumes/myvol/_data",
    "Destination": "/app",
    "Mode": "rw"
  }
]
</code></pre>
<h3 id="heading-step-4-delete-the-volume">Step 4 — Delete the Volume</h3>
<p>You <strong>cannot</strong> delete a volume that’s in use:</p>
<pre><code class="lang-plaintext">docker volume rm myvol
# Error: volume is in use
</code></pre>
<p>So first stop and remove the container:</p>
<pre><code class="lang-plaintext">docker stop &lt;container_id&gt;
docker rm &lt;container_id&gt;
docker volume rm myvol
</code></pre>
<hr />
<h2 id="heading-7-v-vs-mount-option">7️⃣ 🔍 <code>-v</code> vs <code>--mount</code> Option</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Option</td><td>Meaning</td><td>Notes</td></tr>
</thead>
<tbody>
<tr>
<td><code>-v</code></td><td>Short syntax</td><td>Compact, older style</td></tr>
<tr>
<td><code>--mount</code></td><td>Long syntax</td><td>More <strong>verbose</strong>, easier to read &amp; understand</td></tr>
</tbody>
</table>
</div><p>Example:</p>
<pre><code class="lang-plaintext"># Short form
docker run -d -v myvol:/app nginx

# Verbose form
docker run -d --mount source=myvol,target=/app nginx
</code></pre>
<p>✅ Recommended: Use <code>--mount</code> for clarity in production or team projects.</p>
<hr />
<h2 id="heading-summary">🏁 Summary</h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Concept</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Bind Mounts</strong></td><td>Directly link container path ↔ host path. Simple but less flexible.</td></tr>
<tr>
<td><strong>Volumes</strong></td><td>Docker-managed storage. Persistent, portable, and powerful.</td></tr>
<tr>
<td><strong>Use Volumes When</strong></td><td>You need container data persistence, backup, or sharing between multiple containers.</td></tr>
<tr>
<td><strong>Key Commands</strong></td><td><code>docker volume create</code>, <code>docker volume ls</code>, <code>docker volume inspect</code>, <code>docker volume rm</code></td></tr>
<tr>
<td><strong>Best Practice</strong></td><td>Prefer Volumes over Bind Mounts for production-grade applications.</td></tr>
</tbody>
</table>
</div>]]></content:encoded></item><item><title><![CDATA[Day 16 - Multi-Stage Docker Builds & Distroless Images]]></title><description><![CDATA[🚀 Reduce Image Size by 800% and Improve Security

🎯 1. Objective
In this session, we’ll learn:

The concept of Multi-Stage Docker Builds

The concept of Distroless Images


Both concepts are closely related — using distroless images e...]]></description><link>https://blog.dineshcloud.in/day-16-multi-stage-docker-builds-and-distroless-images</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-16-multi-stage-docker-builds-and-distroless-images</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[containers]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:09:59 GMT</pubDate><content:encoded><![CDATA[<h3 id="heading-reduce-image-size-by-800-and-improve-security">🚀 Reduce Image Size by ~470× and Improve Security</h3>
<hr />
<h2 id="heading-1-objective">🎯 <strong>1. Objective</strong></h2>
<p>In this session, we’ll learn:</p>
<ol>
<li><p>The concept of <strong>Multi-Stage Docker Builds</strong></p>
</li>
<li><p>The concept of <strong>Distroless Images</strong></p>
</li>
</ol>
<p>Both concepts are <strong>closely related</strong> — using distroless images enhances the efficiency and security of multi-stage builds.</p>
<hr />
<h2 id="heading-2-the-problem-with-traditional-docker-builds">🧱 <strong>2. The Problem with Traditional Docker Builds</strong></h2>
<p>Let’s take a simple example:<br />You want to containerize a <strong>Python calculator application</strong>.</p>
<h3 id="heading-typical-steps">Typical Steps</h3>
<ol>
<li><p>Start from a base image (e.g., <code>ubuntu:latest</code>)</p>
</li>
<li><p>Set a working directory (optional)</p>
</li>
<li><p>Install dependencies:</p>
<ul>
<li><p>Python</p>
</li>
<li><p>pip</p>
</li>
<li><p>Required Python modules/packages</p>
</li>
</ul>
</li>
<li><p>Copy source code into the image</p>
</li>
<li><p>Build and run the application (via <code>CMD</code> or <code>ENTRYPOINT</code>)</p>
</li>
</ol>
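<p>The steps above correspond to a single-stage Dockerfile like this (a sketch; the application file name is illustrative):</p>
<pre><code class="lang-plaintext">FROM ubuntu:latest
WORKDIR /app
RUN apt-get update &amp;&amp; apt-get install -y python3 python3-pip
COPY . /app
CMD ["python3", "calculator.py"]
</code></pre>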
<h3 id="heading-problem">⚠️ Problem</h3>
<p>Although this Dockerfile works, it’s <strong>inefficient</strong>:</p>
<ul>
<li><p>The image includes the <strong>entire Ubuntu OS</strong> plus unnecessary packages (<code>apt</code>, <code>curl</code>, etc.)</p>
</li>
<li><p>These packages are <strong>only needed during build</strong>, not runtime</p>
</li>
<li><p>The final image becomes <strong>huge</strong> and <strong>slow to pull/run</strong></p>
</li>
</ul>
<hr />
<h2 id="heading-3-build-vs-run-phases">⚙️ <strong>3. Build vs. Run Phases</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Stage</td><td>Purpose</td><td>Example</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Build Stage</strong></td><td>Compiles or prepares the app</td><td>Installs compilers, dependencies</td></tr>
<tr>
<td><strong>Run Stage</strong></td><td>Executes the final app</td><td>Needs only runtime environment</td></tr>
</tbody>
</table>
</div><p>For instance:</p>
<ul>
<li><p>A <strong>Java</strong> app needs <strong>JDK</strong> to build, but only <strong>JRE</strong> to run.</p>
</li>
<li><p>A <strong>Python</strong> app needs <strong>pip</strong> and libraries to build, but only <strong>Python runtime</strong> to run.</p>
</li>
</ul>
<p>So, it’s wasteful to keep all build-time tools in the final image.</p>
<hr />
<h2 id="heading-4-dockers-solution-multi-stage-builds">🧩 <strong>4. Docker’s Solution — Multi-Stage Builds</strong></h2>
<p>To solve this problem, Docker introduced <strong>multi-stage builds</strong>.</p>
<h3 id="heading-concept">💡 Concept</h3>
<p>You can <strong>split your Dockerfile into multiple stages</strong>, using <strong>multiple</strong> <code>FROM</code> statements in one file.</p>
<p>Each stage:</p>
<ul>
<li><p>Builds a specific part of your application</p>
</li>
<li><p>Can copy artifacts (like binaries) to the next stage</p>
</li>
<li><p>Keeps the <strong>final image minimal</strong></p>
</li>
</ul>
<hr />
<h2 id="heading-5-example-two-stage-dockerfile">🏗️ <strong>5. Example: Two-Stage Dockerfile</strong></h2>
<h3 id="heading-stage-1-build-stage">Stage 1 – Build Stage</h3>
<pre><code class="lang-plaintext">FROM ubuntu AS build
RUN apt-get update &amp;&amp; apt-get install -y python3 python3-pip
COPY . /app
WORKDIR /app
RUN python3 setup.py build
</code></pre>
<h3 id="heading-stage-2-final-stage">Stage 2 – Final Stage</h3>
<pre><code class="lang-plaintext">FROM python:3.10-slim
COPY --from=build /app/dist /app
CMD ["python3", "/app/main.py"]
</code></pre>
<p>✅ <strong>Result:</strong></p>
<ul>
<li><p>The build tools (like <code>apt</code>, compilers, pip caches) are <strong>excluded</strong></p>
</li>
<li><p>Only the necessary runtime (Python + your app) remains</p>
</li>
<li><p>Image size drastically <strong>reduces</strong></p>
</li>
</ul>
<hr />
<h2 id="heading-6-example-multi-stage-build-for-complex-app">🧮 <strong>6. Example: Multi-Stage Build for Complex App</strong></h2>
<p>Imagine a <strong>3-tier application</strong>:</p>
<ul>
<li><p>Frontend (React)</p>
</li>
<li><p>Backend (Java Spring Boot)</p>
</li>
<li><p>Database (MySQL)</p>
</li>
</ul>
<h3 id="heading-traditional-method">Traditional Method:</h3>
<ul>
<li>All dependencies (Node.js, JDK, MySQL client) installed in one image → ~1 GB+.</li>
</ul>
<h3 id="heading-with-multi-stage-build">With Multi-Stage Build:</h3>
<ul>
<li><p>Each part built in separate stages:</p>
<ul>
<li><p>Stage 1 → Frontend build (React)</p>
</li>
<li><p>Stage 2 → Backend build (Java)</p>
</li>
<li><p>Stage 3 → Final stage (only Java runtime + built artifacts)</p>
</li>
</ul>
</li>
<li><p>Final image = ~150 MB</p>
</li>
</ul>
<p><strong>Result:</strong><br />Image size reduced by ~85–90% with cleaner, modular build process.</p>
<hr />
<h2 id="heading-7-image-size-comparison-example">📉 <strong>7. Image Size Comparison Example</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Type</td><td>Description</td><td>Approx. Size</td></tr>
</thead>
<tbody>
<tr>
<td>Traditional single-stage</td><td><code>ubuntu + go + source + runtime</code></td><td>861 MB</td></tr>
<tr>
<td>Multi-stage + Distroless</td><td>Only runtime + binary</td><td><strong>1.83 MB</strong></td></tr>
<tr>
<td>Reduction</td><td>–</td><td>~470× smaller 🚀</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-8-multi-stage-docker-syntax-example">⚙️ <strong>8. Multi-Stage Docker Syntax Example</strong></h2>
<pre><code class="lang-plaintext"># Stage 1: Build
FROM ubuntu AS build
RUN apt-get update &amp;&amp; apt-get install -y golang
COPY . /src
WORKDIR /src
# CGO_ENABLED=0 produces a statically linked binary that can run on scratch
RUN CGO_ENABLED=0 go build -o calculator calculator.go

# Stage 2: Final (Distroless)
FROM scratch
COPY --from=build /src/calculator /
ENTRYPOINT ["/calculator"]
</code></pre>
<h3 id="heading-explanation">Explanation:</h3>
<ul>
<li><p><code>AS build</code> → Creates a named stage</p>
</li>
<li><p><code>COPY --from=build</code> → Copies artifact from build stage</p>
</li>
<li><p><code>FROM scratch</code> → Uses a completely empty base image (even more minimal than a distroless base)</p>
</li>
</ul>
<hr />
<h2 id="heading-9-what-are-distroless-images">🪶 <strong>9. What Are Distroless Images?</strong></h2>
<h3 id="heading-definition">Definition</h3>
<p>A <strong>Distroless Image</strong> is a <strong>very minimalistic base image</strong> that includes:</p>
<ul>
<li><p>Only <strong>runtime binaries</strong> (e.g., Python runtime, Java runtime)</p>
</li>
<li><p><strong>No package manager</strong>, <strong>no shell</strong>, <strong>no OS utilities</strong></p>
</li>
</ul>
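<p>As a hedged sketch of what this looks like in practice (the <code>app.py</code> and <code>requirements.txt</code> names are illustrative assumptions):</p>
<pre><code class="lang-plaintext"># Stage 1: Install dependencies using a full Python image
FROM python:3.11 AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --target=/app/deps -r requirements.txt
COPY app.py .

# Stage 2: Copy only the app and its deps onto the distroless runtime
FROM gcr.io/distroless/python3
WORKDIR /app
COPY --from=build /app /app
ENV PYTHONPATH=/app/deps
CMD ["app.py"]
</code></pre>
<p>The distroless Python image runs the Python interpreter as its entrypoint, so the command is just the script name.</p>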
<h3 id="heading-examples">🔍 Examples</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Language</td><td>Distroless Image Example</td></tr>
</thead>
<tbody>
<tr>
<td>Java</td><td><code>gcr.io/distroless/java17</code></td></tr>
<tr>
<td>Python</td><td><code>gcr.io/distroless/python3</code></td></tr>
<tr>
<td>Node.js</td><td><code>gcr.io/distroless/nodejs</code></td></tr>
<tr>
<td>Go</td><td><code>scratch</code> (empty base, needs no runtime)</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-10-why-distroless">🧰 <strong>10. Why Distroless?</strong></h2>
<h3 id="heading-advantages">✅ Advantages</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Benefit</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Smaller Image Size</strong></td><td>Removes all unnecessary OS layers</td></tr>
<tr>
<td><strong>Higher Security</strong></td><td>No package manager, shell, or vulnerable binaries</td></tr>
<tr>
<td><strong>Faster Deployment</strong></td><td>Lightweight → faster pull/run</td></tr>
<tr>
<td><strong>Best with Multi-Stage Builds</strong></td><td>Build heavy → final image minimal</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-11-security-benefits">🔒 <strong>11. Security Benefits</strong></h2>
<ul>
<li><p>Traditional base images (like Ubuntu, CentOS) come with many <strong>system packages</strong> → higher <strong>attack surface</strong>.</p>
</li>
<li><p>Distroless images have <strong>no shell</strong>, <strong>no apt</strong>, <strong>no curl</strong>, etc.</p>
</li>
<li><p>Attackers can’t exploit tools that simply aren’t there.</p>
</li>
<li><p>Greatly reduces <strong>CVE (Common Vulnerabilities and Exposures)</strong> count.</p>
</li>
</ul>
<p>Example:</p>
<blockquote>
<p>In interviews, you can say:<br />“We moved from Ubuntu-based containers to Python Distroless images, eliminating unnecessary system binaries and greatly reducing vulnerability exposure.”</p>
</blockquote>
<hr />
<h2 id="heading-12-special-case-go-golang-applications">🦴 <strong>12. Special Case: Go (Golang) Applications</strong></h2>
<ul>
<li><p>Go produces <strong>statically compiled binaries</strong>.</p>
</li>
<li><p>Doesn’t even need a runtime to execute.</p>
</li>
<li><p>Works perfectly with the <code>scratch</code> base image.</p>
</li>
<li><p>Final image size can be as small as <strong>1–2 MB</strong>.</p>
</li>
</ul>
<p>Hence, Go + Multi-Stage + Distroless = 💯 perfect combination.</p>
<hr />
<h2 id="heading-13-finding-distroless-images">🔍 <strong>13. Finding Distroless Images</strong></h2>
<p>Visit the official <strong>Google Distroless GitHub repository</strong>:<br />👉 <a target="_blank" href="https://github.com/GoogleContainerTools/distroless">https://github.com/GoogleContainerTools/distroless</a></p>
<p>There you’ll find folders for:</p>
<ul>
<li><code>base</code>, <code>cc</code>, <code>java</code>, <code>python3</code>, <code>nodejs</code>, etc.<br />  Each folder’s <strong>README.md</strong> lists the image name (e.g., <code>gcr.io/distroless/java17</code>).</li>
</ul>
<hr />
<h2 id="heading-14-key-interview-points">🧾 <strong>14. Key Interview Points</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Question</td><td>Short Answer</td></tr>
</thead>
<tbody>
<tr>
<td>What is a Multi-Stage Docker Build?</td><td>A way to split a Dockerfile into multiple build stages and keep only the final runtime stage.</td></tr>
<tr>
<td>What are Distroless Images?</td><td>Minimal base images without OS or shell, containing only runtime dependencies.</td></tr>
<tr>
<td>Benefits of Multi-Stage Builds</td><td>Smaller, faster, modular images.</td></tr>
<tr>
<td>Benefits of Distroless Images</td><td>Security, minimalism, reduced vulnerabilities.</td></tr>
<tr>
<td>How many stages can a multi-stage build have?</td><td>Unlimited, but only one final stage is used to run the container.</td></tr>
<tr>
<td>Which base image is the most minimal?</td><td><code>scratch</code> (completely empty).</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-15-summary">🧠 <strong>15. Summary</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Concept</td><td>Key Idea</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Traditional Dockerfile</strong></td><td>Installs build + runtime tools in one image → large &amp; insecure</td></tr>
<tr>
<td><strong>Multi-Stage Docker Build</strong></td><td>Separates build &amp; runtime stages → smaller &amp; efficient</td></tr>
<tr>
<td><strong>Distroless Image</strong></td><td>Removes OS layer entirely → minimal, secure runtime</td></tr>
<tr>
<td><strong>Result</strong></td><td>Up to <strong>~470× smaller</strong>, <strong>highly secure</strong>, <strong>production-ready</strong> images</td></tr>
</tbody>
</table>
</div><hr />
<p>✅ <strong>Final Takeaway:</strong></p>
<blockquote>
<p>Using <strong>Multi-Stage Docker Builds</strong> + <strong>Distroless Images</strong> gives you:</p>
<ul>
<li><p>Massive reduction in image size</p>
</li>
<li><p>Improved container startup speed</p>
</li>
<li><p>Drastically fewer security vulnerabilities</p>
</li>
<li><p>Best practice for all modern production-grade container builds</p>
</li>
</ul>
</blockquote>
]]></content:encoded></item><item><title><![CDATA[Day 15 - Introduction to Containers and Docker]]></title><description><![CDATA[🎯 1. Objective
Before working on real DevOps projects or advanced Docker concepts, it’s essential to first understand:

What containers are

How they differ from virtual machines (VMs)

What Docker and Buildah do in the container world


This sessio...]]></description><link>https://blog.dineshcloud.in/day-15-introduction-to-containers-and-docker</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-15-introduction-to-containers-and-docker</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[Docker]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:08:50 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-1-objective">🎯 <strong>1. Objective</strong></h2>
<p>Before working on real DevOps projects or advanced Docker concepts, it’s essential to first understand:</p>
<ul>
<li><p>What <strong>containers</strong> are</p>
</li>
<li><p>How they differ from <strong>virtual machines (VMs)</strong></p>
</li>
<li><p>What <strong>Docker</strong> and <strong>Buildah</strong> do in the container world</p>
</li>
</ul>
<p>This session focuses purely on <strong>concepts</strong>, not hands-on commands.</p>
<hr />
<h2 id="heading-2-background-from-physical-servers-virtual-machines-containers">💡 <strong>2. Background: From Physical Servers → Virtual Machines → Containers</strong></h2>
<h3 id="heading-physical-servers">🖥️ <strong>Physical Servers</strong></h3>
<ul>
<li><p>Earlier, organizations ran one application per physical server.</p>
</li>
<li><p>Hardware resources (CPU, RAM, storage) were often <strong>underutilized</strong>.</p>
</li>
<li><p>Maintaining thousands of physical servers was <strong>expensive</strong> and inefficient.</p>
</li>
</ul>
<h3 id="heading-virtualization">🧰 <strong>Virtualization</strong></h3>
<p>To improve resource utilization, the concept of <strong>virtualization</strong> was introduced using a <strong>Hypervisor</strong>.</p>
<p><strong>Hypervisor:</strong><br />A software layer that lets you create <strong>multiple virtual machines (VMs)</strong> on a single physical server.</p>
<p>Each <strong>VM</strong> has:</p>
<ul>
<li><p>Its own <strong>Operating System (OS)</strong></p>
</li>
<li><p>Its own <strong>applications and dependencies</strong></p>
</li>
<li><p>Logical isolation from other VMs</p>
</li>
</ul>
<p><strong>Benefits:</strong></p>
<ul>
<li><p>Better hardware utilization</p>
</li>
<li><p>Isolation between applications</p>
</li>
<li><p>Easier deployment than physical servers</p>
</li>
</ul>
<p><strong>Drawback:</strong></p>
<ul>
<li><p>Each VM still requires a <strong>full OS</strong>, consuming large amounts of CPU, RAM, and disk.</p>
</li>
<li><p>Even with virtualization, many VMs remain <strong>underutilized</strong> most of the time.</p>
</li>
</ul>
<hr />
<h2 id="heading-3-the-problem-with-virtual-machines">⚙️ <strong>3. The Problem with Virtual Machines</strong></h2>
<p>Let’s say:</p>
<ul>
<li><p>Physical server = 100 GB RAM, 100 CPUs</p>
</li>
<li><p>You create 4 VMs (25 GB RAM each)</p>
</li>
</ul>
<p>Even if one VM’s application only needs 10 GB RAM, the rest (15 GB RAM) stays <strong>idle</strong>.</p>
<p>At scale — say, 1 million VMs — that wasted capacity means <strong>huge financial loss</strong>.</p>
<p>Hence, a more <strong>lightweight</strong> and <strong>resource-efficient</strong> approach was needed.</p>
<hr />
<h2 id="heading-4-solution-containers">🧩 <strong>4. Solution: Containers</strong></h2>
<p>Containers were introduced to solve these inefficiencies.</p>
<p><strong>Definition:</strong></p>
<blockquote>
<p>A container is a <strong>lightweight, standalone package</strong> that includes everything needed to run a piece of software — code, libraries, dependencies, and minimal OS components.</p>
</blockquote>
<h3 id="heading-how-containers-run">🏗️ <strong>How Containers Run</strong></h3>
<p>Containers can be created:</p>
<ol>
<li><p><strong>Directly on a physical server</strong>, or</p>
</li>
<li><p><strong>On top of a virtual machine</strong></p>
</li>
</ol>
<p>In both cases, you install a <strong>containerization platform</strong> (like <strong>Docker</strong>, <strong>Podman</strong>, or <strong>Buildah</strong>) over the host OS.</p>
<hr />
<h2 id="heading-5-containers-vs-virtual-machines">⚖️ <strong>5. Containers vs Virtual Machines</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Feature</td><td>Virtual Machine</td><td>Container</td></tr>
</thead>
<tbody>
<tr>
<td><strong>OS</strong></td><td>Full guest OS per VM</td><td>Shares host OS kernel</td></tr>
<tr>
<td><strong>Size</strong></td><td>Heavy (GBs)</td><td>Lightweight (MBs)</td></tr>
<tr>
<td><strong>Startup Time</strong></td><td>Minutes</td><td>Seconds</td></tr>
<tr>
<td><strong>Isolation</strong></td><td>Full hardware-level</td><td>Process-level (less secure)</td></tr>
<tr>
<td><strong>Resource Efficiency</strong></td><td>Moderate</td><td>Very high</td></tr>
<tr>
<td><strong>Use Case</strong></td><td>Legacy apps, full isolation</td><td>Microservices, cloud-native apps</td></tr>
</tbody>
</table>
</div><p>📌 <strong>In short:</strong></p>
<blockquote>
<p>Virtual machines isolate hardware.<br />Containers isolate processes.</p>
</blockquote>
<hr />
<h2 id="heading-6-why-containers-are-lightweight">🧠 <strong>6. Why Containers Are Lightweight</strong></h2>
<ul>
<li><p>Containers <strong>don’t include a full OS</strong> — only minimal system libraries and dependencies.</p>
</li>
<li><p>They <strong>share</strong> the host system’s kernel.</p>
</li>
<li><p>Only necessary components are bundled → drastically smaller image sizes.</p>
</li>
</ul>
<p>Example:</p>
<ul>
<li><p>VM image: ~2–3 GB</p>
</li>
<li><p>Container image: ~100–500 MB</p>
</li>
</ul>
<p>This makes containers:</p>
<ul>
<li><p>Faster to build</p>
</li>
<li><p>Easier to transfer (“ship”)</p>
</li>
<li><p>Quicker to deploy</p>
</li>
</ul>
<hr />
<h2 id="heading-7-whats-inside-a-container">📦 <strong>7. What’s Inside a Container</strong></h2>
<p>A container image includes:</p>
<ol>
<li><p><strong>Application code</strong></p>
</li>
<li><p><strong>Application dependencies</strong> (libraries, frameworks)</p>
</li>
<li><p><strong>System dependencies</strong> (minimal OS libraries)</p>
</li>
</ol>
<p>If additional libraries are needed (e.g., Python, Node.js, Java), they’re added via <strong>base images</strong> (e.g., <code>python:3.10</code>, <code>node:18</code>, <code>openjdk:17</code>).</p>
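<p>A minimal Dockerfile illustrating these three pieces (the file names are assumptions for illustration):</p>
<pre><code class="lang-plaintext"># System libraries + language runtime come from the base image
FROM python:3.10

WORKDIR /app

# Application dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Application code
COPY . .
CMD ["python", "app.py"]
</code></pre>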
<hr />
<h2 id="heading-8-how-docker-works">⚙️ <strong>8. How Docker Works</strong></h2>
<p><strong>Docker</strong> is a <strong>containerization platform</strong> that simplifies container creation and management.</p>
<h3 id="heading-docker-lifecycle">🔄 <strong>Docker Lifecycle</strong></h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Stage</td><td>Description</td><td>Command</td></tr>
</thead>
<tbody>
<tr>
<td>1️⃣ Write</td><td>Define image in a <strong>Dockerfile</strong></td><td><code>Dockerfile</code></td></tr>
<tr>
<td>2️⃣ Build</td><td>Convert Dockerfile → Image</td><td><code>docker build</code></td></tr>
<tr>
<td>3️⃣ Run</td><td>Launch container from image</td><td><code>docker run</code></td></tr>
</tbody>
</table>
</div><p>Behind the scenes, <strong>Docker Engine</strong> executes these commands and handles:</p>
<ul>
<li><p>Layered image building</p>
</li>
<li><p>Container lifecycle management</p>
</li>
<li><p>Resource sharing with host OS</p>
</li>
</ul>
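<p>The whole lifecycle can be exercised with two commands (the image name and port are illustrative):</p>
<pre><code class="lang-plaintext"># Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Run a container from that image in the background
docker run -d --name myapp -p 8080:8080 myapp:1.0
</code></pre>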
<hr />
<h2 id="heading-9-drawbacks-of-docker">⚠️ <strong>9. Drawbacks of Docker</strong></h2>
<p>While Docker made containers popular, it has a few limitations:</p>
<h3 id="heading-single-point-of-failure-spof">🧱 <strong>Single Point of Failure (SPOF)</strong></h3>
<ul>
<li><p>All containers depend on the <strong>Docker Engine daemon</strong>.</p>
</li>
<li><p>If the Docker daemon stops, <strong>all containers stop</strong>.</p>
</li>
</ul>
<h3 id="heading-layer-complexity">🧩 <strong>Layer Complexity</strong></h3>
<ul>
<li><p>Docker builds images in <strong>layers</strong>.</p>
</li>
<li><p>Too many layers can slow builds and consume disk space.</p>
</li>
</ul>
<hr />
<h2 id="heading-10-alternative-buildah">🔧 <strong>10. Alternative: Buildah</strong></h2>
<p>To overcome Docker’s limitations, <strong>Buildah</strong> was introduced.</p>
<h3 id="heading-what-is-buildah">🏗️ <strong>What Is Buildah?</strong></h3>
<blockquote>
<p>A container image–building tool that doesn’t depend on a daemon like Docker Engine.</p>
</blockquote>
<h3 id="heading-benefits-of-buildah">🔍 <strong>Benefits of Buildah</strong></h3>
<ul>
<li><p>No <strong>single point of failure</strong></p>
</li>
<li><p>No background daemon</p>
</li>
<li><p>Works well with <strong>Podman</strong> and <strong>Skopeo</strong></p>
</li>
<li><p>Compatible with <strong>Docker images</strong> (OCI compliant)</p>
</li>
<li><p>Simpler scripting via <strong>shell commands</strong></p>
</li>
</ul>
<p>Unlike Docker (which relies on a <code>Dockerfile</code>), Buildah can create images directly from <strong>shell commands</strong>, and it can also build from existing Dockerfiles.</p>
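<p>A hedged sketch of a Buildah shell-based build (the image and path names are illustrative):</p>
<pre><code class="lang-plaintext"># Start a working container from a base image
ctr=$(buildah from ubuntu)

# Run commands inside it, just like RUN in a Dockerfile
buildah run "$ctr" -- apt-get update
buildah run "$ctr" -- apt-get install -y python3

# Copy code in and set the startup command
buildah copy "$ctr" ./app /app
buildah config --entrypoint '["python3", "/app/main.py"]' "$ctr"

# Commit the working container as an OCI image
buildah commit "$ctr" my-python-app
</code></pre>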
<hr />
<h2 id="heading-11-real-world-architecture">🧭 <strong>11. Real-World Architecture</strong></h2>
<h3 id="heading-model-1-containers-on-physical-server"><strong>Model 1: Containers on Physical Server</strong></h3>
<pre><code class="lang-plaintext">Physical Server
 └── OS
     └── Docker Engine / Container Platform
         ├── Container 1
         ├── Container 2
         └── Container 3
</code></pre>
<h3 id="heading-model-2-containers-on-virtual-machine-most-common"><strong>Model 2: Containers on Virtual Machine (Most Common)</strong></h3>
<pre><code class="lang-plaintext">Physical Server (Cloud / Data Center)
 └── Virtual Machine
     └── OS
         └── Docker Engine / Podman / Buildah
             ├── Container 1
             ├── Container 2
             └── Container 3
</code></pre>
<p>📌 Most organizations today use <strong>Model 2</strong> because:</p>
<ul>
<li><p>They rely on <strong>cloud providers</strong> (AWS, Azure, GCP)</p>
</li>
<li><p>They avoid maintaining physical data centers</p>
</li>
</ul>
<hr />
<h2 id="heading-12-why-docker-became-so-popular">🚀 <strong>12. Why Docker Became So Popular</strong></h2>
<ul>
<li><p>Simple to learn and use (<code>docker build</code>, <code>docker run</code>)</p>
</li>
<li><p>Strong community support</p>
</li>
<li><p>Easy image sharing through <strong>Docker Hub</strong></p>
</li>
<li><p>Integrated with orchestration tools like <strong>Kubernetes</strong></p>
</li>
<li><p>Lightweight and portable — “Build once, run anywhere”</p>
</li>
</ul>
<hr />
<h2 id="heading-13-summary-table">🧾 <strong>13. Summary Table</strong></h2>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Concept</td><td>Description</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Container</strong></td><td>Lightweight, isolated runtime for applications</td></tr>
<tr>
<td><strong>Docker</strong></td><td>Tool/platform to build and manage containers</td></tr>
<tr>
<td><strong>Dockerfile</strong></td><td>Script to define how to build an image</td></tr>
<tr>
<td><strong>Image</strong></td><td>Read-only template containing code + dependencies</td></tr>
<tr>
<td><strong>Container</strong></td><td>Running instance of an image</td></tr>
<tr>
<td><strong>Buildah</strong></td><td>Daemonless alternative to build container images</td></tr>
<tr>
<td><strong>Base Image</strong></td><td>Starting OS or runtime layer (e.g., Ubuntu, Alpine)</td></tr>
</tbody>
</table>
</div><hr />
<h2 id="heading-14-final-summary">✅ <strong>14. Final Summary</strong></h2>
<ul>
<li><p>Containers evolved to improve <strong>resource efficiency</strong> over VMs.</p>
</li>
<li><p>Docker popularized containers through <strong>ease of use</strong> and <strong>portability</strong>.</p>
</li>
<li><p>Containers are <strong>lightweight</strong>, <strong>fast</strong>, and <strong>ideal for microservices</strong>.</p>
</li>
<li><p>Docker’s limitations (single point of failure, layer bloat) led to tools like <strong>Buildah</strong> and <strong>Podman</strong>.</p>
</li>
<li><p>Today, containerization is the <strong>foundation of modern DevOps</strong> and <strong>cloud-native computing</strong>.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Day 14 - CI/CD Interview Questions]]></title><description><![CDATA[1. What is the CI/CD process in your organization?
This is a common interview question to understand your real experience.You should answer by describing the tools your organization uses.
Example structure explained in your content:

Assume your comp...]]></description><link>https://blog.dineshcloud.in/day-14-cicd-interview-questions</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-14-cicd-interview-questions</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[ci-cd]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:07:12 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-1-what-is-the-cicd-process-in-your-organization"><strong>1. What is the CI/CD process in your organization?</strong></h2>
<p>This is a common interview question to understand your real experience.<br />You should answer by describing the tools your organization uses.</p>
<p>An example answer structure:</p>
<ul>
<li><p>Assume your company uses <strong>Java</strong>.</p>
</li>
<li><p>Jenkins is the main orchestrator.</p>
</li>
<li><p>Different tools are connected with Jenkins: Maven, Sonar, AppScan, Argo CD, Kubernetes, Helm, etc.</p>
</li>
<li><p>Explain how a developer commits code to GitHub.</p>
</li>
<li><p>Jenkins pipeline automatically triggers and pulls the code.</p>
</li>
<li><p>Jenkins builds the code (e.g., with Maven).</p>
</li>
<li><p>It performs code quality checks or security checks (e.g., Sonar, AppScan).</p>
</li>
<li><p>Then the application is promoted to Dev using Argo CD and Kubernetes.</p>
</li>
<li><p>Argo CD watches the Git repository and deploys new versions using updated image tags and Helm charts.</p>
</li>
<li><p>If Kubernetes is difficult, you can say you deploy to EC2 instead.</p>
</li>
</ul>
<p>The content also refers to a Jenkins pipeline example in the GitHub repository that updates Kubernetes manifests and lets Argo CD deploy based on GitOps.</p>
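<p>A hedged sketch of such a declarative Jenkinsfile (the tool names, image tags, and file paths are illustrative, not the exact pipeline from the repository):</p>
<pre><code class="lang-plaintext">pipeline {
  agent any
  stages {
    stage('Checkout') {
      steps { checkout scm }
    }
    stage('Build') {
      steps { sh 'mvn clean package' }
    }
    stage('Code Quality') {
      // Assumes a SonarQube server is configured for the job
      steps { sh 'mvn sonar:sonar' }
    }
    stage('Build and Push Image') {
      steps { sh 'docker build -t myrepo/app:${BUILD_NUMBER} . &amp;&amp; docker push myrepo/app:${BUILD_NUMBER}' }
    }
    stage('Update Manifests') {
      // Bump the image tag in the Git repo that Argo CD watches
      steps { sh 'sed -i "s/tag: .*/tag: ${BUILD_NUMBER}/" helm/values.yaml' }
    }
  }
}
</code></pre>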
<hr />
<h2 id="heading-2-what-are-the-different-ways-to-trigger-jenkins-pipelines"><strong>2. What are the different ways to trigger Jenkins pipelines?</strong></h2>
<p>Since Jenkins and GitHub are separate tools, Jenkins must know when new code is pushed.</p>
<p>There are <strong>three methods</strong>:</p>
<ul>
<li><p>Poll SCM</p>
</li>
<li><p>Build triggers (Cron)</p>
</li>
<li><p>Webhooks</p>
</li>
</ul>
<p>Explanation:</p>
<ul>
<li><p>Polling and Cron jobs are inefficient because Jenkins repeatedly checks GitHub, which consumes resources and can have time delays.</p>
</li>
<li><p>Webhooks are the best method:</p>
<ul>
<li><p>When a developer commits code, GitHub sends a JSON payload to Jenkins.</p>
</li>
<li><p>GitHub notifies Jenkins through an API.</p>
</li>
<li><p>Jenkins receives the payload and triggers the pipeline.</p>
</li>
</ul>
</li>
</ul>
<hr />
<h2 id="heading-3-how-to-back-up-jenkins"><strong>3. How to back up Jenkins?</strong></h2>
<p>For Jenkins administration roles, backup is important.</p>
<p>Summary:</p>
<ul>
<li><p>The main folder to back up is <strong>.jenkins</strong> from the Jenkins home directory.</p>
</li>
<li><p>This folder contains jobs, logs, and configuration.</p>
</li>
<li><p>Use tools like <strong>rsync</strong> to sync backups to storage (EBS or other).</p>
</li>
<li><p>Some large organizations store Jenkins data in external databases, so those databases must be backed up as well.</p>
</li>
<li><p>Plugins or user content may need separate backup steps.</p>
</li>
</ul>
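<p>For example (paths are illustrative; the Jenkins home directory is often <code>/var/lib/jenkins</code> or <code>~/.jenkins</code> depending on how Jenkins was installed):</p>
<pre><code class="lang-plaintext"># Sync the Jenkins home directory to a backup host
rsync -avz --delete /var/lib/jenkins/ backup-server:/backups/jenkins/

# Or archive it locally before copying to EBS/S3
tar -czf jenkins-backup-$(date +%F).tar.gz -C /var/lib/jenkins .
</code></pre>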
<hr />
<h2 id="heading-4-how-do-you-store-or-handle-secrets-in-jenkins"><strong>4. How do you store or handle secrets in Jenkins?</strong></h2>
<p>Secrets must never appear in logs or UI.</p>
<p>Jenkins provides credential plugins, but the recommended explanation:</p>
<ul>
<li><p>Use external secret managers such as <strong>HashiCorp Vault</strong>.</p>
</li>
<li><p>Jenkins integrates with Vault.</p>
</li>
<li><p>Pipelines fetch secrets from Vault during runtime.</p>
</li>
</ul>
<hr />
<h2 id="heading-5-what-is-the-latest-version-of-jenkins"><strong>5. What is the latest version of Jenkins?</strong></h2>
<p>Interviewers ask this to check whether you actually use Jenkins regularly.<br />Not knowing the current version can give a bad impression.</p>
<p>If you claim to be a CI/CD engineer using Jenkins, you must stay updated.</p>
<hr />
<h2 id="heading-6-what-are-shared-modules-in-jenkins"><strong>6. What are shared modules in Jenkins?</strong></h2>
<p>Shared modules/shared libraries mean:</p>
<ul>
<li><p>A DevOps engineer writes a pipeline once.</p>
</li>
<li><p>Many development teams reuse that pipeline.</p>
</li>
<li><p>This avoids each team rewriting the same logic.</p>
</li>
</ul>
<p>It’s a reusable pipeline approach.</p>
<hr />
<h2 id="heading-7-can-jenkins-build-applications-using-multiple-programming-languages-with-different-agents"><strong>7. Can Jenkins build applications using multiple programming languages with different agents?</strong></h2>
<p>Yes.</p>
<p>Explanation:</p>
<ul>
<li><p>Example: frontend (Node.js), backend (Java), microservice (Python).</p>
</li>
<li><p>Jenkins can run multiple stages using <strong>different Docker agents</strong>.</p>
</li>
<li><p>Each stage uses a separate Docker container with required dependencies.</p>
</li>
<li><p>Containers are removed after pipeline execution, saving resources.</p>
</li>
</ul>
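<p>A hedged sketch of per-stage Docker agents in a declarative pipeline (the images and commands are illustrative):</p>
<pre><code class="lang-plaintext">pipeline {
  agent none
  stages {
    stage('Frontend') {
      agent { docker { image 'node:18' } }
      steps { sh 'npm install &amp;&amp; npm test' }
    }
    stage('Backend') {
      agent { docker { image 'maven:3.9-eclipse-temurin-17' } }
      steps { sh 'mvn test' }
    }
    stage('Microservice') {
      agent { docker { image 'python:3.11' } }
      steps { sh 'pip install -r requirements.txt &amp;&amp; pytest' }
    }
  }
}
</code></pre>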
<hr />
<h2 id="heading-8-how-to-set-up-auto-scaling-groups-with-jenkins"><strong>8. How to set up Auto Scaling Groups with Jenkins?</strong></h2>
<p>Some companies need multiple worker nodes.</p>
<p>Explanation:</p>
<ul>
<li><p>Jenkins master runs on one EC2 instance.</p>
</li>
<li><p>Many teams may need many worker nodes.</p>
</li>
<li><p>Load can increase during certain periods.</p>
</li>
<li><p>Auto Scaling Groups in AWS automatically add/remove Jenkins worker nodes.</p>
</li>
<li><p>This prevents unused nodes from wasting costs.</p>
</li>
</ul>
<hr />
<h2 id="heading-9-how-to-add-a-new-worker-node-in-jenkins"><strong>9. How to add a new worker node in Jenkins?</strong></h2>
<p>Summary:</p>
<ul>
<li><p>Go to <strong>Manage Jenkins → Manage Nodes and Clouds</strong>.</p>
</li>
<li><p>Add a new node.</p>
</li>
<li><p>Provide IP address, SSH keys, authentication.</p>
</li>
<li><p>Launch the node to make it active.</p>
</li>
</ul>
<hr />
<h2 id="heading-10-how-to-install-plugins-in-jenkins"><strong>10. How to install plugins in Jenkins?</strong></h2>
<p>Two ways:</p>
<h3 id="heading-ui-method"><strong>UI Method</strong></h3>
<ul>
<li><p>Manage Jenkins → Manage Plugins</p>
</li>
<li><p>Search and install plugins</p>
</li>
</ul>
<h3 id="heading-cli-method"><strong>CLI Method</strong></h3>
<ul>
<li><p>Use a Java command to install plugins directly</p>
</li>
<li><p>Useful for automation or when installing many plugins at once</p>
</li>
<li><p>Some plugins must be manually uploaded if not available in the plugin catalog.</p>
</li>
</ul>
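<p>The CLI method looks roughly like this (the URL and credentials are illustrative):</p>
<pre><code class="lang-plaintext"># Download the CLI jar from your Jenkins instance
wget http://localhost:8080/jnlpJars/jenkins-cli.jar

# Install a plugin (e.g., the git plugin) and restart Jenkins
java -jar jenkins-cli.jar -s http://localhost:8080/ -auth admin:API_TOKEN install-plugin git -restart
</code></pre>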
<hr />
<h2 id="heading-11-what-is-jnlp-and-why-is-it-used"><strong>11. What is JNLP and why is it used?</strong></h2>
<p>Explanation:</p>
<ul>
<li><p>JNLP is a way for Jenkins agents (workers) to communicate with Jenkins master.</p>
</li>
<li><p>You download a JNLP JAR and run it on the agent.</p>
</li>
<li><p>It allows remote launch and communication.</p>
</li>
<li><p>The agent receives build tasks from the master.</p>
</li>
</ul>
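<p>Launching an agent typically looks like this (the host name, node name, and secret are placeholders taken from the node's page in the Jenkins UI):</p>
<pre><code class="lang-plaintext"># Download the agent jar from the Jenkins master
curl -O http://jenkins-master:8080/jnlpJars/agent.jar

# Launch the agent with the URL and secret shown on the node's configuration page
java -jar agent.jar \
  -jnlpUrl http://jenkins-master:8080/computer/my-agent/jenkins-agent.jnlp \
  -secret MY_AGENT_SECRET -workDir /home/jenkins
</code></pre>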
<hr />
<h2 id="heading-12-what-are-some-common-jenkins-plugins"><strong>12. What are some common Jenkins plugins?</strong></h2>
<p>Interviewers check your practical experience.</p>
<p>The content suggests:</p>
<ul>
<li><p>Be familiar with common plugins.</p>
</li>
<li><p>Look at which plugins Jenkins installs by default.</p>
</li>
<li><p>Prepare a list to avoid blanking during interviews.</p>
</li>
</ul>
<hr />
<h2 id="heading-wrap-up"><strong>Wrap-up</strong></h2>
<p>The speaker ends by saying:</p>
<ul>
<li><p>These are the main interview questions.</p>
</li>
<li><p>The repository contains detailed answers.</p>
</li>
<li><p>You can submit pull requests if something is missing.</p>
</li>
<li><p>Feedback is welcome, and viewers should subscribe and share.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Day 13 - GitHub Actions]]></title><description><![CDATA[What GitHub Actions Is
GitHub Actions is another CI/CD solution.It works similarly to Jenkins — it performs Continuous Integration and Continuous Delivery tasks.The main difference:

GitHub Actions works only with GitHub

GitLab CI works only with Gi...]]></description><link>https://blog.dineshcloud.in/day-13-github-actions</link><guid isPermaLink="true">https://blog.dineshcloud.in/day-13-github-actions</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[DevOps Journey]]></category><category><![CDATA[#Devopscommunity]]></category><category><![CDATA[ci-cd]]></category><dc:creator><![CDATA[Dinesh Kumar K]]></dc:creator><pubDate>Mon, 15 Dec 2025 12:04:30 GMT</pubDate><content:encoded><![CDATA[<h2 id="heading-what-github-actions-is"><strong>What GitHub Actions Is</strong></h2>
<p>GitHub Actions is another CI/CD solution.<br />It works similarly to Jenkins — it performs Continuous Integration and Continuous Delivery tasks.<br />The main difference:</p>
<ul>
<li><p>GitHub Actions works only with GitHub</p>
</li>
<li><p>GitLab CI works only with GitLab</p>
</li>
</ul>
<p>So before choosing GitHub Actions or GitLab CI, an organization must consider whether it will stay on that platform long term.<br />If a company may later move to a different platform (GitHub → GitLab → AWS → Azure DevOps → self-hosted Git), GitHub Actions or GitLab CI is not ideal because each is tied to its own platform.</p>
<p>Just as teams often choose Terraform over CloudFormation because Terraform works across multiple clouds, choosing GitHub Actions makes sense only if you plan to stay on GitHub.</p>
<p>Even though GitHub Actions is powerful and often better than Jenkins, platform lock-in is its main limitation.</p>
<hr />
<h2 id="heading-starting-with-github-actions"><strong>Starting With GitHub Actions</strong></h2>
<p>You don't install plugins manually.<br />To use GitHub Actions, create a folder in your repository:</p>
<pre><code class="lang-plaintext">.github/workflows
</code></pre>
<p>Inside this folder, you create YAML files (your pipelines).</p>
<p>Example:<br />If you write <code>on: push</code> in the first line, it means:</p>
<ul>
<li>Whenever someone pushes a commit, run this pipeline.</li>
</ul>
<p>It doesn’t matter what kind of commit it is—any push will trigger the workflow.</p>
<p>You can have multiple workflow files (10, 20, or more).<br />Each can handle different jobs, such as:</p>
<ul>
<li><p>Checking pull-request description</p>
</li>
<li><p>Linting or formatting checks</p>
</li>
<li><p>Running CI</p>
</li>
<li><p>Running CD<br />  Companies often split workflows this way.</p>
</li>
</ul>
<p>The ArgoCD project is shown as an example.<br />Their <code>.github/workflows</code> folder has multiple workflows such as CI build, code checks, PR title check, release, and security scanning.</p>
<p>GitHub will run any workflow whose trigger condition matches.</p>
<hr />
<h2 id="heading-writing-a-github-actions-workflow"><strong>Writing a GitHub Actions Workflow</strong></h2>
<p>A simple Python example is used.</p>
<p>Inside an <code>src</code> folder, an <code>addition.py</code> program contains:</p>
<ul>
<li><p>A simple addition function</p>
</li>
<li><p>A unit test for that function</p>
</li>
</ul>
<p>The workflow should:</p>
<ol>
<li><p>Run on every commit</p>
</li>
<li><p>Check out the code</p>
</li>
<li><p>Create a Python environment</p>
</li>
<li><p>Install dependencies</p>
</li>
<li><p>Run the tests</p>
</li>
</ol>
<p>When a commit is made (adding a comment in the example), GitHub Actions automatically starts running the workflow.</p>
<p>The logs show steps like:</p>
<ul>
<li><p>Set up job</p>
</li>
<li><p>Check out the repository</p>
</li>
<li><p>Set up Python</p>
</li>
<li><p>Install dependencies</p>
</li>
<li><p>Run tests</p>
</li>
<li><p>Complete job</p>
</li>
</ul>
<h3 id="heading-how-is-this-defined">How is this defined?</h3>
<p>Everything is written in the workflow YAML file.</p>
<p>YAML formatting makes it easy (similar to Kubernetes YAML).</p>
<p>You define:</p>
<ul>
<li><p>Workflow name</p>
</li>
<li><p>Trigger event</p>
</li>
<li><p>Jobs</p>
</li>
<li><p>Container image (Ubuntu latest)</p>
</li>
<li><p>Multiple Python versions (e.g., 3.8 and 3.9)<br />  → which is why two jobs were executed</p>
</li>
</ul>
<p>Then steps are defined:</p>
<ul>
<li><p>Checkout plugin</p>
</li>
<li><p>Setup Python plugin</p>
</li>
<li><p>Install dependencies</p>
</li>
<li><p>Run tests</p>
</li>
</ul>
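<p>Putting the pieces together, the workflow described above looks roughly like this (the file paths and test command are assumptions based on the example):</p>
<pre><code class="lang-plaintext">name: python-ci
on: push

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8", "3.9"]   # two versions → two jobs
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run tests
        run: python -m unittest discover src
</code></pre>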
<hr />
<h2 id="heading-understanding-plugins-in-github-actions"><strong>Understanding Plugins in GitHub Actions</strong></h2>
<p>GitHub Actions has a <strong>marketplace of plugins</strong> (officially called <em>actions</em>).<br />You don’t install plugins manually (unlike Jenkins).<br />They are available by default.</p>
<p>Example:<br /><code>actions/checkout@v3</code><br />→ checks out the repo</p>
<p><code>actions/setup-python@v2</code><br />→ sets up Python<br />The number after <code>@</code> is the plugin's version, <strong>not the Python version</strong>.</p>
<p>The same pattern applies to Java, Node, Ruby, etc.<br />Only the plugin name changes.</p>
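<p>For example, a Node.js project would only swap the setup plugin (a sketch using the marketplace action <code>actions/setup-node</code>):</p>

```yaml
      - uses: actions/setup-node@v3      # Node.js instead of Python
        with:
          node-version: '18'             # runtime version, not plugin version
```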
<p>The biggest advantage:<br /><strong>Very little code is needed</strong> because most work is done by plugins.</p>
<p>The disadvantage:<br />The plugin ecosystem is still smaller, because GitHub Actions is newer than mature tools like Jenkins.</p>
<hr />
<h2 id="heading-self-hosted-runners"><strong>Self-Hosted Runners</strong></h2>
<p>In GitHub repository settings, you can add <strong>self-hosted runners</strong>.</p>
<p>You may need this if:</p>
<ul>
<li><p>GitHub’s default runners are too small</p>
</li>
<li><p>You need more compute for tasks like load testing</p>
</li>
<li><p>You need internal security/compliance</p>
</li>
</ul>
<p>Then the workflow runs on your own machines instead of GitHub’s machines.</p>
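<p>Switching a job to your own machine is a one-line change (this assumes a runner has already been registered under the repository settings):</p>

```yaml
jobs:
  load-test:
    runs-on: self-hosted                 # your registered machine, not GitHub's
```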
<hr />
<h2 id="heading-secrets-management"><strong>Secrets Management</strong></h2>
<p>GitHub Actions allows securely storing secrets, such as:</p>
<ul>
<li><p>kubeconfig</p>
</li>
<li><p>Sonar token</p>
</li>
<li><p>Passwords or keys</p>
</li>
</ul>
<p>These can be used inside workflows.</p>
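<p>A stored secret is referenced through the <code>secrets</code> context (the secret name <code>SONAR_TOKEN</code> and the Maven command here are assumptions for illustration):</p>

```yaml
      - name: SonarQube scan
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}   # injected at runtime, masked in logs
        run: mvn sonar:sonar
```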
<p>The Java + Maven + Sonar + Kubernetes example demonstrates this.</p>
<hr />
<h2 id="heading-comparing-github-actions-with-jenkins"><strong>Comparing GitHub Actions with Jenkins</strong></h2>
<h3 id="heading-disadvantage"><strong>Disadvantage</strong></h3>
<p>GitHub Actions is platform-dependent.<br />If you move from GitHub to another platform (GitLab, AWS CodeCommit, Azure DevOps), your GitHub Actions pipelines cannot be reused.</p>
<h3 id="heading-advantages"><strong>Advantages</strong></h3>
<ol>
<li><p><strong>No hosting effort</strong></p>
<ul>
<li>No need to install Jenkins, set up EC2 instances, configure plugins, or maintain servers.</li>
</ul>
</li>
<li><p><strong>Less maintenance</strong></p>
<ul>
<li>No managing Jenkins updates or plugin compatibility.</li>
</ul>
</li>
<li><p><strong>Simple UI and easy pipeline creation</strong></p>
<ul>
<li>YAML based, plugin-driven, easy to understand.</li>
</ul>
</li>
<li><p><strong>Cost</strong></p>
<ul>
<li><p>Free for public repositories</p>
</li>
<li><p>Limited free minutes for private repositories</p>
</li>
<li><p>Still cheaper than maintaining Jenkins infrastructure</p>
</li>
</ul>
</li>
</ol>
<p>Because of this, the large majority of open-source projects hosted on GitHub prefer GitHub Actions.</p>
<hr />
<h2 id="heading-final-summary"><strong>Final Summary</strong></h2>
<p>GitHub Actions is an easy, plugin-driven CI/CD solution built for GitHub.<br />It is great when your codebase will stay on GitHub.<br />It removes maintenance overhead, provides free execution for public repos, supports secrets, supports custom runners, and is simpler than Jenkins.</p>
<p>However, it locks you to GitHub.<br />If your organization might switch to another platform, GitHub Actions is not ideal.</p>
]]></content:encoded></item></channel></rss>