[
  
  
    {
      "title"       : "Auto-PPT Debugging Part 3: Editable Web App and Lightsail Deployment Path",
      "category"    : "",
      "tags"        : "Django, OpenAI, Google Slides, PPTXGenJS, Docker, Caddy, Lightsail, Terraform",
      "url"         : "/en/pptauto3.html",
      "date"        : "2026-03-30 00:00:00 +0900",
      "description" : "Extending the slide generator from text mapping fixes into an editable web app and a deployable Docker, Caddy, Lightsail, and Terraform path.",
      "content"     : "From generation to control\n\nPart 2 made the main problem clear: the code needed to understand slide structure before filling text into a template.\n\nPart 3 moved to the next product-level question. Generating a deck is useful, but a real user also needs to edit the result, preview it, and export it through a repeatable deployment path.\n\nNew direction\n\nThe project split into two tracks:\n- an editable web app for controlling generated slide content\n- an operational path for deploying the app with Docker, Caddy, Lightsail, and Terraform\n\nThis changed the project from an experiment into a tool shape.\n\nWeb app structure\n\nThe editable layer needed to keep the generated slide model and the rendered output close enough that user edits would make sense.\n\nThe main requirement was not visual perfection. It was control:\n- users can review generated text\n- users can edit fields before export\n- the app can rerender after changes\n- the final PPTX output follows the same structure\n\nDeployment structure\n\nThe deployment side focused on repeatability. Docker provided the runtime package. Lightsail gave a simple hosting target. Caddy handled web serving and TLS-friendly routing. Terraform documented infrastructure setup.\n\nTakeaway\n\nThe key shift was from “AI creates slides” to “the user controls a generated artifact.” That is a better product boundary. AI can draft, but the app needs editing, preview, export, and deployment discipline to be usable."
    } ,
  
    {
      "title"       : "Terraform 1: Building and Removing a DeepLX Lambda Proxy on AWS",
      "category"    : "",
      "tags"        : "Terraform, Lambda, AWS, Proxy",
      "url"         : "/en/Terraform_1.html",
      "date"        : "2026-03-22 00:00:00 +0900",
      "description" : "A Terraform practice run that built an AWS Lambda proxy path, verified it, and tore it down cleanly.",
      "content"     : "Why I started this\n\nThe Terraform repository needed more than a toy example. I wanted a deployable infrastructure shape that could explain an AWS Lambda proxy from state backend to final verification.\n\nThe workflow was:\n\nprepare state backend -&gt; build artifact -&gt; terraform plan -&gt; terraform apply -&gt; verify endpoint -&gt; terraform destroy\n\nConcepts clarified first\n\nBefore writing too much code, I focused on the operational boundaries:\n- S3 backend for Terraform state\n- DynamoDB locking for concurrent state safety\n- an artifact bucket for Lambda build outputs\n- ALB-to-Lambda target group behavior\n- health check constraints when Lambda sits behind an ALB\n- HTTP-only scope for the lab, leaving HTTPS and custom domains out of this iteration\n\nThat made the deployment easier to explain because every resource had a job.\n\nWhat mattered most\n\nThe work was less about “Terraform created resources” and more about understanding where design constraints appear.\n\nLambda behind an ALB is different from a normal EC2 target. The target group behavior, permissions, event shape, and health checks all need attention.\n\nTakeaway\n\nThe useful result was a complete lifecycle: build, apply, verify, and destroy. Terraform practice becomes much more valuable when teardown is part of the exercise, because it proves the state and resource boundaries are understood."
    } ,
  
    {
      "title"       : "Reviewing a Selenium-Based Book PDF Crawler Prototype",
      "category"    : "",
      "tags"        : "python, selenium, cli, crawling, automation",
      "url"         : "/en/book_crewling.html",
      "date"        : "2026-02-22 00:00:00 +0900",
      "description" : "A prototype review for a CLI crawler that searches book metadata and evaluates public PDF candidates with license signals.",
      "content"     : "Scope\n\nThis prototype was not treated as a general downloader. The correct use case is finding legally public or public-domain material and preserving license checks as a required step.\n\nThe session focused on making the minimum search-to-result flow work.\n\nCompleted work\n\nThe review covered:\n- understanding the execution flow\n- adding requirements.txt\n- confirming the Google-based flow\n- observing Google’s “sorry” page and reCAPTCHA blocking behavior\n- switching the search backend from Google to Bing\n- parsing Bing results\n- decoding Bing tracking links into original URLs\n- fixing a Selenium expected_conditions import bug\n- fixing JSON serialization for Path values\n- writing result JSON under a local result/ directory\n\nArchitecture\n\nThe CLI takes book information, drives a browser search through Selenium, collects candidate URLs, and stores structured result data.\n\nThe important design point is that search results are not automatically trusted. A result needs additional signals before it can be considered usable.\n\nTakeaway\n\nThe prototype became more stable after replacing brittle Google scraping with a less frequently blocked search path and fixing the serialization issues. The next quality bar would be stronger license classification and a cleaner separation between search, parsing, scoring, and output."
    } ,
  
    {
      "title"       : "Monitoring a MacBook with Zabbix and Email Alerts",
      "category"    : "",
      "tags"        : "devops, zabbix, monitoring, docker, ubuntu",
      "url"         : "/en/zabbix.html",
      "date"        : "2026-01-20 00:00:00 +0900",
      "description" : "A Zabbix setup on AWS Lightsail with Docker, a MacBook agent, memory triggers, and Gmail notifications.",
      "content"     : "Goal\n\nI wanted an email alert when my MacBook ran low on memory.\n\nThe setup used:\n- Zabbix Server on AWS Lightsail (Ubuntu)\n- Docker containers for the Zabbix services and PostgreSQL\n- Zabbix Agent on the MacBook\n- Gmail SMTP for notification delivery\n\nWhat worked\n\nThe main pieces eventually worked:\n- Zabbix collected memory data from the MacBook.\n- Latest data showed the memory item correctly.\n- A trigger could be created for low-memory conditions.\n- Gmail delivery worked after the SMTP configuration was corrected.\n\nMain failure mode\n\nThe most confusing problem was when a Zabbix problem appeared but no email arrived.\n\nThe likely causes were operational rather than metric-related:\n- the action had no operation configured\n- the problem already existed before the action rule was applied\n- the media type or recipient was incomplete\n- trigger and action conditions did not match\n\nIn Zabbix, alerting is not complete just because a trigger fires. Actions, operations, users, media, and conditions all need to line up.\n\nTakeaway\n\nZabbix is powerful but configuration-heavy. The reliable debugging path is:\n- confirm item data\n- confirm trigger state\n- confirm action match\n- confirm the operation\n- confirm user media\n- test delivery\n\nThat order makes alert failures much easier to isolate."
    } ,
  
    {
      "title"       : "Monitoring Lab: Prometheus, Grafana, and Alertmanager",
      "category"    : "",
      "tags"        : "devops, prometheus, grafana, alertmanager, monitoring",
      "url"         : "/en/prometheus.html",
      "date"        : "2026-01-16 23:59:00 +0900",
      "description" : "An end-to-end local monitoring lab with Prometheus scraping metrics, Grafana dashboards, and Alertmanager email notifications.",
      "content"     : "Goal\n\nThis lab built a local monitoring stack on macOS with Docker Compose.\n\nThe target workflow was:\n\nnode-exporter -&gt; Prometheus scrape -&gt; alert rules -&gt; Alertmanager -&gt; Gmail SMTP notification -&gt; Grafana dashboard\n\nComponents\n\nnode-exporter exposed machine metrics such as CPU, memory, and network values. Prometheus scraped those metrics and evaluated alert rules. Alertmanager handled routing and delivery for alerts. Grafana provided dashboards for visual inspection.\n\nWhat made it useful\n\nThe important part was building the full loop, not only starting containers.\n\nA monitoring stack is only useful when each link is verified:\n- the exporter exposes metrics\n- Prometheus scrapes successfully\n- rules evaluate as expected\n- Alertmanager receives fired alerts\n- email delivery works\n- Grafana reads Prometheus data\n\nTakeaway\n\nEnd-to-end monitoring needs proof at every stage. A dashboard alone is not monitoring, and an alert rule alone is not notification. The system works only when collection, evaluation, routing, and delivery are all tested together."
    } ,
  
    {
      "title"       : "Preparing a Debian Server Deployment with Ansible",
      "category"    : "",
      "tags"        : "devops, ansible, django, nginx, debian",
      "url"         : "/en/ansible.html",
      "date"        : "2025-12-14 00:00:00 +0900",
      "description" : "A practical Ansible note on preparing a Debian target server for a Django and Nginx deployment.",
      "content"     : "Environment\n\nThe setup used two machines:\n- Control node: an Ubuntu server running Ansible\n- Managed node: a Debian server where Nginx and the Django application would run\n\nThe goal was to move deployment preparation out of manual shell commands and into a repeatable Ansible structure.\n\nProject shape\n\nThe Ansible project followed a cookbook-style layout with roles, inventory, variables, and playbooks. I also tested using an existing community role for Nginx instead of writing every task from scratch.\n\nThe deployment preparation covered:\n- installing Nginx\n- placing application code\n- preparing a Python virtual environment\n- configuring Nginx for the Quiz_AI service\n- checking that the target host could be managed consistently\n\nWhy roles matter\n\nRoles make the playbook easier to read because each responsibility has a place. Nginx setup, application deployment, and environment preparation should not live as one long script.\n\nUsing a known role also made it easier to compare my configuration with common Ansible patterns.\n\nTakeaway\n\nAnsible is useful when server setup becomes a procedure that should be repeated. The main value is not only automation; it is turning deployment knowledge into files that can be reviewed and run again."
    } ,
  
    {
      "title"       : "Building Nginx Log Monitoring with the Elastic Stack",
      "category"    : "",
      "tags"        : "devops, elasticsearch, kibana, filebeat, nginx",
      "url"         : "/en/ElasticStack.html",
      "date"        : "2025-11-29 00:00:00 +0900",
      "description" : "A monitoring setup using Filebeat, Elasticsearch, and Kibana to inspect Nginx logs from an EC2 server.",
      "content"     : "Goal\n\nThe goal was to see Nginx traffic and errors in a dashboard instead of checking raw log files manually.\n\nThe pipeline was:\n\nNginx logs -&gt; Filebeat -&gt; Elasticsearch -&gt; Kibana\n\nArchitecture\n\nThe EC2 server ran Nginx and Filebeat. Elasticsearch and Kibana were tested through Docker containers. Access to Kibana happened from a Mac browser through an SSH tunnel to localhost:5601.\n\nThis kept the dashboard reachable during testing without exposing Kibana publicly.\n\nWhy separate logs\n\nThe useful requirement was separating blog traffic from other service traffic. That meant Nginx needed distinct access logs, and Filebeat needed to ship the right files.\n\nOnce the logs reached Elasticsearch, Kibana could visualize:\n- request volume\n- status code patterns\n- error spikes\n- active paths\n- source IP patterns\n\nTakeaway\n\nRaw logs are still the source of truth, but dashboards make patterns visible faster. The important design choice is to keep log sources named and separated before they enter the pipeline."
    } ,
  
    {
      "title"       : "Moving a Django Project from Ubuntu to Debian with Docker",
      "category"    : "",
      "tags"        : "devops, docker, django, mariadb, debian",
      "url"         : "/en/Docker1.html",
      "date"        : "2025-11-24 00:00:00 +0900",
      "description" : "Notes from packaging a Django project into Docker and moving it from an Ubuntu server to a Debian environment.",
      "content"     : "Why Docker helped\n\nThe project originally ran directly on an Ubuntu server. Moving it to Debian exposed the usual server migration problem: packages, Python dependencies, database clients, and environment setup can drift.\n\nDocker made the runtime more explicit.\n\nDockerfile shape\n\nThe application image started from a slim Python base:\n\nFROM python:3.12-slim\nWORKDIR /app\n\nThe image needed system packages for Python dependencies such as MySQL client bindings:\n\nRUN apt-get update &amp;&amp; apt-get install -y \\\n    build-essential \\\n    default-libmysqlclient-dev \\\n    pkg-config\n\nThe important part was not the exact package list. It was capturing all runtime assumptions in one file.\n\nDatabase and environment\n\nThe app still needed correct environment variables and a database connection. Docker does not remove configuration work; it makes the boundary clearer.\n\nThe Django container, MariaDB, volume paths, and network settings had to agree.\n\nMigration lesson\n\nServer migration is easier when the application runtime is described as code. Docker made it possible to rebuild the same app shape on Debian without manually repeating every Ubuntu setup step.\n\nThe result was not a perfect production platform, but it was a cleaner base for repeatable deployment."
    } ,
  
    {
      "title"       : "Load Balancing Django with Nginx Upstream and Multiple Gunicorn Instances",
      "category"    : "",
      "tags"        : "devops, nginx, django, load-balancing, ubuntu",
      "url"         : "/en/loadbalancer.html",
      "date"        : "2025-11-22 00:00:00 +0900",
      "description" : "A deployment note on splitting Django traffic across two Gunicorn ports through Nginx upstream.",
      "content"     : "Goal\n\nThe original deployment had one Nginx process forwarding traffic to one Gunicorn-backed Django process on 127.0.0.1:8000.\n\nThe practice goal was to run two backend instances and let Nginx distribute requests across them:\n\nNginx -&gt; Gunicorn on 127.0.0.1:8000\nNginx -&gt; Gunicorn on 127.0.0.1:8001\n\nNginx upstream\n\nThe key Nginx concept is upstream:\n\nupstream django_backend {\n    server 127.0.0.1:8000;\n    server 127.0.0.1:8001;\n}\n\nlocation / {\n    proxy_pass http://django_backend;\n}\n\nThis keeps the public server block simple while allowing multiple private backend targets.\n\nWhat needs to match\n\nThe backend processes must be started separately and listen on different ports. Nginx must point to those ports, and both processes must serve the same application version.\n\nUseful checks:\n\nss -lntp\ncurl http://127.0.0.1:8000\ncurl http://127.0.0.1:8001\nsudo nginx -t\n\nTakeaway\n\nLoad balancing is not only about scaling traffic. It also forces the deployment to define process boundaries, health checks, and how Nginx should behave if one backend is unavailable."
    } ,
  
    {
      "title"       : "Text Processing with Nginx Logs",
      "category"    : "",
      "tags"        : "devops, linux, nginx, logs, text-processing",
      "url"         : "/en/Text_Manipulation.html",
      "date"        : "2025-11-21 00:00:00 +0900",
      "description" : "A terminal-focused note on using Linux text pipelines to inspect Nginx access logs.",
      "content"     : "Why text processing is a DevOps skill\n\nServer work produces text everywhere: logs, command output, config files, status pages, and error messages.\n\nBeing able to cut, filter, sort, and count that text directly in the terminal is a practical debugging skill.\n\nExample goal\n\nThe sample task was to find the top requester IPs in an Nginx access log.\n\nNginx access logs usually contain one request per line with fields such as:\n- client IP\n- timestamp\n- HTTP method and path\n- status code\n- response size\n\nPipeline shape\n\nA common pipeline is:\n\nawk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -nr | head\n\nEach stage does one job:\n- awk extracts the IP field.\n- sort groups equal values together.\n- uniq -c counts repeated values.\n- sort -nr orders by count.\n- head keeps the largest entries.\n\nWhy this is useful\n\nThis kind of command gives a fast signal before setting up a larger log platform. It can show noisy IPs, unexpected traffic patterns, or whether one client is producing most of the requests.\n\nThe bigger lesson is the pipeline mindset: small tools connected together often answer operational questions quickly."
    } ,
  
    {
      "title"       : "Understanding Firewalls with AWS Security Groups and UFW",
      "category"    : "",
      "tags"        : "devops, networking, firewall, aws, ubuntu",
      "url"         : "/en/Firewalld.html",
      "date"        : "2025-11-21 00:00:00 +0900",
      "description" : "A practical explanation of how AWS Security Groups and Ubuntu UFW divide responsibility around an EC2 web server.",
      "content"     : "Two firewall layers\n\nOn an EC2 server, traffic is usually controlled by at least two layers:\n- AWS Security Group at the cloud network boundary\n- UFW on the Ubuntu instance\n\nBoth can block traffic, but they operate at different places.\n\nSecurity Group\n\nA Security Group decides which traffic is allowed to reach the instance from the AWS side.\n\nCommon web server rules:\n- 22/tcp only from a trusted IP for SSH\n- 80/tcp from anywhere for HTTP\n- 443/tcp from anywhere for HTTPS\n\nIf a port is blocked here, the request never reaches Ubuntu.\n\nUFW\n\nUFW is the host-level firewall. It controls what the operating system allows after traffic reaches the instance.\n\nThis gives a second boundary:\n\nsudo ufw status\nsudo ufw allow 80/tcp\nsudo ufw allow 443/tcp\n\nUFW is useful because it keeps host policy visible even when cloud rules change.\n\nDebugging rule\n\nWhen traffic fails, check from outside to inside:\n- DNS points to the instance.\n- The Security Group allows the port.\n- UFW allows the port.\n- A process is listening on the port.\n- The application responds correctly.\n\nThat order prevents confusing a firewall issue with an application issue."
    } ,
  
    {
      "title"       : "First Notes on Django Performance Monitoring",
      "category"    : "",
      "tags"        : "devops, monitoring, performance, django, ubuntu",
      "url"         : "/en/PerformMoni.html",
      "date"        : "2025-11-19 00:00:00 +0900",
      "description" : "A first monitoring checklist for watching CPU, memory, response behavior, and service stability on a Django server.",
      "content"     : "What performance monitoring means here\n\nProcess monitoring answers whether a service is alive. Performance monitoring asks a different question: how well is it behaving while it is alive?\n\nFor this first pass, I focused on a small set of signals that are easy to observe on an Ubuntu server.\n\nCPU usage\n\nCPU shows whether the server is under pressure from active work. htop is a useful first view because it makes spikes, load, and process-level usage easy to scan.\n\nsudo apt-get install htop\nhtop\n\nHigh CPU is not always bad. The question is whether the usage matches expected traffic and whether it stays high after the request load drops.\n\nMemory usage\n\nMemory matters for Django because workers, database clients, and background processes can grow over time. The key check is whether memory rises and returns to a stable range or keeps climbing.\n\nUseful commands:\n\nfree -h\nps aux --sort=-%mem | head\n\nResponse checks\n\nA service can look healthy at the process level but still respond slowly. curl gives a quick HTTP-level view:\n\ncurl -I https://example.com\n\nFor deeper work, response time, error rate, and logs need to be collected over time.\n\nTakeaway\n\nThe first monitoring habit is to separate server pressure from application behavior. CPU and memory explain what the machine is feeling. HTTP checks and logs explain what users are experiencing."
    } ,
  
    {
      "title"       : "Practical Network Debugging Tools for a Linux Server",
      "category"    : "",
      "tags"        : "devops, networking, dns, troubleshooting, ubuntu",
      "url"         : "/en/NetworkingTools.html",
      "date"        : "2025-11-19 00:00:00 +0900",
      "description" : "Notes on using ping, traceroute, curl, dig, and ss to narrow down service and network failures.",
      "content"     : "Why these tools matter\n\nWhen a site does not open, guessing wastes time. Basic network tools help narrow the failure from “the service is broken” into a smaller question: DNS, routing, firewall, port binding, HTTP response, or application behavior.\n\nTool checklist\n\nping checks basic reachability, but it is not proof that a web service is healthy. Many systems block ICMP.\n\ntraceroute shows the route packets take across networks. It is useful when traffic dies before reaching the server.\n\ncurl tests the actual HTTP layer:\n\ncurl -I https://example.com\n\nThis can confirm status codes, redirects, TLS behavior, and whether Nginx is responding.\n\ndig checks DNS:\n\ndig example.com\n\nThis answers whether the domain resolves to the expected address.\n\nss checks listening ports on the server:\n\nss -lntp\n\nThis shows whether a process is actually listening on ports such as 80, 443, or an internal app port.\n\nDebugging order\n\nThe useful order is:\n- Does DNS point to the right place?\n- Does the server accept traffic on the expected port?\n- Does Nginx respond?\n- Does Nginx reach the backend?\n- Does the app return the expected response?\n\nThis gives every incident a path instead of a panic."
    } ,
  
    {
      "title"       : "Monitoring a Django Process with systemd and cron",
      "category"    : "",
      "tags"        : "devops, django, systemd, monitoring, cron",
      "url"         : "/en/devops_processmonitoring.html",
      "date"        : "2025-11-17 00:00:00 +0900",
      "description" : "A practice note on running a Django process under systemd and checking it regularly with a cron-driven script.",
      "content"     : "Goal\n\nThe goal was to stop treating a Django development server as a manually started process and instead make it observable from the operating system.\n\nThe practice setup had three parts:\n- Register the Django command as a systemd service.\n- Write a status-check script.\n- Run the check on a schedule with cron.\n\nFor production, Django should normally run behind Gunicorn or Uvicorn with a reverse proxy. This exercise was about learning process supervision and monitoring mechanics.\n\nsystemd service\n\nThe important idea is that systemd owns the process lifecycle. It can start the service at boot, restart it on failure, and expose status through one consistent command:\n\nsystemctl status quizai.service\n\nThat is better than remembering which terminal session started the server.\n\nHealth check script\n\nThe check script looked at whether the process was running and whether the expected port responded. This split is useful:\n- A process can exist but not serve traffic.\n- A port can fail even when the service looks active.\n\nBoth checks matter when debugging a web service.\n\ncron schedule\n\ncron made the check repeat automatically. The result was a small monitoring loop:\n\ncron -&gt; status script -&gt; systemd/service check -&gt; log result\n\nThe main lesson was that monitoring starts with boring checks. Before dashboards and alert systems, the process needs a clear owner, a health signal, and a repeatable check path."
    } ,
  
    {
      "title"       : "Auto-PPT Debugging Part 2: Credentials, Text Mapping, and Broken Layouts",
      "category"    : "",
      "tags"        : "django, openai, google-slides, troubleshooting",
      "url"         : "/en/pptauto2.html",
      "date"        : "2025-11-16 00:00:00 +0900",
      "description" : "Follow-up notes on fixing credential mismatches and tracing why generated slide text broke the template layout.",
      "content"     : "What changed in this round\n\nThe web errors were mostly resolved, but the generated slides still did not behave like a real product. The biggest issues moved from server crashes to content mapping and layout quality.\n\nCredential mismatch\n\nOne error appeared when opening the result page:\n\nFileNotFoundError: [Errno 2] No such file or directory: 'credentials.json'\n\nThe cause was inconsistent authentication code. The prompt flow used OAuth through token.json, but the result and download views still expected an older service-account-style credentials.json.\n\nThe fix was to make the result and download views use the same authentication path as the generation flow.\n\nTemplate text not changing\n\nAnother issue was that some slides still looked like the original template. That meant the code was creating or opening the presentation, but not replacing the intended text boxes.\n\nThe fix required checking how Google Slides exposes page elements and making the replacement logic target the actual placeholders rather than assuming a simple text order.\n\nLayout breakage\n\nThe deeper problem was that text replacement by sequence is fragile. If the code does not know which element is a title, subtitle, or body field, generated content can land in the wrong box.\n\nThe next design direction became clear:\n- Read placeholder metadata.\n- Separate title and body content.\n- Map generated text by role, not by raw order.\n- Keep slide structure explicit in the code.\n\nThis round turned the project from “generate text and push it into slides” into “understand the template before filling it.”"
    } ,
  
    {
      "title"       : "Debugging a Django, OpenAI, and Google Slides Auto-PPT Generator",
      "category"    : "",
      "tags"        : "django, openai, google-slides, troubleshooting",
      "url"         : "/en/pptauto1.html",
      "date"        : "2025-11-14 23:59:00 +0900",
      "description" : "Early troubleshooting notes from adding prompt-based slide generation to a Django project on macOS.",
      "content"     : "Starting point\n\nThe goal was to let a user enter a prompt, generate slide text with OpenAI, and create a Google Slides presentation from that result.\n\nThis first phase was mostly about getting the local development environment and the external service wiring into a working state.\n\nEnvironment setup\n\nThe first mistake was using the wrong virtual environment activation path. I initially reached for a Windows-style path:\n\nsource Scripts/activate\n\nOn macOS and Linux, the right commands are:\n\npython3 -m venv venv\nsource venv/bin/activate\n\nSmall setup errors like this matter because they can hide whether the real issue is Python, Django, package installation, or application code.\n\nDjango and database setup\n\nThe project also needed Django and MySQL to agree on configuration:\n- Database host, user, password, and schema had to match the local MySQL instance.\n- Python dependencies for MySQL needed to compile correctly.\n- Environment variables had to be loaded consistently.\n\nOnce the database connection was stable, the next problem moved to external APIs.\n\nAPI integration lessons\n\nThe Google Slides integration required careful credential handling. The OpenAI part needed prompt structure and response parsing. The Django view had to connect both without losing state between request, generation, and presentation creation.\n\nThe practical lesson: build one link at a time. Confirm the virtual environment, then Django, then the database, then OpenAI, then Google Slides. Debugging all of them at once makes the failure impossible to read."
    } ,
  
    {
      "title"       : "Architecture Notes for a Django-Based AI Quiz Platform",
      "category"    : "",
      "tags"        : "django, python, architecture, ai, web",
      "url"         : "/en/DjangoProject.html",
      "date"        : "2025-11-14 23:59:00 +0900",
      "description" : "A structural walkthrough of Quizly, a Django 5.1 application that generates quizzes from uploaded study materials.",
      "content"     : "Project overview\n\nQuizly is an AI study platform built around one workflow: a user uploads learning material, the app extracts usable text, and OpenAI generates quiz questions from that content.\n\nThe project combines a traditional Django web app with document parsing and AI generation.\n\nStack\n\nThe main stack was:\n- Django 5.1.6 for the web backend\n- Python 3.12 for application code\n- MySQL 8.0 for persistent data\n- OpenAI API for quiz generation\n- django-allauth for Google, Kakao, and Naver login\n- PyMuPDF, python-docx, and python-pptx for document extraction\n- Nginx and Cloudflare for deployment and edge traffic\n\nModule responsibilities\n\nThe important architecture decision was to keep the workflow separated by responsibility:\n- Authentication handles user identity and social login.\n- Upload logic stores files and validates allowed document types.\n- Parsing logic extracts clean text from PDF, DOCX, and PPTX input.\n- AI logic turns extracted text into quiz data.\n- View logic connects the user-facing pages to each step.\n\nThis separation makes the project easier to debug because a failed quiz can be traced to one stage: upload, parse, prompt, generation, or rendering.\n\nWhat the project taught\n\nThe hard part was not simply calling an AI model. The real work was shaping the app around predictable data boundaries.\n\nThe document parser must produce usable text. The prompt layer must receive that text in a controlled format. The database must store generated results in a way the UI can render later.\n\nThat is the main lesson from this project: AI features still need normal backend discipline."
    } ,
  
    {
      "title"       : "Building Auto Deployment for a Jekyll Blog with GitHub Actions and Nginx",
      "category"    : "",
      "tags"        : "devops, jekyll, github-actions, nginx, rbenv",
      "url"         : "/en/firstblogpost.html",
      "date"        : "2025-11-07 23:59:00 +0900",
      "description" : "How I wired a Jekyll blog to deploy automatically to an Ubuntu EC2 server after every push to main.",
      "content"     : "Goal\n\nI wanted the blog to update from a normal writing workflow:\n\nwrite locally -&gt; git push main -&gt; GitHub Actions builds the site -&gt; server receives the new _site output -&gt; Nginx serves the updated blog\n\nThe point was to remove the manual cycle of SSH login, build commands, file copy, and Nginx checks every time I wrote a post.\n\nDeployment path\n\nThe working deployment path became:\n\nLocal machine -&gt; GitHub Actions -&gt; SSH into EC2 -&gt; bundle install -&gt; bundle exec jekyll build -&gt; rsync _site/ to /var/www/myblog -&gt; Nginx serves the static output\n\nThe server used Ubuntu, rbenv-managed Ruby, Bundler, and Nginx. The repository stayed the source of truth; the server only held the built static result.\n\nMain issues\n\nThe useful debugging points were not about Jekyll itself. They were about runtime consistency:\n- Ruby and Bundler versions had to match what the project expected.\n- The GitHub Actions SSH key needed correct permissions and host access.\n- Nginx needed to point at the final static directory, not the repository root.\n- rsync had to overwrite old static files without leaving stale artifacts.\n\nResult\n\nAfter this setup, publishing became a push-based workflow. The blog source lives in Git, deployment is reproducible, and the EC2 server only serves static files through Nginx.\n\nThat is the right shape for a personal technical blog: simple runtime, clear deployment boundary, and no manual publish step."
    } ,
  
    {
      "title"       : "Reverse Proxy and the Basic Nginx Deployment Shape",
      "category"    : "",
      "tags"        : "devops, nginx, django, reverse-proxy, ubuntu",
      "url"         : "/en/reverse_proxy.html",
      "date"        : "2024-11-22 00:00:00 +0900",
      "description" : "A practical note on how Nginx sits in front of Django and Gunicorn as a reverse proxy.",
      "content"     : "What a reverse proxy does\n\nA reverse proxy receives the browser request first, then forwards it to the application server that actually handles the work.\n\nThe deployment shape is simple:\n\nBrowser -&gt; Nginx -&gt; Gunicorn / Django\n\nFrom the browser’s point of view, Nginx is the only visible server. The Django process, the Gunicorn port, and the number of backend processes stay hidden behind it.\n\nWhy Nginx belongs in front\n\nNginx handles the parts that should not be Django’s job:\n- Accept public HTTP and HTTPS traffic.\n- Serve static files directly.\n- Forward dynamic requests to Gunicorn.\n- Keep backend ports private.\n- Apply request size, timeout, and host rules at the edge.\n\nThat separation matters because a Django app server is built to run application code, not to be the public traffic gate.\n\nThe core configuration idea\n\nThe important line is the proxy target:\n\nlocation / {\n    proxy_pass http://127.0.0.1:8000;\n}\n\nThis says: public users connect to Nginx, but dynamic requests are passed to a local backend process. The backend can listen on 127.0.0.1, which keeps it unreachable from the outside internet.\n\nWhat I took from this setup\n\nThe useful mental model is that Nginx is not just a “web server” in front of Django. It is the boundary between public traffic and private application processes.\n\nOnce that boundary is clear, later topics such as SSL termination, load balancing, static file serving, and upstream health checks become easier to reason about."
    } 
  
]
