Overview


The application combines two complementary approaches: a layered architecture at the infrastructure level and the MVT (Model, View, Template) pattern within Django.

At the infrastructure layer, the separation between reverse proxy, application server, and database ensures isolation of responsibilities. Docker guarantees that this architecture is explicitly defined and consistently executed in any environment.
Each component serves a specific function: Caddy handles public traffic and HTTPS, Gunicorn runs the application, Django concentrates the logic, and PostgreSQL persists the data. This division makes the system more predictable, simplifies maintenance, and allows scaling or replacing parts individually without impacting the rest.

Within the application, the MVT pattern organizes the code clearly. Models represent the data, Views encapsulate the logic, and Templates handle rendering. This separation reduces coupling, improves readability, and makes it easier to evolve the system over time.

What I wanted to build — and why


Since I started studying and practicing programming about two and a half years ago, I have felt the need to document much of what I was learning. Usually, I would write in text editors and export PDFs to reread when necessary.

Over time, some problems began to appear (in addition to the inevitable light theme in editors and PDFs): lack of practical organization, an uncomfortable writing and reading workflow, and difficulty maintaining consistency between versions.

The main motivation was to write and read without friction. I needed something that could provide this in a structured way and, most importantly, under my control.

That’s how I decided to create a technical blog focused on software engineering.

Technologies used


Before anything else, it’s worth listing the technologies used. This helps provide an overview of the system’s structure and how its components connect.

Stack

  • Infrastructure: Docker
  • Language: Python
  • Database: PostgreSQL
  • Framework: Django

  • Application server: Gunicorn

  • Web server (reverse proxy): Caddy

  • Frontend: Tailwind CSS and Django Templates

Supporting components

  • Content editing: Summernote and Markdown

  • Security: Django Axes, Django middleware, and Bleach

  • Media processing: Pillow

  • Static files: Caddy and WhiteNoise

The beginning of the infrastructure — Docker


Context

Some time ago, a coworker mentioned recurring difficulties with development environments: dependencies that wouldn’t install correctly, configurations that frequently broke, and inconsistencies between machines.

The problems involved database versions, port conflicts, and differences in language versions.

At the time, I had already had some exposure to Docker, but it was at that moment that I decided to take the tool more seriously. She was describing exactly the kind of problem Docker aims to solve.

The decision

The first decision was to use Docker as the foundation of the project’s infrastructure.

Containerizing everything allowed me to have a fully reproducible environment on any system that supports Docker.

The goal was to reduce dependency-related issues and maintain an explicitly organized infrastructure that could scale and be maintained without headaches. Database versions, Python, and other dependencies are all clearly defined within the environment.

It is extremely satisfying to migrate to a server and, with a simple docker compose up --build, see the system being built step by step, without dependency errors and without the need for manual configuration.

It is worth noting that Docker does not remove complexity simply by allowing everything to run with a single command. It merely shifts this complexity from manual configuration to declarative files such as Dockerfile, docker-compose.yml, and entrypoint scripts.
In other words, most of the infrastructure is declared in these files, making the environment explicit and reproducible.

Docker Compose

The project uses Docker Compose to orchestrate three services: the database (PostgreSQL), the application (Python and Django), and the reverse proxy (Caddy). Each has a specific role and communicates through Docker’s internal network.

To ensure that docker compose up --build is the only required step after cloning the repository, all configuration is executed automatically when the containers start. This includes installing dependencies, building CSS with Tailwind, and running database migrations.
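As an illustration, the three services might be declared along the following lines. This is a minimal sketch, not the project’s actual file: service names, image tags, and environment variables are assumptions.

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: blog
      POSTGRES_USER: blog
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data   # named volume for persistence

  app:
    build: .            # image with Python, Django, and the entrypoint script
    depends_on:
      - db
    expose:
      - "8000"          # reachable only on Docker's internal network

  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"       # the only ports published to the host
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
    depends_on:
      - app

volumes:
  pgdata:
```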

The database — PostgreSQL


A relational database is the right choice for modeling my blog, since the data structure is naturally relational, with well-defined entities (Post, Category, and Tag) and clear relationships between them. In this context, PostgreSQL was the natural candidate, as it is also the database most commonly recommended for Django applications in production.

The database’s data persists in a named volume managed by Docker. The data is stored in the host’s filesystem but abstracted by Docker, without the need for manual intervention. Using a named volume was the most appropriate option in this scenario, ensuring persistence independent of the container lifecycle. Other approaches, such as external volumes (NFS), would add unnecessary complexity for the scope of the project.

The application container only initializes after PostgreSQL is fully ready to accept connections. This is done through a test connection in the application’s entrypoint script. This detail is important, as it avoids race conditions during the initialization process between services.
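Stripped of database specifics, the waiting logic in the entrypoint reduces to a generic retry loop. The sketch below is an assumption about its shape, with a stand-in check instead of a real PostgreSQL connection attempt:

```python
import time

def wait_for(check, retries=30, delay=1.0):
    """Call `check` until it returns True or the attempts run out."""
    for _ in range(retries):
        if check():
            return True
        time.sleep(delay)
    return False

# Stand-in for a real connection test: succeeds on the third attempt.
attempts = {"count": 0}

def fake_db_check():
    attempts["count"] += 1
    return attempts["count"] >= 3

ready = wait_for(fake_db_check, retries=5, delay=0.01)
```

In the real entrypoint, the check function would attempt an actual connection to PostgreSQL and the loop would abort startup if the database never becomes reachable.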

Django Framework


Python is my main programming language, so Django was naturally already part of my web development stack. Still, the framework was chosen mainly because it provides everything a content-oriented application needs without requiring these components to be built from scratch; as the project itself puts it, Django comes with “batteries included”:

  • ORM
  • Administrative interface
  • Template system
  • URL routing
  • Authentication
  • Middleware
  • Database migration system

Specifically for a blog, Django’s administrative interface is particularly valuable. It provides a way to manage content without the need to develop a custom frontend for these activities.

The application has three main models: Post, Category, and Tag. Posts have a many-to-one relationship with Category (one category per post) and a many-to-many relationship with Tag (multiple tags per post). These relationships are managed by Django’s ORM, which translates operations on Python objects into the corresponding SQL queries.

How would I write comfortably?


As introduced at the beginning of the post, my goal is to write and read without friction. Therefore, it is crucial that writing is comfortable, customizable, standardized, and fast. From previous study projects in Django, I was already familiar with the Summernote text editor.

For greater practicality and versatility, I also chose to include support for writing in Markdown. For developers, it is a faster and more precise way to structure content compared to a WYSIWYG editor like Summernote, in addition to allowing writing outside the application, in any text editor.

Technical details of these choices

All content, regardless of how it is written, is converted to HTML before being stored. This means that posts written in Markdown have a rendered HTML version, which is what is used in the application.

Before being persisted, posts go through a sanitization process using the Bleach library, which allows only a predefined set of HTML tags. This ensures that the stored content is safe for direct rendering in templates.
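Assuming the conversion relies on the markdown and bleach packages, the write path could be sketched like this; the tag allow-list is illustrative, not the project’s actual configuration.

```python
import bleach
import markdown

# Illustrative allow-list; the real one would also need to cover images,
# code blocks, tables, and so on.
ALLOWED_TAGS = ["p", "a", "strong", "em", "ul", "ol", "li",
                "h2", "h3", "pre", "code", "blockquote"]

def render_and_sanitize(md_text: str) -> str:
    """Convert Markdown to HTML, then drop any tag not on the allow-list."""
    raw_html = markdown.markdown(md_text)
    return bleach.clean(raw_html, tags=ALLOWED_TAGS, strip=True)

html = render_and_sanitize("**bold** <script>alert('xss')</script>")
```

With strip=True, disallowed tags such as script are removed entirely, so only safe markup reaches the database.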

An important detail is that the flow is not symmetrical: while Markdown is converted to HTML, the HTML generated by Summernote is not converted back to Markdown. In other words, not all content has a Markdown representation, but all content has an HTML representation. For this reason, I always prioritize writing in Markdown, and only later make adjustments via Summernote if necessary.

Summernote

Summernote is a WYSIWYG (what you see is what you get) editor integrated into the Django admin. It allows writing and formatting content directly through the administrative interface, without the need to handle HTML manually (although it is possible via the code view feature). It also eliminates the need for external tools and reduces friction between writing and publishing.

Since Summernote integrates naturally with Django Admin, there is no need to develop a custom interface for content editing.

However, for me, the biggest advantage is one of the features it provides: image uploads, which are stored on the server and referenced by URL in the post content.

Image resizing

To avoid storing images larger than necessary, an auxiliary resizing module built on the Pillow library is triggered via Django signals, processing each image before it is persisted.
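The resizing step itself can be sketched with Pillow’s thumbnail method, which downscales in place while preserving aspect ratio; the size limit below is an assumption, not the project’s actual value.

```python
from io import BytesIO

from PIL import Image

MAX_DIMENSIONS = (1200, 1200)  # assumed limit, not the project's actual value

def resize_if_needed(data: bytes) -> bytes:
    """Downscale the image so neither side exceeds MAX_DIMENSIONS."""
    img = Image.open(BytesIO(data))
    fmt = img.format or "PNG"
    img.thumbnail(MAX_DIMENSIONS)  # no-op when the image is already small enough
    out = BytesIO()
    img.save(out, format=fmt)
    return out.getvalue()

# Hypothetical usage with an in-memory 3000x2000 image.
src = BytesIO()
Image.new("RGB", (3000, 2000)).save(src, format="PNG")
resized = resize_if_needed(src.getvalue())
```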

Markdown

Markdown is a lightweight markup language that allows writing content in plain text, later converted into HTML for rendering. For developers, this makes writing faster and more precise, especially for technical content.

The main motivation for including Markdown support was to allow writing outside the application, in editors such as Neovim (or any preferred editor), without dependency on specific interfaces or environments.

Trade-offs

Summernote is a user-friendly text editor, since features such as italics, headings, image insertion, and links are accessed visually, as in modern editors. However, this makes writing less fluid and slower, due to the constant need to interact with the interface, and it also reduces control over the generated HTML.

Markdown, on the other hand, is lighter, more predictable, faster, and portable.

Its main limitation is the absence of a native mechanism for image uploads. Unlike Summernote, which uploads automatically at the moment of insertion, in Markdown this process must be done separately, with the image being manually referenced by URL in the content. For cases involving media, it is ideal to complement or edit the post later via Summernote.

Application server — Gunicorn


Gunicorn (Green Unicorn) is a WSGI server for Python applications, widely used to run Django applications in production. It manages concurrent requests through multiple worker processes.

Gunicorn should not be exposed directly to the internet. It listens on an internal port, and all traffic reaches it through a reverse proxy.

It acts as the execution layer of the application, while responsibilities such as HTTPS, routing, and static file delivery are delegated to the reverse proxy.
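A typical Gunicorn invocation for such a setup looks like the line below; the WSGI module path, worker count, and timeout are assumptions, not the project’s actual configuration.

```shell
gunicorn blog_project.wsgi:application \
    --bind 0.0.0.0:8000 \
    --workers 3 \
    --timeout 60
```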

Reverse proxy — Caddy


Caddy sits in front of Gunicorn and manages all public traffic. Its role is to receive requests coming from the internet and route them to the correct destination.

Using a reverse proxy is necessary because the application server (Gunicorn) should not be exposed directly to the internet. It is only responsible for running the application, not for handling all the responsibilities of a public web server.

Caddy automates three important aspects that, without it, would require significant additional configuration:

  • HTTPS termination
  • HTTP to HTTPS redirection
  • SSL certificate management via Let's Encrypt

Static files

In addition, Caddy serves media files (such as uploaded images) directly, without passing through Django. Static files, such as CSS and fonts, are served by WhiteNoise, a Python library that efficiently handles static files within the application process.
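A minimal Caddyfile for this division of labor might look like the sketch below (domain, media path, and service name are assumptions). Given a real domain, Caddy obtains and renews the Let's Encrypt certificate and redirects HTTP to HTTPS automatically.

```caddyfile
example.com {
    # Serve uploaded media directly from disk.
    handle_path /media/* {
        root * /srv/media
        file_server
    }

    # Everything else goes to Gunicorn on Docker's internal network.
    reverse_proxy app:8000
}
```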

Frontend


CSS Framework — Tailwind

The frontend uses the Tailwind CSS framework, a utility-first framework where styles are applied directly in HTML through predefined classes. Instead of writing custom CSS for each component, layout and styles are composed directly in the HTML markup.

Tailwind is built inside the Docker container during startup. The Node.js toolchain is installed in the application image, and the CSS build process is executed as part of the container startup. The compiled CSS file is generated on each initialization.

Light and dark mode

The blog allows switching between light and dark mode. The user’s preference is stored in the browser’s local storage and applied on each page load. The toggle changes the dark class on the root HTML element, activating the style variations defined in Tailwind.

Syntax Highlighting

For code blocks, syntax highlighting is provided by the JavaScript library Highlight.js, loaded only on pages that display technical content.

Fonts

The fonts are self-hosted. Instead of being loaded from external servers, such as Google Fonts, they are served by the same application server. This removes an external dependency, reduces the number of requests, avoids additional DNS lookups, and eliminates the privacy implications associated with third parties.

Security


Protection against brute-force attacks

Brute-force is a type of attack in which a malicious user programmatically attempts multiple password combinations until successfully logging into the system.

To protect against this type of attack on the admin login page, I used django-axes, which tracks authentication attempts and blocks an IP address after a configurable number of failures.
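A django-axes setup along these lines would live in settings.py; the limits below are illustrative assumptions, and the backend class name varies between django-axes versions.

```python
# settings.py sketch; limits are illustrative, not the project's actual values.
INSTALLED_APPS = [
    # ... the project's other apps ...
    "axes",
]

MIDDLEWARE = [
    # ... Django's own middleware ...
    "axes.middleware.AxesMiddleware",
]

AUTHENTICATION_BACKENDS = [
    "axes.backends.AxesStandaloneBackend",  # must be listed first
    "django.contrib.auth.backends.ModelBackend",
]

AXES_FAILURE_LIMIT = 5   # lock out after five failed attempts
AXES_COOLOFF_TIME = 1    # allow retries again after one hour
```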

HTTPS and Middleware

HTTPS is mandatory; all HTTP requests are automatically redirected to HTTPS by Caddy. Django’s security middleware adds headers that instruct the browser to connect via HTTPS in the future (HSTS), prevent MIME type sniffing, and avoid the application being embedded in iframes from other domains.
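These headers correspond to a handful of settings consumed by Django’s SecurityMiddleware. The sketch below uses illustrative values; in particular, the HSTS duration should be chosen deliberately, since browsers cache it.

```python
# settings.py sketch; values are illustrative assumptions.
SECURE_HSTS_SECONDS = 31536000          # HSTS: ask browsers to use HTTPS for a year
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_CONTENT_TYPE_NOSNIFF = True      # X-Content-Type-Options: nosniff
X_FRAME_OPTIONS = "DENY"                # refuse embedding in iframes elsewhere
SESSION_COOKIE_SECURE = True            # send cookies over HTTPS only
CSRF_COOKIE_SECURE = True
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")  # trust Caddy's header
```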

Content sanitization — XSS

XSS (Cross-Site Scripting) is a type of attack in which malicious code is injected into content that will be rendered to other users. When the page is loaded, this code is executed in the victim’s browser context, potentially accessing cookies, manipulating the DOM, or performing authenticated actions.

Although I am the only author of the posts, content sanitization was implemented using the Bleach library. Only a predefined set of HTML tags is allowed, preventing unauthorized JavaScript from being injected.

Protection against CSRF attacks

CSRF (Cross-Site Request Forgery) is a type of attack in which a malicious site tricks a user’s browser into executing authenticated requests on another system. Protection is implemented through unique tokens validated on each request, ensuring that only actions initiated by the application itself are accepted.

CSRF protection is provided by Django’s middleware, which validates a token on every POST request. Session cookies and CSRF tokens are marked as secure and transmitted only via HTTPS.

Backup


It would make no sense to build a system for writing and reading, maintaining a chronological record of posts, while risking losing all the work done. Therefore, backing up posts was a concern from the very beginning.

Among the possible approaches, I chose to use Django signals combined with custom logic.

Event-driven backup

The backup system is event-driven, using Django signals to automatically react to database changes.

When a post is saved, the post_save signal triggers the creation of an initial snapshot or an incremental backup, depending on the context. In the case of creation, the system generates a full snapshot of the content, including associated media. On updates, only new resources (such as images) are stored, creating a versioned history over time.

This model allows reconstructing any version of a post, maintaining both content and its associated resources consistently.
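The snapshot-versus-incremental decision can be illustrated with a deliberately simplified, framework-free sketch. In the real system this logic runs inside a post_save handler; the directory layout and file names here are assumptions.

```python
import json
import tempfile
from pathlib import Path

def backup_post(backup_dir: Path, post: dict) -> Path:
    """Write a full snapshot on first save and numbered revisions afterwards."""
    post_dir = backup_dir / f"post-{post['id']}"
    post_dir.mkdir(parents=True, exist_ok=True)
    snapshot = post_dir / "snapshot.json"
    if not snapshot.exists():
        target = snapshot                                # initial full snapshot
    else:
        revision = len(list(post_dir.glob("rev-*.json"))) + 1
        target = post_dir / f"rev-{revision:03d}.json"   # incremental revision
    target.write_text(json.dumps(post))
    return target

# Hypothetical usage in a throwaway directory.
root = Path(tempfile.mkdtemp())
first = backup_post(root, {"id": 1, "title": "Hello"})
second = backup_post(root, {"id": 1, "title": "Hello, edited"})
```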

Issue with tag backups

Many-to-many relationships (tags) are persisted separately in Django, which was not initially considered during the implementation of the backup system. As a result, a second signal (m2m_changed) fired after every post creation, causing each new post to generate an initial snapshot immediately followed by an incremental update.

To ensure consistency, this second signal is now handled explicitly. When the post is newly created (within a short time window), the M2M event is treated as part of the creation, overwriting the initial snapshot. In cases of actual edits, a new incremental backup is generated.

Database backup

As an additional safety layer, database dumps are periodically created using pg_dump. This approach provides redundancy to the backup system, allowing full restoration of the application state from a .sql file.
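Such a dump can be taken from the database container with a one-liner along these lines; the service, user, and database names are assumptions.

```shell
# Create a dated dump of the blog database from the db service.
docker compose exec db pg_dump -U blog blog > "backup-$(date +%F).sql"

# Restoring later into an empty database (file name illustrative):
docker compose exec -T db psql -U blog blog < "backup-2025-01-01.sql"
```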

Request flow


A request to the system follows this flow:
Browser → Caddy → Gunicorn → Django → ORM → PostgreSQL → ORM → Django → Gunicorn → Caddy → Browser

The browser sends the main request, which is received by Caddy and forwarded to Gunicorn. After receiving the HTML response, the browser makes additional requests for static and media files, which are served directly by Caddy without passing through the application.

In the main request flow, Gunicorn delegates the request to a worker, which passes it to the Django application. Within Django, the request goes through the middleware chain, is resolved by the URL router, and processed by a view, which queries the database via the ORM (when necessary) and renders the response template.

The response is then built and sent back through the same path to the client.

Resource requests

As described above, the main request returns the HTML to the client. From that HTML, the browser identifies references to additional resources and makes new requests (subresource requests) for files such as CSS, JavaScript, images, and fonts.

These requests are handled directly by Caddy when they refer to static or media files, without passing through the Django application, which reduces load on the application server and improves overall performance.

Final considerations


Building LearningSea, from the beginning to production, taught me, in practice, a lot about making conscious decisions at each layer of the system.

Each choice, from using Docker to adopting Django, including the separation between proxy, application server, and database, was guided by predictability, control, and operational simplicity. The goal was never to create something overly complex, but rather a system that is understandable, reproducible, and easy to evolve.

Throughout the process, some aspects proved to be more important than expected, such as ensuring consistency between services, dealing with persistence details (like many-to-many relationships), thinking about backup strategies from the beginning, and considering security aspects across different parts of the system. These factors, although less visible, are fundamental to the reliability of the application.

More than the final product, the value lies in the process: understanding how each part connects and what trade-offs are involved in each decision. It is this understanding that allows the system to evolve safely, without relying on trial and error.


With that in mind, the next posts will explore each of these layers in greater depth, detailing decisions and implementation aspects.