If you’ve already explored what Raspberry Pi can do, picked the right hardware, set everything up, and gotten comfortable with the basics of Linux — it’s time to level up. In the final part of our series, we’re diving into hands-on web development. You’ll learn practical tips and step-by-step guidance to turn your Raspberry Pi into a rock-solid, secure, and easy-to-manage web server.
How to deploy our service to the web

How to deploy the application to the device?
Once your web application is ready, the first challenge is to get it onto the Raspberry Pi. The simplest approach is to rely on Git. You can clone the repository directly on the device, install dependencies, compile the code, and run the application—just like during development. This is a straightforward method, but it comes with limitations, especially for more demanding or complex projects, where you might need to configure more intricate dependencies or ensure that development and production environments are identical.
If you need a more robust solution or want to ensure the application runs in an isolated and precisely defined environment, an ideal deployment method is using Docker containers. Docker allows you to package the application along with all its dependencies into a single image. You can then run it anywhere Docker is supported. The key is to prepare a Dockerfile, which acts as a recipe for building the image. Here’s an example for a simple Node.js application (based on the official node:18 base image):

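A minimal sketch of such a Dockerfile, assuming the app listens on port 3000 and starts with `npm start`:

```dockerfile
# Start from the official Node.js 18 base image
FROM node:18

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source
COPY . .

EXPOSE 3000
CMD ["npm", "start"]
```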
You then upload the finished image to a container registry, such as Docker Hub or GitHub Container Registry (which is much more generous in its free tier):
```shell
docker login
docker build -t <user>/<image>:<tag> .
docker push <user>/<image>:<tag>
```
On the Raspberry Pi, you can download and run the image with a single command. The -p 8080:3000 flag is used to expose (map) the port outside the container, which makes the service available at the local address http://localhost:8080:

```shell
docker run -d -p 8080:3000 <user>/<image>:<tag>
```
For more complex applications made up of multiple layers (e.g., web server, database, local file system), the docker-compose.yml file is an invaluable tool. It defines all the services in the project and how they are interconnected:

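A sketch of what such a docker-compose.yml might contain, assuming a Node.js web service backed by a PostgreSQL database (service names and credentials here are illustrative):

```yaml
services:
  web:
    build: .
    ports:
      - "8080:3000"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    depends_on:
      - db

  db:
    image: postgres:15
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

The entire stack can then be started with a single command: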
```shell
docker compose up -d
```
How to ensure continuous uptime of the application?
As soon as you move from a local development environment to a “live” deployment, any downtime becomes a real issue. When deploying on a Raspberry Pi, it’s crucial to ensure that your application automatically restarts after unexpected crashes or device reboots.
One of the most widely used and simplest solutions is systemd, which is built into most Linux distributions (including Raspberry Pi OS). With systemd, you can define a simple service file that manages the lifecycle of your application: starting it on system boot, automatically restarting it after crashes, limiting resource usage, and providing easy access to logs. A basic configuration might look like this:

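A sketch of such a service file (the unit name, user, and paths are illustrative), saved for example as /etc/systemd/system/myapp.service:

```ini
[Unit]
Description=My web application
After=network.target

[Service]
Type=simple
User=appuser
WorkingDirectory=/home/appuser/myapp
ExecStart=/usr/bin/node /home/appuser/myapp/index.js
Restart=on-failure
RestartSec=5
# Optional resource cap, useful on a low-power device
MemoryMax=256M

[Install]
WantedBy=multi-user.target
```

After creating the file, reload systemd, then enable and start the service: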
```shell
sudo systemctl daemon-reload
sudo systemctl enable <name>.service
sudo systemctl start <name>.service
```
The advantage of systemd is its simple configuration and minimal performance overhead, making it ideal for deployment on low-power devices.
If your application runs as a Docker container, the process is even simpler. You just need to add the --restart unless-stopped flag to the container run command:

```shell
docker run -d --restart unless-stopped <user>/<image>:<tag>
```
Alternatively, you can add the line restart: unless-stopped to the services defined in your docker-compose.yml configuration. In both cases, you can be confident that your application will run reliably, with minimal need for manual intervention, even in the event of unexpected issues.
How to expose the application to the public network?
In some cases, having the application accessible only within your local network is enough—such as in home automation setups. However, if you want your service to be accessible from the outside, you’ll need to deal with networking, DNS, and security.
Traditionally, applications are made accessible via port forwarding on your router. While your Raspberry Pi remains inside your private home network, it is “hidden” behind the router. Port forwarding allows you to redirect a port from your public IP address to the internal IP of your Raspberry Pi. Combined with a DynDNS (Dynamic DNS) service, you won’t have to worry about your internet provider changing your IP address every day—DynDNS maps your dynamic IP to a fixed domain name.
However, this approach is recommended only for users who really know what they’re doing! You’re exposing your home network to the public internet, which poses significant security risks. Also, many providers no longer allow the required ports to be opened without restrictions.
A simpler, faster, and safer option (especially for smaller projects) is to use a third-party service like the free Cloudflare Tunnel. This uses reverse proxy tunneling: your application on the Raspberry Pi establishes an outbound connection to Cloudflare, and through that tunnel (and Cloudflare’s infrastructure), your service is securely exposed to the public internet.
No need to open any ports on your router. The only requirement is owning a domain (even one registered outside Cloudflare), but you’ll need that anyway if you want your application to be accessible under a professional-looking address.
Cloudflare Tunnel also comes with extra benefits. The tunnel is encrypted, resistant to common DDoS attacks, and includes a free HTTPS certificate. This allows you to sidestep many security, networking, and legal complications—and most importantly, your device is not left “naked” on the internet.
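As a rough sketch of the workflow with the cloudflared CLI (the tunnel name and hostname below are placeholders; consult Cloudflare's documentation for the authoritative steps):

```shell
# Authenticate cloudflared against your Cloudflare account
cloudflared tunnel login

# Create a named tunnel and route a hostname from your domain to it
cloudflared tunnel create my-tunnel
cloudflared tunnel route dns my-tunnel app.example.com

# Forward traffic arriving through the tunnel to the local service
cloudflared tunnel run --url http://localhost:8080 my-tunnel
```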

Do I need a reverse proxy web server? (Nginx, Apache)
A reverse proxy is a type of server that receives requests from the internet and forwards them to an internal application. If you’re building a backend API service, it typically runs its own HTTP server—often on a specific port (e.g., 3000)—and can handle requests directly, without a separate intermediary. However, web servers like Nginx or Apache offer a valuable middle layer that can handle many low-level tasks for you: caching, rate-limiting, protection against brute-force and DoS attacks, basic logging, monitoring, HTTPS certificate handling, and more.
That said, you might not need these features for a smaller project, or you may choose to implement them directly in your backend app. Alternatively, services like Cloudflare can take care of many of these responsibilities for you.
On the other hand, if your goal is not to provide a dynamic API service, but simply to serve a static website or a single-page application (SPA), then Nginx or Apache alone may be all you need—without a separate backend application. These servers can efficiently and reliably serve static files directly from disk (HTML, CSS, JS, images).
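A minimal sketch of an Nginx server block for this scenario (the domain, paths, and backend port are illustrative), serving static files from disk and optionally proxying API calls to a local backend:

```nginx
server {
    listen 80;
    server_name app.example.com;

    # Serve the static frontend (e.g. an SPA build) directly from disk
    root /var/www/myapp;
    index index.html;

    location / {
        # Fall back to index.html so client-side routing keeps working
        try_files $uri $uri/ /index.html;
    }

    # Optionally forward API requests to a backend on port 3000
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```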
Even when a backend app is present, it’s often desirable to serve the frontend as a separate, independent process on a different port (or even a different server). This ensures better isolation between frontend and backend, simplifies CI/CD, enhances security and performance, and makes scaling easier.
How to automate build-deploy processes?
Every manual operation costs valuable time and increases the risk of human error. As your project grows in complexity, it becomes easier to overlook something during deployment. The solution is CI/CD (Continuous Integration / Continuous Deployment)—a system where each push to the repository triggers a series of automated tasks (build, test, deploy) without human intervention.
For smaller projects, GitHub Actions or GitLab CI/CD are more than sufficient. In a YAML workflow file, you define the individual steps of the process. For example, to automate the deployment of a new image after each push to the main branch, you could use something like this:

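A sketch of such a GitHub Actions workflow (the image name is a placeholder), building a multi-arch image and pushing it to GitHub Container Registry on every push to main:

```yaml
name: build-and-push

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4

      # QEMU + Buildx allow building an arm64 image for the Raspberry Pi
      - uses: docker/setup-qemu-action@v3
      - uses: docker/setup-buildx-action@v3

      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/arm64
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
```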
On your Raspberry Pi device, you can automate pulling the latest image using a tool like Watchtower. You just need to extend your docker-compose.yml configuration with the following:

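A sketch of the extra service entry (the poll interval is illustrative; Watchtower's official image is containrrr/watchtower):

```yaml
services:
  # ... your existing services ...
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    # Poll for new images every 5 minutes and remove superseded ones
    command: --interval 300 --cleanup
    restart: unless-stopped
```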
If you’re not using Docker and simply want to set up something like automatic service updates, you can create a basic script on your Raspberry Pi, for example deploy.sh, that pulls and runs the latest version of your application…

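A minimal sketch of such a script, assuming a Git checkout, an npm-based build, and a systemd service; the directory, branch, and unit name are placeholders:

```bash
#!/usr/bin/env bash
# deploy.sh - pull, build, and restart the application
set -euo pipefail

APP_DIR="$HOME/myapp"      # path to the application checkout (placeholder)
SERVICE="myapp.service"    # systemd unit managing the app (placeholder)

cd "$APP_DIR"

# Fetch and hard-reset to the latest main branch
git fetch origin
git reset --hard origin/main

# Install dependencies and build
npm ci --omit=dev
npm run build

# Restart the service so the new version takes over
# (for unattended use, allow this command via a sudoers NOPASSWD rule)
sudo systemctl restart "$SERVICE"

echo "Deployed $(git rev-parse --short HEAD) at $(date)"
```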
…which you can then run using a CRON job (a scheduled, recurring task), for example every night (in the example below, at 3:00 AM):
```shell
0 3 * * * /home/<user>/<project>/deploy.sh >> /home/<user>/<project>/deploy.log 2>&1
```
Alternatively, you can configure the script to run directly from GitHub Actions using SSH.
How to monitor the application's status?
Having your application deployed is only half the journey. The other half is knowing what’s happening inside it and how it behaves in real time. Monitoring isn’t just a “nice to have” — it’s a fundamental requirement for reliable production. When you know what’s going on (or not going on) in your application and on the server, you can fix issues before anyone even notices them.
System Logs
If you’re running your application via systemd, you can view its current logs with the command:

```shell
journalctl -u <name>.service -f
```

Here, -u (unit) selects the specific service by name, and -f (follow) displays new log entries in real time.
For Dockerized services, you can get a similar real-time view with docker logs -f <container> or, if you’re using docker compose, you can monitor everything at once with docker compose logs -f.
These commands let you quickly track startups, errors, and other important application events. Once your app is running, logs become your most valuable source of insight. Monitor them regularly, automate log cleanup, and make sure you know where and how they’re stored—because disorganized logs will come back to haunt you exactly when you need them most.
Automated Status / Availability Monitoring
It’s not just about logs—you often need to know as soon as something stops working. For simple alerting, tools like UptimeRobot or the self-hosted Uptime Kuma are perfectly sufficient. These tools periodically check whether your web or API services are reachable from the outside and immediately notify you of any outage via email, SMS, or push notification.

A particularly valuable check is a dedicated health-check endpoint such as /health or /status, which tests not only availability but also the state of the database, external services, or task-processing queues. If you don’t have such an endpoint, add one and have your monitoring service periodically check exactly this point. That way you’ll be warned quickly not only when the server is unreachable, but also when one of its critical parts stops working correctly.
Advanced Metrics and Visualizations
System logs and availability monitoring are the bare minimum. For a more comprehensive overview of your application’s health, you’ll need metrics that reveal things like CPU and RAM usage, response times, or the status of dependencies via custom /metrics endpoints.
The most common setup for this is Prometheus combined with Grafana—together they allow you to collect and visualize all key data in one place.
For tracking errors and exceptions in your code, a service like Sentry can be extremely helpful. It captures detailed information about each error along with the context in which it occurred, making debugging much easier.

However, consider how much complexity you want to take on in the beginning. Every additional layer means more maintenance and higher system requirements. If you’re launching a small project, it’s perfectly fine to start with just basic logs and simple availability monitoring. As your application grows, you can expand your monitoring setup as needed.
What should you watch out for in terms of security?
The Raspberry Pi may be a small and friendly device, but when it comes to security, the same rules apply as with “full-sized” servers: any device exposed to the internet is a potential target for attacks. Without basic security hygiene, you’re asking for trouble.
- Make sure to apply security patches regularly. On Raspberry Pi OS (or any Debian-based system), you can use the unattended-upgrades package to automatically apply security updates without manual intervention:
```shell
sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades
```
- Periodically update major package versions and Raspberry Pi firmware as well. However, before performing a major upgrade, it’s best to back up important data and configurations—so if something goes wrong, you’ll still have a recovery option.
```shell
sudo apt update
sudo apt full-upgrade
sudo rpi-eeprom-update -a   # without -a the command only reports whether an update is pending
sudo reboot
```
- Disable password-based SSH login and allow only SSH key authentication. Avoid using default accounts (e.g., pi); ideally, rename, disable, or delete them. After updating the SSH configuration, don’t forget to restart the SSH service with sudo systemctl restart ssh.
```shell
# /etc/ssh/sshd_config
PasswordAuthentication no
PermitRootLogin no
```
- Isolate applications from the system. Your application should not run under the root user. Configure it to run as a less-privileged user with limited permissions. If you’re using Docker, your app is already partially isolated by design—but even then, make sure to explicitly set safe restrictions (user, capabilities, resource limits).
- Regular backups are your best insurance policy. Back up regularly (e.g., using cron jobs)—including databases, configuration files, certificates, and application code. Store backups off the device, in a secure location (such as remote storage or the cloud). Occasionally verify that you can actually restore from a backup.
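As a sketch of what such a nightly backup job might look like (the paths, database name, and remote host below are all placeholders):

```bash
#!/usr/bin/env bash
# backup.sh - archive app data and ship it off the device
set -euo pipefail

STAMP=$(date +%F)
BACKUP="/tmp/backup-$STAMP.tar.gz"

# Dump the database and archive it together with configs and certificates
pg_dump app > "/tmp/app-$STAMP.sql"
tar -czf "$BACKUP" "/tmp/app-$STAMP.sql" /etc/myapp /etc/letsencrypt

# Copy the archive to a remote machine, then clean up locally
rsync -a "$BACKUP" backupuser@backup-host:/backups/
rm "/tmp/app-$STAMP.sql" "$BACKUP"
```

Scheduled via cron (just like the deployment script above), this keeps a copy of your data off the device without any manual steps.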
The Journey Continues
The Raspberry Pi is a small hardware marvel, and in the hands of a developer, it can become a solid web infrastructure, an idea incubator, or a gateway into the world of DevOps. Along the way, you’ll get hands-on with log analysis, monitoring, basic networking, and security hygiene—skills that are valuable in any professional or personal tech project.
As mentioned at the beginning, this article couldn’t cover everything. We didn’t get into topics like backup power supplies (UPS), advanced security, rollbacks, different deployment strategies, or scaling. Still, I hope you’ve gained a clear overview of the fundamentals and a solid launchpad for further experimentation and self-learning.

If you’ve read this far—or better yet, if your application is already up and running—that’s a big achievement! One day, you might hit the limits of your little “home server”: the database slows down, the SD card wears out, memory usage gets tight. But take that as a sign that your project is gaining momentum. That’s when it might be time to consider moving to a more powerful server, to the cloud, or to specialized hosting. Or perhaps you’ll discover that, with efficient resource management or even a homegrown cluster, your Raspberry Pi can handle much more than you expected. Wherever your next steps lead, I wish you the best of luck and lots of success!

Tomáš Bencko
The author is a frontend developer specializing in React, Vue.js, and TypeScript. He develops modern, scalable frontend solutions while balancing development with the finer points of design. Outside of client work, he’s constantly seeking ways to improve team workflows, experimenting with AI and automation, and bringing fresh ideas to advance projects and inspire colleagues.