Rust on a VPS — deploying Actix-web and Axum web applications
Published: April 10, 2026 · Category: VPS
Rust offers performance comparable to C/C++ together with memory safety and no garbage collector. For a web API this means very high throughput (up to millions of requests per second in synthetic benchmarks), minimal RAM usage, no GC pauses, and a small binary (a few to a dozen or so MB). Actix-web and Axum are the two main frameworks — both async on Tokio, both near the top of the TechEmpower benchmarks. This article shows how to build, configure, and deploy a Rust application on a VPS with Nginx, systemd, and PostgreSQL.
Axum — a complete REST API application
# Cargo.toml
[package]
name = "my-api"
version = "0.1.0"
edition = "2021"
[dependencies]
axum = { version = "0.7", features = ["macros"] }
tokio = { version = "1", features = ["full"] }
sqlx = { version = "0.7", features = ["runtime-tokio-rustls", "postgres", "uuid", "chrono"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tower = "0.4"
tower-http = { version = "0.5", features = ["trace", "cors", "compression-gzip"] }
tracing = "0.1"
tracing-subscriber = "0.3"
uuid = { version = "1", features = ["serde", "v4"] }
chrono = { version = "0.4", features = ["serde"] }
anyhow = "1"
bcrypt = "0.15" # password hashing (used in create_user)
[profile.release]
opt-level = 3
lto = true # Link-time optimization (smaller, faster binary)
codegen-units = 1 # Better optimization (slower build)
strip = true # Strip debug symbols (smaller binary)

// src/main.rs — Axum application with SQLx
use axum::{
routing::{get, post},
Router, Json, extract::State,
http::StatusCode,
};
use sqlx::{PgPool, postgres::PgPoolOptions};
use serde::{Deserialize, Serialize};
use tower_http::{trace::TraceLayer, cors::CorsLayer, compression::CompressionLayer};
use tracing_subscriber::{layer::SubscriberExt, util::SubscriberInitExt};
#[derive(Clone)]
struct AppState {
db: PgPool,
}
#[derive(Serialize, sqlx::FromRow)]
struct User {
id: i32,
email: String,
is_active: bool,
}
#[derive(Deserialize)]
struct CreateUser {
email: String,
password: String,
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
// Initialize tracing (logging)
tracing_subscriber::registry()
.with(tracing_subscriber::EnvFilter::new(
std::env::var("RUST_LOG").unwrap_or_else(|_| "info".to_string()),
))
.with(tracing_subscriber::fmt::layer())
.init();
let database_url = std::env::var("DATABASE_URL")
.expect("DATABASE_URL must be set");
let pool = PgPoolOptions::new()
.max_connections(20)
.min_connections(2)
.connect(&database_url)
.await?;
// Run migrations automatically (files in migrations/*.sql)
sqlx::migrate!("./migrations").run(&pool).await?;
let state = AppState { db: pool };
let app = Router::new()
.route("/health", get(health_handler))
.route("/api/users", get(list_users).post(create_user))
.route("/api/users/:id", get(get_user))
.with_state(state)
.layer(TraceLayer::new_for_http())
.layer(CorsLayer::permissive()) // tighten in production
.layer(CompressionLayer::new());
let addr = "127.0.0.1:8080";
tracing::info!("Listening on {}", addr);
let listener = tokio::net::TcpListener::bind(addr).await?;
axum::serve(listener, app).await?;
Ok(())
}
async fn health_handler() -> Json<serde_json::Value> {
Json(serde_json::json!({"status": "ok"}))
}
async fn list_users(State(state): State<AppState>) -> Result<Json<Vec<User>>, StatusCode> {
let users = sqlx::query_as::<_, User>(
"SELECT id, email, is_active FROM users WHERE is_active = true ORDER BY id"
)
.fetch_all(&state.db)
.await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
Ok(Json(users))
}
async fn get_user(
    State(state): State<AppState>,
    axum::extract::Path(id): axum::extract::Path<i32>,
) -> Result<Json<User>, StatusCode> {
    // GET /api/users/:id — fetch a single user or return 404
    let user = sqlx::query_as::<_, User>(
        "SELECT id, email, is_active FROM users WHERE id = $1"
    )
    .bind(id)
    .fetch_optional(&state.db)
    .await
    .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    user.map(Json).ok_or(StatusCode::NOT_FOUND)
}
async fn create_user(
State(state): State<AppState>,
Json(payload): Json<CreateUser>,
) -> Result<(StatusCode, Json<User>), StatusCode> {
let hashed = bcrypt::hash(&payload.password, bcrypt::DEFAULT_COST) // bcrypt crate
    .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
let user = sqlx::query_as::<_, User>(
"INSERT INTO users (email, hashed_password) VALUES ($1, $2) RETURNING id, email, is_active"
)
.bind(&payload.email)
.bind(&hashed)
.fetch_one(&state.db)
.await
.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
Ok((StatusCode::CREATED, Json(user)))
}

Release build and cross-compilation
# Release build on the VPS (simplest approach)
# Install Rust (rustup)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
cargo build --release
# Binary: target/release/my-api (~5-15 MB after strip)
ls -lh target/release/my-api

# Cross-compilation from macOS/Linux to Linux musl (static binary)
# Install cross
cargo install cross
# Build for Linux x86_64 with musl (a single file, zero system dependencies)
cross build --release --target x86_64-unknown-linux-musl
# Copy the binary to the server over SCP
scp target/x86_64-unknown-linux-musl/release/my-api user@vps:/srv/myapi/
# Check the binary for missing shared libraries
ldd target/release/my-api
# musl: "not a dynamic executable" = OK, static binary
Docker multi-stage build
# Dockerfile — multi-stage: build + minimal runtime
FROM rust:1.77-slim-bookworm AS builder
WORKDIR /app
# Cache dependencies (separate layer — faster rebuilds)
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo "fn main() {}" > src/main.rs
RUN cargo build --release
RUN rm src/main.rs
# Now build the actual application
COPY src ./src
COPY migrations ./migrations
RUN touch src/main.rs && cargo build --release
# Runtime image — ultra-small (distroless or alpine)
FROM gcr.io/distroless/cc-debian12
COPY --from=builder /app/target/release/my-api /usr/local/bin/my-api
COPY --from=builder /app/migrations /migrations
EXPOSE 8080
ENV RUST_LOG=info
CMD ["/usr/local/bin/my-api"]
# Image size: ~15-20 MB (vs well over 1 GB for the build image!)
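A quick way to verify the size claim locally (the image tag is arbitrary):

```shell
# Build the image and inspect its size (SIZE column)
docker build -t my-api .
docker image ls my-api
```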
# docker-compose.yml
# services:
# api:
# build: .
# restart: unless-stopped
# ports:
# - "127.0.0.1:8080:8080"
# environment:
# - DATABASE_URL=${DATABASE_URL}
# - RUST_LOG=info

Systemd service and Nginx
# /etc/systemd/system/rust-api.service
[Unit]
Description=Rust API (Axum)
After=network.target postgresql.service
[Service]
Type=simple
User=rustapi
Group=rustapi
WorkingDirectory=/srv/myapi
EnvironmentFile=/srv/myapi/.env
Environment=RUST_LOG=info
ExecStart=/srv/myapi/my-api
Restart=on-failure
RestartSec=5
# Hardening
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=strict
ReadWritePaths=/srv/myapi/logs
[Install]
WantedBy=multi-user.target
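The unit reads EnvironmentFile=/srv/myapi/.env; a minimal file, with placeholder credentials, could look like this:

```
# /srv/myapi/.env (values are placeholders — use your own credentials)
DATABASE_URL=postgres://rustapi:change-me@127.0.0.1:5432/myapi
```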
# /etc/nginx/sites-available/rust-api
server {
listen 443 ssl http2;
server_name api.example.com;
ssl_certificate /etc/letsencrypt/live/api.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.example.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_read_timeout 30s;
}
}
# Start
sudo systemctl daemon-reload && sudo systemctl enable --now rust-api
sudo nginx -t && sudo systemctl reload nginx

Benchmarks — Rust vs Node.js vs Go
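Before comparing numbers, a quick smoke test confirms the deployed service responds (the domain comes from the Nginx config above):

```shell
# Directly against the app, from the VPS itself
curl -s http://127.0.0.1:8080/health
# Through Nginx with TLS
curl -s https://api.example.com/health
```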
| Framework | Req/s (JSON) | Latency P99 | RAM (idle) | Binary size |
|---|---|---|---|---|
| Rust Actix-web | ~1 200 000 | <1 ms | ~8 MB | ~8 MB |
| Rust Axum | ~950 000 | <1 ms | ~10 MB | ~10 MB |
| Go (Gin / Fiber) | ~700 000 | ~1 ms | ~15 MB | ~12 MB |
| Node.js Fastify | ~350 000 | ~3 ms | ~50 MB | N/A (runtime) |
| Python FastAPI | ~50 000 | ~8 ms | ~80 MB | N/A (runtime) |
Indicative figures in the spirit of the TechEmpower benchmarks for a simple JSON endpoint. In real applications backed by a database the gap narrows, as the bottleneck shifts to I/O. Rust still wins on low RAM usage and the absence of GC pauses.
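To get your own numbers instead of relying on published benchmarks, a load generator such as wrk (assumed installed) can be pointed at the local port:

```shell
# 4 threads, 128 connections, 30 seconds against the health endpoint
wrk -t4 -c128 -d30s http://127.0.0.1:8080/health
```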
Actix-web vs Axum — a comparison
| Feature | Actix-web | Axum |
|---|---|---|
| Ecosystem | Mature, rich middleware | Growing (Tokio team) |
| Middleware | Its own system | Tower middleware (shared with Hyper) |
| Routing | Attribute macros (#[get], #[post]) | Chainable Router API |
| Error handling | ResponseError trait | IntoResponse trait |
| WebSocket | Built in | Via axum::extract::ws |
| Recommended for | Mature projects, richer ecosystem | New projects, Tower integration |
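For a feel of the Actix-web side of the comparison, here is a minimal equivalent of the /health endpoint above (a sketch; assumes actix-web 4 and serde_json in Cargo.toml):

```rust
use actix_web::{get, web, App, HttpServer, Responder};

// GET /health — same JSON body as the Axum handler above
#[get("/health")]
async fn health() -> impl Responder {
    web::Json(serde_json::json!({"status": "ok"}))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(health))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```

Note the attribute-macro routing (#[get]) versus Axum's chainable Router — the key difference called out in the table above.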