Safe Without GC — Building a High-Performance REST API Server with Rust + Axum (Tokio · SQLx · Real-World Code)
Honestly, when I first encountered Rust, I thought, "Do I really need to learn this?" Go is fast, Node.js is more than usable — why bother with such a difficult language? But after reading about how Discord rewrote its Read States service from Go to Rust, my perspective changed. Due to GC pauses in Go, a latency spike of 10–50ms occurred every 2 minutes in an environment with 5 million concurrent users, and it wasn't until the switch to Rust that those spikes disappeared and memory usage dropped by more than half. When I also heard that Cloudflare was migrating 20% of its internet traffic to a Rust-based platform, it became clear this was more than just a "hot language."
In this post, you'll see how Rust's ownership system guarantees memory safety without a GC, and how to structure a production-grade REST API using the Axum + Tokio + SQLx stack — with real code. This is aimed at backend developers using Go or Node.js who have hit performance limits or felt the pain of GC issues. If you're weighing whether to adopt Rust, this post will give you a framework for making that decision.
Core Concepts
Ownership — The Secret to Memory Safety Without GC
When you work with Go or Java, there are moments when GC betrays you. The Discord case was exactly that. Rust takes a fundamentally different approach to this problem. Instead of a GC, the compiler analyzes the code, infers the lifetime of each value, and automatically frees memory at the appropriate time. The ownership system and borrow checker are what enforce these rules.
```rust
fn main() {
    let s1 = String::from("hello"); // s1 owns "hello"
    let s2 = s1;                    // ownership moves to s2
    // println!("{}", s1);          // compile error! s1 is no longer valid
    println!("{}", s2);             // works fine
} // memory is automatically freed when s2 goes out of scope
```

At first, these rules feel incredibly restrictive. I also thought, "Why is it blocking me like this?" — but over time you come to feel that this is a powerful safety net that converts runtime crashes into compile-time errors.
Borrow Checker: A core component of the Rust compiler that prevents data races and dangling pointers at the compilation stage. According to a 2019 report from Microsoft, approximately 70% of C/C++ security bugs are exactly these kinds of memory errors.
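To make that callout concrete, here is a minimal, illustrative sketch of the rules the borrow checker enforces: any number of immutable borrows may coexist, but mutation requires exclusive access. (The `total` helper and `scores` vector are made-up names for this example, not part of the server code below.)

```rust
// Read-only access borrows the slice; the borrow ends when the function returns.
fn total(scores: &[i32]) -> i32 {
    scores.iter().sum()
}

fn main() {
    let mut scores = vec![10, 20, 30];

    let first = &scores[0]; // immutable borrow of `scores`
    // Any number of readers can coexist:
    println!("first = {first}, total = {}", total(&scores));

    // scores.push(40);     // compile error if uncommented while `first`
    // println!("{first}"); // is still used here: mutable and immutable
    //                      // borrows of `scores` cannot overlap

    scores.push(40); // fine now: no immutable borrow is still live
    assert_eq!(total(&scores), 100);
}
```

This is exactly the class of mistake (a reader observing data mid-mutation) that becomes a data race or dangling pointer in C/C++, and here it never survives compilation.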
Tokio — The Heart of Async I/O
Rust's async/await does not include a runtime in the language itself. A separate async runtime is therefore needed, and that's Tokio. Describing it simply as "a multi-threaded version of the Node.js event loop" can be misleading — more precisely, it is an M:N thread model based on a work-stealing scheduler. It efficiently schedules thousands of lightweight tasks over a pool of OS threads, and is structured to handle CPU-bound and I/O-bound work separately.
```rust
#[tokio::main]
async fn main() {
    // This single macro initializes the Tokio runtime
    // and runs the main function asynchronously
    println!("Async server starting!");
}
```

Zero-cost abstractions: High-level abstractions in Rust like iterators and generics produce the same performance as low-level code after compilation. In the case of async/await, it is internally compiled into a state machine, but the key point is that it operates without GC or heap-allocation overhead. "Use it conveniently, but without the cost of GC" is the core philosophy of Rust.
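As a rough, standalone illustration of the zero-cost claim (the `sum_even_squares` helper is invented for this example, not taken from the server code), an iterator chain like this typically compiles down to the same machine code as a hand-written loop:

```rust
// An iterator chain: after optimization this behaves like an explicit loop,
// with no heap allocation and no runtime dispatch.
fn sum_even_squares(nums: &[i64]) -> i64 {
    nums.iter()
        .copied()
        .filter(|&n| n % 2 == 0)
        .map(|n| n * n)
        .sum()
}

fn main() {
    let nums: Vec<i64> = (1..=10).collect();
    assert_eq!(sum_even_squares(&nums), 220); // 4 + 16 + 36 + 64 + 100
    println!("{}", sum_even_squares(&nums));
}
```

The high-level style costs nothing at runtime, which is the same promise async/await makes for I/O code.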
Axum — Why Axum?
Axum is directly maintained by the Tokio team, integrates naturally with the Tower ecosystem, and is seeing steady adoption in production. While Actix-web may score higher in pure throughput benchmarks, Axum has a far more consistent middleware composition model and lets you leverage the Tower ecosystem's assets directly — making it a better fit for team projects. The router types are explicit, so the code can look a bit verbose at first, but that actually pays off greatly during maintenance.
Practical Application
Example 1: Setting Up a Basic REST API Server with Axum
First, add the dependencies to Cargo.toml. Version mismatches are a common blocker, so it's recommended to be explicit from the start.
```toml
[dependencies]
axum = "0.7"
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
```

Next, let's establish the basic routing structure. `get_user` is a minimal example that, without DB integration yet, simply reflects the path parameter `id` in the response. It will be replaced with an actual DB query in the next example.
```rust
use axum::{
    routing::{get, post},
    Router, Json,
    extract::Path,
    http::StatusCode,
};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct User {
    id: i32,
    name: String,
    email: String,
}

#[derive(Deserialize)]
struct CreateUser {
    name: String,
    email: String,
}

// GET /users/:id — replaced with a DB query in the next example
async fn get_user(Path(id): Path<i32>) -> Result<Json<User>, StatusCode> {
    Ok(Json(User {
        id,
        name: "Alice".into(),
        email: "alice@example.com".into(),
    }))
}

// POST /users
async fn create_user(Json(payload): Json<CreateUser>) -> (StatusCode, Json<User>) {
    let user = User {
        id: 42,
        name: payload.name,
        email: payload.email,
    };
    (StatusCode::CREATED, Json(user))
}

#[tokio::main]
async fn main() {
    let app = Router::new()
        .route("/users/:id", get(get_user))
        .route("/users", post(create_user));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    println!("Server running: http://localhost:3000");
    axum::serve(listener, app).await.unwrap();
}
```

Looking at the actual code, it feels more declarative than Go's net/http. Here's a breakdown of what each element does:
| Code Element | Role |
|---|---|
| `#[derive(Serialize, Deserialize)]` | Serde automatically generates JSON conversion code |
| `Path(id): Path<i32>` | Extracts URL parameters in a type-safe manner |
| `Json(payload): Json<CreateUser>` | Automatically deserializes the request body |
| `Result<Json<User>, StatusCode>` | Success and failure are clearly separated at the type level: `Ok(Json(...))` for success, `Err(StatusCode::...)` for failure. Errors can be automatically propagated with the `?` operator |
Example 2: DB Integration with SQLx — Compile-Time SQL Validation
The decisive reason our team chose SQLx over an ORM was query transparency. If you've ever been burned by N+1 problems while running in production without knowing what SQL an ORM was generating internally, you'll understand — SQLx lets you write raw SQL while connecting to the actual DB at compile time to validate queries. It catches misspelled column names and type mismatches before deployment.
Add SQLx to Cargo.toml.
```toml
sqlx = { version = "0.7", features = ["runtime-tokio", "postgres"] }
```

```rust
use sqlx::PgPool;
use axum::extract::State;

#[derive(sqlx::FromRow, Serialize)]
struct User {
    id: i32,
    name: String,
    email: String,
}

// Share the DB connection pool as app state
#[derive(Clone)]
struct AppState {
    db: PgPool,
}

async fn get_user_from_db(
    State(state): State<AppState>,
    Path(id): Path<i32>,
) -> Result<Json<User>, StatusCode> {
    // query_as! connects to the actual DB at compile time to validate the query
    let user = sqlx::query_as!(
        User,
        "SELECT id, name, email FROM users WHERE id = $1",
        id
    )
    .fetch_optional(&state.db)
    .await
    .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

    user.map(Json).ok_or(StatusCode::NOT_FOUND)
}

#[tokio::main]
async fn main() {
    let database_url = std::env::var("DATABASE_URL").expect("DATABASE_URL required");
    let pool = PgPool::connect(&database_url).await.expect("DB connection failed");
    let state = AppState { db: pool };

    let app = Router::new()
        .route("/users/:id", get(get_user_from_db))
        .with_state(state);

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```

PostgreSQL's SERIAL/INT4 type maps to i32 in Rust. Occasionally people use u32, but since SQLx catches type mismatches at compile time, this gets validated naturally as well.
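To see why i32 (and not u32) is the right match, here is a small std-only sketch, illustrative rather than part of the server code: INT4 is a signed 32-bit integer, so its range is exactly i32's, while u32 mismatches it in both directions.

```rust
fn main() {
    // PostgreSQL INT4 is a signed 32-bit integer:
    // -2_147_483_648 ..= 2_147_483_647, exactly the range of Rust's i32.
    assert_eq!(i32::MIN, -2_147_483_648);
    assert_eq!(i32::MAX, 2_147_483_647);

    // u32 cannot hold negative INT4 values, and its upper half
    // (above i32::MAX) does not exist in INT4 at all, so the
    // conversion is fallible in both directions.
    assert!(i32::try_from(3_000_000_000u32).is_err()); // too big for INT4
    assert!(u32::try_from(-1i32).is_err());            // negative, no u32 form
    println!("i32 <-> INT4 ranges match");
}
```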
Example 3: Adding Timeouts and Logging with Tower Middleware
This example takes the get_user_from_db handler and AppState from the previous example and adds a middleware layer on top. Because Axum leverages the Tower ecosystem directly, being able to compose timeout, compression, and logging like LEGO bricks is the biggest tangible benefit here.
```toml
tower = "0.4"
tower-http = { version = "0.5", features = ["trace", "timeout", "compression-gzip"] }
tracing-subscriber = "0.3"
```

```rust
use axum::Router;
use tower::ServiceBuilder;
use tower_http::{
    timeout::TimeoutLayer,
    trace::TraceLayer,
    compression::CompressionLayer,
};
use std::time::Duration;

#[tokio::main]
async fn main() {
    tracing_subscriber::fmt::init();

    let database_url = std::env::var("DATABASE_URL").expect("DATABASE_URL required");
    let pool = PgPool::connect(&database_url).await.expect("DB connection failed");
    let state = AppState { db: pool };

    let app = Router::new()
        .route("/users/:id", get(get_user_from_db))
        .with_state(state)
        .layer(
            ServiceBuilder::new()
                .layer(TraceLayer::new_for_http())
                .layer(TimeoutLayer::new(Duration::from_secs(10)))
                .layer(CompressionLayer::new()),
        );

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```

Pros and Cons
Pros
In practice, the one you'll feel most strongly is the first one: predictable latency. The absence of a GC doesn't simply mean "it's fast" — it means P99 latency doesn't fluctuate.
| Item | Details |
|---|---|
| Predictable latency | No GC stop-the-world pauses, so GC-induced latency spikes simply don't occur |
| Memory safety | Memory errors — responsible for ~70% of C/C++ security bugs (Microsoft, 2019) — are eliminated at compile time |
| Top-tier throughput | Rust frameworks consistently rank near the top of the TechEmpower benchmarks; the same traffic can be handled with significantly fewer containers |
| Reduced operational costs | Lower memory usage means fewer containers, and there are reported cases of noticeably reduced cloud costs |
| Recommended by national cybersecurity agencies | The US NSA and CISA officially recommend Rust as a memory-safe language |
Cons and Caveats
Looking at the cons table, there's one common theme: time. Learning time, compile time, team onboarding time. Even if the performance benefits are significant, whether you're in a position to absorb these costs is the crux of the adoption decision.
| Item | Details | Mitigation |
|---|---|---|
| Steep learning curve | The ownership and borrowing system requires a paradigm shift for those experienced with GC | Recommended to start with the Rust Book + a small CLI project |
| Slow compilation | Build times in large projects can be much longer than Go's | Can be improved with cargo check, sccache, and incremental build configuration |
| Poor for rapid prototyping | Go and Node.js are better suited for fast MVP-level development | Realistically, it's best to adopt selectively for only performance-critical services |
| Team onboarding cost | The market for experienced Rust developers is small | Internal training + pair programming to gradually build capability is effective |
Stop-the-World: The phenomenon where a GC briefly pauses application execution while cleaning up memory. It occurs in GC-based runtimes such as the JVM and Go, and is the cause of latency spikes in services where real-time responsiveness is critical.
The Most Common Mistakes in Production
When reviewing Rust code in the field, certain patterns repeat themselves. These are cases where developers, exhausted from fighting the borrow checker, resort to shortcuts — most of which can be avoided by investing in the conceptual groundwork early on.
- Overusing `clone()` and undermining your own performance gains: When the borrow checker throws an error, it's tempting to just slap on `.clone()` to fix it, but this habit halves the reason for using Rust. Understanding the ownership rules and thinking through data flow from the design stage is important.
- Overusing `unwrap()`: Using `unwrap()` for faster development and then getting panics in production. It's good practice to use the `?` operator and the `Result` type to propagate errors explicitly. Declaring handler return types as `Result<Json<User>, StatusCode>` makes error propagation clean with just `?`.
- Running synchronous blocking code in an async context: Calling `std::thread::sleep()` or doing heavy CPU work directly inside a Tokio task blocks the worker thread. It's recommended to distinguish between `tokio::time::sleep()` for waits and `tokio::task::spawn_blocking()` for blocking work.
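The `unwrap()` point can be sketched with a std-only example (the `parse_user_id` helper is hypothetical, not from the server code above); the same `?` habit is what makes `Result<Json<User>, StatusCode>` handlers clean:

```rust
use std::num::ParseIntError;

// Instead of `text.parse().unwrap()`, hand the error back to the caller with `?`.
fn parse_user_id(text: &str) -> Result<i32, ParseIntError> {
    let id: i32 = text.trim().parse()?; // on bad input, returns Err right here
    Ok(id)
}

fn main() {
    assert_eq!(parse_user_id(" 42 "), Ok(42));
    assert!(parse_user_id("forty-two").is_err()); // an Err value, not a panic
    println!("ok");
}
```

The caller decides what an error means (a 400, a retry, a log line) instead of the process dying where the `unwrap()` happened to be.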
Closing Thoughts
Rust is not "a faster language" — it's "a language that refuses to sacrifice either performance or safety." If latency predictability is critical, memory safety is business-critical, or you need to drastically reduce operational costs, it's worth serious consideration. Conversely, if rapid prototyping is your goal or your team has zero Rust experience, sticking with Go is the practical choice. Rust is not the answer for every service.
Here are 3 steps you can take right now:
- Install Rust from rustup.rs and run an Axum Hello World: Create a new project with `cargo new my-api-server`, copy the `Cargo.toml` snippet from this post, and run the first example code.
- Check the TechEmpower benchmark results yourself: Seeing the actual numbers for how Axum, Actix-web, Go, and Node.js compare will help inform your adoption decision.
- Rewrite one performance-bottleneck endpoint from your existing service in Rust: Rather than a full migration, writing a single microservice or sidecar service in Rust is a realistic way to build real-world experience while minimizing your team's onboarding burden.
The next post will cover how to deploy the full Axum production stack with SQLx + PostgreSQL + JWT authentication using Docker. The example code from this post carries over directly, so following the series will leave you with a complete production boilerplate.
Next post: A Complete Guide to the Axum Production Stack with SQLx + PostgreSQL + JWT Auth + Docker Compose
References
- Is Rust Still Surging in 2026? | ZenRows
- Rust Web Frameworks in 2026: Axum vs Actix vs Rocket | Medium
- Rust Web Frameworks in 2025: Axum vs Actix vs Rocket Benchmark | Markaicode
- Rewriting in Rust: When It Makes Sense (Discord, Cloudflare, Amazon) | Nandann
- Cloudflare Open Sources tokio-quiche for QUIC/HTTP3 | InfoQ
- Enterprise Rust 2025: Framework Analysis | Jason Grey
- Rust Crates to Watch in 2025: Tokio, Axum, SQLx | Medium
- Creating a REST API with Axum + SQLx | Hashnode
- Getting started with Axum + PostgreSQL + Redis + JWT + Docker | Sheroz.com