Actix-web is one of the fastest web frameworks for Rust and a great fit for high-performance web applications. In this quick guide, I'll show you how to set up a working Actix web server in just a few minutes.
Why Actix?
- Blazing fast – consistently tops web framework benchmarks (yes, faster than your Go microservice)
- Type-safe through Rust's type system – the compiler is your best friend and worst enemy
- Async/await support out of the box – because callback hell is so 2015
- Middleware system for reusable components – DRY principle on steroids
- Built-in WebSocket support – real-time features without the headache
Setup
First, let's create a new Rust project and add Actix-web as a dependency:
cargo new actix-demo
cd actix-demo
Hello World Server
The simplest Actix server takes only a few lines. First, in Cargo.toml, add Actix:
[dependencies]
actix-web = "4"
Then put the server in src/main.rs:
use actix_web::{web, App, HttpResponse, HttpServer, Responder};

// Handler: responds with a plain-text body.
async fn index() -> impl Responder {
    HttpResponse::Ok().body("Hello, Actix!")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    println!("Server running at http://127.0.0.1:8080");

    HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(index))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
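Run it with cargo run, open http://127.0.0.1:8080 in your browser, and you should be greeted with "Hello, Actix!".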
Hello World Server with OpenSSL (HTTPS)
The same server can speak HTTPS by binding through OpenSSL.
In Cargo.toml, enable Actix's openssl feature and add openssl:
[dependencies]
actix-web = { version = "4", features = ["openssl"] }
openssl = "0.10.68"
use actix_web::{web, App, HttpResponse, HttpServer, Responder};
use openssl::ssl::{SslAcceptor, SslFiletype, SslMethod};

async fn index() -> impl Responder {
    HttpResponse::Ok().body("Hello, Actix!")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // Build the TLS acceptor. cert.pem and key.pem are placeholder paths to
    // your certificate chain and private key.
    let mut builder = SslAcceptor::mozilla_intermediate(SslMethod::tls()).unwrap();
    builder
        .set_private_key_file("key.pem", SslFiletype::PEM)
        .unwrap();
    builder.set_certificate_chain_file("cert.pem").unwrap();

    println!("Server listening on https://0.0.0.0:443");

    HttpServer::new(|| {
        App::new()
            .route("/", web::get().to(index))
    })
    .bind_openssl("0.0.0.0:443", builder)?
    .run()
    .await
}
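One practical note: binding to port 443 usually requires elevated privileges, so for local experiments it's easier to bind something like 127.0.0.1:8443 and point the builder at a self-signed certificate pair.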
JSON API Endpoint
For a REST API, we need JSON support. Let's add serde:
[dependencies]
actix-web = "4"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
Now we can receive and send JSON data:
use actix_web::{post, web, App, HttpResponse, HttpServer, Responder};
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct User {
    name: String,
    age: u32,
}

#[post("/user")]
async fn create_user(user: web::Json<User>) -> impl Responder {
    HttpResponse::Ok().json(User {
        name: format!("Created: {}", user.name),
        age: user.age,
    })
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .service(create_user)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
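One detail worth knowing about web::Json: the extractor enforces a payload size limit, which you can tune via web::JsonConfig. Here's a minimal sketch of the server setup, reusing the create_user handler from above (the 4 KB limit is just an example value):

use actix_web::{web, App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            // Example value: reject JSON bodies larger than 4 KB.
            .app_data(web::JsonConfig::default().limit(4096))
            .service(create_user)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}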
State Management
For shared state, we use web::Data:
use actix_web::{get, web, App, HttpResponse, HttpServer};
use std::sync::Mutex;

struct AppState {
    counter: Mutex<i32>,
}

#[get("/count")]
async fn count(data: web::Data<AppState>) -> HttpResponse {
    let mut counter = data.counter.lock().unwrap();
    *counter += 1;
    HttpResponse::Ok().body(format!("Count: {}", counter))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let app_state = web::Data::new(AppState {
        counter: Mutex::new(0),
    });

    HttpServer::new(move || {
        App::new()
            .app_data(app_state.clone())
            .service(count)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
Tip: Actix runs multiple worker threads, which is why shared mutable state needs a Mutex (or similar synchronization). For read-heavy workloads, an RwLock usually performs better, since any number of readers can hold the lock at once.
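Here's a rough sketch of what read-mostly state could look like with an RwLock – the greeting field, the /greet route, and the handler name are all made up for illustration:

use actix_web::{get, web, App, HttpResponse, HttpServer};
use std::sync::RwLock;

// Read-mostly shared state: many handlers read it, writes are rare.
struct AppState {
    greeting: RwLock<String>,
}

#[get("/greet")]
async fn greet(data: web::Data<AppState>) -> HttpResponse {
    // Any number of requests can hold the read lock concurrently.
    let greeting = data.greeting.read().unwrap();
    HttpResponse::Ok().body(format!("{}", *greeting))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let app_state = web::Data::new(AppState {
        greeting: RwLock::new("Hello from shared state".to_string()),
    });

    HttpServer::new(move || {
        App::new()
            .app_data(app_state.clone())
            .service(greet)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}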
Middleware Example: Compression
One of the most useful middleware components is automatic response compression. Why send 10KB when you can send 2KB? Your users' data plans will thank you:
use actix_web::{get, middleware, App, HttpResponse, HttpServer, Responder};

// A small handler so the example compiles; its response gets compressed.
#[get("/")]
async fn index() -> impl Responder {
    HttpResponse::Ok().body("Hello, Actix!")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .wrap(middleware::Compress::default())
            .service(index)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
The Compress middleware automatically compresses responses using gzip, deflate, or brotli based on the client's Accept-Encoding header. It's transparent, efficient, and makes your API responses fly over the wire.
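One small escape hatch worth knowing: as far as I recall, Compress leaves a response alone once it already carries a Content-Encoding header, so you can opt individual handlers out – handy for payloads that are already compressed. Treat this as a sketch to verify against your actix-web version (the /archive route is made up):

use actix_web::{get, HttpResponse, Responder};

#[get("/archive")]
async fn archive() -> impl Responder {
    // Assumption: an explicit Content-Encoding tells Compress not to re-encode this body.
    HttpResponse::Ok()
        .insert_header(("Content-Encoding", "identity"))
        .body("pretend this is an already-compressed payload")
}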
Actix Guard
Guards let a route match only when a predicate on the request holds. Here we dispatch on the Host header, so the same path can be served by different handlers: a request to http://localhost:8080/ hits hello_from_localhost, while a plain http://127.0.0.1:8080/ falls through to the unguarded hello route:
use actix_web::{guard, web, App, HttpResponse, HttpServer, Responder};

// Handlers referenced below (minimal bodies so the example compiles).
async fn hello_from_localhost() -> impl Responder {
    HttpResponse::Ok().body("Hello from localhost!")
}

async fn hello_from_test() -> impl Responder {
    HttpResponse::Ok().body("Hello from test.com!")
}

async fn hello() -> impl Responder {
    HttpResponse::Ok().body("Hello!")
}

async fn hey() -> impl Responder {
    HttpResponse::Ok().body("Hey!")
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            // http://localhost
            .service(
                web::resource("/")
                    .guard(guard::Host("localhost"))
                    .route(web::get().to(hello_from_localhost)),
            )
            // http://test.com
            .service(
                web::resource("/")
                    .guard(guard::Host("test.com"))
                    .route(web::get().to(hello_from_test)),
            )
            // http://127.0.0.1
            .route("/", web::get().to(hello))
            .route("/hey", web::get().to(hey))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
Performance Tips
- Use .workers() to control the number of worker threads
- Don't block the async workers: offload blocking or CPU-heavy work with web::block() (see the sketch after this list)
- For maximum performance: compile with --release
- Use connection pooling for your database – opening a new connection per request will kill your throughput
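To make the first two tips concrete, here's a rough sketch – the /report route, the fake computation, and the worker count of 4 are made-up example values. web::block() runs the closure on a separate thread pool so the async workers stay responsive:

use actix_web::{get, web, App, Error, HttpResponse, HttpServer};

#[get("/report")]
async fn report() -> Result<HttpResponse, Error> {
    // Offload blocking / CPU-heavy work so it doesn't stall the async workers.
    let result = web::block(|| {
        // Imagine a slow, synchronous computation here.
        (1..=1_000_000u64).sum::<u64>().to_string()
    })
    .await?;

    Ok(HttpResponse::Ok().body(result))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(report))
        .workers(4) // example value; by default Actix sizes this from the available CPU cores
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}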
Conclusion
Actix-web is an extremely fast and robust framework. The learning curve is a bit steep at first, but once you understand the concepts, everything flows smoothly. For production web services in Rust, Actix is my first choice.
The snippets here are intentionally minimal, but the same building blocks carry over to production services that stay stable under load. Happy coding! 🦀