Implementing the first User Story:

As a blog visitor,
I want to subscribe to the newsletter,
So that I can receive email updates when new content is published on the blog.
- Blog visitors will input their email in a form on a web page
- Collect email and name so we can add a personalized greeting
- The form performs a `POST` request to `/subscriptions`
- The server will store the email and respond to the request
- Integration tests
- How to read data collected in an HTML form in actix-web
  - i.e. how do I parse the request body of a `POST`?
- What libraries are available to work with a PostgreSQL database in Rust
- diesel vs sqlx vs tokio-postgres
- How to set up and manage migrations for our database
- How to get our hands on a database connection in our API request handlers
- How to test for side-effects (a.k.a. stored data) in our integration tests
- How to avoid weird interactions between tests when working with a database.
- Find and vet a web framework
- Define a testing strategy
- Implement a `GET /health_check` endpoint
- Find and vet a database driver
- Define a strategy for database changes (a.k.a migrations)
- Actually write some queries
Actix Web is a powerful, pragmatic, and extremely fast web framework for Rust
We will use actix-web because:
- it's one of Rust's oldest frameworks
- it has seen a lot of production usage
- it runs on tokio, which avoids incompatibilities with other async libraries in the ecosystem
Tokio is an asynchronous runtime for the Rust programming language. It provides the building blocks needed for writing network applications. It gives the flexibility to target a wide range of systems, from large servers with dozens of cores to small embedded devices.
Add 📦s:
cargo add actix-web@4
cargo add tokio@1
cargo add --dev reqwest@0.11
Cargo.toml should look like:
[package]
name = "zero2prod"
version = "0.1.0"
edition = "2021"
[lib]
path = "src/lib.rs"
[[bin]]
path = "src/main.rs"
name = "zero2prod"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
actix-web = "4"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
[dev-dependencies]
reqwest = "0.11"
The app resides in src/lib.rs:
use std::net::TcpListener;
use actix_web::{web, App, HttpServer, HttpResponse};
use actix_web::dev::Server;
// Return a 200 OK with no body - enough to prove the server is up
async fn health_check() -> HttpResponse {
HttpResponse::Ok().finish()
}
pub fn run(listener: TcpListener) -> Result<Server, std::io::Error> {
let server = HttpServer::new(|| {
App::new()
.route("/health_check", web::get().to(health_check))
})
.listen(listener)?
.run();
Ok(server)
}
Wire up the server in src/main.rs:
use std::net::TcpListener;
use zero2prod::run;
#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8000")?;
    run(listener)?.await
}
First integration test in tests/health_check.rs:
use std::net::TcpListener;
#[tokio::test]
async fn health_check_works() {
let address = spawn_app();
let client = reqwest::Client::new();
let response = client
.get(&format!("{}/health_check", &address))
.send()
.await
.expect("Failed to execute request.");
assert!(response.status().is_success());
assert_eq!(Some(0), response.content_length());
}
// Launch our application in the background and hand back its address
fn spawn_app() -> String {
let listener = TcpListener::bind("127.0.0.1:0")
.expect("Failed to bind random port");
let port = listener.local_addr().unwrap().port();
let server = zero2prod::run(listener).expect("failed to bind address");
let _ = tokio::spawn(server);
format!("http://127.0.0.1:{port}")
}
🎗 You can ONLY have 1 library in a project, but you can have many binaries! `[[bin]]` in Cargo.toml represents an array.
💡 Threads are for working in parallel, async is for waiting in parallel.
- `HttpServer` - handles all transport level concerns
- `App` - application logic: routing, middleware, request handlers, etc.
- `route` - an endpoint with 2 parameters: `path` and `route`
- `async fn greet()` - an asynchronous handler
- `Responder` - a trait for anything that can be converted into an `HttpResponse`
- `#[tokio::main]` - makes `main` run asynchronously on tokio's executor; it's just syntax sugar - the macro injects the runtime setup around the async body
Expanded macro `#[tokio::main]`:
// =====================================
// Recursive expansion of the main macro
// =====================================
fn main() -> std::io::Result<()> {
let body = async {
HttpServer::new(|| {
App::new()
.route("/", web::get().to(greet))
.route("/{name}", web::get().to(greet))
})
.bind("127.0.0.1:8000")?
.run()
.await
};
#[allow(clippy::expect_used, clippy::diverging_sub_expression)]
{
return tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.expect("Failed building the Runtime")
.block_on(body);
}
}
`#[tokio::test]` is the testing equivalent of `#[tokio::main]`; it also spares you from having to specify the `#[test]` attribute.
Expanded macro `#[tokio::test]`:
// =====================================
// Recursive expansion of the test macro
// =====================================
#[::core::prelude::v1::test]
fn health_check_works() {
let body = async {
let address = spawn_app();
let client = reqwest::Client::new();
let response = client
.get(&format!("{}/health_check", &address))
.send()
.await
.expect("Failed to execute request.");
assert!(response.status().is_success());
assert_eq!(Some(0), response.content_length());
};
tokio::pin!(body);
let body: ::std::pin::Pin<&mut dyn ::std::future::Future<Output = ()>> = body;
#[allow(clippy::expect_used, clippy::diverging_sub_expression)]
{
return tokio::runtime::Builder::new_current_thread()
.enable_all()
.build()
.expect("Failed building the Runtime")
.block_on(body);
}
}
- Endpoints exposed in an API define the contract
- A contract is a shared agreement between the server and its clients about the inputs and outputs of the system; its interface
- Backwards compatibility in this context looks like adding new endpoints
- Breaking changes in this context look like removing an endpoint, or removing/changing a field of the output schema
- Black box testing is when we verify the behaviour of a system by examining its output for a given set of inputs, without access to the details of its internal implementation
- `tokio::spawn` takes a future and hands it over to the runtime for polling, without waiting for its completion; it therefore runs concurrently with downstream futures and tasks (e.g. our test logic) [section 3.5]
- `tokio::test` spins up a new runtime at the beginning of each test case and shuts it down at the end; no cleanup is therefore required for zombie processes
- Port `0` is special-cased at the OS level: trying to bind port 0 triggers an OS scan for an available port, which is then bound to the application
- Table driven tests with tuples (see the sketch after this list)
- `await` has to be used directly inside an `async` fn/scope, except in `for in` loops
- For `PgConnection::connect` to work during tests, add a `.env` with `DATABASE_URL` to appease sqlx's compile-time checks; `configuration.yaml` is for runtime
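A minimal sketch of a table-driven test with tuples, reusing the `spawn_app` helper from the health check test and assuming a `POST /subscriptions` endpoint exists:

```rust
#[tokio::test]
async fn subscribe_returns_a_400_when_data_is_missing() {
    let address = spawn_app();
    let client = reqwest::Client::new();
    // Each tuple pairs an invalid body with a label for the failure message
    let test_cases = vec![
        ("name=le%20guin", "missing the email"),
        ("email=ursula_le_guin%40gmail.com", "missing the name"),
        ("", "missing both name and email"),
    ];
    for (invalid_body, error_message) in test_cases {
        let response = client
            .post(&format!("{}/subscriptions", &address))
            .header("Content-Type", "application/x-www-form-urlencoded")
            .body(invalid_body)
            .send()
            .await
            .expect("Failed to execute request.");
        assert_eq!(
            400,
            response.status().as_u16(),
            // Custom assertion message pinpoints which case failed
            "The API did not fail with 400 Bad Request when the payload was {}.",
            error_message
        );
    }
}
```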
When dealing with integration tests you want database interactions to be isolated; a clean slate for each test.
There are 2 techniques to ensure test isolation when interacting with a relational database:
- Wrap the whole test in a SQL transaction and roll it back at the end
- Fast and clever; rolling back a SQL transaction is quick
- Good for unit tests for queries but tricky for integration tests
  - The application will borrow a `PgConnection` from a `PgPool` and we have no way to capture that connection in a SQL transaction context
- Spin up a brand-new logical database for each integration test.
- Create a new logical database with a unique name
- Run database migrations on it
  - Clean-up is difficult; you have to delete/destroy the database after the test (see the sketch below)
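A minimal sketch of technique 2, assuming sqlx 0.6 with the `migrate` feature, the `uuid` crate, and hypothetical `postgres:password` credentials on localhost:

```rust
use sqlx::{Connection, Executor, PgConnection, PgPool};
use uuid::Uuid;

// Create a logical database with a unique name, migrate it, and hand back a pool.
pub async fn configure_database() -> PgPool {
    // Unique name per test run, so tests never share state
    let db_name = Uuid::new_v4().to_string();
    let mut connection =
        PgConnection::connect("postgres://postgres:password@127.0.0.1:5432/postgres")
            .await
            .expect("Failed to connect to Postgres");
    connection
        .execute(format!(r#"CREATE DATABASE "{}";"#, db_name).as_str())
        .await
        .expect("Failed to create database.");
    // Connect to the fresh database and run the migrations on it
    let pool = PgPool::connect(&format!(
        "postgres://postgres:password@127.0.0.1:5432/{}",
        db_name
    ))
    .await
    .expect("Failed to connect to Postgres.");
    sqlx::migrate!("./migrations")
        .run(&pool)
        .await
        .expect("Failed to migrate the database");
    pool
}
```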
`HttpServer::new` does not take an `App` as argument; instead it wants a closure that returns an `App` struct
- This is to support multi-threading
- Actix-web's runtime will spin up a worker for each available core on your machine
- Each worker runs its own copy of the application built by `HttpServer`
- So `.app_data` requires an argument that is clone-able (see the sketch below)
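A sketch of how that plays out when sharing a `PgPool`; `web::Data` does the clone-able wrapping (the `/subscriptions` handler is sketched later in these notes):

```rust
use std::net::TcpListener;

use actix_web::dev::Server;
use actix_web::{web, App, HttpResponse, HttpServer};
use sqlx::PgPool;

async fn health_check() -> HttpResponse {
    HttpResponse::Ok().finish()
}

pub fn run(listener: TcpListener, db_pool: PgPool) -> Result<Server, std::io::Error> {
    // web::Data wraps the pool in an Arc, so every worker's App gets a
    // cheap pointer clone of the same pool rather than a fresh copy.
    let db_pool = web::Data::new(db_pool);
    let server = HttpServer::new(move || {
        App::new()
            .route("/health_check", web::get().to(health_check))
            .app_data(db_pool.clone())
    })
    .listen(listener)?
    .run();
    Ok(server)
}
```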
An extractor provides type-safe access to request information
- An extractor can be accessed as an argument to a handler function
- Actix-web supports up to ~12 extractors per handler function
- Argument position does not matter
- An extractor is a type that implements the `FromRequest` trait
- `actix_web::web::Data` is another extractor; it wraps things in an `Arc` pointer
  - In this chapter we wrap our connection with `web::Data::new(...)`; instead of each worker getting a raw copy, each gets a pointer to the same one
Available extractors:
- `Path` to get dynamic path segments from a request's path
- `Query` for query parameters
- `Json` to parse a JSON-encoded request body
- Others, e.g. `Form` for URL-encoded bodies (shown below)
use actix_web::{post, web};
use serde::Deserialize;
#[derive(Deserialize)]
struct Info {
name: String,
}
// This handler is only called if:
// - request headers declare the content type as `application/x-www-form-urlencoded`
// - request payload is deserialized into an `Info` struct from the URL-encoded format
#[post("/")]
async fn index(form: web::Form<Info>) -> String {
format!("Welcome {}!", form.name)
}
Serde is a framework for **ser**ializing and **de**serializing Rust data structures efficiently and generically
- serde defines a set of interfaces; a data model
- you'll need to pull in crates for specific formats like serde_json, but serde is the base (see the sketch below)
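A quick sketch of that split: serde supplies the derive macros and the data model, serde_json supplies a concrete format (the `Subscription` struct is hypothetical):

```rust
use serde::{Deserialize, Serialize};

// serde's derives implement the generic (de)serialization interfaces...
#[derive(Serialize, Deserialize, Debug)]
struct Subscription {
    email: String,
    name: String,
}

fn main() -> serde_json::Result<()> {
    let sub = Subscription {
        email: "ursula@example.com".into(),
        name: "Ursula".into(),
    };
    // ...and serde_json maps them to and from the JSON format
    let json = serde_json::to_string(&sub)?;
    let parsed: Subscription = serde_json::from_str(&json)?;
    println!("{json} -> {parsed:?}");
    Ok(())
}
```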
Install sqlx-cli:
cargo install --version=0.5.7 sqlx-cli --no-default-features --features postgres
- sqlx-cli is a tool to manage database migrations
Then we'll need to install `psql` to interface with our Postgres container:
brew install libpq
brew link --force libpq
You can create a new migration from the command line with:
export DATABASE_URL=postgres://postgres:password@127.0.0.1:5432/newsletter
sqlx migrate add create_subscriptions_table
- This will create a migrations/{timestamp}_create_subscriptions_table.sql file; apply pending migrations with sqlx migrate run
Adding sqlx to our project:
[dependencies.sqlx]
version = "0.6"
default-features = false
features = [
"runtime-actix-rustls", # Use actix runtime for its futures and rustls instead of TLS
"macros", # Enables `sqlx::query!` and `sqlx::query_as!` macros
"postgres", # Unlocks postgres specific functionality; e.g. non-standard SQL types
"uuid", # Adds support for mapping SQL UUIDs to the Uuid type from the uuid crate
"chrono", # Adds support for mapping SQL timestamptz to the DateTime<T> form chrono crate
"migrate", # Migrate exposes same functions as sql-cli to manage migrations
]
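With the `migrate` feature you can also apply migrations from code, which is handy for the per-test database setup sketched earlier; a minimal sketch:

```rust
use sqlx::PgPool;

// Apply any pending migrations from the ./migrations directory.
// The path is resolved relative to the crate root at compile time.
async fn migrate(pool: &PgPool) -> Result<(), sqlx::migrate::MigrateError> {
    sqlx::migrate!("./migrations").run(pool).await
}
```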
- sqlx has an asynchronous interface, but it does not allow you to run multiple queries concurrently over the same database connection
- It requires a mutable connection reference so it can enforce this guarantee in its API
  - You can think of the mutable reference as a unique reference
  - The compiler guarantees the query has exclusive access to the PgConnection, because there cannot be two active mutable references to the same value at the same time in the whole program
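A small sketch of what the mutable reference buys us; the borrow checker serializes the two queries because each `execute` needs exclusive access:

```rust
use sqlx::PgConnection;

async fn run_two_queries(conn: &mut PgConnection) -> Result<(), sqlx::Error> {
    // Each call borrows the connection mutably, i.e. exclusively...
    sqlx::query("SELECT 1").execute(&mut *conn).await?;
    // ...so the second query cannot start until the first borrow ends.
    sqlx::query("SELECT 2").execute(&mut *conn).await?;
    Ok(())
}
```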
PgPool is a pool of connections to a Postgres database.
- It bypasses the concurrency issue of `PgConnection`
  - There is still interior mutability
- When you run a query against a `&PgPool`, sqlx will borrow a `PgConnection` from the pool and use it to execute the query; if no connection is available, it will create a new one or wait until one frees up
- This increases the number of concurrent queries that our application can run and it also improves its resiliency:
- A single slow query will not impact the performance of all incoming requests by creating contention on the connection lock
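Putting it together, a sketch of the subscribe handler running a query against the shared pool (assuming the subscriptions table from the migration has id, email, name and subscribed_at columns, as in the book):

```rust
use actix_web::{web, HttpResponse};
use chrono::Utc;
use sqlx::PgPool;
use uuid::Uuid;

#[derive(serde::Deserialize)]
pub struct FormData {
    email: String,
    name: String,
}

pub async fn subscribe(form: web::Form<FormData>, pool: web::Data<PgPool>) -> HttpResponse {
    // query! checks the SQL against DATABASE_URL at compile time
    match sqlx::query!(
        r#"
        INSERT INTO subscriptions (id, email, name, subscribed_at)
        VALUES ($1, $2, $3, $4)
        "#,
        Uuid::new_v4(),
        form.email,
        form.name,
        Utc::now()
    )
    // get_ref hands us the &PgPool inside web::Data
    .execute(pool.get_ref())
    .await
    {
        Ok(_) => HttpResponse::Ok().finish(),
        Err(e) => {
            println!("Failed to execute query: {}", e);
            HttpResponse::InternalServerError().finish()
        }
    }
}
```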
config organizes hierarchical or layered configurations for Rust applications.
tl;dr: config enables you to structure/organize your project's configuration however you see fit
Config lets you set a set of default parameters and then extend them via merging in configuration from a variety of sources:
- Environment variables
- String literals in well-known formats
- Another Config instance
- Files: TOML, JSON, YAML, INI, RON, JSON5 and custom ones defined with Format trait
- Manual, programmatic override (via a .set method on the Config instance)
Additionally, Config supports:
- Live watching and re-reading of configuration files
- Deep access into the merged configuration via a path syntax
- Deserialization via serde of the configuration or any subset defined via a path
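A minimal sketch of loading layered settings, assuming the builder API of recent config releases and a hypothetical `Settings` struct:

```rust
use serde::Deserialize;

#[derive(Deserialize)]
pub struct Settings {
    pub application_port: u16,
    pub database_url: String,
}

pub fn get_configuration() -> Result<Settings, config::ConfigError> {
    config::Config::builder()
        // Base values come from configuration.yaml in the crate root
        .add_source(config::File::with_name("configuration"))
        // e.g. APP_APPLICATION_PORT=9000 would override application_port
        .add_source(config::Environment::with_prefix("APP"))
        .build()?
        // Deserialize the merged layers into our Settings struct via serde
        .try_deserialize::<Settings>()
}
```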
- actix-web = Actix Web is a powerful, pragmatic, and extremely fast web framework for Rust
- tokio = An event-driven, non-blocking I/O platform for writing asynchronous I/O backed applications.
- reqwest = a higher-level HTTP client library
- serde = A generic serialization/deserialization framework
- sqlx = SQLx is an async, pure Rust SQL crate featuring compile-time checked queries without a DSL.
- config = Layered configuration system for Rust applications.
- chrono = Date and time library for Rust
- uuid = A library to generate and parse UUIDs.