A modern, Ring-compliant HTTP server and client for Clojure, designed for ease of use and performance
- Usage
- Requirements
- Building
- Start up options
- Creating a Donkey
- Server
- Client
- Metrics
- Debug mode
- Troubleshooting
- License
Including the library in project.clj
[com.appsflyer/donkey "0.5.2"]
Including the library in deps.edn
com.appsflyer/donkey {:mvn/version "0.5.2"}
Including the library in pom.xml
<dependency>
<groupId>com.appsflyer</groupId>
<artifactId>donkey</artifactId>
<version>0.5.2</version>
</dependency>
The preferred way to build the project for local development is using Maven. It's also possible to generate an uberjar using Leiningen, but you must use Maven to install the library locally.
Creating a jar with Maven
mvn package
Creating an uberjar with Leiningen
lein uberjar
Installing to a local repository
mvn clean install
JVM system properties that can be supplied when running the application:
- -Dvertx.threadChecks=false: Disable blocked thread checks. Used by Vert.x to warn the user if an event loop or worker thread is occupied above a certain threshold, which indicates the code should be examined.
- -Dvertx.disableContextTimings=true: Disable timing context execution. These timings are used by the blocked thread checker. It does not disable execution metrics that are exposed via JMX.
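For example, assuming the application is packaged as an uberjar as described above (the jar name here is illustrative), the properties are passed on the command line:
java -Dvertx.threadChecks=false -Dvertx.disableContextTimings=true -jar target/sample-app-standalone.jar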
In Donkey, you create HTTP servers and clients using a Donkey instance. Creating a Donkey is simple:
(ns com.appsflyer.sample-app
(:require [com.appsflyer.donkey.core :refer [create-donkey]]))
(def ^Donkey donkey-core (create-donkey))
We can also configure our Donkey instance:
(ns com.appsflyer.sample-app
(:require [com.appsflyer.donkey.core :refer [create-donkey]]))
(def donkey-core (create-donkey {:event-loops 4}))
There should only be a single Donkey instance per application. That's because the client and server will share the same resources, making them very efficient. Donkey is a factory for creating servers and clients (you can create multiple servers and clients with a single Donkey, but in almost all cases you will only want a single server and/or client per application).
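For example, here is a minimal sketch of sharing one Donkey instance between a server and a client (create-client is referred from the same com.appsflyer.donkey.core namespace, as shown in the Client section below):
(ns com.appsflyer.sample-app
  (:require [com.appsflyer.donkey.core :refer [create-donkey create-server create-client]]))

(def donkey-core (create-donkey))

;; Both the server and the client share donkey-core's resources
(def server
  (create-server donkey-core {:port   8080
                              :routes [{:handler (fn [_req respond _raise]
                                                   (respond {:body "ok"}))}]}))

(def client (create-client donkey-core))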
The following examples assume these required namespaces
(:require [com.appsflyer.donkey.core :refer [create-donkey create-server]]
[com.appsflyer.donkey.server :refer [start]]
[com.appsflyer.donkey.result :refer [on-success]])
Creating a server is done using a Donkey
instance. Let's start by creating a
server listening for requests on port 8080.
(->
(create-donkey)
(create-server {:port 8080})
start
(on-success (fn [_] (println "Server started listening on port 8080"))))
Note that the example above will not work yet - for it to work we need to add a route, which we will do next.
After creating the server we start it. This is an asynchronous call that may return before the server has actually started listening for incoming connections. It's possible to block the current thread until the server is running by calling start-sync instead, or by dereferencing the FutureResult returned by the arrow macro.
The next thing we need to do is define a route. We talk about routes in depth later on, but a route is basically a definition of an endpoint. Let's define a route and create a basic "Hello world" endpoint.
(->
(create-donkey)
(create-server {:port 8080
:routes [{:handler (fn [_request respond _raise]
(respond {:body "Hello, world!"}))}]})
start
(on-success (fn [_] (println "Server started listening on port 8080"))))
As you can see we added a :routes
key to the options map used to initialize
the server. A route is a map that describes what kind of requests are handled at
a specific resource address (or :path
), and how to handle them. The only
required key is :handler
, which will be called when a request matches a route.
In the example above we're saying that we would like any request to be handled
by our handler function.
Our handler is a Ring compliant asynchronous handler. If you are not familiar with the Ring async handler specification, here's an excerpt:
An asynchronous handler takes 3 arguments: a request map, a callback function for sending a response, and a callback function for raising an exception. The response callback takes a response map as its argument. The exception callback takes an exception as its argument.
In the handler we are calling the response callback respond
with a response
map where the body of the response is "Hello, world!".
If you run the example and open a browser at http://localhost:8080 you will see a page with "Hello, world!".
In Donkey HTTP requests are routed to handlers. When you initialize a server you
define a set of routes that it should handle. When a request arrives the server
checks if one of the routes can handle the request. If no matching route is
found, then a 404 Not Found
response is returned to the client.
Let's see a route example:
{
:handler (fn [request respond raise] ...)
:handler-mode :non-blocking
:path "/api/v2"
:match-type :simple
:methods [:get :put :post :delete]
:consumes ["application/json"]
:produces ["application/json"]
:middleware [(fn [handler] (fn [request respond raise] (handler request respond raise)))]
}
:handler
A function that accepts 1 or 3 arguments (depending on
:handler-mode
). The function will be called if a request matches the route.
This is where you call your application code. The handler should return a
response map with the following optional fields:
- :status: The response status code (defaults to 200)
- :headers: Map of key -> value String pairs
- :body: The response body as byte[], String, or InputStream
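For example, a handler could return:
{:status  200
 :headers {"Content-Type" "text/plain"}
 :body    "Hello, world!"}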
:handler-mode
To better understand the use of the :handler-mode
, we need to
first get some background about Donkey. Donkey is an abstraction built on top of
a web tool-kit called Vert.x, which in turn is built on a
very popular and performant networking library called
Netty. Netty's architecture is based on the concept of a
single threaded event loop that serves requests. An event loop is conceptually a
long-running task with a queue of events it needs to dispatch. As long as events
are dispatched "quickly" and don't occupy too much of the event loop's time, it
can dispatch events at a very high rate. Because it is single threaded, or in
other words serial, during the time it takes to dispatch one event no other
event can be dispatched. Therefore, it's extremely important not to block the
event loop.
The :handler-mode
is a contract where you declare the type of handling your
route does - :blocking
or :non-blocking
(default).
:non-blocking
means that the handler is performing very fast CPU-bound tasks,
or non-blocking IO bound tasks. In both cases the guarantee is that it will not
block the event loop. In this case the :handler
must accept 3 arguments.
Sometimes reality has it that we have to deal with legacy code that is doing
some blocking operations that we just cannot change easily. For these occasions
we have :blocking
handler mode. In this case, the handler will be called on a
separate worker thread pool without needing to worry about blocking the event
loop. The worker thread pool size can be configured when creating a
Donkey
instance by setting the :worker-threads
option.
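For example, here is a sketch of a blocking route served by a larger worker pool (the path, pool size, and handler body are illustrative):
(-> (create-donkey {:worker-threads 16})
    (create-server {:port   8080
                    :routes [{:path         "/report"
                              :handler-mode :blocking
                              ;; a one-argument handler that may block the
                              ;; worker thread, e.g. legacy blocking IO
                              :handler      (fn [_request]
                                              {:status 200
                                               :body   "done"})}]})
    start)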
:path
is the first thing a route is matched on. It is the part after the
hostname in a URI that identifies a resource on the host the client is trying to
access. The way the path is matched depends on the :match-type
.
:match-type
can be either :simple
or :regex
.
:simple
match type will match in two ways:
- Exact match. Going back to the example route at the beginning of the section, the route will only match requests to http://localhost:8080/api/v2. It will not match requests to:
  - http://localhost:8080/api
  - http://localhost:8080/api/v3
  - http://localhost:8080/api/v2/user
- Path variables. Take for example the path /api/v2/user/:id/address. :id is a path variable that matches on any sub-path. All the following paths will match:
  - /api/v2/user/1035/address
  - /api/v2/user/2/address
  - /api/v2/user/foo/address

The really nice thing about path variables is that you get the value that was in the path when it matched, in the request. The value will be available in the :path-params map. If we take the first example, the request will look like this:
{
;; ...
:path-params {"id" "1035"}
;; ...
}
:regex
match type will match on arbitrary regular expressions. For example, if we wanted to only match the /api/v2/user/:id/address
path if :id
is a number,
then we could use :match-type :regex
and supply this path:
/api/v2/user/[0-9]+/address
. In this case the route will only match if a
client requests the path with a numeric id, but we won't have access to the id
in the :path-params
map. If we wanted the id we could fix it by adding
capturing groups: /api/v2/user/([0-9]+)/address
. Now everything within the
parentheses will be available in :path-params.
{
:path-params {"param0" "1035"}
}
We can also add multiple capturing groups, for example the path
/api/v(\d+\.\d{1})/user/([0-9]+)/address
will match /api/v4.7/user/9/address
and :path-params
will include both capturing groups.
{
:path-params {"param0" "4.7"
"param1" "9"}
}
:methods
is a vector of HTTP methods the route supports, such as GET, POST,
etc. By default, any method will match the route.
:consumes
is a vector of media types that the handler can consume. If a route
matches but the Content-Type
header of the request doesn't match one of the
supported media types, then the request will be rejected with a
415 Unsupported Media Type
code.
:produces
is a vector of media types that the handler produces. If a route
matches but the Accept
header of the request doesn't match one of the
supported media types, then the request will be rejected with a
406 Not Acceptable
code.
:middleware
is a vector of middleware functions that will be
applied to the route. It is also possible to supply a "global"
:middleware
vector when creating a server that will be
applied to all the routes. In that case the global middleware will be applied
first, followed by the middleware specific to the route.
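For example, here is a sketch of combining a server-level middleware with a route-level one; per the ordering described above, the global middleware runs first (the keys added to the request are illustrative):
(-> (create-donkey)
    (create-server
      {:port       8080
       ;; applied to every route, before any route-specific middleware
       :middleware [(fn [handler]
                      (fn [request respond raise]
                        (handler (assoc request :global? true) respond raise)))]
       :routes     [{:path       "/"
                     :middleware [(fn [handler]
                                    (fn [request respond raise]
                                      (handler (assoc request :route? true) respond raise)))]
                     :handler    (fn [request respond _raise]
                                   (respond {:body (str (select-keys request [:global? :route?]))}))}]})
    start)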
Sometimes we have an existing service using some HTTP server and routing libraries such as Compojure or reitit, and we don't have time to rewrite the routing logic right away. It's very easy to simply plug all your existing routing logic to Donkey without changing a line of code.
We'll use Compojure and reitit as examples, but the same goes for any other Ring compatible library you use.
Here is an excerpt from Metosin's reitit Ring-router documentation, demonstrating how to create a simple router.
(require '[reitit.ring :as ring])
(defn handler [_]
{:status 200, :body "ok"})
(defn wrap [handler id]
(fn [request]
(update (handler request) :wrap (fnil conj '()) id)))
(def app
(ring/ring-handler
(ring/router
["/api" {:middleware [[wrap :api]]}
["/ping" {:get handler
:name ::ping}]
["/admin" {:middleware [[wrap :admin]]}
["/users" {:get handler
:post handler}]]])))
Now let's see how you would use this router with Donkey.
(->
(create-donkey)
(create-server {:port 8080
:routes [{:handler app
:handler-mode :blocking}]})
start)
That's it!
Basically, we're creating a single route that will match any request to the
server and will delegate the routing logic and request handling to the reitit
router. You'll notice we had to add :handler-mode :blocking
to the route.
That's because this particular example uses the one-argument Ring handler. If we
add a three argument arity to handler
and wrap
, then we'll be able to remove
:handler-mode :blocking
and use the default non-blocking mode.
Here is an excerpt from James Reeves' Compojure repository on GitHub, demonstrating how to create a simple router.
(ns hello-world.core
(:require [compojure.core :refer :all]
[compojure.route :as route]))
(defroutes app
(GET "/" [] "<h1>Hello World</h1>")
(route/not-found "<h1>Page not found</h1>"))
To use this router with Donkey we do exactly the same thing we did for reitit's router.
(->
(create-donkey)
(create-server {:port 8080
:routes [{:handler app
:handler-mode :blocking}]})
start)
Every server needs to be able to serve static resources such as HTML,
JavaScript, or image files. In Donkey, you configure how to serve static files
by providing a :resources
map when creating the server. An example is worth a
thousand words:
:resources {:enable-caching true
:max-age-seconds 1800
:local-cache-duration-seconds 60
:local-cache-size 1000
:resources-root "public"
:index-page "home.html"
:routes [{:path "/"}
{:path "/js/.+\.min\.js"}
{:path "/images/.+"
:produces ["image/*"]}]}
The configuration enables cache handling via the Cache-Control
header, and
defines when cached resources become stale. The :index-page
tells the server
which file to serve when a directory is requested, and the :resources-root is the directory where all assets reside.
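Putting it together, the :resources map is passed alongside the rest of the server options when creating the server (a trimmed-down sketch):
(-> (create-donkey)
    (create-server {:port      8080
                    :resources {:resources-root "public"
                                :index-page     "home.html"
                                :routes         [{:path "/"}]}})
    start)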
Now let's take a look at the :routes
vector that defines the paths where
different resources are located. The first route defines the file that's served
when requesting the root directory of the site. For example, if our site's
hostname is example.com
, then when the server gets a request
for http://example.com
or http://example.com/
it will serve the index page home.html
. The file is served from
<path to resources directory>/public/home.html
The second and third routes use regular expressions to define which files should
be served from the js
and images directories. Here is an example of a request
for a JavaScript file:
http://example.com/js/app.min.js
In this example, if the unminified files are requested the route won't match:
http://example.com/js/app.js ;; will return 404 not found
The third route defines where images are served from, and it also declares that
it will only serve files with mime type image/*
. If the request's
Accept
header doesn't match an image mime type, then the request will be
rejected with a 406 Not Acceptable
code.
The term "middleware" is generally used in the context of HTTP frameworks as a pluggable unit of functionality that can examine or manipulate the flow of bytes between a client and a server. In other words, it allows users to do things such as logging, compression, validation, authorization, and transformation (to name a few) of requests and responses.
According to
the Ring
specification, middleware are implemented
as higher-order functions
that accept one or more arguments, where the first argument is the
next handler
function, and any optional arguments required by the middleware.
A handler
in this context can be either another middleware, or
a route handler. The higher-order function should return a function
that accepts one or three arguments:
- One argument: Called when :handler-mode is :blocking with a request map.
- Three arguments: Called when :handler-mode is :non-blocking with a request map, respond function, and raise function. The respond function should be called with the result of the next handler, and the raise function should be called when it is impossible to continue processing the request because of an exception.
The handler
argument that was given to the higher-order function has the same
signature as the function being returned. It is the middleware author's
responsibility to call the next handler
at some point.
Let's start with a middleware that adds a timestamp to a request. It can be called with :handler-mode :blocking or :non-blocking:
(defn add-timestamp-middleware [handler]
(fn
([request]
(handler
(assoc request :timestamp (System/currentTimeMillis))))
([request respond raise]
(try
(handler
(assoc request :timestamp (System/currentTimeMillis)) respond raise)
(catch Exception ex
(raise ex))))))
In the last example we updated the request and called the next handler with the
transformed request. However, middleware is not limited to only processing and
transforming the request. Here is an example of a three argument middleware that
adds a Content-Type
header to the response.
(defn add-content-type-middleware [handler]
(fn [request respond raise]
(let [respond' (fn [response]
(try
(respond
(update response :headers assoc "Content-Type" "text/plain"))
(catch Exception ex
(raise ex))))]
(handler request respond' raise))))
As mentioned before, the three argument function is called when the
:handler-mode
is :non-blocking
. Notice that we are doing the processing on
the calling thread - the event loop. That's because the overhead of
context switching
and potentially spawning a new thread by offloading a simple assoc
or update
to a separate thread pool would greatly outweigh the processing time
on the event loop. However, if for example we had a middleware that performs
some blocking operation on a remote database, then we would need to run it on a
separate thread.
In this example we authenticate a user with a remote service. For the sake of
the example, all we need to know is that we get back a
CompletableFuture
that is executed on a different thread. When the future completes, we check if
we had an exception, and then either call the next handler
with the updated
request, or stop the execution by calling raise
.
;; Assumes the namespace imports java.util.concurrent.CompletableFuture and
;; java.util.function.BiConsumer, and that `authenticate-user` is a function
;; that returns a CompletableFuture executed on another thread.
(defn user-authentication-middleware [handler]
  (fn [request respond raise]
    (.whenComplete
      ^CompletableFuture (authenticate-user request)
      (reify BiConsumer
        (accept [this result exception]
          (if (nil? exception)
            (handler (assoc request :authenticated result) respond raise)
            (raise exception)))))))
There are some common operations that Donkey provides as pre-made middleware
that can be found under com.appsflyer.donkey.middleware.*
namespaces. All the
middleware that come with Donkey take an optional options map. The options map
can be used, for example, to supply an exception handler.
A very common use case is inspecting the query parameters sent by a client in
the url of a GET request. By default, the query parameters are available in the
request as a string under :query-string
. It would be much more useful if we
also had a map of name value pairs we can easily use.
(:require [com.appsflyer.donkey.middleware.params :refer [parse-query-params]])
(->
(create-donkey)
(create-server {:port 8080
:routes [{:path "/greet"
:methods [:get]
:handler (fn [req res _err]
(res {:body (str "Hello, "
(get-in req [:query-params "fname"])
" "
(get-in req [:query-params "lname"]))}))
:middleware [(parse-query-params)]}]})
start)
In this example we are using the parse-query-params
middleware, that does
exactly that. Now if we make a GET
request
http://localhost:8080/greet?fname=foo&lname=bar
we'll get back:
Hello, foo bar
Another common use case is converting the names of each query parameter into a keyword. We can achieve both objectives with one middleware:
(:require [com.appsflyer.donkey.middleware.params :refer [parse-query-params]])
(->
(create-donkey)
(create-server {:port 8080
:routes [{:path "/greet"
:methods [:get]
:handler (fn [req res _err]
(res {:body (str "Hello, "
(-> req :query-params :fname)
" "
(-> req :query-params :lname))}))
:middleware [(parse-query-params {:keywordize true})]}]})
start)
Consumes & Produces (see Routes section)
(->
(donkey/create-donkey)
(donkey/create-server
{:port 8080
:routes [{:path "/hello-world"
:methods [:get]
:handler-mode :blocking
:consumes ["text/plain"]
:produces ["application/json"]
:handler (fn [request]
{:status 200
:body "{\"greet\":\"Hello world!\"}"})}]})
server/start)
Path variables (see Routes section)
(->
(donkey/create-donkey)
(donkey/create-server
{:port 8080
:routes [{:path "/greet/:name"
:methods [:get]
:consumes ["text/plain"]
:handler (fn [req respond _raise]
(respond
{:status 200
:headers {"content-type" "text/plain"}
:body (str "Hello " (-> req :path-params (get "name")))}))}]})
server/start)
The following examples assume these required namespaces
(:require [com.appsflyer.donkey.core :as donkey]
[com.appsflyer.donkey.client :refer [request stop]]
[com.appsflyer.donkey.result :refer [on-complete on-success on-fail]]
[com.appsflyer.donkey.request :refer [submit submit-form submit-multipart-form]])
Creating a client is as simple as this
(let [donkey-client (->
(donkey/create-donkey)
donkey/create-client)])
We can set up the client with some default options, so we won't need to supply them on every request
(let [donkey-client (->
                      (donkey/create-donkey)
                      (donkey/create-client
                        {:default-host               "reqres.in"
                         :default-port               443
                         :ssl                        true
                         :keep-alive                 true
                         :keep-alive-timeout-seconds 30
                         :connect-timeout-seconds    10
                         :idle-timeout-seconds       20
                         :enable-user-agent          true
                         :user-agent                 "Donkey Server"
                         :compression                true}))]
(-> donkey-client
(request {:method :get
:uri "/api/users"})
submit
(on-complete
(fn [res ex]
(println (if ex "Failed!" "Success!"))))))
The previous example made an HTTPS request to some REST api and printed out "Failed!" if an exception was received, or "Success!" if we got a response from the server. We'll discuss how submitting requests and handling responses work shortly.
Once we're done with a client we should always stop it. This will release all the resources being held by the client, such as connections, event loops, etc. You should reuse a single client throughout the lifetime of the application, and stop it only if it won't be used again. Once stopped, it should not be used again.
(stop donkey-client)
When creating a request we supply an options map that defines it. The map has to
contain a :method key, and either a :uri or a :url. The :uri
defines the location of the resource being requested, for example:
(->
donkey-client
(request {:method :get
:uri "/api/v1/users"}))
The :url
key defines the absolute URL of the resource, for example:
(->
donkey-client
(request {:method :get
:url "http://www.example.com/api/v1/users"}))
When a :url is supplied, the :uri, :port, :host, and :ssl keys are ignored.
Calling (def async-request (request donkey-client opts))
creates an
AsyncRequest
but does not submit the request yet. You can reuse an
AsyncRequest
instance to make the same request multiple times. There are
several ways a request can be submitted:
- (submit async-request) submits a request without a body. This is usually the case when doing a GET request.
- (submit async-request body) submits a request with a raw body. body can be either a string, or a byte array. A typical use case would be POSTing serialized data such as JSON. Another common use case is sending binary data by also adding a Content-Type: application/octet-stream header to the request.
- (submit-form async-request body) submits an urlencoded form. A Content-Type: application/x-www-form-urlencoded header will be added to the request, and the body will be urlencoded. body is a map of string key-value pairs. For example, this is how you would typically submit a sign in form on a website:
(submit-form async-request {"email" "frankies15@example.com"
"password" "password"})
- (submit-multipart-form async-request body) submits a multipart form. A Content-Type: multipart/form-data header will be added to the request. Multipart forms can be used to send simple key-value attribute pairs, and to upload files. For example, you can upload a file from the filesystem along with some attributes like this:
(submit-multipart-form
async-request
{"Lyrics" "Phil Silvers"
"Music" "Jimmy Van Heusen"
"Title" "Nancy (with the Laughing Face)"
"Media Type" "MP3"
"Media" {
"filename" "nancy.mp3"
"pathname" "/home/bill/Music/Sinatra/Best of Columbia/nancy.mp3"
"media-type" "audio/mpeg"
"upload-as" "binary"}})
Requests are submitted asynchronously, meaning the request is executed on a
background thread, and calls to submit[-xxx]*
return a FutureResult
immediately. You can think of a FutureResult
as a way to subscribe to an event
that may have happened or will happen some time in the future. The api is very
simple:
- (on-success async-result (fn [result])) will call the supplied function with a response map from the server, iff there were no client side errors while executing the request. Client side errors include an unhandled exception, or problems connecting with the server. It does not include server errors such as 4xx or 5xx response status codes. The response will have the usual Ring fields: :status, :body, and optional :headers.
- (on-fail async-result (fn [ex])) will call the supplied function with an ExceptionInfo indicating the request failed due to a client error.
- (on-complete async-result (fn [result ex])) will always call the supplied function whether the request was successful or not. A successful request will be called with ex being nil, and a failed request will be called with result being nil. The two are mutually exclusive, which makes it simple to check the outcome of the request.
If the response is irrelevant as is the case in "call and forget" type requests, then the result can be ignored:
(submit async-request) ; => The `FutureResult` returned is ignored
... do the rest of your application logic
Or if you are only interested to know if the request failed:
(->
(submit async-request)
(on-fail (fn [ex] (println (str "Oh, no. That was not expected - " (ex-message ex))))))
... do the rest of your application logic
Although it is not recommended in the context of asynchronous operations, results can also be dereferenced:
(let [result @(submit async-request)]
(if (map? result)
(println "Yea!")
(println "Nay :(")))
In this case the call to submit
will block the calling thread until a result
is available. The result may be either a response map, if the request was
successful, or an ExceptionInfo
if it wasn't.
Each function returns a new FutureResult
instance, which makes it possible to
chain handlers. Let's look at an example:
(ns com.appsflyer.donkey.example
(:require [com.appsflyer.donkey.result :as result])
(:import (com.appsflyer.donkey FutureResult)))
; Chaining example. Each function gets the return value of the previous
(letfn [(increment [val]
(let [res (update val :count (fnil inc 0))]
(println res)
res))]
(->
(FutureResult/create {})
(result/on-success increment)
(result/on-success increment)
(result/on-success increment)
(result/on-fail (fn [_ex] (println "We have a problem")))))
; Output:
; {:count 1}
; {:count 2}
; {:count 3}
We start off by defining an increment function that takes a map and increments its :count key. We then create a FutureResult that completes with an empty map. The example shows how chaining the result of one function to the next works.
The rest of the examples assume the following vars are defined
(def donkey-core (donkey/create-donkey))
(def donkey-client (donkey/create-client donkey-core))
Making HTTPS requests requires setting :ssl
to true
and :default-port
or
:port
when creating a client or a request respectively.
(->
(request donkey-client {:host "reqres.in"
:port 443
:ssl true
:uri "/api/users?page=2"
:method :get})
submit
(on-success (fn [res] (println res)))
(on-fail (fn [ex] (println ex))))
; Will output something like this:
; {:status 200,
;  :headers {Age 365, Access-Control-Allow-Origin *, CF-Cache-Status HIT, Via 1.1 vegur, Set-Cookie __cfduid=1234.abcd; expires=Mon, 12-Oct-20 14:50:48 GMT; path=/; domain=.reqres.in; HttpOnly; SameSite=Lax; Secure, Date Sat, 12 Sep 2020 14:50:48 GMT, Accept-Ranges bytes, cf-request-id 0909abcd, Expect-CT max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct", Cache-Control max-age=14400, Content-Length 1245, Server cloudflare, Content-Type application/json; charset=utf-8, Connection keep-alive, Etag W/"4dd-IPv5LdOOb6s5S9E3i59wBCJ1k/0", X-Powered-By Express, CF-RAY 5d1a7165fa2cad73-TLV},
;  :body #object[[B 0x7be7d50c [B@7be7d50c]}
The library uses Dropwizard to capture different metrics. The metrics can be largely grouped into three categories:
- Thread Pool
- Server
- Client
Metrics collection can be set up when creating a Donkey by supplying a pre-instantiated instance of MetricRegistry. It's the user's responsibility to implement reporting to a monitoring backend such as Prometheus or Graphite. As described later, metrics are named using a dot (.) separator. By default, all metrics are prefixed with donkey, but it's also possible to supply a :metrics-prefix with the :metric-registry to use a different string.
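For example, here is a sketch of enabling metrics with a custom prefix, assuming the :metric-registry and :metrics-prefix options are passed to create-donkey as described above:
(ns com.appsflyer.sample-app
  (:require [com.appsflyer.donkey.core :refer [create-donkey]])
  (:import (com.codahale.metrics MetricRegistry)))

(def registry (MetricRegistry.))

(def donkey-core
  (create-donkey {:metric-registry registry
                  :metrics-prefix  "my-app"}))

;; Reporting is up to the user, e.g. attaching a Dropwizard reporter
;; (console, JMX, Graphite, etc.) to `registry`.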
Base name: <:metrics-prefix>
- event-loop-size - A Gauge of the number of threads in the event loop pool
- worker-pool-size - A Gauge of the number of threads in the worker pool
Base name: <:metrics-prefix>.pools.worker.vert.x-worker-thread
- queue-delay - A Timer measuring the duration of the delay to obtain the resource, i.e. the wait time in the queue
- queue-size - A Counter of the actual number of waiters in the queue
- usage - A Timer measuring the duration of the usage of the resource
- in-use - A count of the actual number of resources used
- pool-ratio - A ratio Gauge of the in use resource / pool size
- max-pool-size - A Gauge of the max pool size
Base name: <:metrics-prefix>.http.servers.<host>:<port>
- open-netsockets - A Counter of the number of open net socket connections
- open-netsockets.<remote-host> - A Counter of the number of open net socket connections for a particular remote host
- connections - A Timer of a connection and the rate of its occurrence
- exceptions - A Counter of the number of exceptions
- bytes-read - A Histogram of the number of bytes read
- bytes-written - A Histogram of the number of bytes written
- requests - A Throughput Timer of a request and the rate of its occurrence
- <http-method>-requests - A Throughput Timer of a specific HTTP method request, and the rate of its occurrence. Examples: get-requests, post-requests
- responses-1xx - A ThroughputMeter of the 1xx response code
- responses-2xx - A ThroughputMeter of the 2xx response code
- responses-3xx - A ThroughputMeter of the 3xx response code
- responses-4xx - A ThroughputMeter of the 4xx response code
- responses-5xx - A ThroughputMeter of the 5xx response code
Base name: <:metrics-prefix>.http.clients
- open-netsockets - A Counter of the number of open net socket connections
- open-netsockets.<remote-host> - A Counter of the number of open net socket connections for a particular remote host
- connections - A Timer of a connection and the rate of its occurrence
- exceptions - A Counter of the number of exceptions
- bytes-read - A Histogram of the number of bytes read
- bytes-written - A Histogram of the number of bytes written
- connections.max-pool-size - A Gauge of the max connection pool size
- connections.pool-ratio - A ratio Gauge of the open connections / max connection pool size
- responses-1xx - A Meter of the 1xx response code
- responses-2xx - A Meter of the 2xx response code
- responses-3xx - A Meter of the 3xx response code
- responses-4xx - A Meter of the 4xx response code
- responses-5xx - A Meter of the 5xx response code
Debug mode is activated when creating a Donkey
with :debug true
. In this
mode several loggers are set to log at the trace
level. It means the logs will
be very verbose. For that reason it is not suitable for production use, and
should only be enabled in development as needed.
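For example (do not use this in production):
(def donkey-core (create-donkey {:debug true}))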
The logs include:
- All of Netty's low level networking, system configuration, memory leak detection logs and more.
- Hexadecimal representation of each batch of packets being transmitted to a server or from a client.
- Request routing, which is useful to debug a route that is not being matched.
- Donkey trace logs.
The library doesn't include any logging implementation, and can be used with any
SLF4J compatible logging library. The exception is when
running in debug
mode. In order to dynamically change the logging level
without forcing users to add XML configuration files, Donkey
uses Logback as its implementation. It should be
included on the project's classpath, otherwise a warning will be printed and
debug logging will be disabled.
Execution error (ClassNotFoundException) at jdk.internal.loader.BuiltinClassLoader/loadClass (BuiltinClassLoader.java:581). com.codahale.metrics.JmxAttributeGauge
Donkey has a transitive dependency on io.dropwizard.metrics/metrics-core version
4.X.X. If you are using a library that is dependent on version 3.X.X then you
could get a dependency collision. To avoid it you can exclude the dependency
when importing Donkey. For example:
project.clj
:dependencies [[com.appsflyer/donkey "0.5.2" :exclusions [io.dropwizard.metrics/metrics-core]]]
deps.edn
{:deps
{com.appsflyer/donkey {:mvn/version "0.5.2"
:exclusions [io.dropwizard.metrics/metrics-core]}}}
Copyright 2020 AppsFlyer
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.