Persistence for REST resources in the filesystem, AWS S3 storage or a Redis database.
Stores resources in a hierarchical way according to their URI. It effectively implements a generic CRUD REST service.
It uses the usual MIME type mapping to determine the content type, so you can also use it as a web server. Without a file extension, JSON is assumed.
The following methods are supported on leaves (documents):
- GET: Returns the content of the resource.
- PUT: Stores the request body in the resource.
- DELETE: Deletes the resource.
The following methods are supported on intermediate nodes (collections):
- GET: Returns the list of collection members. Serves JSON and HTML representations.
- POST (StorageExpand): Returns the expanded content of the sub-resources of the (collection) resource. The depth is limited to one level. See the description below.
- DELETE: Delete the collection and all its members.
It can run as a standalone module or be integrated into an existing application by instantiating the RestStorageHandler class directly.
- clone the repository
- Install and start Redis
- Debian/Ubuntu:
apt-get install redis-server
- Fedora/RedHat/CentOS:
yum install redis
- OS X:
brew install redis
- Windows
- Other
- run
mvn install -Dmaven.test.skip=true
- run the fat jar with `java -jar build/libs/rest-storage-x.x.x-all.jar`
- you get a rest-storage that stores to the filesystem in the directory where you started it. If you want to use rest-storage with Redis, you have to pass the configuration in a JSON file (an example follows below) with
-conf conf.json
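A minimal conf.json for the Redis storage could look like this (the property names are taken from the configuration table below; the values are only illustrative):

{
  "storageType": "redis",
  "redisHost": "localhost",
  "redisPort": 6379
}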
Invoking a GET request on a leaf (document) returns the content of the resource.
GET /storage/resources/resource_1
Invoking a GET request on a collection returns the list of collection members.
GET /storage/resources/
Parameter | Description |
---|---|
limit | Defines the maximum number of resources returned |
offset | Defines the number of resources to skip. Can be combined with limit to provide paging functionality |
Given a collection of ten items (res1-res10) under the path /server/tests/offset/resources/
Request | Returned items |
---|---|
GET /server/tests/offset/resources/?limit=10 | all |
GET /server/tests/offset/resources/?limit=99 | all |
GET /server/tests/offset/resources/?limit=5 | res1,res10,res2,res3,res4 |
GET /server/tests/offset/resources/?offset=2 | res2,res3,res4,res5,res6,res7,res8,res9 |
GET /server/tests/offset/resources/?offset=11 | no items (empty array) |
GET /server/tests/offset/resources/?offset=2&limit=-1 | res2,res3,res4,res5,res6,res7,res8,res9 |
GET /server/tests/offset/resources/?offset=0&limit=3 | res1,res10,res2 |
GET /server/tests/offset/resources/?offset=1&limit=10 | res10,res2,res3,res4,res5,res6,res7,res8,res9 |
The returned JSON response looks like this:
{
"resources": [
"res1",
"res10",
"res2",
"res3",
"res4"
]
}
Invoking a DELETE request on a leaf (document) deletes the resource.
DELETE /storage/resources/resource_1
Invoking a DELETE request on a collection deletes the collection and all its children.
DELETE /storage/resources/
Parameter | Description |
---|---|
recursive | When the configuration property confirmCollectionDelete is set to true, the URL parameter recursive=true has to be added to delete collections. |
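For example, with confirmCollectionDelete enabled, a collection is deleted like this:

DELETE /storage/resources/?recursive=true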
The StorageExpand feature expands the hierarchical resources and returns them as a single concatenated JSON resource.
Having the following resources in the storage:
key: data:test:collection:resource1 value: {"myProp1": "myVal1"}
key: data:test:collection:resource2 value: {"myProp2": "myVal2"}
key: data:test:collection:resource3 value: {"myProp3": "myVal3"}
would lead to this result
{
"collection" : {
"resource1" : {
"myProp1": "myVal1"
},
"resource2" : {
"myProp2": "myVal2"
},
"resource3" : {
"myProp3": "myVal3"
}
}
}
To use the StorageExpand feature, you have to make a POST request with the URL parameter storageExpand=true to the collection you want to expand. Also, you will have to send the names of the sub-resources in the body of the request. Using the example above, the request would look like this:
POST /yourStorageURL/collection?storageExpand=true with the body:
{
"subResources" : ["resource1", "resource2", "resource3"]
}
The number of sub-resources that can be provided is defined in the configuration by the property maxStorageExpandSubresources.
To override this for a single request, add the following request header with an appropriate value:
x-max-expand-resources: 1500
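A minimal sketch of such a request using the Vert.x WebClient (this assumes the vertx-web-client dependency on the classpath and a storage listening on localhost:8989; host, port and paths are illustrative and reuse the example collection above):

WebClient client = WebClient.create(vertx); // vertx is an existing Vertx instance

JsonObject body = new JsonObject()
        .put("subResources", new JsonArray().add("resource1").add("resource2").add("resource3"));

client.post(8989, "localhost", "/yourStorageURL/collection")
        .addQueryParam("storageExpand", "true")
        .putHeader("x-max-expand-resources", "1500") // optional per-request override, see above
        .sendJsonObject(body, ar -> {
            if (ar.succeeded()) {
                // prints the expanded collection as shown in the example result above
                System.out.println(ar.result().bodyAsJsonObject());
            } else {
                ar.cause().printStackTrace();
            }
        });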
The Redis storage provides a feature to reject PUT requests when memory gets low. The information about the used memory is obtained from the Redis INFO command.
Attention:
The stats returned by the INFO command depend on the Redis version. The required stats are used_memory and total_system_memory. Without these stats, the feature is disabled!
To enable the feature, set the rejectStorageWriteOnLowMemory property (ModuleConfiguration) to true. Additionally, the freeMemoryCheckIntervalMs property can be changed to modify the interval for current memory usage calculation.
To define the importance level of PUT requests, add the following header:
x-importance-level: 75
The value defines the percentage of used memory that must not be exceeded for the PUT request to be accepted. In the example above, the request will only be accepted when the currently used memory is lower than 75%. When the currently used memory is higher than 75%, the request is rejected with status code 507 Insufficient Storage.
The higher the x-importance-level value, the more important the request. When no x-importance-level header is provided, the request is handled with the highest importance.
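As a hedged sketch (illustrative path and value, reusing the WebClient from the StorageExpand example and assuming rejectStorageWriteOnLowMemory is enabled), a PUT that is only accepted while used memory stays below 60% could look like this:

client.put(8989, "localhost", "/storage/resources/resource_1")
        .putHeader("x-importance-level", "60")
        .sendJsonObject(new JsonObject().put("myProp1", "myVal1"), ar -> {
            if (ar.succeeded() && ar.result().statusCode() == 507) {
                // rejected: used memory is at or above 60%, retry later or drop the update
            }
        });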
The lock mechanism allows you to lock a resource for a specified time. This way only the owner of the lock is able to write or delete the given resource. To lock a resource, you have to add the following headers to your PUT or DELETE request.
Headers | Type | Default value | Description |
---|---|---|---|
x-lock | String | | The owner of the lock. |
x-lock-mode | silent / reject | silent | With silent, any PUT or DELETE performed on this resource without the valid owner has no effect and gets 200 OK back. With reject, such a request has no effect and gets 409 Conflict back. |
x-lock-expire-after | long | 300 | Defines the lock lifetime in seconds. |
x-expire-after | long | | Defines the lifetime of a resource |
Warning:
The lock will always be removed if you perform a DELETE on a collection containing a locked resource. There is no check for locks in collections.
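A sketch of locking a resource while updating it, using the headers from the table above (again with the Vert.x WebClient; owner name and values are illustrative):

client.put(8989, "localhost", "/storage/resources/resource_1")
        .putHeader("x-lock", "my-service")      // the owner of the lock
        .putHeader("x-lock-mode", "reject")     // others get 409 Conflict
        .putHeader("x-lock-expire-after", "60") // lock lifetime in seconds
        .sendJsonObject(new JsonObject().put("myProp1", "myVal1"), ar -> {
            // while the lock is valid, only requests carrying x-lock: my-service
            // can modify or delete this resource
        });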
To optimize memory usage when using the Redis storage, it is possible to store resources compressed with the gzip compression algorithm.
To store a resource compressed, add the following header to the PUT request:
x-stored-compressed: true
When making a GET request to a compressed resource, the resource is decompressed before it is returned. No additional header is required!
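For instance, with the WebClient from the examples above (illustrative values), a compressed write looks like any other PUT plus the header:

client.put(8989, "localhost", "/storage/resources/resource_1")
        .putHeader("x-stored-compressed", "true") // stored gzip-compressed in Redis
        .sendJsonObject(new JsonObject().put("myProp1", "myVal1"), ar -> { /* ... */ });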
Restrictions
The data compression feature is not compatible with all vertx-rest-storage features. The following restrictions apply:
- Data compression is available in the Redis storage only
- Data compression cannot be combined with the merge=true URL parameter. Such PUT requests will be rejected.
- Compressed resources cannot be used in storageExpand requests. storageExpand requests to a collection containing a compressed resource will be rejected.
- If a resource is already stored in a different compression state (compressed vs. not compressed) than the resource being sent, the stored resource will be overwritten in every case. This prevents unexpected behaviour with regard to the etag mechanism.
The following configuration values are available:
Property | Type | Default value | Description |
---|---|---|---|
root | common | . | The prefix for the directory or Redis key |
storageType | common | filesystem | The storage implementation to use. Choose between filesystem or redis |
port | common | 8989 | The port the mod listens to when the HTTP API is enabled. |
httpRequestHandlerEnabled | common | true | When set to false, the storage is accessible through the event bus only. |
httpRequestHandlerAuthenticationEnabled | common | false | Enable / disable authentication for the HTTP API |
httpRequestHandlerUsername | common | | The username for the HTTP API authentication |
httpRequestHandlerPassword | common | | The password for the HTTP API authentication |
prefix | common | / | The part of the URL path before this handler (aka "context path" in JEE terminology) |
storageAddress | common | resource-storage | The event bus address the mod listens to. |
editorConfig | common | | Additional configuration values for the editor |
confirmCollectionDelete | common | false | When set to true, an additional recursive=true URL parameter has to be set to delete collections |
maxStorageExpandSubresources | common | 1000 | The maximum number of sub-resources to expand. When the limit is exceeded, 413 Payload Too Large is returned |
redisHost | redis | localhost | The host Redis is running on |
redisPort | redis | 6379 | The port Redis is listening on |
redisReconnectAttempts | redis | 0 | The number of reconnect attempts when the connection to Redis is lost. Use -1 for continuous reconnects or 0 for no reconnects at all |
redisReconnectDelaySec | redis | 30 | The delay (in seconds) between reconnect attempts |
redisPoolRecycleTimeoutMs | redis | 180000 | The timeout (in ms) after which the connection pool is recycled. Use -1 when the reconnect feature is enabled. |
expirablePrefix | redis | rest-storage:expirable | The prefix for expirable data Redis keys |
resourcesPrefix | redis | rest-storage:resources | The prefix for resources Redis keys |
collectionsPrefix | redis | rest-storage:collections | The prefix for collections Redis keys |
deltaResourcesPrefix | redis | delta:resources | The prefix for delta resources Redis keys |
deltaEtagsPrefix | redis | delta:etags | The prefix for delta etags Redis keys |
lockPrefix | redis | rest-storage:locks | The prefix for lock Redis keys |
resourceCleanupAmount | redis | 100000 | The maximum number of resources to clean in a single cleanup run |
resourceCleanupIntervalSec | redis | | The interval (in seconds) at which the storage cleanup is performed. When set to null, no periodic storage cleanup is performed |
rejectStorageWriteOnLowMemory | redis | false | When set to true, PUT requests with the x-importance-level header can be rejected when memory gets low |
freeMemoryCheckIntervalMs | redis | 60000 | The interval in milliseconds to calculate the current memory usage |
redisReadyCheckIntervalMs | redis | -1 | The interval in milliseconds to calculate the "ready state" of Redis. When the value is < 1, no "ready state" will be calculated |
awsS3Region | s3 | | The AWS S3 region. When using a local service such as localstack, a valid region must still be set |
s3BucketName | s3 | | The S3 bucket name |
s3AccessKeyId | s3 | | The S3 access key ID |
s3SecretAccessKey | s3 | | The S3 secret access key |
localS3 | s3 | | Set to true in order to use a local S3 instance instead of AWS |
localS3Endpoint | s3 | | The endpoint/host to use when localS3 is set to true, e.g. 127.0.0.1 (an IP address may be required) |
localS3Port | s3 | | The port to use when localS3 is set to true, e.g. 4566 |
createBucketIfNotExist | s3 | | Create the bucket if it does not exist (the related permission is required) |
The configuration has to be passed as a JsonObject to the module. For a simplified configuration, the ModuleConfigurationBuilder can be used.
Example:
ModuleConfiguration config = with()
.redisHost("anotherhost")
.redisPort(1234)
.editorConfig(new JsonObject().put("myKey", "myValue"))
.build();
JsonObject json = config.asJsonObject();
Properties that are not overridden keep their default values.
To use default values only, the ModuleConfiguration constructor without parameters can be used:
JsonObject json = new ModuleConfiguration().asJsonObject();
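A hedged sketch of passing such a configuration when deploying the module programmatically (the verticle class name org.swisspush.reststorage.RestStorageMod is an assumption based on this repository's package naming; verify it against the sources):

// sketch only: class name is an assumption, configuration as built above
Vertx vertx = Vertx.vertx();
JsonObject json = new ModuleConfiguration().asJsonObject();
vertx.deployVerticle("org.swisspush.reststorage.RestStorageMod",
        new DeploymentOptions().setConfig(json));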
Currently, three storage types are supported: file system storage, S3 storage and Redis storage.
The data is stored hierarchically on the file system. This is the default storage type when not overridden in the configuration.
The data is stored in an S3 bucket.
See https://aws.amazon.com/s3. It is also possible to use a local instance such as localstack (https://docs.localstack.cloud/user-guide/aws/s3/):
docker run --rm -p 4566:4566 -v ./s3:/var/lib/localstack localstack/localstack:s3-latest
The data is stored in a Redis database. Caution: The Redis storage implementation does not currently support streaming. Avoid transferring very large payloads since they will be copied entirely into memory.
- Starting from 2.6.x, rest-storage requires Java 11.
- This module uses Vert.x v3.3.3 (or later), so Java 8 is required.