Releases: meilisearch/meilisearch
v1.7.6 🐇
v1.7.5 🐇
A security flaw was discovered in the http2 implementation that makes it possible for an attacker to slow down your instance; see this link for more information.
This release updates our web stack to the latest version, which contains a fix against this attack.
What's Changed
Full Changelog: v1.7.4...v1.7.5
v1.7.4 🐇
v1.7.3
This new release doesn't contain any fixes or features.
We are making it only because the v1.7.2 release had an issue and didn't contain all the required assets (the Linux, macOS, and Windows x86 binaries were missing).
What's Changed
- Update version for the next release (v1.7.3) in Cargo.toml by @meili-bot in #4519
Full Changelog: v1.7.2...v1.7.3
v1.7.2 🐇
v1.7.1 🐇
Indexing Speed Improvement 🏇
- Skip reindexing when modifying unknown faceted fields by @Kerollmops in #4479
v1.7.0 🐇
Meilisearch v1.7.0 focuses on improving v1.6.0 features, indexing speed and hybrid search.
🧰 All official Meilisearch integrations (including SDKs, clients, and other tools) are compatible with this Meilisearch release. Integration deployment happens between 4 and 48 hours after a new version becomes available.
Some SDKs might not include all new features—consult the project repository for detailed information. Is a feature you need missing from your chosen SDK? Create an issue letting us know you need it, or, for open-source karma points, open a PR implementing it (we'll love you for that ❤️).
New features and improvements 🔥
Improved AI-powered search — Experimental
To activate AI-powered search, set `vectorStore` to `true` in the `/experimental-features` route. Consult the Meilisearch documentation for more information.
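For example, activating the vector store amounts to a single `PATCH` request (assuming a local instance on the default port):

```shell
curl \
  -X PATCH 'http://localhost:7700/experimental-features/' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "vectorStore": true }'
```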
🗣️ This is an experimental feature and we need your help to improve it! Share your thoughts and feedback on this GitHub discussion.
New OpenAI embedding models
When configuring OpenAI embedders, you can now specify two new models:
- `text-embedding-3-small`, with a default dimension of 1536
- `text-embedding-3-large`, with a default dimension of 3072
These new models are cheaper and improve search result relevancy.
Custom OpenAI model dimensions
You can configure `dimensions` for sources using the new OpenAI models, `text-embedding-3-small` and `text-embedding-3-large`. `dimensions` must be bigger than 0 and no larger than the model's default dimension:
```json
"embedders": {
  "new_model": {
    "source": "openAi",
    "model": "text-embedding-3-large",
    "dimensions": 512 // must be >0, must be <= 3072 for "text-embedding-3-large"
  },
  "legacy_model": {
    "source": "openAi",
    "model": "text-embedding-ada-002"
  }
}
```
You cannot customize `dimensions` for older OpenAI models such as `text-embedding-ada-002`. Setting `dimensions` to any value except the default size of these models will result in an error.
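As a sketch, such an embedder configuration could be applied through the index settings route (the `movies` index name here is illustrative, the vector store must already be enabled, and depending on your setup an `apiKey` field may also be required):

```shell
curl \
  -X PATCH 'http://localhost:7700/indexes/movies/settings' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "embedders": {
      "new_model": {
        "source": "openAi",
        "model": "text-embedding-3-large",
        "dimensions": 512
      }
    }
  }'
```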
GPU support when computing Hugging Face embeddings
Activate CUDA to use Nvidia GPUs when computing Hugging Face embeddings. This can significantly improve embedding generation speeds.
To enable GPU support through CUDA for Hugging Face embedding generation:
- Install CUDA dependencies
- Clone and compile Meilisearch with the `cuda` feature: `cargo build --release --package meilisearch --features cuda`
- Launch your freshly compiled Meilisearch binary
- Activate vector search
- Add a Hugging Face embedder
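The build steps above can be sketched as follows, assuming the CUDA toolkit is already installed and cloning from the official repository:

```shell
# Clone the Meilisearch source and build with the `cuda` feature
git clone https://github.com/meilisearch/meilisearch.git
cd meilisearch
cargo build --release --package meilisearch --features cuda

# Launch the freshly compiled binary
./target/release/meilisearch
```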
Improved indexing speed and reduced memory crashes
- Auto-batch task deletion to reduce indexing time (#4316) @irevoire
- Improved indexing speed for vector store (Hybrid search experimental feature indexing time more than 10 times faster) (#4332) @Kerollmops @irevoire
- Capped the maximum memory of grenad sorters to reduce memory usage (#4388) @Kerollmops
- Added multiple technical and internal indexing improvements (#4350) @ManyTheFish
- Enhanced facet incremental indexing (#4433) @ManyTheFish
- Changed the threshold triggering incremental indexing (#4462) @ManyTheFish
Stabilized showRankingScoreDetails
The `showRankingScoreDetails` search parameter, first introduced as an experimental feature in Meilisearch v1.3.0, is now stable.
Use it with the `/search` endpoint to view detailed scores per ranking rule for each returned document:
```shell
curl \
  -X POST 'http://localhost:7700/indexes/movies/search' \
  -H 'Content-Type: application/json' \
  --data-binary '{ "q": "Batman Returns", "showRankingScoreDetails": true }'
```
When `showRankingScoreDetails` is set to `true`, returned documents include a `_rankingScoreDetails` field:
```json
"_rankingScoreDetails": {
  "words": {
    "order": 0,
    "matchingWords": 1,
    "maxMatchingWords": 1,
    "score": 1.0
  },
  "typo": {
    "order": 1,
    "typoCount": 0,
    "maxTypoCount": 1,
    "score": 1.0
  },
  "proximity": {
    "order": 2,
    "score": 1.0
  },
  "attribute": {
    "order": 3,
    "attributes_ranking_order": 0.8,
    "attributes_query_word_order": 0.6363636363636364,
    "score": 0.7272727272727273
  },
  "exactness": {
    "order": 4,
    "matchType": "noExactMatch",
    "matchingWords": 0,
    "maxMatchingWords": 1,
    "score": 0.3333333333333333
  }
}
```
Improved logging
Log output modified
Log messages now follow a different pattern:
```
# new format ✅
2024-02-06T14:54:11Z INFO actix_server::builder: 200: starting 10 workers

# old format ❌
[2024-02-06T14:54:11Z INFO actix_server::builder] starting 10 workers
```
Log output format — Experimental
You can now configure Meilisearch to output logs in JSON.
Relaunch your instance, passing `json` to the `--experimental-logs-mode` command-line option:

```shell
./meilisearch --experimental-logs-mode json
```
`--experimental-logs-mode` accepts two values:
- `human`: default human-readable output
- `json`: JSON structured logs
🗣️ This feature is experimental and we need your help to improve it! Share your thoughts and feedback on this GitHub discussion.
New `/logs/stream` and `/logs/stderr` routes — Experimental
Meilisearch v1.7 introduces two new experimental API routes: `/logs/stream` and `/logs/stderr`.
Use the `/experimental-features` route to activate both routes during runtime:
```shell
curl \
  -X PATCH 'http://localhost:7700/experimental-features/' \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "logsRoute": true
  }'
```
🗣️ This feature is experimental, and we need your help to improve it! Share your thoughts and feedback on this GitHub discussion.
`/logs/stream`
Use the `POST` endpoint to output logs in a stream. The following example disables actix logging and keeps all other logs at the `DEBUG` level:
```shell
curl \
  -X POST http://localhost:7700/logs/stream \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "mode": "human",
    "target": "actix=off,debug"
  }'
```
This endpoint requires two parameters:
- `target`: defines the log level and which part of the engine it applies to. Must be a string formatted as `code_part=log_level`. Omit `code_part=` to set a single log level for the whole stream. Valid log levels are `trace`, `debug`, `info`, `warn`, `error`, or `off`
- `mode`: accepts `fmt` (basic) or `profile` (verbose trace)
Use the `DELETE` endpoint of `/logs/stream` to interrupt a stream:
```shell
curl -X DELETE http://localhost:7700/logs/stream
```
You may only have one listener at a time. Meilisearch log streams are not compatible with `xh` or `httpie`.
`/logs/stderr`
Use the `POST` endpoint to configure the default log output for non-stream logs:
```shell
curl \
  -X POST http://localhost:7700/logs/stderr \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "target": "debug"
  }'
```
`/logs/stderr` accepts one parameter:
- `target`: defines the log level and which part of the engine it applies to. Must be a string formatted as `code_part=log_level`. Omit `code_part=` to set a single log level for the whole stream. Valid log levels are `trace`, `debug`, `info`, `warn`, `error`, or `off`
Other improvements
- Prometheus experimental feature: add job variable to Grafana dashboard (#4330) @capJavert
- Multiple language support improvements, including expanded Vietnamese normalization (Ð and Đ into d). Now uses Charabia v0.8.7. (#4365) @agourlay, @choznerol, @ngdbao, @timvisee, @xshadowlegendx, and @ManyTheFish
- New experimental feature: changes the behavior of Meilisearch in a few ways to allow running Meilisearch in a cluster by externalizing the task queue
- Add the content type to the webhook (#4450) @irevoire
Fixes 🐞
- Make update file deletion atomic (#4435) @irevoire
- Do not omit vectors when importing a dump (#4446) @dureuill
- Put a bound on OpenAI timeout (#4459) @dureuill
Misc
v1.7.0-rc.2 🐇
What's Changed since previous RC
Facet indexing
The facet incremental indexing has been optimized, and the threshold used to choose between bulk and incremental indexing has been changed to fit users' needs:
- Enhance facet incremental by @ManyTheFish in #4433
- Divide threshold by ten by @ManyTheFish in #4462
Semantic search
- Add GPU analytics by @dureuill in #4443
- Put a bound on OpenAI timeout by @dureuill in #4459
- Do not omit vectors when importing a dump by @dureuill in #4446
Benchmarks
- Add subcommand to run benchmarks by @dureuill in #4445
- Replace logging timer by spans by @dureuill in #4458
HA
Fixes
v1.7.0-rc.1 🐇
What's Changed since previous RC
- Make several indexing optimizations by @ManyTheFish in #4350
- Update charabia by @ManyTheFish in #4365
- Implement the experimental log mode cli flag and log level updates at runtime by @irevoire in #4410
- Output logs to stderr by @irevoire in #4418