diff --git a/CHANGELOG.md b/CHANGELOG.md
index 7b14248342a..87802b2d5a4 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -13,6 +13,7 @@
 * The encryption code no longer behaves differently depending on the system page size, which should entirely eliminate a recurring source of bugs related to copying encrypted Realm files between platforms with different page sizes. One known outstanding bug was [RNET-1141](https://github.com/realm/realm-dotnet/issues/3592), where opening files on a system with a larger page size than the writing system would attempt to read sections of the file which had never been written to ([PR #7698](https://github.com/realm/realm-core/pull/7698)).
 * There were several complicated scenarios which could result in stale reads from encrypted files in multiprocess usage. These were very difficult to hit and would typically lead to a crash, either due to an assertion failure or a DecryptionFailure being thrown ([PR #7698](https://github.com/realm/realm-core/pull/7698), since v13.9.0).
 * Encrypted files have some benign data races where we can memcpy a block of memory while another thread is writing to a limited range of it. It is logically impossible to ever read from that range when this happens, but Thread Sanitizer quite reasonably complains about this. We now perform a slower operation when running with TSan, which avoids this benign race ([PR #7698](https://github.com/realm/realm-core/pull/7698)).
+* Tokenizing strings for full-text search could pass values outside the range [-1, 255] to `isspace()`, which is undefined behavior ([PR #7698](https://github.com/realm/realm-core/pull/7698), since the introduction of FTS in v13.0.0).
 ### Breaking changes
 * None.
diff --git a/src/realm/tokenizer.cpp b/src/realm/tokenizer.cpp
index f6bc42604cc..401be2fc4c6 100644
--- a/src/realm/tokenizer.cpp
+++ b/src/realm/tokenizer.cpp
@@ -61,7 +61,7 @@ std::pair<std::set<std::string>, std::set<std::string>> Tokenizer::get_search_to
         }
     };
     for (; m_cur_pos != m_end_pos; m_cur_pos++) {
-        if (isspace(*m_cur_pos)) {
+        if (isspace(static_cast<unsigned char>(*m_cur_pos))) {
             add_token();
         }
         else {
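The fix above matters because `isspace()` takes an `int` whose value must be representable as `unsigned char` or be `EOF`; on platforms where plain `char` is signed, any byte >= 0x80 (e.g. the lead byte of a UTF-8 multibyte character) is passed as a negative value, which is undefined behavior. Casting to `unsigned char` first maps every byte into the valid [0, 255] domain. The following is a minimal standalone sketch of the same pattern; `split_on_space` is a hypothetical helper for illustration, not part of Realm's tokenizer API.

```cpp
#include <cassert>
#include <cctype>
#include <string>
#include <vector>

// Split a byte string on whitespace, mirroring the tokenizer loop in the
// diff above. The static_cast<unsigned char> is the crucial part: without
// it, a negative char value (any non-ASCII byte on signed-char platforms)
// passed to std::isspace() is undefined behavior.
std::vector<std::string> split_on_space(const std::string& text)
{
    std::vector<std::string> tokens;
    std::string current;
    for (char c : text) {
        if (std::isspace(static_cast<unsigned char>(c))) {
            if (!current.empty()) {
                tokens.push_back(current);
                current.clear();
            }
        }
        else {
            current += c;
        }
    }
    if (!current.empty())
        tokens.push_back(current);
    return tokens;
}
```

Feeding it a string containing a non-ASCII character such as "é" (UTF-8 bytes 0xC3 0xA9, both negative as signed `char`) exercises exactly the case the PR fixes: with the cast the bytes are treated as ordinary token characters, whereas the uncast call would be UB.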