Encode/decode UTF-8.
As you have probably noticed, the NodeJS standard library already provides standard ways to encode/decode UTF-8 strings into buffers.
This library is useful if you need to write at given buffer indexes or to validate UTF-8 encoded buffers.
Otherwise, use the NodeJS standard library.
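For reference, here is a short sketch of those standard NodeJS alternatives (using the built-in `Buffer` and the global WHATWG `TextEncoder`/`TextDecoder`, no extra dependency):

```javascript
// Encode a string to UTF-8 bytes and decode it back, the standard way:
const buf = Buffer.from('1.3$ ~= 1€', 'utf8');
const str = buf.toString('utf8'); // '1.3$ ~= 1€'

// The WHATWG encoding API is also available globally in modern NodeJS:
const bytes = new TextEncoder().encode('é'); // Uint8Array [0xC3, 0xA9]
const back = new TextDecoder().decode(bytes); // 'é'
```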
npm install utf-8
Encoding a char:
import * as UTF8 from 'utf-8';
UTF8.setBytesFromCharCode('é'.charCodeAt(0));
// [0xC3, 0xA9]
Encoding a string:
UTF8.setBytesFromString('1.3$ ~= 1€');
// [49, 46, 51, 36, 32, 126, 61, 32, 49, 226, 130, 172]
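That byte sequence matches what NodeJS's built-in `TextEncoder` produces, which makes for a quick sanity check (the three trailing bytes 226, 130, 172 are the UTF-8 encoding of '€', U+20AC):

```javascript
const expected = [49, 46, 51, 36, 32, 126, 61, 32, 49, 226, 130, 172];
const encoded = Array.from(new TextEncoder().encode('1.3$ ~= 1€'));
// Compare the library's documented output to the standard encoder's output
console.log(encoded.length === expected.length &&
  encoded.every((b, i) => b === expected[i])); // true
```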
Decoding a char:
String.fromCharCode(UTF8.getCharCode([0xc3, 0xa9]));
// 'é'
Decoding a string:
UTF8.getStringFromBytes([49, 46, 51, 36, 32, 126, 61, 32, 49, 226, 130, 172]);
// '1.3$ ~= 1€'
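The same decoding can be cross-checked against the built-in `TextDecoder`, which accepts the byte list wrapped in a typed array:

```javascript
// Decode the documented byte sequence with the standard decoder
console.log(new TextDecoder().decode(
  Uint8Array.from([49, 46, 51, 36, 32, 126, 61, 32, 49, 226, 130, 172])
)); // '1.3$ ~= 1€'
```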
TypedArrays can be used as inputs:
const bytes = new Uint8Array([
0xc3, 0xa9, 49, 46, 51, 36, 32, 126, 61, 32, 49, 226, 130, 172,
]);
// The first char
String.fromCharCode(UTF8.getCharCode(bytes));
// é
// The following string at the offset 2
UTF8.getStringFromBytes(bytes, 2);
// '1.3$ ~= 1€'
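The offset-based read shown above has a rough standard-library equivalent: slicing the typed array before decoding. A sketch, using the same byte layout ('é' occupies the first two bytes):

```javascript
const bytes = new Uint8Array([
  0xc3, 0xa9, 49, 46, 51, 36, 32, 126, 61, 32, 49, 226, 130, 172,
]);
// Skip the two bytes of 'é' and decode the rest
console.log(new TextDecoder().decode(bytes.subarray(2))); // '1.3$ ~= 1€'
```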
As well as outputs:
const bytes = new Uint8Array(14);
// First encoding a char at the start of the buffer
UTF8.setBytesFromCharCode('é'.charCodeAt(0), 0, bytes);
// Then encoding a string at the offset 2
UTF8.setBytesFromString('1.3$ ~= 1€', 2, bytes);
UTF8.isNotUTF8(bytes);
// true | false
This function can prove that the given bytes do not contain UTF-8 text (or contain a badly encoded UTF-8 string). The converse is not true: a `false` result does not guarantee valid UTF-8, especially for short strings, for which such misdetections are frequent, since short binary sequences often happen to be valid UTF-8.
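If you only need a definitive validity check rather than this library's detection, the built-in `TextDecoder` in fatal mode throws on any malformed input. A minimal sketch (`isValidUTF8` is a hypothetical helper name, not part of this library):

```javascript
// Hypothetical helper: strict UTF-8 validation via the standard decoder
function isValidUTF8(bytes) {
  try {
    new TextDecoder('utf-8', { fatal: true }).decode(bytes);
    return true;
  } catch (err) {
    return false;
  }
}

console.log(isValidUTF8(new Uint8Array([0xc3, 0xa9]))); // true ('é')
console.log(isValidUTF8(new Uint8Array([0xc3, 0x28]))); // false (0x28 is not a continuation byte)
```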
If you try to encode a UTF-8 string into an ArrayBuffer too short to contain the complete string, it silently fails. To avoid this behavior, use strict mode:
UTF8.setBytesFromString('1.3$ ~= 1€', 2, null, true);
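For comparison, the NodeJS standard library handles the too-short-buffer case differently: `buf.write()` never writes partially encoded characters and instead reports the number of bytes actually written, so you can detect truncation from the return value rather than from an error. A sketch:

```javascript
const small = Buffer.alloc(3);
// '1' fits in 1 byte; '€' needs 3 more bytes but only 2 remain, so it is dropped
const written = small.write('1€'); // 1
console.log(written < Buffer.byteLength('1€', 'utf8')); // true: the string was truncated
```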
- The Debian project, for its free (as in freedom) Russian/Japanese man pages, used for real-world file tests!