This library can detect the language of a given text string. It can parse training text in many different languages into sequences of N-grams and build a database file in PHP to be used in the detection phase. It can then take a given text and detect its language using the database previously generated in the training phase. The library comes with text samples used for training and detecting text in 110 languages.
- Installation with Composer
- How to upgrade from 3.y.z to 4.y.z?
- Basic Usage
- API
- Method Chaining
- Array Access
- List of supported languages
- Other languages
- FAQ
- Contributing
- License
Note: This library requires the Multibyte String extension in order to work.
$ composer require patrickschur/language-detection
Important: Only for people who are using a custom directory with their own translation files.
Starting with version 4.y.z we have updated the resource files. For performance reasons we now use PHP instead of JSON as the format. That means anyone who used 3.y.z before and wants to use 4.y.z has to convert their JSON files to PHP. To upgrade your resource files, simply generate a language profile again. The JSON files are then no longer needed.
You can delete unnecessary JSON files under Linux with the following command.
rm resources/*/*.json
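The regeneration step itself is the same learn() call shown further below in this README. A minimal sketch, assuming your custom translation files live in a directory of your own (the placeholder path must be replaced):

use LanguageDetection\Trainer;

$t = new Trainer();

// Rebuilds the language profiles as PHP files from your .txt training texts.
// Replace 'YOUR_PATH_HERE' with the path to your custom resource directory.
$t->learn('YOUR_PATH_HERE');

Afterwards the old JSON files can be deleted as shown above.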
To detect the language reliably, the input text should be at least a few sentences long.
use LanguageDetection\Language;
$ld = new Language;
$ld->detect('Mag het een onsje meer zijn?')->close();
Result:
Array
(
"nl" => 0.66193548387097,
"af" => 0.51338709677419,
"br" => 0.49634408602151,
"nb" => 0.48849462365591,
"nn" => 0.48741935483871,
"fy" => 0.47822580645161,
"dk" => 0.47172043010753,
"sv" => 0.46408602150538,
"bi" => 0.46021505376344,
"de" => 0.45903225806452,
[...]
)
You can pass an array of languages to the constructor to compare the input text only against the given languages. This can dramatically improve performance. The second parameter is optional and specifies the directory where the translation files are located.
$ld = new Language(['de', 'en', 'nl']);
// Compares the sentence only with "de", "en" and "nl" language models.
$ld->detect('Das ist ein Test');
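If your translation files are stored in a custom directory, you can pass that directory as the second constructor argument. A minimal sketch; the path below is only an example and must point to your own resource directory:

// Compares only against "de", "en" and "nl" and loads their models from a custom directory.
$ld = new Language(['de', 'en', 'nl'], __DIR__ . '/my-resources');
$ld->detect('Das ist ein Test');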
Provide a whitelist. Only the given languages are kept in the result.
$ld->detect('Mag het een onsje meer zijn?')->whitelist('de', 'nn', 'nl', 'af')->close();
Result:
Array
(
"nl" => 0.66193548387097,
"af" => 0.51338709677419,
"nn" => 0.48741935483871,
"de" => 0.45903225806452
)
Provide a blacklist. Removes the given languages from the result.
$ld->detect('Mag het een onsje meer zijn?')->blacklist('dk', 'nb', 'de')->close();
Result:
Array
(
"nl" => 0.66193548387097,
"af" => 0.51338709677419,
"br" => 0.49634408602151,
"nn" => 0.48741935483871,
"fy" => 0.47822580645161,
"sv" => 0.46408602150538,
"bi" => 0.46021505376344,
[...]
)
Returns the best results.
$ld->detect('Mag het een onsje meer zijn?')->bestResults()->close();
Result:
Array
(
"nl" => 0.66193548387097
)
You can specify the number of records to return. For example, the following code returns the top three entries (the first argument of limit() is the offset).
$ld->detect('Mag het een onsje meer zijn?')->limit(0, 3)->close();
Result:
Array
(
"nl" => 0.66193548387097,
"af" => 0.51338709677419,
"br" => 0.49634408602151
)
Returns the result as an array.
$ld->detect('This is an example!')->close();
Result:
Array
(
"en" => 0.5889400921659,
"gd" => 0.55691244239631,
"ga" => 0.55376344086022,
"et" => 0.48294930875576,
"af" => 0.48218125960061,
[...]
)
The script uses a tokenizer to extract the words of a sentence. You can define your own tokenizer, for example to deal with numbers.
$ld->setTokenizer(new class implements TokenizerInterface
{
public function tokenize(string $str): array
{
return preg_split('/[^a-z0-9]/u', $str, -1, PREG_SPLIT_NO_EMPTY);
}
});
This tokenizer keeps only lowercase letters of the Latin alphabet and the digits 0 to 9.
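For illustration, here is what that regular expression does on its own; the input string is just an example:

$tokens = preg_split('/[^a-z0-9]/u', 'pi is roughly 3', -1, PREG_SPLIT_NO_EMPTY);
// $tokens is ['pi', 'is', 'roughly', '3'], so numbers are kept as separate tokens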
Returns the top entry of the result. Note the echo at the beginning.
echo $ld->detect('Das ist ein Test.');
Result:
de
Serializes the data to JSON.
$object = $ld->detect('Tere tulemast tagasi! Nägemist!');
json_encode($object, JSON_PRETTY_PRINT);
Result:
{
"et": 0.5224748810153358,
"ch": 0.45817028027498674,
"bi": 0.4452670544685352,
"fi": 0.440983606557377,
"lt": 0.4382866208355367,
[...]
}
You can also combine methods with each other. The following example removes all entries specified in the blacklist and returns only the top four entries.
$ld->detect('Mag het een onsje meer zijn?')->blacklist('af', 'dk', 'sv')->limit(0, 4)->close();
Result:
Array
(
"nl" => 0.66193548387097
"br" => 0.49634408602151
"nb" => 0.48849462365591
"nn" => 0.48741935483871
)
You can also access the object directly as an array.
$object = $ld->detect('Das ist ein Test');
echo $object['de'];
echo $object['en'];
echo $object['xy']; // does not exist
Result:
0.6623339658444
0.56859582542694
NULL
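Since the result object behaves like an array, you can also check for a language code before reading its score. A small sketch, assuming isset() is supported through the same array access mechanism:

$object = $ld->detect('Das ist ein Test');

if (isset($object['de'])) {
    echo $object['de']; // score for German
}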
The library currently supports 110 languages. To get an overview of all supported languages, please have a look here.
The library is trainable, which means you can change, remove and add your own language files to it.
If your language is not supported, feel free to add your own language files.
To do that, create a new directory in resources and add your training text to it.
Note: The training text should be a .txt file.
|- resources
|- ham
|- ham.txt
|- spam
|- spam.txt
As you can see, we can also use it to detect spam or ham (a complete sketch follows after the Trainer example below).
If you store your translation files outside of resources, you have to specify the path.
$t->learn('YOUR_PATH_HERE');
Whenever you change one of the translation files you must first generate a language profile for it. This may take a few seconds.
use LanguageDetection\Trainer;
$t = new Trainer();
$t->learn();
Remove these lines after execution. Now we can classify texts by their language with our own training text.
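Putting the spam/ham example from above together, a complete sketch might look like this (directory layout as in the tree shown earlier; adjust paths and sample text to your setup):

use LanguageDetection\Language;
use LanguageDetection\Trainer;

// One-time training step: builds profiles from resources/ham/ham.txt and resources/spam/spam.txt.
$t = new Trainer();
$t->learn();

// Afterwards, "detecting the language" of a text classifies it as ham or spam.
$ld = new Language(['ham', 'spam']);
echo $ld->detect('Buy cheap pills now!!!'); // most likely "spam"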
To improve the detection phase you have to use more n-grams. But be careful, this will slow down the script. I found that the detection phase works much better when you use around 9,000 n-grams (the default is 310). To do that, look at the code below:
$t = new Trainer();
$t->setMaxNgrams(9000);
$t->learn();
First you have to train it. Then you can classify texts as before, but you must specify how many n-grams you want to use.
$ld = new Language();
$ld->setMaxNgrams(9000);
// "grille pain" is french and means "toaster" in english
var_dump($ld->detect('grille pain')->bestResults());
Result:
class LanguageDetection\LanguageResult#5 (1) {
private $result =>
array(2) {
'fr' =>
double(0.91307037037037)
'en' =>
double(0.90623333333333)
}
}
No, it is not. The trainer class will only use the best 310 n-grams of each language. As long as you don't change this number or add more language files, performance is not affected. Only creating the N-grams is slower, but that has to be done only once. The detection phase is only affected when you try to detect big chunks of text.
Summary: The training phase will be slower but the detection phase remains the same.
Feel free to contribute. Any help is welcome.
This project is licensed under the terms of the MIT license.