BlinkID SDK for Android is an SDK that enables you to scan various ID cards in your app. You can integrate the SDK into your app by following the instructions below, and your app will be able to benefit from the scanning features for various documents.
As of version 1.8.0, you can also scan barcodes and perform OCR of structured or free-form text. Supported barcodes are the same as in the sister product PDF417.mobi.
Using BlinkID in your app requires a valid license. You can obtain a trial license by registering on the Microblink dashboard. After registering, you will be able to generate a license for your app. The license is bound to the package name of your app, so please make sure you enter the correct package name when asked.
See below for more information about how to integrate the BlinkID SDK into your app, and also check the latest [Release notes](Release notes.md).
- Android BlinkID integration instructions
- Quick Start
- Advanced BlinkID integration instructions
- `Recognizer` concept and `RecognizerBundle`
- List of available recognizers
- Field by field scanning feature (`Processor` and `Parser`)
- Scanning generic documents with Templating API
- Extracting additional fields of interest from machine-readable travel documents by using Templating API
- The `Detector` concept
- Embedding BlinkID inside another SDK
- Processor architecture considerations
- Troubleshooting
- Additional info
The package contains an Android Archive (AAR) with everything you need to use the BlinkID library. Besides the AAR, the package also contains a sample project with the following modules:
- BlinkID-aMinimalSample demonstrates a quick and simple integration of the BlinkID library.
- BlinkID-AllRecognizersSample demonstrates integration of almost all available features. This sample application is best for performing a quick test of supported features.
- BlinkID-CustomCombinedSample demonstrates advanced custom UI integration and usage of the combined recognizers within a custom scan activity.
- BlinkID-CustomFieldByFieldScanSample demonstrates advanced integration of Field by Field feature within custom scan activity. It shows how to create a custom scan activity for scanning little text fields.
- BlinkID-CustomUISample demonstrates advanced integration within custom scan activity.
- BlinkID-DetectorSample demonstrates how to perform document detection and obtain dewarped image of the detected document.
- BlinkID-DirectApiSample demonstrates how to perform scanning of Android Bitmaps.
- BlinkID-ImagesSample demonstrates how to obtain document images.
- BlinkID-TemplatingSample shows how to use Templating API to implement support for scanning generic documents.
The source code of all sample apps is provided to show you how to integrate the BlinkID SDK into your app. You can use this source code and all resources as you wish. You can use the sample apps as a basis for creating your own app, or you can copy/paste code and/or resources from the sample apps into your app and use them without asking us for permission.
BlinkID is supported on Android SDK version 16 (Android 4.1) or later.
The list of all provided scan activities can be found in the Built-in activities and overlays section.
You can also create your own scanning UI - you just need to embed `RecognizerRunnerView` into your activity and pass the activity's lifecycle events to it, and it will control the camera and the recognition process. For more information, see Embedding `RecognizerRunnerView` into custom scan activity.
- Open Android Studio.
- In Quick Start dialog choose Import project (Eclipse ADT, Gradle, etc.).
- In File dialog select BlinkIDSample folder.
- Wait for the project to load. If Android Studio asks you to reload the project on startup, select `Yes`.
In your `build.gradle`, you first need to add the BlinkID Maven repository to the repositories list:
```
repositories {
    maven { url 'http://maven.microblink.com' }
}
```
After that, you just need to add BlinkID as a dependency to your application (make sure `transitive` is set to true):
```
dependencies {
    implementation('com.microblink:blinkid:4.2.0@aar') {
        transitive = true
    }
}
```
Android Studio 3.0 should automatically import javadoc from the Maven dependency. If that doesn't happen, you can do it manually by following these steps:
- In the Android Studio project sidebar, ensure project view is enabled
- Expand the `External Libraries` entry (usually the last entry in project view)
- Locate the `blinkid-4.2.0` entry, right click on it and select `Library Properties...`
- A `Library Properties` pop-up window will appear
- Click the second `+` button in the bottom left corner of the window (the one that contains `+` with a little globe)
- A window for defining the documentation URL will appear
- Enter the following address: `https://blinkid.github.io/blinkid-android/`
- Click `OK`
- In the Android Studio menu, click File, select New and then select Module.
- In the new window, select Import .JAR or .AAR Package, and click Next.
- In the File name field, enter the path to LibBlinkID.aar and click Finish.
- In your app's `build.gradle`, add a dependency to `LibBlinkID` and `appcompat-v7`:

```
dependencies {
    implementation project(':LibBlinkID')
    implementation "com.android.support:appcompat-v7:27.1.1"
}
```
- In the Android Studio project sidebar, ensure project view is enabled
- Expand the `External Libraries` entry (usually the last entry in project view)
- Locate the `LibBlinkID-unspecified` entry, right click on it and select `Library Properties...`
- A `Library Properties` pop-up window will appear
- Click the `+` button in the bottom left corner of the window
- A window for choosing a JAR file will appear
- Find and select the `LibBlinkID-javadoc.jar` file, which is located in the root folder of the SDK distribution
- Click `OK`
We do not provide Eclipse integration demo apps. We encourage you to use Android Studio. We also do not test integrating BlinkID with Eclipse. If you are having problems with BlinkID, make sure you have tried integrating it with Android Studio prior to contacting us.
However, if you still want to use Eclipse, you will need to convert the AAR archive to the Eclipse library project format, as follows:
- In Eclipse, create a new Android library project in your workspace.
- Clear the `src` and `res` folders.
- Unzip the `LibBlinkID.aar` file. You can rename it to zip and then unzip it using any tool.
- Copy `classes.jar` to the `libs` folder of your Eclipse library project. If the `libs` folder does not exist, create it.
- Copy the contents of the `jni` folder to the `libs` folder of your Eclipse library project.
- Replace the `res` folder of the library project with the `res` folder of the `LibBlinkID.aar` file.
You’ve already created the project that contains almost everything you need. Now let’s see how to configure your project to reference this library project.
- In the project where you want to use the library (henceforth, "target project"), add the library project as a dependency.
- Open the `AndroidManifest.xml` file inside the `LibBlinkID.aar` file and make sure to copy all permissions, features and activities to the `AndroidManifest.xml` file of the target project.
- Copy the contents of the `assets` folder from `LibBlinkID.aar` into the `assets` folder of the target project. If the `assets` folder in the target project does not exist, create it.
- Clean and rebuild your target project.
- Add the appcompat-v7 library to your workspace and reference it from the target project (the modern ADT plugin for Eclipse does this automatically for all new Android projects).
Android Maven Plugin v4.0.0 or newer is required.
Open your `pom.xml` file and add these directives as appropriate:
```xml
<repositories>
    <repository>
        <id>MicroblinkRepo</id>
        <url>http://maven.microblink.com</url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>com.microblink</groupId>
        <artifactId>blinkid</artifactId>
        <version>4.2.0</version>
        <type>aar</type>
    </dependency>
</dependencies>
```
- Before starting a recognition process, you need to obtain a license from the Microblink dashboard. After registering, you will be able to generate a trial license for your app. The license is bound to the package name of your app, so please make sure you enter the correct package name when asked.

  After creating a license, you will have the option to download it as a file that you must place within your application's assets folder. You must ensure that the license key is set before instantiating any other classes from the SDK, otherwise you will get an exception at runtime. Therefore, we recommend that you extend the Android `Application` class and set the license in its `onCreate` callback in the following way:

  ```java
  public class MyApplication extends Application {
      @Override
      public void onCreate() {
          MicroblinkSDK.setLicenseFile("path/to/license/file/within/assets/dir", this);
      }
  }
  ```
- In your main activity, create recognizer objects that will perform image recognition, configure them and store them into a `RecognizerBundle` object. You can find more information about available recognizers and `RecognizerBundle` in the chapter RecognizerBundle and available recognizers. For example, to scan a Machine Readable Travel Document (MRTD), you can configure your recognizer object in the following way:

  ```java
  public class MyActivity extends Activity {
      private MrtdRecognizer mRecognizer;
      private RecognizerBundle mRecognizerBundle;

      @Override
      protected void onCreate(Bundle bundle) {
          super.onCreate(bundle);
          // setup views, as you would normally do in onCreate callback

          // create MrtdRecognizer
          mRecognizer = new MrtdRecognizer();

          // bundle recognizers into RecognizerBundle
          mRecognizerBundle = new RecognizerBundle(mRecognizer);
      }
  }
  ```
- You can start the recognition process by starting the `DocumentScanActivity` activity: create `DocumentUISettings` and call the `ActivityRunner.startActivityForResult` method:

  ```java
  // method within MyActivity from previous step
  public void startScanning() {
      // Settings for DocumentScanActivity Activity
      DocumentUISettings settings = new DocumentUISettings(mRecognizerBundle);

      // tweak settings as you wish

      // Start activity
      ActivityRunner.startActivityForResult(this, MY_REQUEST_CODE, settings);
  }
  ```
- After `DocumentScanActivity` finishes the scan, it will return to the calling activity or fragment and call its `onActivityResult` method. You can obtain the scanning results there.

  ```java
  @Override
  protected void onActivityResult(int requestCode, int resultCode, Intent data) {
      super.onActivityResult(requestCode, resultCode, data);

      if (requestCode == MY_REQUEST_CODE) {
          if (resultCode == DocumentScanActivity.RESULT_OK && data != null) {
              // load the data into all recognizers bundled within your RecognizerBundle
              mRecognizerBundle.loadFromIntent(data);

              // now every recognizer object that was bundled within RecognizerBundle
              // has been updated with results obtained during scanning session

              // you can get the result by invoking getResult on recognizer
              MrtdRecognizer.Result result = mRecognizer.getResult();
              if (result.getResultState() == Recognizer.Result.State.Valid) {
                  // result is valid, you can use it however you wish
              }
          }
      }
  }
  ```
For more information about available recognizers and `RecognizerBundle`, see RecognizerBundle and available recognizers.
- Before starting a recognition process, you need to obtain a license from the Microblink dashboard. After registering, you will be able to generate a trial license for your app. The license is bound to the package name of your app, so please make sure you enter the correct package name when asked.

  After creating a license, you will have the option to download it as a file that you must place within your application's assets folder. You must ensure that the license key is set before instantiating any other classes from the SDK, otherwise you will get an exception at runtime. Therefore, we recommend that you extend the Android `Application` class and set the license in its `onCreate` callback in the following way:

  ```java
  public class MyApplication extends Application {
      @Override
      public void onCreate() {
          MicroblinkSDK.setLicenseFile("path/to/license/file/within/assets/dir", this);
      }
  }
  ```
- In your main activity, create parser objects that will be used during recognition, configure them if needed, define the scan elements and store them in a `FieldByFieldBundle` object. For example, to scan three fields - amount, e-mail address and raw text - you can configure your parser objects in the following way:

  ```java
  public class MyActivity extends Activity {
      // parsers are member variables because they will be used for obtaining results
      private AmountParser mAmountParser;
      private EMailParser mEMailParser;
      private RawParser mRawParser;

      /** Reference to bundle is kept, it is used later for loading results from intent */
      private FieldByFieldBundle mFieldByFieldBundle;

      @Override
      protected void onCreate(Bundle bundle) {
          super.onCreate(bundle);
          // setup views, as you would normally do in onCreate callback

          mAmountParser = new AmountParser();
          mEMailParser = new EMailParser();
          mRawParser = new RawParser();

          // prepare scan elements and put them in FieldByFieldBundle
          // we need to scan 3 items, so we will create a bundle with 3 elements
          mFieldByFieldBundle = new FieldByFieldBundle(
              // each scan element contains two string resource IDs: string shown in title bar
              // and string shown in text field above scan box. Besides that, it contains parser
              // that will extract data from the OCR result.
              new FieldByFieldElement(R.string.amount_title, R.string.amount_msg, mAmountParser),
              new FieldByFieldElement(R.string.email_title, R.string.email_msg, mEMailParser),
              new FieldByFieldElement(R.string.raw_title, R.string.raw_msg, mRawParser)
          );
      }
  }
  ```
- You can start the recognition process by starting `FieldByFieldScanActivity`: create `FieldByFieldUISettings` and call the `ActivityRunner.startActivityForResult` method:

  ```java
  // method within MyActivity from previous step
  public void startFieldByFieldScanning() {
      // we use FieldByFieldUISettings - settings for FieldByFieldScanActivity
      FieldByFieldUISettings scanActivitySettings = new FieldByFieldUISettings(mFieldByFieldBundle);

      // tweak settings as you wish

      // Start activity
      ActivityRunner.startActivityForResult(this, MY_REQUEST_CODE, scanActivitySettings);
  }
  ```
- After `FieldByFieldScanActivity` finishes the scan, it will return to the calling activity or fragment and call its `onActivityResult` method. You can obtain the scanning results there.

  ```java
  @Override
  protected void onActivityResult(int requestCode, int resultCode, Intent data) {
      super.onActivityResult(requestCode, resultCode, data);

      if (requestCode == MY_REQUEST_CODE) {
          if (resultCode == FieldByFieldScanActivity.RESULT_OK && data != null) {
              // load the data into all parsers bundled within your FieldByFieldBundle
              mFieldByFieldBundle.loadFromIntent(data);

              // now every parser object that was bundled within FieldByFieldBundle
              // has been updated with results obtained during scanning session

              // you can get the results by invoking getResult on each parser, and then
              // invoke specific getter for each concrete parser result type
              String amount = mAmountParser.getResult().getAmount();
              String email = mEMailParser.getResult().getEmail();
              String rawText = mRawParser.getResult().getRawText();

              if (!amount.isEmpty()) {
                  // amount has been successfully parsed, you can use it however you wish
              }
              if (!email.isEmpty()) {
                  // email has been successfully parsed, you can use it however you wish
              }
              if (!rawText.isEmpty()) {
                  // raw text has been successfully returned, you can use it however you wish
              }
          }
      }
  }
  ```
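To give a feel for what a parser such as `AmountParser` extracts from an OCR result, here is a heavily simplified stand-alone sketch. It uses a plain regular expression and is in no way the SDK's implementation; it only illustrates the "extract one structured field from raw text" idea behind parsers, with the convention that an unparsed field yields an empty string (just as the empty-string checks above assume):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AmountParserSketch {

    // crude illustration: find the first "digits, separator, two digits" token,
    // e.g. "42,50" or "19.99"; returns an empty string when nothing parses
    static String parseAmount(String ocrText) {
        Matcher m = Pattern.compile("\\d+[.,]\\d{2}").matcher(ocrText);
        return m.find() ? m.group() : "";
    }
}
```

The real SDK parsers are far more robust (they work on structured OCR results, not plain strings), but the calling pattern is the same: run the parser, then check whether the result is non-empty before using it.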
This section covers more advanced details of BlinkID integration.
- The first part discusses the methods for checking whether BlinkID is supported on the current device.
- The second part covers the possible customizations when using the UI provided by the SDK.
- The third part describes how to embed `RecognizerRunnerView` into your activity with the goal of creating a custom UI for scanning, while still using the camera management capabilities of the SDK.
- The fourth part describes how to use the `RecognizerRunner` singleton (Direct API) for recognition directly from Android bitmaps, without the need for a camera, or to recognize camera frames obtained by custom camera management.
- The fifth part describes how to subscribe to and handle processing events when using either `RecognizerRunnerView` or `RecognizerRunner`.
Even before setting the license key, you should check whether BlinkID is supported on the current device. This is required because BlinkID is a native library that needs to be loaded by the JVM, and it may not support the CPU architecture of the current device. Attempting to call any SDK method that relies on native code, such as the license check, on a device with an unsupported CPU architecture will crash your app.
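The crash described above is the standard behaviour of `System.loadLibrary` when the bundled native binary does not match the device's ABI. The following plain-Java sketch only illustrates that generic failure mode; it is not how the SDK handles it, and the supported way to avoid the crash is the `RecognizerCompatibility` check shown later in this section:

```java
public class NativeLibGuard {

    // attempts to load a native library and reports failure instead of crashing;
    // System.loadLibrary throws UnsatisfiedLinkError (an Error, not an Exception)
    // when no matching native binary exists for the current CPU architecture
    static boolean tryLoadNativeLibrary(String libName) {
        try {
            System.loadLibrary(libName);
            return true;
        } catch (UnsatisfiedLinkError e) {
            return false;
        }
    }
}
```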
BlinkID requires Android 4.1 as the minimum Android version. For best performance and compatibility, we recommend Android 5.0 or newer.
OpenGL ES 2.0 can be used to accelerate BlinkID's processing, but it is not mandatory. However, note that if OpenGL ES 2.0 is not available, processing time will be significantly longer, especially on low-end devices.
Since we use OpenGL ES 2.0 for image processing on a background thread, we rely entirely on off-screen rendering. Unfortunately, some devices have bugs in their OpenGL drivers that crash the application when it attempts to perform off-screen OpenGL ES context initialization. We do our best to maintain a list of such devices and ensure that no OpenGL is used on them. Unfortunately, the nature of that bug is such that it cannot be detected and worked around at runtime, so a blacklist is required. Fortunately, most of the affected devices run Android 4.1 or Android 4.2, while most devices running newer versions of Android are not affected. Also note that most devices out there that still run Android 4.1/4.2 are not affected by this bug. If you happen to find a crash of your app that may be caused by the described problem, please let us know so we can blacklist the problematic device.
Camera video preview resolution also matters. In order to perform successful scans, camera preview resolution cannot be too low. Minimum camera preview resolution in order to perform a scan is 480p. It must be noted that camera preview resolution is not the same as the video record resolution, although on most devices those are the same. However, there are some devices that allow recording of HD video (720p resolution), but do not allow high enough camera preview resolution (for example, Sony Xperia Go supports video record resolution at 720p, but camera preview resolution is only 320p - BlinkID does not work on that device).
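The 480p minimum translates into a simple size check. The helper below is a hypothetical illustration (not an SDK API), under the assumption that "480p" means the smaller preview dimension must be at least 480 pixels:

```java
public class PreviewResolutionCheck {

    // hypothetical helper (not part of the SDK): true if a camera preview
    // size meets the 480p minimum required for successful scanning
    static boolean meets480pMinimum(int previewWidth, int previewHeight) {
        // 480p requires the smaller dimension to be at least 480 pixels
        return Math.min(previewWidth, previewHeight) >= 480;
    }
}
```

Under this assumption, a 720p preview passes the check, while a 320p preview (like the Sony Xperia Go example above) does not.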
BlinkID is a native library, written in C++ and available for multiple platforms. Because of this, BlinkID cannot work on devices with obscure hardware architectures. We have compiled the BlinkID native code only for the most popular Android ABIs. See Processor architecture considerations for more information about native libraries in BlinkID and instructions on how to disable certain architectures in order to reduce the size of the final app.
You can check whether BlinkID is supported on the device in the following way:
```java
// check if BlinkID is supported on the device
RecognizerCompatibilityStatus status = RecognizerCompatibility.getRecognizerCompatibilityStatus(this);
if (status == RecognizerCompatibilityStatus.RECOGNIZER_SUPPORTED) {
    Toast.makeText(this, "BlinkID is supported!", Toast.LENGTH_LONG).show();
} else if (status == RecognizerCompatibilityStatus.NO_CAMERA) {
    Toast.makeText(this, "BlinkID is supported only via Direct API!", Toast.LENGTH_LONG).show();
} else {
    Toast.makeText(this, "BlinkID is not supported! Reason: " + status.name(), Toast.LENGTH_LONG).show();
}
```
However, some recognizers require a camera with autofocus. If you try to start recognition with those recognizers on a device that does not have a camera with autofocus, you will get an error. To prevent that, you can check whether a certain recognizer requires autofocus by calling its `requiresAutofocus` method.
If you already have an array of recognizers, you can easily filter out all recognizers that require autofocus using the following code snippet:
```java
Recognizer[] recArray = ...;
if (!RecognizerCompatibility.cameraHasAutofocus(CameraType.CAMERA_BACKFACE, this)) {
    recArray = RecognizerUtils.filterOutRecognizersThatRequireAutofocus(recArray);
}
```
This utility method iterates over the given array of recognizers and removes each recognizer that returns `true` from its `requiresAutofocus` method.
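Conceptually, the filtering utility can be sketched in a few lines of plain Java. This is an illustrative stand-in, not the SDK source: the real `Recognizer` class lives in the SDK, and only its `requiresAutofocus` method is modeled here by a simplified interface:

```java
import java.util.ArrayList;
import java.util.List;

public class AutofocusFilterSketch {

    // simplified stand-in for the SDK's Recognizer class (assumption for illustration)
    interface Recognizer {
        boolean requiresAutofocus();
    }

    // keeps only recognizers that can work without autofocus, mirroring what
    // RecognizerUtils.filterOutRecognizersThatRequireAutofocus does conceptually
    static Recognizer[] filterOutRecognizersThatRequireAutofocus(Recognizer[] input) {
        List<Recognizer> kept = new ArrayList<>();
        for (Recognizer r : input) {
            if (!r.requiresAutofocus()) {
                kept.add(r);
            }
        }
        return kept.toArray(new Recognizer[0]);
    }
}
```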
This section discusses the supported appearance and behaviour customizations of built-in activities and shows how to use `RecognizerRunnerFragment` with the provided built-in scanning overlays to get the built-in UI experience within any part of your app.
As shown in the first scan example, you need to create a settings object associated with the activity you wish to use. Attempting to start a built-in activity directly via a custom-crafted `Intent` will result in either a crash or undefined behaviour of the scanning procedure.
The available built-in scan activities in BlinkID are listed in the section Built-in activities and fragments.
If you want to integrate the UI provided by our built-in activity somewhere within your activity, you can do so by using `RecognizerRunnerFragment`. Any activity that hosts the `RecognizerRunnerFragment` must implement the `ScanningOverlayBinder` interface. Attempting to add `RecognizerRunnerFragment` to an activity that does not implement the aforementioned interface will result in a `ClassCastException`. This design is in accordance with the recommendation for communication between fragments.
The `ScanningOverlayBinder` is responsible for returning a non-`null` implementation of `ScanningOverlay` - the class that manages the UI on top of `RecognizerRunnerFragment`. It is not recommended to create your own implementation of `ScanningOverlay`, as the effort to do so might be equal to or even greater than creating your custom UI implementation in the recommended way.
Here is a minimal example of an activity that hosts the `RecognizerRunnerFragment`:
```java
public class MyActivity extends Activity implements RecognizerRunnerFragment.ScanningOverlayBinder {
    private MrtdRecognizer mRecognizer;
    private RecognizerBundle mRecognizerBundle;
    private DocumentOverlayController mScanOverlay = createOverlay();
    private RecognizerRunnerFragment mRecognizerRunnerFragment;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_my_activity);

        if (null == savedInstanceState) {
            // create fragment transaction to replace R.id.recognizer_runner_view_container with RecognizerRunnerFragment
            mRecognizerRunnerFragment = new RecognizerRunnerFragment();
            FragmentTransaction fragmentTransaction = getFragmentManager().beginTransaction();
            fragmentTransaction.replace(R.id.recognizer_runner_view_container, mRecognizerRunnerFragment);
            fragmentTransaction.commit();
        } else {
            // obtain reference to fragment restored by Android within super.onCreate() call
            mRecognizerRunnerFragment = (RecognizerRunnerFragment) getFragmentManager().findFragmentById(R.id.recognizer_runner_view_container);
        }
    }

    @Override
    @NonNull
    public ScanningOverlay getScanningOverlay() {
        return mScanOverlay;
    }

    private DocumentOverlayController createOverlay() {
        // create MrtdRecognizer
        mRecognizer = new MrtdRecognizer();

        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = new RecognizerBundle(mRecognizer);

        // Settings for DocumentOverlayController overlay
        DocumentUISettings settings = new DocumentUISettings(mRecognizerBundle);

        return new DocumentOverlayController(settings, mScanResultListener);
    }

    private final ScanResultListener mScanResultListener = new ScanResultListener() {
        @Override
        public void onScanningDone(@NonNull RecognitionSuccessType recognitionSuccessType) {
            // pause scanning to prevent new results while the fragment is being removed
            mRecognizerRunnerFragment.getRecognizerRunnerView().pauseScanning();
            // now you can remove the RecognizerRunnerFragment with a new fragment transaction
            // and use the result within mRecognizer safely, without the need to make a copy of it.
            // If not paused, as soon as this method ends, the RecognizerRunnerFragment continues
            // scanning. Note that this can happen even if you created a fragment transaction for
            // removal of RecognizerRunnerFragment - in the time between the end of this method
            // and the beginning of the execution of the transaction. So, to ensure the result
            // within mRecognizer does not get mutated, make sure you call pauseScanning() as
            // shown above.
        }
    };
}
```
Also, please refer to the demo apps provided with the SDK for a more detailed example, and make sure your host activity's orientation is set to `nosensor` or has configuration changing enabled (i.e. it is not restarted when a configuration change happens). For more information, check this section.
Within BlinkID SDK there are several built-in activities and scanning overlays that you can use to perform scanning.
`DocumentOverlayController` is an overlay for `RecognizerRunnerFragment` best suited for scanning various card documents such as ID cards, passports and driver's licenses. `DocumentScanActivity` contains a `RecognizerRunnerFragment` with `DocumentOverlayController` and can be used out of the box to perform scanning using the default UI.
`DocumentVerificationOverlayController` is an overlay for `RecognizerRunnerFragment` best suited for combined recognizers, because it manages scanning of multiple document sides in a single camera session and guides the user through the scanning process. It can also be used for single-side scanning of ID cards, passports, driver's licenses, etc. `DocumentVerificationActivity` contains a `RecognizerRunnerFragment` with `DocumentVerificationOverlayController` and can be used out of the box to perform scanning using the default UI.
`FieldByFieldOverlayController` is an overlay for `RecognizerRunnerFragment` best suited for scanning small text fields, which are scanned in a predefined order, one by one. `FieldByFieldScanActivity` contains a `RecognizerRunnerFragment` with `FieldByFieldOverlayController` and can be used out of the box to perform scanning using the default UI.
`BarcodeOverlayController` is an overlay for `RecognizerRunnerFragment` best suited for scanning various barcodes. `BarcodeScanActivity` contains a `RecognizerRunnerFragment` with `BarcodeOverlayController` and can be used out of the box to perform scanning using the default UI.
Built-in activities and overlays use resources from the `res` folder within `LibBlinkID.aar` to display their contents. If you need a fully customised UI, we recommend creating a completely custom scanning procedure (either activity or fragment), as described here. If you just want to slightly change the appearance of a built-in activity or overlay, you can do so by overriding the appropriate resource values; however, this is strictly not recommended, as it can have unknown effects on the appearance of the UI component. If you think some part of our built-in UI components should be configurable in a way that it currently is not, please let us know and we will consider adding that configurability to the appropriate settings object.
Strings used within built-in activities and overlays can be localized to any language. If you are using `RecognizerRunnerView` (see this chapter for more information) in your custom scan activity or fragment, you should handle localization as in any other Android app. `RecognizerRunnerView` does not use strings or drawables, it only uses assets from the `assets/microblink` folder. Those assets must not be touched, as they are required for recognition to work correctly.
However, if you use our built-in activities or overlays, they will use the resources packed within `LibBlinkID.aar` to display strings and images on top of the camera view. We have already prepared strings for several languages which you can use out of the box. You can also modify those strings, or add your own language.
To use a language, you have to enable it from the code:
- To use a certain language, call the method `LanguageUtils.setLanguageAndCountry(language, country, context)`. For example, you can set the language to Croatian like this:

  ```java
  // define BlinkID language
  LanguageUtils.setLanguageAndCountry("hr", "", this);
  ```
BlinkID can easily be translated to other languages. The `res` folder in the `LibBlinkID.aar` archive has a folder `values` which contains `strings.xml` - this file contains the English strings. To make e.g. a Croatian translation, create a folder `values-hr` in your project and put a copy of `strings.xml` inside it (you might need to extract the `LibBlinkID.aar` archive to access those files). Then open that file and translate the strings from English into Croatian.
To modify an existing string, the best approach would be to:
- Choose a language you want to modify. For example Croatian ('hr').
- Find `strings.xml` in the folder `res/values-hr` of the `LibBlinkID.aar` archive.
- Choose a string key you want to change. For example: `<string name="MBBack">Back</string>`
- In your project, create a file `strings.xml` in the folder `res/values-hr`, if it doesn't already exist.
- Create an entry in that file with the value you want for the string. For example: `<string name="MBBack">Natrag</string>`
- Repeat for all the strings you wish to change.
This section discusses how to embed `RecognizerRunnerView` into your scan activity and perform scanning.
- First, make sure that `RecognizerRunnerView` is a member field in your activity. This is required because you will need to pass all of the activity's lifecycle events to `RecognizerRunnerView`.
- It is recommended to keep your scan activity in one orientation, such as `portrait` or `landscape`. Setting `sensor` as the scan activity's orientation will trigger a full restart of the activity whenever the device orientation changes. This will provide a very poor user experience because both the camera and the BlinkID native library will have to be restarted every time. There are measures against this behaviour that are discussed later.
- In your activity's `onCreate` method, create a new `RecognizerRunnerView`, set the `RecognizerBundle` containing the recognizers that will be used by the view, define a `CameraEventsListener` that will handle mandatory camera events, define a `ScanResultListener` that will receive a call when recognition has been completed, and then call its `create` method. After that, add the views that should be laid out on top of the camera view.
- Override your activity's `onStart`, `onResume`, `onPause`, `onStop` and `onDestroy` methods and call `RecognizerRunnerView`'s lifecycle methods `start`, `resume`, `pause`, `stop` and `destroy`. This will ensure correct camera and native resource management. If you plan to manage `RecognizerRunnerView`'s lifecycle independently of the host's lifecycle, make sure the order of calls to lifecycle methods is the same as with activities (i.e. you should not call `resume` if `create` and `start` were not called first).
Here is a minimal example of integration of `RecognizerRunnerView` as the only view in your activity:
```java
public class MyScanActivity extends Activity {
    private static final int PERMISSION_CAMERA_REQUEST_CODE = 69;
    private RecognizerRunnerView mRecognizerRunnerView;
    private MrtdRecognizer mRecognizer;
    private RecognizerBundle mRecognizerBundle;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // create MrtdRecognizer
        mRecognizer = new MrtdRecognizer();
        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = new RecognizerBundle(mRecognizer);

        // create RecognizerRunnerView
        mRecognizerRunnerView = new RecognizerRunnerView(this);
        // associate RecognizerBundle with RecognizerRunnerView
        mRecognizerRunnerView.setRecognizerBundle(mRecognizerBundle);
        // scan result listener will be notified when scanning is complete
        mRecognizerRunnerView.setScanResultListener(mScanResultListener);
        // camera events listener will be notified about camera lifecycle and errors
        mRecognizerRunnerView.setCameraEventsListener(mCameraEventsListener);

        mRecognizerRunnerView.create();
        setContentView(mRecognizerRunnerView);
    }

    @Override
    protected void onStart() {
        super.onStart();
        // you need to pass all activity's lifecycle methods to RecognizerRunnerView
        mRecognizerRunnerView.start();
    }

    @Override
    protected void onResume() {
        super.onResume();
        // you need to pass all activity's lifecycle methods to RecognizerRunnerView
        mRecognizerRunnerView.resume();
    }

    @Override
    protected void onPause() {
        super.onPause();
        // you need to pass all activity's lifecycle methods to RecognizerRunnerView
        mRecognizerRunnerView.pause();
    }

    @Override
    protected void onStop() {
        super.onStop();
        // you need to pass all activity's lifecycle methods to RecognizerRunnerView
        mRecognizerRunnerView.stop();
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        // you need to pass all activity's lifecycle methods to RecognizerRunnerView
        mRecognizerRunnerView.destroy();
    }

    @Override
    public void onConfigurationChanged(Configuration newConfig) {
        super.onConfigurationChanged(newConfig);
        // you need to pass all activity's lifecycle methods to RecognizerRunnerView
        mRecognizerRunnerView.changeConfiguration(newConfig);
    }

    private final CameraEventsListener mCameraEventsListener = new CameraEventsListener() {
        @Override
        public void onCameraPreviewStarted() {
            // this method is from CameraEventsListener and will be called when camera preview starts
        }

        @Override
        public void onCameraPreviewStopped() {
            // this method is from CameraEventsListener and will be called when camera preview stops
        }

        @Override
        public void onError(Throwable exc) {
            /*
             * This method is from CameraEventsListener and will be called when
             * opening of camera resulted in exception or recognition process
             * encountered an error. The error details will be given in exc
             * parameter.
             */
        }

        @Override
        @TargetApi(23)
        public void onCameraPermissionDenied() {
            /*
             * Called in Android 6.0 and newer if camera permission is not given
             * by user. You should request permission from user to access camera.
             */
            requestPermissions(new String[]{Manifest.permission.CAMERA}, PERMISSION_CAMERA_REQUEST_CODE);
            /*
             * Please note that user might have not given permission to use
             * camera. In that case, you have to explain to user that without
             * camera permissions scanning will not work.
             * For more information about requesting permissions at runtime, check
             * this article:
             * https://developer.android.com/training/permissions/requesting.html
             */
        }

        @Override
        public void onAutofocusFailed() {
            /*
             * This method is from CameraEventsListener and will be called when camera focusing has failed.
             * Camera manager usually tries different focusing strategies and this method is called when all
             * those strategies fail to indicate that either object on which camera is being focused is too
             * close or ambient light conditions are poor.
             */
        }

        @Override
        public void onAutofocusStarted(Rect[] areas) {
            /*
             * This method is from CameraEventsListener and will be called when camera focusing has started.
             * You can utilize this method to draw focusing animation on UI.
             * Areas parameter is array of rectangles where focus is being measured.
             * It can be null on devices that do not support fine-grained camera control.
             */
        }

        @Override
        public void onAutofocusStopped(Rect[] areas) {
            /*
             * This method is from CameraEventsListener and will be called when camera focusing has stopped.
             * You can utilize this method to remove focusing animation on UI.
             * Areas parameter is array of rectangles where focus is being measured.
             * It can be null on devices that do not support fine-grained camera control.
             */
        }
    };

    private final ScanResultListener mScanResultListener = new ScanResultListener() {
        @Override
        public void onScanningDone(@NonNull RecognitionSuccessType recognitionSuccessType) {
            // this method is from ScanResultListener and will be called when scanning completes
            // you can obtain scanning result by calling getResult on each
            // recognizer that you bundled into RecognizerBundle.
            // for example:
            MrtdRecognizer.Result result = mRecognizer.getResult();
            if (result.getResultState() == Recognizer.Result.State.Valid) {
                // result is valid, you can use it however you wish
            }
            // Note that mRecognizer is a stateful object and that as soon as
            // scanning either resumes or its state is reset
            // the result object within mRecognizer will be changed. If you
            // need to create an immutable copy of the result, you can do that
            // by calling clone() on it, for example:
            MrtdRecognizer.Result immutableCopy = result.clone();

            // After this method ends, scanning will be resumed and recognition
            // state will be retained. If you want to prevent that, then
            // you should call:
            mRecognizerRunnerView.resetRecognitionState();
            // Note that resetting recognition state will clear internal result
            // objects of all recognizers that are bundled in RecognizerBundle
            // associated with RecognizerRunnerView.

            // If you want to pause scanning to prevent receiving recognition
            // results or mutating results, you should call:
            mRecognizerRunnerView.pauseScanning();
            // if scanning is paused at the end of this method, it is guaranteed
            // that the result within mRecognizer will not be mutated, therefore you
            // can avoid creating a copy as described above

            // After scanning is paused, you will have to resume it with:
            mRecognizerRunnerView.resumeScanning(true);
            // the boolean in the resumeScanning method indicates whether recognition
            // state should be automatically reset when resuming scanning - this
            // includes clearing the result of mRecognizer
        }
    };
}
```
If the activity's `screenOrientation` property in `AndroidManifest.xml` is set to `sensor`, `fullSensor` or similar, the activity will be restarted every time the device changes orientation from portrait to landscape and vice versa. While restarting the activity, its `onPause`, `onStop` and `onDestroy` methods will be called and then the new activity will be created anew. This is a potential problem for a scan activity because in its lifecycle it controls both the camera and the native library - restarting the activity will trigger a restart of both. Changing orientation will therefore be very slow, degrading the user experience. We do not recommend such a setting.
For that reason, we recommend setting your scan activity to either `portrait` or `landscape` mode and handling device orientation changes manually. To help you with this, `RecognizerRunnerView` supports adding child views to it that will be rotated regardless of the activity's `screenOrientation`. You add a view you wish to be rotated (such as a view that contains buttons, status messages, etc.) to `RecognizerRunnerView` with the `addChildView` method. The second parameter of the method is a boolean that defines whether the view you are adding will be rotated with the device. To define allowed orientations, implement the `OrientationAllowedListener` interface and add it to `RecognizerRunnerView` with the method `setOrientationAllowedListener`. This is the recommended way of rotating the camera overlay.
However, if you really want to set the `screenOrientation` property to `sensor` or similar and want Android to handle orientation changes of your scan activity, then we recommend setting the `configChanges` property of your activity to `orientation|screenSize`. This will tell Android not to restart your activity when device orientation changes. Instead, the activity's `onConfigurationChanged` method will be called so that the activity can be notified of the configuration change. In your implementation of this method, you should call the `changeConfiguration` method of `RecognizerRunnerView` so it can adapt its camera surface and child views to the new configuration.
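As a sketch, the two manifest configurations described above could look like this (the activity name is illustrative; the attributes are the ones named in this section):

```xml
<!-- Recommended: lock the scan activity's orientation and rotate overlay
     views manually via RecognizerRunnerView's addChildView -->
<activity
    android:name=".MyScanActivity"
    android:screenOrientation="portrait" />

<!-- Alternative: let the activity rotate, but prevent Android from
     restarting it by handling the configuration change yourself -->
<activity
    android:name=".MyScanActivity"
    android:screenOrientation="fullSensor"
    android:configChanges="orientation|screenSize" />
```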
This section will describe how to use the Direct API to recognize Android `Bitmap`s and Java `String`s without the need for the camera. You can use the Direct API anywhere in your application, not just from activities.
- First, you need to obtain a reference to the `RecognizerRunner` singleton using `getSingletonInstance`.
- Second, you need to initialize the recognizer runner.
- After initialization, you can use the singleton to process Android bitmaps or images that are built from custom camera frames. Currently, it is not possible to process multiple images in parallel.
- Do not forget to terminate the recognizer runner singleton after usage (it is a shared resource).

Here is the minimum example of usage of the Direct API for recognizing an Android `Bitmap`:
```java
public class DirectAPIActivity extends Activity {
    private RecognizerRunner mRecognizerRunner;
    private MrtdRecognizer mRecognizer;
    private RecognizerBundle mRecognizerBundle;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // initialize your activity here

        // create MrtdRecognizer
        mRecognizer = new MrtdRecognizer();
        // bundle recognizers into RecognizerBundle
        mRecognizerBundle = new RecognizerBundle(mRecognizer);

        try {
            mRecognizerRunner = RecognizerRunner.getSingletonInstance();
        } catch (FeatureNotSupportedException e) {
            Toast.makeText(this, "Feature not supported! Reason: " + e.getReason().getDescription(), Toast.LENGTH_LONG).show();
            finish();
            return;
        }

        mRecognizerRunner.initialize(this, mRecognizerBundle, new DirectApiErrorListener() {
            @Override
            public void onRecognizerError(Throwable t) {
                Toast.makeText(DirectAPIActivity.this, "There was an error in initialization of Recognizer: " + t.getMessage(), Toast.LENGTH_SHORT).show();
                finish();
            }
        });
    }

    @Override
    protected void onResume() {
        super.onResume();
        // start recognition
        Bitmap bitmap = BitmapFactory.decodeFile("/path/to/some/file.jpg");
        mRecognizerRunner.recognizeBitmap(bitmap, Orientation.ORIENTATION_LANDSCAPE_RIGHT, mScanResultListener);
    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        mRecognizerRunner.terminate();
    }

    private final ScanResultListener mScanResultListener = new ScanResultListener() {
        @Override
        public void onScanningDone(@NonNull RecognitionSuccessType recognitionSuccessType) {
            // this method is from ScanResultListener and will be called
            // when scanning completes
            // you can obtain scanning result by calling getResult on each
            // recognizer that you bundled into RecognizerBundle.
            // for example:
            MrtdRecognizer.Result result = mRecognizer.getResult();
            if (result.getResultState() == Recognizer.Result.State.Valid) {
                // result is valid, you can use it however you wish
            }
        }
    };
}
```
Some recognizers support recognition from a `String`. They can be used through the Direct API to parse a given `String` and return data just like when they are used on an input image. When recognition is performed on a `String`, there is no need for OCR. The input `String` is used in the same way as the OCR output is used when an image is being recognized.
Recognition from a `String` can be performed in the same way as recognition from an image, described in the previous section. The only difference is that one of the `RecognizerRunner` singleton's methods for recognition from a string should be called instead of `recognizeBitmap`.
The Direct API's `RecognizerRunner` singleton is actually a state machine which can be in one of 3 states: `OFFLINE`, `READY` and `WORKING`.
- When you obtain the reference to the `RecognizerRunner` singleton, it will be in `OFFLINE` state.
- You can initialize `RecognizerRunner` by calling the `initialize` method. If you call the `initialize` method while `RecognizerRunner` is not in `OFFLINE` state, you will get an `IllegalStateException`.
- After successful initialization, `RecognizerRunner` will move to `READY` state. Now you can call any of the `recognize*` methods.
- When starting recognition with any of the `recognize*` methods, `RecognizerRunner` will move to `WORKING` state. If you attempt to call these methods while `RecognizerRunner` is not in `READY` state, you will get an `IllegalStateException`.
- Recognition is performed on a background thread, so it is safe to call all of `RecognizerRunner`'s methods from the UI thread.
- When recognition is finished, `RecognizerRunner` first moves back to `READY` state and then calls the `onScanningDone` method of the provided `ScanResultListener`.
- Please note that `ScanResultListener`'s `onScanningDone` method will be called on the background processing thread, so make sure you do not perform UI operations in this callback. Also note that until the `onScanningDone` method completes, `RecognizerRunner` will not perform recognition of another image or string, even if any of the `recognize*` methods have been called just after transitioning to `READY` state. This is to ensure that results of the recognizers bundled within the `RecognizerBundle` associated with `RecognizerRunner` are not modified while possibly being used within the `onScanningDone` method.
- By calling the `terminate` method, the `RecognizerRunner` singleton will release all its internal resources. Note that even after calling `terminate` you might receive the `onScanningDone` event if there was work in progress when `terminate` was called.
- The `terminate` method can be called from any `RecognizerRunner` singleton's state.
- You can observe the `RecognizerRunner` singleton's state with the method `getCurrentState`.
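The state transitions above can be sketched as a plain-Java model. `MockRecognizerRunner` is a stand-in written for illustration, not an SDK class; it only mirrors the OFFLINE / READY / WORKING rules described in this list:

```java
// Illustrative sketch of the RecognizerRunner state machine described above.
// MockRecognizerRunner is NOT an SDK class - it only models the legal transitions.
class MockRecognizerRunner {
    enum State { OFFLINE, READY, WORKING }

    private State state = State.OFFLINE;

    State getCurrentState() { return state; }

    void initialize() {
        // initialize is only legal from OFFLINE state
        if (state != State.OFFLINE) {
            throw new IllegalStateException("initialize called while in state " + state);
        }
        state = State.READY;
    }

    void recognize() {
        // any recognize* method is only legal from READY state
        if (state != State.READY) {
            throw new IllegalStateException("recognize called while in state " + state);
        }
        state = State.WORKING;
        // ... background recognition would happen here ...
        // when recognition finishes, the runner returns to READY
        state = State.READY;
    }

    void terminate() {
        // terminate may be called from any state
        state = State.OFFLINE;
    }
}
```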
Both `RecognizerRunnerView` and `RecognizerRunner` use the same internal singleton that manages native code. This singleton handles initialization and termination of the native library and propagating recognizers to the native library. It is possible to use `RecognizerRunnerView` and `RecognizerRunner` together, as the internal singleton will make sure correct synchronization and correct recognition settings are used. If you run into problems while using `RecognizerRunner` in combination with `RecognizerRunnerView`, let us know!
This section will describe how you can subscribe to and handle processing events when using `RecognizerRunner` or `RecognizerRunnerView`. Processing events, also known as Metadata callbacks, are purely intended for giving processing feedback on the UI or for capturing some debug information during development of your app with the BlinkID SDK. For that reason, built-in activities and fragments do not support subscribing to and handling of those events from third parties - they handle those events internally. If you need to handle those events yourself, you need to use either `RecognizerRunnerView` or `RecognizerRunner`.
Callbacks for all events are bundled together into the `MetadataCallbacks` object. Both `RecognizerRunner` and `RecognizerRunnerView` have methods which allow you to set all your callbacks.
We suggest that you check the javadoc for the `MetadataCallbacks` class for more information about available callbacks and events which you can handle.
Please note that both those methods need to pass information about available callbacks to the native code, and for efficiency reasons this is done at the time the `setMetadataCallbacks` method is called, not every time a change occurs within the `MetadataCallbacks` object. This means that if you, for example, set a `QuadDetectionCallback` on `MetadataCallbacks` after you have already called the `setMetadataCallbacks` method, the `QuadDetectionCallback` will not be registered with the native code and you will not receive its events.
Similarly, if you, for example, remove the `QuadDetectionCallback` from the `MetadataCallbacks` object after you have already called the `setMetadataCallbacks` method, your app will crash with a `NullPointerException` when our processing code attempts to invoke the method on the removed callback (which is now set to `null`). We deliberately do not perform a `null` check here for two reasons:
- it is inefficient
- having a `null` callback while still being registered to native code is an illegal state of your program and it should therefore crash
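The crash scenario above can be modelled in plain Java. `MetadataDispatcher` below is an illustrative sketch, not SDK code: availability of the callback is snapshotted once at registration time, so clearing the callback afterwards without re-registering leaves the dispatcher believing it still exists:

```java
// Illustrative sketch (NOT SDK code) of why a callback removed after
// registration crashes: the dispatcher's availability flag is captured
// at setCallback time and the later invocation has no null check.
class MetadataDispatcher {
    interface QuadCallback { void onQuadDetected(String quad); }

    private QuadCallback registered;
    private boolean callbackAvailable; // snapshot taken at registration time

    void setCallback(QuadCallback cb) {
        registered = cb;
        // availability is captured once, when setCallback is called
        callbackAvailable = (cb != null);
    }

    // simulates mutating the callbacks object without calling setCallback again
    void clearCallbackWithoutReRegistering() { registered = null; }

    void dispatch(String quad) {
        // no null check on purpose: a null-but-registered callback is an
        // illegal program state and should fail loudly
        if (callbackAvailable) {
            registered.onQuadDetected(quad); // NullPointerException if cleared
        }
    }
}
```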
Remember, each time you make some changes to the `MetadataCallbacks` object, you need to apply those changes to your `RecognizerRunner` or `RecognizerRunnerView` by calling its `setMetadataCallbacks` method.
This section will first describe what a `Recognizer` is and how it should be used to perform recognition of images, videos and the camera stream. Next, we will describe how `RecognizerBundle` can be used to tweak the recognition procedure and to transfer `Recognizer` objects between activities.
`RecognizerBundle` is an object which wraps the `Recognizer` objects and defines settings for how recognition should be performed. Besides that, `RecognizerBundle` makes it possible to transfer `Recognizer` objects between different activities, which is required when using built-in activities to perform scanning, as described in the first scan section, but is also handy when you need to pass `Recognizer` objects between your own activities.
A list of all available `Recognizer` objects, with a brief description of each `Recognizer`, its purpose and recommendations for how it should be used to get the best performance and user experience, can be found here.
The `Recognizer` is the basic unit of processing within the BlinkID SDK. Its main purpose is to process the image and extract meaningful information from it. As you will see later, the BlinkID SDK has lots of different `Recognizer` objects that have various purposes.
Each `Recognizer` has a `Result` object, which contains the data that was extracted from the image. The `Result` object is a member of the corresponding `Recognizer` object and its lifetime is bound to the lifetime of its parent `Recognizer` object. If you need your `Result` object to outlive its parent `Recognizer` object, you must make a copy of it by calling its method `clone()`.
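The ownership rule above can be sketched in plain Java. `SketchRecognizer` and `SketchResult` are illustrative stand-ins, not SDK classes: the recognizer mutates the same `Result` instance on every recognition, so only a `clone()` keeps old data alive:

```java
// Illustrative sketch (NOT SDK code): a recognizer owns its result object
// and overwrites it on every new scan, so a clone is needed to keep old data.
class SketchResult implements Cloneable {
    String mrzText;

    SketchResult(String mrzText) { this.mrzText = mrzText; }

    @Override
    public SketchResult clone() {
        // the clone no longer shares state with the parent recognizer
        return new SketchResult(mrzText);
    }
}

class SketchRecognizer {
    private final SketchResult result = new SketchResult("");

    SketchResult getResult() { return result; }

    void processImage(String simulatedOcrOutput) {
        // the same Result instance is mutated on every recognition
        result.mrzText = simulatedOcrOutput;
    }
}
```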
Every `Recognizer` is a stateful object that can be in two states: idle state and working state. While in idle state, you can tweak the `Recognizer` object's properties via its getters and setters. After you bundle it into a `RecognizerBundle` and use either `RecognizerRunner` or `RecognizerRunnerView` to run the processing with all `Recognizer` objects bundled within the `RecognizerBundle`, it will change to working state, where the `Recognizer` object is being used for processing. While in working state, you cannot tweak the `Recognizer` object's properties. If you need to, you have to create a copy of the `Recognizer` object by calling its `clone()`, then tweak that copy, bundle it into a new `RecognizerBundle` and use `reconfigureRecognizers` to ensure the new bundle gets used on the processing thread.
While a `Recognizer` object works, it changes its internal state and its result. The `Recognizer` object's `Result` always starts in `Empty` state. When the corresponding `Recognizer` object performs the recognition of a given image, its `Result` can either stay in `Empty` state (in case the `Recognizer` failed to perform recognition), move to `Uncertain` state (in case the `Recognizer` performed the recognition, but not all mandatory information was extracted) or move to `Valid` state (in case the `Recognizer` performed recognition and all mandatory information was successfully extracted from the image).
As soon as one `Recognizer` object's `Result` within the `RecognizerBundle` given to `RecognizerRunner` or `RecognizerRunnerView` changes to `Valid` state, the `onScanningDone` callback will be invoked on the same thread that performs the background processing and you will have the opportunity to inspect each of your `Recognizer` objects' `Result`s to see which one has moved to `Valid` state.
As already stated in the section about `RecognizerRunnerView`, as soon as the `onScanningDone` method ends, `RecognizerRunnerView` will continue processing new camera frames with the same `Recognizer` objects, unless paused. Continuation of processing or resetting the recognition will modify or reset all `Recognizer` objects' `Result`s. When using built-in activities, as soon as `onScanningDone` is invoked, the built-in activity pauses the `RecognizerRunnerView` and starts finishing the activity, while saving the `RecognizerBundle` with active `Recognizer` objects into the `Intent` so they can be transferred back to the calling activity.
`RecognizerBundle` is a wrapper around `Recognizer` objects that can be used to transfer `Recognizer` objects between activities and to give `Recognizer` objects to `RecognizerRunner` or `RecognizerRunnerView` for processing.
The `RecognizerBundle` is always constructed with an array of `Recognizer` objects that need to be prepared for recognition (i.e. their properties must be tweaked already). The varargs constructor makes it easier to pass `Recognizer` objects to it, without the need to create a temporary array.
The `RecognizerBundle` manages a chain of `Recognizer` objects within the recognition process. When a new image arrives, it is processed by the first `Recognizer` in the chain, then by the second and so on, iterating until a `Recognizer` object's `Result` changes its state to `Valid` or all of the `Recognizer` objects in the chain have been invoked (none getting a `Valid` result state). If you want to invoke all `Recognizer`s in the chain, regardless of whether some `Recognizer` object's `Result` in the chain has changed its state to `Valid` or not, you can allow returning of multiple results on a single image.
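The chain behaviour described above can be sketched in plain Java. `RecognizerChain` is an illustrative model, not SDK code; it simply iterates recognizers until one yields a `Valid` result, unless multiple results are allowed:

```java
// Illustrative sketch (NOT SDK code) of the recognition chain described
// above: each frame is run through the recognizers in order until one
// produces a Valid result, unless multiple results are allowed.
import java.util.List;

class RecognizerChain {
    enum ResultState { EMPTY, UNCERTAIN, VALID }

    interface ChainRecognizer { ResultState process(String frame); }

    static int runChain(List<ChainRecognizer> chain, String frame, boolean allowMultipleResults) {
        int invoked = 0;
        for (ChainRecognizer r : chain) {
            invoked++;
            ResultState state = r.process(frame);
            // stop at the first Valid result unless multiple results are allowed
            if (state == ResultState.VALID && !allowMultipleResults) {
                break;
            }
        }
        return invoked; // number of recognizers that saw the frame
    }
}
```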
You cannot change the order of the `Recognizer` objects within the chain - no matter the order in which you give `Recognizer` objects to `RecognizerBundle`, they are internally ordered in a way that provides the best possible performance and accuracy. Also, in order for the BlinkID SDK to be able to order `Recognizer` objects in the recognition chain in the best way possible, it is not allowed to have multiple instances of `Recognizer` objects of the same type within the chain. Attempting to do so will crash your application.
Besides managing the chain of `Recognizer` objects, `RecognizerBundle` also manages transferring bundled `Recognizer` objects between different activities within your app. Although each `Recognizer` object, and each of its `Result` objects, implements the `Parcelable` interface, it is not so straightforward to put those objects into an `Intent` and pass them around between your activities and services, for two main reasons:
- the `Result` object is tied to its `Recognizer` object, which manages the lifetime of the native `Result` object.
- the `Result` object often contains large data blocks, such as images, which cannot be transferred via `Intent` because of Android's `Intent` transaction data limit.
Although the first problem can easily be worked around by making a copy of the `Result` and transferring it independently, the second problem is much tougher to cope with. This is where `RecognizerBundle`'s methods `saveToIntent` and `loadFromIntent` come to help, as they ensure the safe passing of `Recognizer` objects bundled within the `RecognizerBundle` between activities according to the policy defined with the method `setIntentDataTransferMode`:
- If set to `STANDARD`, the `Recognizer` objects will be passed via `Intent` using the normal `Intent` transaction mechanism, which is limited by Android's `Intent` transaction data limit. This is the same as manually putting `Recognizer` objects into the `Intent` and is OK as long as you do not use `Recognizer` objects that produce images or other large objects in their `Result`s.
- If set to `OPTIMISED`, the `Recognizer` objects will be passed via an internal singleton object and no serialization will take place. This means that there is no limit to the size of data that is being passed. This is also the fastest transfer method, but it has a serious drawback - if Android kills your app to save memory for other apps and then later restarts it and redelivers the `Intent` that should contain `Recognizer` objects, the internal singleton that should contain the saved `Recognizer` objects will be empty and the data that was being sent will be lost. You can easily provoke that condition by choosing No background processes under Limit background processes in your device's Developer options, and then switching from your app to another app and then back to your app.
- If set to `PERSISTED_OPTIMISED`, the `Recognizer` objects will be passed via an internal singleton object (just like in `OPTIMISED` mode) and will additionally be serialized into a file in your application's private folder. In case Android restarts your app and the internal singleton is empty after re-delivery of the `Intent`, the data will be loaded from the file and nothing will be lost. The files will be automatically cleaned up when reading of the data takes place. Just like `OPTIMISED`, this mode does not have a limit to the size of data that is being passed and does not have the drawback that `OPTIMISED` mode has, but some users might be concerned about the files to which the data is being written.
  - These files will contain the end-user's private data, such as the image of the object that was scanned and the extracted data. Also, these files may remain saved in your application's private folder until the next successful reading of data from the file.
  - If your app gets restarted multiple times, only after the first restart will reading succeed and delete the file after reading. If multiple restarts take place, you must implement `onSaveInstanceState` and save the bundle back to the file by calling its `saveState` method. Also, after saving state, you should ensure that you clear the saved state in your `onResume`, as `onCreate` may not be called if the activity is not restarted, while `onSaveInstanceState` may be called as soon as your activity goes into background (before `onStop`), even though the activity may not be killed at a later time.
  - If saving data to a file in private storage is a concern to you, you should use either `OPTIMISED` mode to transfer large data and images between activities or create your own mechanism for data transfer. Note that your application's private folder is only accessible by your application and your application alone, unless the end-user's device is rooted.
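The trade-offs between the three transfer modes can be modelled in plain Java. `TransferSketch` below is purely illustrative (not SDK code): maps stand in for the in-memory singleton and the private file, and a size check stands in for Android's `Intent` transaction limit:

```java
// Illustrative sketch (NOT SDK code) of the three intent data transfer
// policies. The singleton map survives activity switches but not process
// death; the "file" map survives both.
import java.util.HashMap;
import java.util.Map;

class TransferSketch {
    static final int INTENT_SIZE_LIMIT = 500_000; // stand-in for the transaction budget

    static Map<String, byte[]> singleton = new HashMap<>();   // lost on process restart
    static Map<String, byte[]> privateFile = new HashMap<>(); // survives process restart

    // simulates Android killing and restarting the app process
    static void simulateProcessRestart() { singleton.clear(); }

    static byte[] send(String mode, byte[] data, boolean processRestarted) {
        switch (mode) {
            case "STANDARD":
                // fails for payloads above the Intent transaction limit
                if (data.length > INTENT_SIZE_LIMIT) {
                    throw new RuntimeException("TransactionTooLarge");
                }
                return data;
            case "OPTIMISED":
                singleton.put("bundle", data);
                if (processRestarted) simulateProcessRestart();
                return singleton.get("bundle"); // null after process restart: data lost
            case "PERSISTED_OPTIMISED":
                singleton.put("bundle", data);
                privateFile.put("bundle", data); // additionally serialized to a file
                if (processRestarted) simulateProcessRestart();
                byte[] fromSingleton = singleton.get("bundle");
                // fall back to the serialized file when the singleton is empty,
                // cleaning the file up once it has been read
                return fromSingleton != null ? fromSingleton : privateFile.remove("bundle");
            default:
                throw new IllegalArgumentException(mode);
        }
    }
}
```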
This section will give a list of all `Recognizer` objects that are available within the BlinkID SDK, their purpose and recommendations for how they should be used to get the best performance and user experience.
The `FrameGrabberRecognizer` is the simplest recognizer in the BlinkID SDK, as it does not perform any processing on the given image; instead it just returns that image back to its `FrameCallback`. Its `Result` never changes state from `Empty`.
This recognizer is best for easy capturing of camera frames with `RecognizerRunnerView`. Note that `Image`s sent to `onFrameAvailable` are temporary and their internal buffers are valid only while the `onFrameAvailable` method is executing - as soon as the method ends, all internal buffers of the `Image` object are disposed. If you need to store the `Image` object for later use, you must create a copy of it by calling `clone`.
Also note that the `FrameCallback` interface extends the `Parcelable` interface, which means that when implementing the `FrameCallback` interface, you must also implement the `Parcelable` interface.
This is especially important if you plan to transfer `FrameGrabberRecognizer` between activities - in that case, keep in mind that the instance of your object may not be the same as the instance on which the `onFrameAvailable` method gets called - the instance that receives `onFrameAvailable` calls is the one that is created within the activity that is performing the scan.
The `SuccessFrameGrabberRecognizer` is a special `Recognizer` that wraps some other `Recognizer` and impersonates it while processing the image. However, when the `Recognizer` being impersonated changes its `Result` into `Valid` state, the `SuccessFrameGrabberRecognizer` captures the image and saves it into its own `Result` object.
Since `SuccessFrameGrabberRecognizer` impersonates its slave `Recognizer` object, it is not possible to give both the concrete `Recognizer` object and the `SuccessFrameGrabberRecognizer` that wraps it to the same `RecognizerBundle` - doing so will have the same result as if you had given two instances of the same `Recognizer` type to the `RecognizerBundle` - it will crash your application.
This recognizer is best for use cases when you need to capture the exact image that was being processed by some other `Recognizer` object at the time its `Result` became `Valid`. When that happens, `SuccessFrameGrabberRecognizer`'s `Result` will also become `Valid` and will contain the described image. That image can then be retrieved with the `getSuccessFrame()` method.
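The wrapping behaviour described above can be sketched in plain Java. `SuccessFrameSketch` is an illustrative model, not SDK code: it delegates each frame to the recognizer it wraps and keeps the frame that turned the wrapped recognizer's result `Valid`:

```java
// Illustrative sketch (NOT SDK code) of the success-frame idea: the wrapper
// delegates processing to its slave recognizer and captures the exact frame
// that produced the Valid result.
class SuccessFrameSketch {
    interface InnerRecognizer { boolean process(String frame); } // true == result became Valid

    private final InnerRecognizer slave;
    private String successFrame; // only set once the slave result becomes Valid

    SuccessFrameSketch(InnerRecognizer slave) { this.slave = slave; }

    boolean process(String frame) {
        boolean valid = slave.process(frame);
        if (valid) {
            // capture the exact frame that produced the Valid result
            successFrame = frame;
        }
        return valid;
    }

    String getSuccessFrame() { return successFrame; }
}
```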
The `Pdf417Recognizer` is a recognizer specialised for scanning PDF417 2D barcodes. This recognizer can recognize only PDF417 2D barcodes - for recognition of other barcodes, please refer to `BarcodeRecognizer`.
This recognizer can be used in any context, but it works best with the `BarcodeScanActivity`, which has a UI best suited for barcode scanning.
The `BarcodeRecognizer` is a recognizer specialised for scanning various types of barcodes. This recognizer should be your first choice when scanning barcodes, as it supports lots of barcode symbologies, including PDF417 2D barcodes, making the PDF417 recognizer possibly redundant; it was kept only for its simplicity.
As you can see from the javadoc, you can enable multiple barcode symbologies within this recognizer; however, keep in mind that enabling more barcode symbologies affects scanning performance - the more barcode symbologies are enabled, the slower the overall recognition performance. Also, keep in mind that some simple barcode symbologies that lack proper redundancy, such as Code 39, can be recognized within more complex barcodes, especially 2D barcodes, like PDF417.
This recognizer can be used in any context, but it works best with the `BarcodeScanActivity`, which has a UI best suited for barcode scanning.
The `BlinkInputRecognizer` is a generic OCR recognizer used for scanning segments which enables specifying `Processor`s that will be used for scanning. The most commonly used `Processor` within this recognizer is `ParserGroupProcessor`, which activates all `Parser`s in the group to extract data of interest from the OCR result.
This recognizer can be used in any context. It is used internally in the implementation of the provided `FieldByFieldOverlayController`.
`Processor`s are explained in The Processor concept section and you can find more about `Parser`s in The Parser concept section.
The `DetectorRecognizer` is a recognizer for scanning generic documents using a custom `Detector`. You can find more about `Detector` in The Detector concept section. `DetectorRecognizer` can be used simply for document detection and obtaining its image. The more interesting use case is data extraction from a custom document type. `DetectorRecognizer` performs document detection and can be configured to extract fields of interest from the scanned document by using the Templating API. You can find more about the Templating API in this section.
This recognizer can be used in any context, but it works best with an activity which has a UI suited for document scanning.
The `SimNumberRecognizer` is a special type of barcode recognizer specifically tailored for recognition of barcodes on the packaging of SIM cards. This recognizer is useful in combination with some ID recognizers in use cases when the application requires quick scanning of the SIM number on the packaging for a new mobile network subscriber after scanning the new subscriber's identity document.
This recognizer can be used in any context. However, we recommend its usage in combination with any ID document recognizer within a custom UI.
Unless stated otherwise for a concrete recognizer, single side BlinkID recognizers from this list can be used in any context, but they work best with the `DocumentScanActivity`, which has a UI best suited for document scanning.
Combined recognizers should be used with `DocumentVerificationActivity`, which manages scanning of multiple document sides in a single camera opening and guides the user through the scanning process. Some combined recognizers support scanning of multiple document types, but only one document type can be scanned at a time.
The `MrtdRecognizer` is used for scanning and data extraction from the Machine Readable Zone (MRZ) of various Machine Readable Travel Documents (MRTDs), like ID cards and passports. This recognizer is not bound to a specific country; it can be configured to only return data that matches some criteria defined by the `MrzFilter`.
The `MrtdRecognizer` can also be configured to extract additional fields of interest from the scanned document, which are not part of the Machine Readable Zone, by using the Templating API. You can find more about the Templating API in this section.
You can find information about usage context at the beginning of this section.
The `MrtdCombinedRecognizer` scans the Machine Readable Zone (MRZ) after scanning the full document image and face image (usually the MRZ is on the back side and the face image is on the front side of the document). Internally, it uses `DocumentFaceRecognizer` for obtaining the full document image and face image as the first step and then `MrtdRecognizer` for scanning the MRZ.
You can find information about usage context at the beginning of this section.
The `UsdlRecognizer` is used for scanning the PDF417 barcode from a US / Canada driver's license.
You can find information about usage context at the beginning of this section.
The `UsdlCombinedRecognizer` scans the PDF417 barcode from the back side of a US / Canada driver's license after scanning the full document image and face image from the front side. Internally, it uses `DocumentFaceRecognizer` for obtaining the full document image and face image as the first step and then `UsdlRecognizer` for scanning the PDF417 barcode.
You can find information about usage context at the beginning of this section.
The `EudlRecognizer` is used for scanning the front side of European Union driver's licenses. Currently, driver's licenses from these countries are supported:
- Austria
- Germany
- United Kingdom
You can find information about usage context at the beginning of this section.
For all recognizers, you can find information about usage context at the beginning of this section.
The `PaymentCardFrontRecognizer` and `PaymentCardBackRecognizer` are used for scanning the front and back side of payment / debit cards.

The `PaymentCardCombinedRecognizer` scans the back side of a payment / debit card after scanning the front side and combines data from both sides.

The `DocumentFaceRecognizer` is a special type of recognizer that only returns the face image and full document image of the scanned document. It does not extract document fields like first name or last name. This generic recognizer can be used to obtain document images in cases when specific support for some document type is not available.
You can find information about usage context at the beginning of this section.
The `AustraliaDlFrontRecognizer` and `AustraliaDlBackRecognizer` are used for scanning the front and back side of Australian driver's licenses.
You can find information about usage context at the beginning of this section.
For all recognizers, you can find information about usage context at the beginning of this section.
The `AustriaIdFrontRecognizer` and `AustriaIdBackRecognizer` are used for scanning the front and back side of the Austrian identity card.

The `AustriaPassportRecognizer` is used for scanning the data page of the Austrian passport.

The `AustriaCombinedRecognizer` scans one of the following document types:
- the back side of the Austrian ID after scanning the front side, combining data from both sides
- the Austrian passport

For scanning the front side of the Austrian driver's license, the `EudlRecognizer` is used.
The `ColombiaIdFrontRecognizer` and `ColombiaIdBackRecognizer` are used for scanning the front and back side of the Colombian identity card.
For all recognizers, you can find information about usage context at the beginning of this section.
The `CroatiaIdFrontRecognizer` and `CroatiaIdBackRecognizer` are used for scanning the front and back side of the Croatian identity card.

The `CroatiaCombinedRecognizer` scans the back side of the Croatian ID after scanning the front side and combines data from both sides.

The `CyprusIdFrontRecognizer` and `CyprusIdBackRecognizer` are used for scanning the front and back side of the Cyprus identity card.
For all recognizers, you can find information about usage context at the beginning of this section.
The `CzechiaIdFrontRecognizer` and `CzechiaIdBackRecognizer` are used for scanning the front and back side of the Czech identity card.

The `CzechiaCombinedRecognizer` scans the back side of the Czech ID after scanning the front side and combines data from both sides.

The `EgyptIdFrontRecognizer` is used for scanning the front side of the Egyptian identity card.
You can find information about usage context at the beginning of this section.
For all recognizers, you can find information about usage context at the beginning of this section.
The `GermanyIdFrontRecognizer` and `GermanyIdBackRecognizer` are used for scanning the front and back side of the German identity card issued after 1 November 2010.

The `GermanyOldIdRecognizer` is used for scanning the front side of the German identity card issued between 1 April 1987 and 31 October 2010.

The `GermanyPassportRecognizer` is used for scanning the data page of the German passport.

The `GermanyCombinedRecognizer` scans one of the following document types:
- the back side of the new German ID after scanning the front side, combining data from both sides
- the front side of the old German ID
- the German passport

For scanning the front side of the German driver's license, the `EudlRecognizer` is used.
The `HongKongIdFrontRecognizer` is used for scanning the front side of the Hong Kong identity card.
You can find information about usage context at the beginning of this section.
The `IndonesiaIdFrontRecognizer` is used for scanning the front side of the Indonesian identity card.
You can find information about usage context at the beginning of this section.
For all recognizers, you can find information about usage context at the beginning of this section.
The `JordanIdFrontRecognizer` and `JordanIdBackRecognizer` are used for scanning the front and back side of the Jordanian identity card.

The `JordanCombinedRecognizer` scans the back side of the Jordanian ID after scanning the front side and combines data from both sides.

The `KuwaitIdFrontRecognizer` and `KuwaitIdBackRecognizer` are used for scanning the front and back side of the Kuwaiti identity card.
For all recognizers, you can find information about usage context at the beginning of this section.
The `MyKadFrontRecognizer` and `MyKadBackRecognizer` are used for scanning the front and back side of the Malaysian MyKad card.

The `IkadRecognizer` is used for scanning the front side of the Malaysian iKad (immigrant worker) card.

The `MyTenteraRecognizer` is used for scanning the front side of the Malaysian MyTentera card.

The `MalaysiaDlFrontRecognizer` is used for scanning the front side of the Malaysian driver's license.

The `MoroccoIdFrontRecognizer` and `MoroccoIdBackRecognizer` are used for scanning the front and back side of the Moroccan identity card.
You can find information about usage context at the beginning of this section.
The `NewZealandDlFrontRecognizer` is used for scanning the front side of the New Zealand driver's license.
You can find information about usage context at the beginning of this section.
For all recognizers, you can find information about usage context at the beginning of this section.
The `PolandIdFrontRecognizer` and `PolandIdBackRecognizer` are used for scanning the front and back side of the Polish identity card.

The `PolandCombinedRecognizer` scans the back side of the Polish ID after scanning the front side and combines data from both sides.

The `RomaniaIdFrontRecognizer` is used for scanning the front side of the Romanian identity card.
You can find information about usage context at the beginning of this section.
For all recognizers, you can find information about usage context at the beginning of this section.
The `SerbiaIdFrontRecognizer` and `SerbiaIdBackRecognizer` are used for scanning the front and back side of the Serbian identity card.

The `SerbiaCombinedRecognizer` scans the back side of the Serbian ID after scanning the front side and combines data from both sides.
For all recognizers, you can find information about usage context at the beginning of this section.
The `SingaporeIdFrontRecognizer` and `SingaporeIdBackRecognizer` are used for scanning the front and back side of the Singapore identity card.

The `SingaporeCombinedRecognizer` scans the back side of the Singapore ID after scanning the front side and combines data from both sides.

The `SingaporeDlFrontRecognizer` is used for scanning the front side of the Singapore driving licence.

The `SingaporeChangiEmployeeIdRecognizer` is used for scanning the ID card of Singapore Changi airport employees.
For all recognizers, you can find information about usage context at the beginning of this section.
The `SlovakiaIdFrontRecognizer` and `SlovakiaIdBackRecognizer` are used for scanning the front and back side of the Slovak identity card.

The `SlovakiaCombinedRecognizer` scans the back side of the Slovak ID after scanning the front side and combines data from both sides.
For all recognizers, you can find information about usage context at the beginning of this section.
The `SloveniaIdFrontRecognizer` and `SloveniaIdBackRecognizer` are used for scanning the front and back side of the Slovenian identity card.

The `SloveniaCombinedRecognizer` scans the back side of the Slovenian ID after scanning the front side and combines data from both sides.

The `SpainDlFrontRecognizer` is used for scanning the front side of the Spanish driver's license.

The `SwedenDlFrontRecognizer` is used for scanning the front side of the Swedish driver's license.
You can find information about usage context at the beginning of this section.
For all recognizers, you can find information about usage context at the beginning of this section.
The `SwitzerlandIdFrontRecognizer` and `SwitzerlandIdBackRecognizer` are used for scanning the front and back side of the Swiss identity card.

The `SwitzerlandPassportRecognizer` is used for scanning the data page of the Swiss passport.

The `SwitzerlandDlFrontRecognizer` is used for scanning the front side of the Swiss driver's license.

The `UnitedArabEmiratesIdFrontRecognizer` and `UnitedArabEmiratesIdBackRecognizer` are used for scanning the front and back side of the United Arab Emirates identity card.

The `UnitedArabEmiratesDlFrontRecognizer` is used for scanning the front side of the United Arab Emirates driver's license.

For scanning the front side of the UK driver's license, the `EudlRecognizer` is used.

For scanning the PDF417 barcode from US / Canada driver's licenses, the `UsdlRecognizer` is used.

The `UsdlCombinedRecognizer` can also be used for scanning the PDF417 barcode from the back side of a US / Canada driver's license after scanning the full document image and face image from the front side.
The `Field by field` scanning feature is designed for scanning small text fields, called scan elements. Elements are scanned in a predefined order. For each scan element, a specific `Parser` is defined that extracts structured data of interest from the OCR result. Focusing on small text fields which are scanned one by one makes it possible to support free-form documents, because field detection is not required: the user is responsible for positioning the field of interest inside the scanning window, and the scanning process guides them. When implementing support for a custom document, only the fields of interest have to be defined.

`Field by field` scanning can be performed by using the provided `FieldByFieldScanActivity` and `FieldByFieldOverlayController`.
For preparing the scan configuration, the `FieldByFieldBundle` is used. It holds the array of `FieldByFieldElements` passed to its constructor and is responsible for transferring them from one `Activity` to another, just like the `RecognizerBundle` transfers `Recognizers`.

A `FieldByFieldElement` holds a combination of the `Parser` used for data extraction and the title and message shown in the UI during the scan. For all available configuration options, please see the javadoc.
When the `FieldByFieldBundle` is prepared, it should be used to create the `FieldByFieldUISettings`, which accepts the `FieldByFieldBundle` as a constructor argument and can be used to additionally tweak the scanning process and UI. For the list of all available configuration options, please see the javadoc.

To start the `FieldByFieldScanActivity`, `ActivityRunner.startActivityForResult` should be called with the prepared `FieldByFieldUISettings`.

When scanning is done and control is returned to the calling activity, `FieldByFieldBundle.loadFromIntent` should be called in the `onActivityResult` method. The `FieldByFieldBundle` will load the scanning results into the `Parser` instances held by its elements.
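The flow above can be sketched as follows (resource IDs, the request code and the parser choice are illustrative; class and method names follow this documentation — check the javadoc for exact signatures):

```java
// Illustrative sketch of the Field by field flow, not a complete activity.

// 1. Define elements: each pairs a title/message with a Parser.
RawParser rawParser = new RawParser();
FieldByFieldElement[] elements = new FieldByFieldElement[] {
    new FieldByFieldElement(R.string.title_name, R.string.msg_name, rawParser)
};

// 2. Bundle the elements so they can be transferred between activities.
FieldByFieldBundle fieldByFieldBundle = new FieldByFieldBundle(elements);

// 3. Wrap the bundle in UI settings and start the scan activity.
FieldByFieldUISettings uiSettings = new FieldByFieldUISettings(fieldByFieldBundle);
ActivityRunner.startActivityForResult(this, MY_REQUEST_CODE, uiSettings);

// 4. In onActivityResult, reload the results into the same bundle:
// fieldByFieldBundle.loadFromIntent(data);
// then read the extracted data from rawParser's Result object.
```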
`Processors` and `Parsers` are standard processing units within the BlinkID SDK, used for data extraction from input images. Unlike a `Recognizer`, a `Processor` and a `Parser` are not stand-alone processing units. A `Processor` is always used within a `Recognizer`, and a `Parser` is used within an appropriate `Processor` to extract data from the OCR result.
A `Processor` is a processing unit used within a `Recognizer` which supports processors. It processes the input image prepared by the enclosing `Recognizer` in the way that is characteristic of the concrete `Processor` implementation.

For example, `BlinkInputRecognizer` encloses a collection of processors which are run on the input image to extract data. To perform OCR of the input image, the `ParserGroupProcessor` is used, and the `ImageReturnProcessor` can be used to obtain the input image. Another example is `DetectorRecognizer`, which supports the `Templating API`: it uses processors to extract data from the fields of interest on the scanned document.
The `Processor` architecture is similar to the `Recognizer` architecture described in The Recognizer concept section. Each instance also has an associated inner `Result` object whose lifetime is bound to the lifetime of its parent `Processor` object and which is updated while the `Processor` works. If you need your `Result` object to outlive its parent `Processor` object, you must make a copy of it by calling its `clone()` method.

A `Processor` also has an internal state; while it is in the working state during the recognition process, it is not allowed to tweak the `Processor` object's properties.

To support common use cases, several different `Processor` implementations are available. They are listed in the next section.
This section lists the `Processor` types that are available within the BlinkID SDK and their purpose.

The `ImageReturnProcessor` is used for obtaining input images. It simply saves the input image and makes it available after the scanning is done.

The appearance of the input image depends on the context in which the `ImageReturnProcessor` is used. For example, when it is used within `BlinkInputRecognizer`, the raw image of the scanning region is processed; when it is used within the `Templating API`, the input image is dewarped (cropped and rotated).

The image is returned as the raw `Image` type. The processor can also be configured to encode the saved image to JPEG.
The `ParserGroupProcessor` is the type of processor that performs OCR (Optical Character Recognition) on the input image and lets all the parsers within the group extract data from the OCR result. The concept of a `Parser` is described in the next section.

Before performing OCR, the best possible OCR engine options are calculated by combining the engine options needed by each `Parser` in the group. For example, if one parser expects and produces a result from uppercase characters and another parser extracts data from digits, both uppercase characters and digits must be added to the list of allowed characters that can appear in the OCR result. This is a simplified explanation, because the OCR engine options contain many parameters which are combined by the `ParserGroupProcessor`.

Because of that, if multiple parsers and multiple parser group processors are used during the scan, it is very important to group parsers carefully.
Let's see this in an example. Assume that we have two parsers at our disposal: `AmountParser` and `EMailParser`. `AmountParser` knows how to extract amounts from the OCR result and requires the OCR to recognize only digits, periods and commas, ignoring letters. On the other hand, `EMailParser` knows how to extract e-mails from the OCR result and requires the OCR to recognize letters, digits, the '@' character and periods, but not commas.

If we put both `AmountParser` and `EMailParser` into the same `ParserGroupProcessor`, the merged OCR engine settings will require recognition of all letters, all digits, the '@' character, and both the period and the comma. Such an OCR result will contain all the characters `EMailParser` needs to properly parse an e-mail, but might confuse `AmountParser` if the OCR misclassifies some characters as digits.

If we put `AmountParser` in one `ParserGroupProcessor` and `EMailParser` in another `ParserGroupProcessor`, OCR will be performed for each parser group independently, thus preventing the `AmountParser` confusion, but two OCR passes of the image will be performed, which can have a performance impact.
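The character-whitelist tradeoff above can be illustrated with a small stand-alone toy (this is not the BlinkID API, just a model of how per-parser whitelists merge into one set of OCR options):

```java
import java.util.TreeSet;

// Toy illustration (not the BlinkID API): merging per-parser character
// whitelists, the way ParserGroupProcessor conceptually combines the
// OCR engine options of its parsers into a single set of options.
public class CharsetMerge {

    // Build a whitelist of allowed characters for a hypothetical parser.
    static TreeSet<Character> whitelist(String chars) {
        TreeSet<Character> set = new TreeSet<>();
        for (char c : chars.toCharArray()) {
            set.add(c);
        }
        return set;
    }

    // Union of both whitelists: what a single shared OCR pass must allow.
    static TreeSet<Character> merged() {
        // AmountParser-like needs: digits, period, comma
        TreeSet<Character> amount = whitelist("0123456789.,");
        // EMailParser-like needs: letters, digits, '@', period (no comma)
        TreeSet<Character> email =
                whitelist("abcdefghijklmnopqrstuvwxyz0123456789@.");
        TreeSet<Character> union = new TreeSet<>(amount);
        union.addAll(email);
        return union;
    }

    public static void main(String[] args) {
        // The merged pass must allow letters and '@' while amounts are
        // parsed, which is exactly where the misclassification risk is.
        System.out.println(merged().contains('@')); // true
        System.out.println(merged().contains(',')); // true
    }
}
```

Putting the parsers in separate groups corresponds to keeping the two whitelists separate, at the cost of a second OCR pass.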
The `ParserGroupProcessor` is the most commonly used `Processor`. It is used whenever OCR is needed. After OCR is performed and all parsers are run, the parsed results can be obtained through the parser objects enclosed in the group. A `ParserGroupProcessor` instance also has an associated inner `ParserGroupProcessor.Result` whose state is updated during processing; its `getOcrResult()` method can be used to obtain the raw `OCRResult` that was used for parsing data.

Note that the `OCRResult` is available only if it is allowed by the BlinkID SDK license key. The `OCRResult` structure contains information about all recognized characters and their positions on the image. To prevent abuse, obtaining the `OCRResult` structure is allowed only with premium license keys.
A `Parser` is a class of objects used to extract structured data from the raw OCR result. It must be used within a `ParserGroupProcessor`, which is responsible for performing the OCR, so a `Parser` is not a stand-alone processing unit.

Like a `Recognizer` and all other processing units, each `Parser` instance has an associated inner `Result` object whose lifetime is bound to the lifetime of its parent `Parser` object and which is updated while the `Parser` works. When parsing is done, the `Result` can be used for obtaining the extracted data. If you need your `Result` object to outlive its parent `Parser` object, you must make a copy of it by calling its `clone()` method.

A `Parser` also has an internal state; while it is in the working state during the recognition process, it is not allowed to tweak the `Parser` object's properties.

There are many different `Parsers` for extracting the most common fields which appear on various documents, and most of them can be adjusted for specific use cases. For all other custom data fields, there is the `RegexParser`, which can be configured with an arbitrary regular expression.
`AmountParser` is used for extracting amounts from the OCR result. For available configuration options and result getters, please check the javadoc.

`DateParser` is used for extracting dates in various formats from the OCR result. For available configuration options and result getters, please check the javadoc.

`EMailParser` is used for extracting e-mail addresses from the OCR result. For available result getters, please check the javadoc.

`IbanParser` is used for extracting IBANs (International Bank Account Numbers) from the OCR result. For available configuration options and result getters, please check the javadoc.

`LicensePlatesParser` is used for extracting license plate content from the OCR result. For available result getters, please check the javadoc.

`RawParser` is used for obtaining a string version of the raw OCR result, without performing any smart parsing operations. For available result getters, please check the javadoc.
`RegexParser` is used for extracting OCR result content which matches the given regular expression. Regular expression parsing is not performed with Java's regex engine; instead, it is performed with a custom regular expression engine. Due to differences between parsing normal strings and OCR results, this parser does not support some regex features found in Java's regex engine, such as backreferences. See the `setRegex(String)` method javadoc for more information about what is supported.

For available configuration options and result getters, please check the javadoc.
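As a point of reference, here is the kind of plain pattern you might pass to `setRegex(String)`, demonstrated with `java.util.regex` only for illustration (BlinkID's own engine, not Java's, evaluates the pattern at scan time; the document-number format is hypothetical):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustration only: a simple pattern without backreferences, suitable
// in shape for RegexParser.setRegex(String). Here we match a made-up
// document number of the form "AB-123456".
public class RegexDemo {

    static String extract(String ocrText) {
        Pattern docNumber = Pattern.compile("[A-Z]{2}-[0-9]{6}");
        Matcher m = docNumber.matcher(ocrText);
        return m.find() ? m.group() : null;
    }

    public static void main(String[] args) {
        System.out.println(extract("Ref: AB-123456 issued 2017")); // AB-123456
    }
}
```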
`TopUpParser` is used for extracting top-up (mobile phone coupon) codes from the OCR result. The `TopUpPreset` enum provides presets for the most common vendors. The `setTopUpPreset(TopUpPreset)` method can be used to configure the parser to only return codes with the appropriate format defined by the used preset.

For the list of all available configuration options and result getters, please check the javadoc.

`VinParser` is used for extracting VINs (Vehicle Identification Numbers) from the OCR result. For available configuration options and result getters, please check the javadoc.
This section discusses setting up the `DetectorRecognizer` for scanning templated documents. Please check the Templating API whitepaper and the `BlinkID-TemplatingSample` sample app for source code examples.

A templated document is any document which is defined by its template. The template contains information about how the document should be detected, i.e. found on the camera scene, and about which part of the document contains which useful information.

Before performing OCR of the document, BlinkID first needs to find its location on the camera scene. In order to perform detection, you need to define a `Detector`. You have to set the concrete `Detector` when instantiating the `DetectorRecognizer`, as a parameter to its constructor.

You can find more information about the detectors that can be used in the List of available detectors section. The most commonly used detector is the `DocumentDetector`.

The `Detector` produces a result which contains the document location. After the document has been detected, all further processing is done on the detected part of the input image.
There may be one or more variants of the same document type; for example, some documents have an old and a new version and both of them must be supported. Because of that, one or multiple templating classes are used to implement support for each document. The `TemplatingClass` is described in The Templating Class component section.

A `TemplatingClass` holds all the information and components needed for processing its class of documents. Templating classes are processed in a chain, one by one. At the first class for which the data is successfully extracted, the chain is terminated and recognition results are returned. For each input image, processing is done in the following way:
- Classification `ProcessorGroups` are run on the defined locations to extract data. A `ProcessorGroup` is used to define a location of interest on the detected document and the `Processors` that will extract data from that location. You can find more about the `ProcessorGroup` in the next section.
- The `TemplatingClassifier` is run after the classification processor groups are executed (if they exist), to decide whether the currently scanned document belongs to the current class or not. Its classify method simply returns `true` or `false`. If the classifier returns `false`, recognition moves to the next class in the chain, if it exists. You can find more about the `TemplatingClassifier` in this section.
- If the `TemplatingClassifier` has decided that the currently scanned document belongs to the current class, the non-classification `ProcessorGroups` are run to extract the other fields of interest.
In the Templating API, a `ProcessorGroup` is used to define the location of a field of interest on the detected document and how that location should be processed, by setting the following parameters in its constructor:

- Location coordinates relative to the document detection, passed as a `Rectangle` object.
- A `DewarpPolicy`, which determines how the perspective will be corrected for the given location (i.e. how image dewarping will be performed). You can find a description of each `DewarpPolicy`, its purpose and recommendations for when it should be used to get the best results, in the List of available dewarp policies section.
- A collection of processors that will be executed on the prepared chunk of the image for the given document location. You can find more information about processors in The Processor concept section.
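Constructing such a group can be sketched as follows (the coordinate values, regex and dewarp height are illustrative, and the constructor shape simply mirrors the parameter list above — check the javadoc for exact signatures):

```java
// Illustrative: a processor group for a document-number field located
// on the detected document. All concrete values here are made up.
RegexParser documentNumberParser = new RegexParser();
documentNumberParser.setRegex("[0-9]{9}");

ProcessorGroup documentNumberGroup = new ProcessorGroup(
        // location of the field, relative to the document detection
        new Rectangle(0.10f, 0.50f, 0.35f, 0.10f),
        // dewarp to a fixed height in pixels (policy choice assumed)
        new FixedDewarpPolicy(100),
        // processors that run on the dewarped image chunk
        new ParserGroupProcessor(documentNumberParser)
);
```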
The concrete `DewarpPolicy` determines how the perspective will be corrected for the specific location of interest (i.e. how image dewarping will be performed). Here is the list of available dewarp policies, with linked javadoc for more information:

- `FixedDewarpPolicy`
    - defines the exact height of the dewarped image in pixels
    - usually the best policy for processor groups that use a legacy OCR engine
- `DPIBasedDewarpPolicy`
    - defines the desired DPI (Dots Per Inch)
    - the height of the dewarped image will be calculated based on the actual physical size of the document, provided by the used detector, and the chosen DPI
    - usually the best policy for processor groups that prepare the location's raw image for output
- `NoUpScalingDewarpPolicy`
    - defines the maximum allowed height of the dewarped image in pixels
    - the height of the dewarped image will be calculated in such a way that no part of the image is up-scaled
    - if the height of the resulting image is larger than the maximum allowed, the maximum allowed height will be used as the actual height, which effectively scales down the image
    - usually the best policy for processors that use neural networks, for example DEEP OCR, hologram detection or NN-based classification
The `TemplatingClass` enables implementing support for a specific class of documents that should be scanned with the Templating API. The final implementation of a templating recognizer consists of one or more templating classes, one class for each version of the document.

A `TemplatingClass` contains two collections of `ProcessorGroups` and a `TemplatingClassifier`.

The two collections of processor groups within a `TemplatingClass` are:
- The classification processor groups, which are set by using the setClassificationProcessorGroups method. `ProcessorGroups` from this collection are executed before classification, which means that they are always executed when processing comes to this class.
- The non-classification processor groups, which are set by using the setNonClassificationProcessorGroups method. `ProcessorGroups` from this collection are executed after classification, if the classification has been positive.

The component which decides whether the scanned document belongs to the current class is the `TemplatingClassifier`. It can be set by using the setTemplatingClassifier method. If it is not set, the non-classification processor groups will not be executed. Instructions for implementing a `TemplatingClassifier` are given in the next section.
Each concrete templating classifier implements the `TemplatingClassifier` interface, which requires implementing its `classify` method; this method is invoked while evaluating the associated `TemplatingClass`.

The classification decision should be made based on the processing results returned by one or more processing units contained in the collection of classification processor groups. As described in The ProcessorGroup component section, each processor group contains one or more `Processors`. There are different `Processors` which may enclose smaller processing units; for example, the `ParserGroupProcessor` maintains a group of `Parsers`. The result from each of the processing units in that hierarchy can be used for classification. In most cases, a `Parser` result is used to determine whether data in the expected format exists at the specified location.

To be able to retrieve the results from the various processing units needed for classification, their instances must be available when the `classify` method is called.
A `TemplatingRecognizer` can be parcelized and run in a different activity from the one in which it was created, so it also implements the `Parcelable` interface (the `TemplatingClassifier` interface extends `Parcelable`). Here comes the tricky part of the templating classifier implementation.

When a `TemplatingRecognizer` is serialized and deserialized via a `Parcel`, all processing component instances are different from the originally created ones that were used in the recognizer definition. It is important to take care of this when implementing classification in cases when deparcelized processing units are used in the `classify` method.

When the `classify` method is called, the processing units needed for classification can be obtained from the given `TemplatingClass`, passed as the method argument. For that purpose, the following helper classes are available:
- `ProcessorParcelization` is a utility class which helps to obtain the reference to the captured `Processor` from the `TemplatingClass` instance after parcelization. For more information, see the javadoc.
- `ParserParcelization` is a utility class which helps to obtain the reference to the captured `Parser` from the `TemplatingClass` instance after parcelization. For more information, see the javadoc.
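A sketch of a classifier built with `ParserParcelization` (helper usage follows this documentation, but the exact constructor and getter signatures, the result-state getters, and the required `Parcelable` boilerplate are assumptions — consult the javadoc):

```java
// Illustrative: classify the document as belonging to this class if a
// classification parser produced a valid result. Result-state getter
// names are assumptions - check the javadoc of your parser's Result.
public class MyTemplatingClassifier implements TemplatingClassifier {

    private final ParserParcelization<RegexParser> classificationParser;

    public MyTemplatingClassifier(RegexParser parser, TemplatingClass owningClass) {
        // Capture the parser so it can be found again after parcelization.
        this.classificationParser = new ParserParcelization<>(parser, owningClass);
    }

    @Override
    public boolean classify(TemplatingClass currentClass) {
        // Obtain the (possibly deparcelized) parser instance from the
        // class given as the method argument, not the original reference.
        RegexParser parser = classificationParser.getParser(currentClass);
        return parser.getResult().getResultState() == Parser.Result.State.Valid;
    }

    // Parcelable implementation omitted for brevity.
}
```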
For a complete source code sample, please check the Templating API whitepaper and `BlinkID-TemplatingSample`.

When recognition is done, the results can be obtained through the processing unit instances, such as `Processors`, `Parsers`, etc., which were used for configuring the `TemplatingRecognizer` and later for processing the input image.

In cases when the `TemplatingRecognizer` needs to be serialized and deserialized when it is passed to the scan activity, the `TemplatingRecognizer` knows how to serialize and deserialize all contained components. When control is returned from the scan activity and `RecognizerBundle.loadFromIntent` is called, all kept processing unit instances are updated with the scanning results.
Extracting additional fields of interest from machine-readable travel documents by using Templating API

The `MrtdRecognizer` is a Templating API recognizer, which means that it can be configured to extract additional fields of interest, outside the Machine Readable Zone, from the scanned Machine Readable Travel Document. Please check the Templating API whitepaper and the `BlinkID-TemplatingSample` sample app for source code examples.

Everything stated in the Scanning generic documents with Templating API section, which explains the Templating API for the `DetectorRecognizer`, is also valid here. The only difference is the document detection part, which does not need to be configured: `MrtdRecognizer` internally uses the `MrtdDetector`, which first detects the Machine Readable Zone and then extends the detection to the full document.
A `Detector` is a processing unit used within a `Recognizer` which supports detectors, such as the `DetectorRecognizer`. A concrete `Detector` knows how to find a certain object in the input image. The `Recognizer` can use it to perform object detection prior to recognizing the detected object's contents.

The `Detector` architecture is similar to the `Recognizer` architecture described in The Recognizer concept section. Each instance also has an associated inner `Result` object whose lifetime is bound to the lifetime of its parent `Detector` object and which is updated while the `Detector` works. If you need your `Result` object to outlive its parent `Detector` object, you must make a copy of it by calling its `clone()` method.

A `Detector` also has an internal state; while it is in the working state during the recognition process, it is not allowed to tweak the `Detector` object's properties.

When detection is performed on the input image, each `Detector` holds the following information in its associated `Result` object:
- The `DetectionCode`, which indicates the type of the detection (FAIL, FALLBACK or SUCCESS) and can be obtained with the `getDetectionCode` method.
- The `DetectionStatus`, which represents the status of the detection and can be obtained with the `getDetectionStatus` method.
- Additional information specific to the detector type, returned by each concrete detector.
To support common use cases, several different `Detector` implementations are available. They are listed in the next section.
The `DocumentDetector` is used to detect card documents, cheques, A4-sized documents, receipts and much more.

It accepts one or more `DocumentSpecifications`. A `DocumentSpecification` represents a specification of the document that should be detected by using an edge detection algorithm and a predefined aspect ratio.

For the most commonly used document formats, there is a helper method, `DocumentSpecification.createFromPreset(DocumentSpecificationPreset)`, which creates and initializes the document specification based on the given `DocumentSpecificationPreset`. For more information about the `DocumentSpecification`, please see the javadoc.

For the list of all available configuration methods, see the `DocumentDetector` javadoc; for the available result content, see the `DocumentDetector.Result` javadoc.
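A minimal sketch of wiring a preset-based specification into a `DetectorRecognizer` (the preset constant name is an assumption — check the `DocumentSpecificationPreset` javadoc for the available values):

```java
// Illustrative: detector for ID-1 (credit-card-sized) documents,
// plugged into a DetectorRecognizer for templating.
DocumentSpecification idCardSpec = DocumentSpecification.createFromPreset(
        DocumentSpecificationPreset.DOCUMENT_SPECIFICATION_PRESET_ID1_CARD);
DocumentDetector documentDetector = new DocumentDetector(idCardSpec);
DetectorRecognizer detectorRecognizer = new DetectorRecognizer(documentDetector);
```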
The `MRTDDetector` is used to perform detection of Machine Readable Travel Documents (MRTDs).

The `setSpecifications` method can be used to define which MRTD documents should be detectable. It accepts an array of `MrtdSpecifications`. An `MrtdSpecification` represents the specification of an MRTD that should be detected. It can be created from an `MrtdSpecificationPreset` by using the `MrtdSpecification.createFromPreset(MrtdSpecificationPreset)` method.

If `MrtdSpecifications` are not set, all supported MRTD formats will be detectable.

For the list of all available configuration methods, see the `MRTDDetector` javadoc; for the available result content, see the `MRTDDetector.Result` javadoc.
When creating your own SDK which depends on BlinkID, you should consider the following cases:
- BlinkID licensing model
- ensuring final app gets all classes and resources that are required by BlinkID
Application licenses are bound to the application's package name. This means that each app must have its own license in order to use BlinkID. This model is appropriate when integrating BlinkID directly into an app; however, if you are creating an SDK that depends on BlinkID, you would need a separate BlinkID license for each of your clients using your SDK. This is not practical, so you should contact us at help.microblink.com and we can provide you with a library license.
At the time of writing this documentation, Android does not support combining multiple AAR libraries into a single fat AAR. The problem is that resource merging is done while building the application, not while building the AAR, so the application must be aware of all of its dependencies. There is no official Android way of "hiding" a third-party AAR within your AAR.

This problem is usually solved with transitive Maven dependencies, i.e. when publishing your AAR to Maven you specify the dependencies of your AAR so they are automatically referenced by the app using your AAR. Besides this, there are several other approaches you can try:
- you can ask your clients to reference BlinkID in their app when integrating your SDK
- since the problem lies in the resource merging step, you can try avoiding it by ensuring your library does not use any component from BlinkID that uses resources (i.e. built-in activities, fragments and views, except `RecognizerRunnerView`). You can perform custom UI integration while taking care that all resources (strings, layouts, images, ...) used are solely from your AAR, not from BlinkID. Then, instead of referencing `LibBlinkID.aar` as a gradle dependency in your AAR, unzip it and copy its assets to your AAR's assets folder, its `classes.jar` to your AAR's lib folder (which should be referenced by gradle as a jar dependency) and the contents of its jni folder to your AAR's `src/main/jniLibs` folder
- another approach is to use a 3rd party unofficial gradle script that aims to combine multiple AARs into a single fat AAR. Use this script at your own risk and report issues to its developers - we do not offer support for using that script
- there is also a 3rd party unofficial gradle plugin which aims to do the same, but is more up to date with the latest versions of the Android gradle plugin. Use this plugin at your own risk and report all issues to its developers - we do not offer support for using that plugin
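The manual unpacking approach from the first bullet above can be sketched with shell commands like these (all destination paths are illustrative assumptions - adjust them to your own AAR project layout):

```sh
# An AAR is a zip archive; unpack LibBlinkID.aar into a temporary folder
unzip -o LibBlinkID.aar -d blinkid-unpacked

# Copy its pieces into your own AAR project (paths are assumptions)
cp -r blinkid-unpacked/assets/.    my-sdk/src/main/assets/
cp    blinkid-unpacked/classes.jar my-sdk/libs/
cp -r blinkid-unpacked/jni/.       my-sdk/src/main/jniLibs/
```

Remember to reference the copied `classes.jar` as a jar dependency in your gradle build, as described above.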
BlinkID is distributed with ARMv7, ARM64, x86 and x86_64 native library binaries.
The ARMv7 architecture gives the ability to take advantage of hardware-accelerated floating point operations and SIMD processing with NEON. This gives BlinkID a huge performance boost on devices that have ARMv7 processors. Most devices (all since 2012) have an ARMv7 processor, so it makes little sense not to take advantage of the performance boost those processors can give. Note that some devices with ARMv7 processors do not support the NEON instruction set, most popular being those based on NVIDIA Tegra 2. Since these devices are old by today's standards, BlinkID does not support them. For the same reason, BlinkID does not support devices with the ARMv5 (`armeabi`) architecture.
ARM64 is the new processor architecture that most new devices use. ARM64 processors are very powerful and also have the possibility to take advantage of new NEON64 SIMD instruction set to quickly process multiple pixels with a single instruction.
The x86 architecture gives the ability to obtain native speed on x86 Android devices, like the Asus Zenfone 4. Without it, BlinkID would not work on such devices, or it would run on top of the ARM emulator shipped with the device - with a huge performance penalty.
The x86_64 architecture gives better performance than x86 on devices that use a 64-bit Intel Atom processor.
However, there are some issues to be considered:
- ARMv7 build of native library cannot be run on devices that do not have ARMv7 compatible processor (list of those old devices can be found here)
- ARMv7 processors do not understand x86 instruction set
- x86 processors understand neither ARM64 nor ARMv7 instruction sets
- however, some x86 Android devices ship with a built-in ARM emulator - such devices are able to run ARM binaries, but with a performance penalty. There is also a risk that the built-in ARM emulator will not understand some specific ARM instruction and will crash.
- ARM64 processors understand ARMv7 instruction set, but ARMv7 processors do not understand ARM64 instructions.
- NOTE: as of year 2018, some android devices that ship with ARM64 processor do not have full compatibility with ARMv7. This is mostly due to incorrect configuration of Android's 32-bit subsystem by the vendor, however Google has announced that as of August 2019 all apps on PlayStore that contain native code will need to have native support for 64-bit processors (this includes ARM64 and x86_64) - this is in anticipation of future Android devices that will support 64-bit code only, i.e. that will have ARM64 processors that do not understand ARMv7 instruction set.
- if ARM64 processor executes ARMv7 code, it does not take advantage of modern NEON64 SIMD operations and does not take advantage of 64-bit registers it has - it runs in emulation mode
- x86_64 processors understand x86 instruction set, but x86 processors do not understand x86_64 instruction set
- if an x86_64 processor executes x86 code, it does not take advantage of 64-bit registers and uses two instructions instead of one for 64-bit operations
The `LibBlinkID.aar` archive contains ARMv7, ARM64, x86 and x86_64 builds of the native library. By default, when you integrate BlinkID into your app, your app will contain native builds for all processor architectures. Thus, BlinkID will work on ARMv7, ARM64, x86 and x86_64 devices, and will use ARMv7 features on ARMv7 devices and ARM64 features on ARM64 devices. However, the size of your application will be rather large.
If your final app is too large because of BlinkID, you can decide to create multiple flavors of your app - one flavor for each architecture. With gradle and Android Studio this is very easy - just add the following code to the `build.gradle` file of your app:
```
android {
    ...
    splits {
        abi {
            enable true
            reset()
            include 'x86', 'armeabi-v7a', 'arm64-v8a', 'x86_64'
            universalApk true
        }
    }
}
```
With these build instructions, gradle will build four different APK files for your app. Each APK will contain the native library for only one processor architecture, and one APK will contain all architectures. In order for Google Play to accept multiple APKs of the same app, you need to ensure that each APK has a different version code. This can easily be done by defining a version code prefix that depends on the architecture and adding the real version code number to it, in the following gradle script:
```
import com.android.build.OutputFile

// map of ABI to version code prefix
def abiVersionCodes = ['armeabi-v7a': 1, 'arm64-v8a': 2, 'x86': 3, 'x86_64': 4]

android.applicationVariants.all { variant ->
    // assign a different version code to each output
    variant.outputs.each { output ->
        def filter = output.getFilter(OutputFile.ABI)
        if (filter != null) {
            output.versionCodeOverride = abiVersionCodes.get(filter) * 1000000 + android.defaultConfig.versionCode
        }
    }
}
```
For more information about creating APK splits with gradle, check this article from Google.

After generating multiple APKs, you need to upload them to Google Play. For the tutorial and rules about uploading multiple APKs to Google Play, please read the official Google article about multiple APKs.
If you will not be distributing your app via Google Play, or for some other reason you want a single APK of smaller size, you can completely remove support for a certain CPU architecture from your APK. This is not recommended because of the consequences described below.

To remove a certain CPU architecture, add the following statement to the `android` block inside your `build.gradle`:
```
android {
    ...
    packagingOptions {
        exclude 'lib/<ABI>/libBlinkID.so'
    }
}
```
where `<ABI>` represents the CPU architecture you want to remove:

- to remove ARMv7 support, use `exclude 'lib/armeabi-v7a/libBlinkID.so'`
- to remove x86 support, use `exclude 'lib/x86/libBlinkID.so'`
- to remove ARM64 support, use `exclude 'lib/arm64-v8a/libBlinkID.so'`
    - NOTE: this is not recommended. See this notice.
- to remove x86_64 support, use `exclude 'lib/x86_64/libBlinkID.so'`
You can also remove multiple processor architectures by specifying the `exclude` directive multiple times. Just bear in mind that removing a processor architecture will have side effects on the performance and stability of your app. Please read this for more information.
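For example, excluding both x86 architectures at once (keeping only the ARM builds) would look like this:

```
android {
    ...
    packagingOptions {
        exclude 'lib/x86/libBlinkID.so'
        exclude 'lib/x86_64/libBlinkID.so'
    }
}
```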
This section assumes that you have set up and prepared your Eclipse project from `LibBlinkID.aar` as described in the chapter Eclipse integration instructions.

If you are using Eclipse, removing processor architecture support gets really complicated. Eclipse does not support APK splits, so you will either need to remove support for some processors or create several different library projects from `LibBlinkID.aar` - one for each processor architecture.
Native libraries in the Eclipse library project are located in the `libs` subfolder:

- `libs/armeabi-v7a` contains native libraries for the ARMv7 processor architecture
- `libs/x86` contains native libraries for the x86 processor architecture
- `libs/arm64-v8a` contains native libraries for the ARM64 processor architecture
- `libs/x86_64` contains native libraries for the x86_64 processor architecture
To remove support for a processor architecture, simply delete the appropriate folder inside the Eclipse library project:

- to remove ARMv7 support, delete the folder `libs/armeabi-v7a`
- to remove x86 support, delete the folder `libs/x86`
- to remove ARM64 support, delete the folder `libs/arm64-v8a`
    - NOTE: this is not recommended. See this notice.
- to remove x86_64 support, delete the folder `libs/x86_64`
However, removing a processor architecture has consequences:

- by removing ARMv7 support, BlinkID will not work on devices that have ARMv7 processors
- by removing ARM64 support, BlinkID will not use ARM64 features on ARM64 devices
    - also, some future devices may ship with ARM64 processors that do not support the ARMv7 instruction set. Please see this note for more information.
- by removing x86 support, BlinkID will not work on devices that have an x86 processor, except when the device ships with an ARM emulator - in that case, BlinkID will work, but will be slow and possibly unstable
- by removing x86_64 support, BlinkID will not use 64-bit optimizations on x86_64 processors, but if x86 support is not removed, BlinkID should work
Our recommendation is to include all architectures in your app - it will work on all devices and provide the best user experience. However, if you really need to reduce the size of your app, we recommend releasing a separate version of your app for each processor architecture. The easiest way to do that is with APK splits.
If the techniques explained in the paragraph Reducing the final size of your app did not reduce the size enough, you can create a customised build of BlinkID which contains only the features that you plan to use. Using a customised build of BlinkID can reduce your app size by more than 60% compared to the generic build.
In order to create a customised build of BlinkID, you first need to download the static distribution of BlinkID. A valid production licence is required to gain access to the download link of the BlinkID static distribution. Once you have a valid production licence, please contact our support team and ask them to provide you with the download link. After they give you access to the static distribution of BlinkID, you will be able to download it from your account at the Microblink Developer Dashboard.
The static distribution of BlinkID is a large zip file (several hundred megabytes) which contains static libraries of BlinkID's native code, all assets and a script which will create the customised build for you.
In order to create a customised build of BlinkID, you will need the following tools:

- Android development tools and SDK
- Android NDK - best if installed from Android Studio's package manager
    - you must use the exact same version of the NDK that we used to build the static libraries. Using a different NDK version will either result in linker errors or create a non-working binary. Our script will check your NDK version and fail if there is a version mismatch.
- NDK CMake toolchain - you have to install this from Android Studio's package manager
- Java - for running both Android Studio and the provided gradle script which creates the customised build
1. Obtain the static distribution of BlinkID by contacting us
2. Download the zip from the link that you will be provided with
3. Unzip the file into an empty folder
4. Edit the file `static-distrib/enabled-features.cmake`
    - enable only the features that you need to use by setting the appropriate variables to `ON`
    - the list of all possible feature variables can be found in `static-distrib/features.cmake`
        - for each `feature_option` command, the first parameter defines the feature variable and the second is the description of the feature, i.e. what it provides. The other parameters are information the script needs to work correctly.
    - to ensure that creation of the customised build works well, do not edit any file except `enabled-features.cmake` (unless instructed to do so by our support team)
5. In the folder LibBlinkID, create a file `local.properties` with the following entries:

    ```
    sdk.dir=/path/to/your/android-sdk-folder
    ndk.dir=/path/to/your/android-sdk-folder/ndk-bundle
    ```

    - importing the project into Android Studio should do that automatically for you
6. Open a terminal and navigate to the LibBlinkID folder
7. Execute the command `./gradlew clean assembleRelease`
8. After several minutes (depending on the CPU speed of your computer), the customised build will appear as `LibBlinkID/build/outputs/aar/LibBlinkID-release.aar`. Use that AAR in your app instead of the default one.
1. Follow steps 1-4 from the command line version above
2. Import the `static-distrib/LibBlinkID` project into Android Studio
3. Under the `cpp` section of the imported module, make sure that all required JNI static libraries are correctly referenced
    - if they are not, edit `enabled-features.cmake` to correct which features need to be included in the build and then select `Build -> Refresh Linked C++ Projects` in the Android Studio menu
4. Open the `Build Variants` pane and make sure `release` is selected for the module `LibBlinkID`
5. In the Android Studio menu, select `Build -> Build APK`
6. After several minutes (depending on the CPU speed of your computer), the customised build will appear as `LibBlinkID/build/outputs/aar/LibBlinkID-release.aar`. Use that AAR in your app instead of the default one.
Attempting to use a feature within your app which was not enabled in the customised build will result in your app crashing the moment it tries to use that feature.
Getting `UnsatisfiedLinkError` when using the customised build, while everything works OK with the generic build

This happens when your app is trying to use a feature which was not enabled in the customised build. Please make sure that you enable all the features that you need and do not use unnecessary features within your app.
App crashes when scanning starts, with the log message `Failed to load resource XX. The program will now crash.`

This means that a required resource was not packaged into the final app. This usually indicates a bug in our gradle script that makes the customised build. Please contact us and send us your version of `enabled-features.cmake` and the crash log.
You probably have a typo in `enabled-features.cmake`. CMake is a very sensitive language and will throw a hard-to-understand error if you have a typo or invoke any of its commands with the wrong number of parameters.
If you are combining the BlinkID library with other libraries that contain native code in your application, make sure you match the architectures of all native libraries. For example, if a third-party library provides only ARMv7 and x86 versions, you must use exactly the ARMv7 and x86 versions of BlinkID with that library, not ARM64. Mismatching architectures will crash your app during initialization, because the JVM tries to load all of its native dependencies for the same preferred architecture and fails with `UnsatisfiedLinkError`.
In case of problems with integrating the SDK, first make sure that you have tried integrating it into Android Studio by following the integration instructions. Although we do provide Eclipse ADT integration instructions, we no longer officially support Eclipse ADT. For any other IDEs, unfortunately, you are on your own.
If you have followed Android Studio integration instructions and are still having integration problems, please contact us at help.microblink.com.
In case of problems with using the SDK, you should do as follows:
If you are getting an "invalid license key" error or having other license-related problems (e.g. a feature that should be enabled is not, or there is a watermark on top of the camera), first check the ADB logcat. All license-related problems are logged to the error log, so it is easy to determine what went wrong.

If you cannot determine the license-related problem from the log, or you simply do not understand the log, contact us at help.microblink.com. When contacting us, please make sure you provide the following information:

- the exact package name of your app (from your `AndroidManifest.xml` and/or your `build.gradle` file)
- the license that is causing problems
- please emphasize that you are reporting a problem related to the Android version of the BlinkID SDK
- if unsure about the problem, also provide an excerpt from the ADB logcat containing the license error
If you are having problems with scanning certain items, undesired behaviour on specific device(s), crashes inside BlinkID or anything else, please do as follows:

1. Enable logging to see what the library is doing. To enable logging, put this line in your application:

    ```java
    com.microblink.util.Log.setLogLevel(com.microblink.util.Log.LogLevel.LOG_VERBOSE);
    ```

    After this line, the library will log as much information about its work as possible. Please save the entire log of the scanning session to a file that you will send to us. It is important to send the entire log, not just the part where the crash occurred, because crashes are sometimes caused by unexpected behaviour in the early stages of library initialization.

2. Contact us at help.microblink.com describing your problem and provide the following information:
    - the log file obtained in the previous step
    - a high-resolution scan/photo of the item that you are trying to scan
    - information about the device that you are using - we need the exact model name of the device. You can obtain that information with any app like this one
    - please emphasize that you are reporting a problem related to the Android version of the BlinkID SDK
Here is a list of frequently asked questions and their solutions, as well as a list of known problems in the SDK and how to work around them.
In the demo everything worked, but after switching to a production license I get `InvalidLicenseKeyException` as soon as I construct a specific `Recognizer` object

Each license key contains information about which features may be used. This exception indicates that your production license does not allow the use of the specific `Recognizer` object. You should contact support to check whether the provided license is correct and really contains all the features that you have purchased.
Whenever you construct any `Recognizer` object, or any other object that derives from `Entity`, a check is performed on whether the license allows using that object. If the license is not set prior to constructing that object, you will get `InvalidLicenseKeyException`. We recommend setting the license as early as possible in your app, ideally in the `onCreate` callback of your `Application` singleton.
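A minimal sketch of that setup follows; the asset name `license.key` is an assumption, and you should verify the exact license-setting method and signature against the javadoc for your SDK version:

```java
public class MyApplication extends android.app.Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // Set the license before any Recognizer/Entity is constructed.
        // "license.key" is an assumed asset file name.
        com.microblink.MicroblinkSDK.setLicenseFile("license.key", this);
    }
}
```

Remember to register the class in your `AndroidManifest.xml` via the `android:name` attribute of the `<application>` element.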
When my app starts, I get an exception telling me that some resource/class cannot be found, or I get `ClassNotFoundException`

This usually happens when you perform the integration into an Eclipse project and forget to add resources or native libraries to the project. You must always make sure that the same versions of resources, assets, the java library and the native libraries are used in combination. Combining different versions of resources, assets, java and native libraries will trigger a crash in the SDK. This problem can also occur when you have performed an improper integration of the BlinkID SDK into your SDK. Please read how to embed BlinkID inside another SDK.
This error happens when you try to integrate multiple Microblink SDKs into the same application. Multiple Microblink SDKs cannot be integrated into the same application, and there is no need for that, because the SDKs are organized in such a way that each SDK is a feature superset of the smaller one, with the `PDF417` SDK being the smallest. For example, the `BlinkID` SDK contains all features of the `BlinkInput` SDK. The relations between the SDKs are: `PDF417` ⊆ `BlinkInput` ⊆ `BlinkID` ⊆ `PhotoPay`.
This error happens when the JVM fails to load some native method from the native library. If integrating into an Eclipse project, make sure you have the same version of all native libraries and the java wrapper. If integrating into Android Studio and this error happens, make sure that you have correctly combined the BlinkID SDK with third-party SDKs that contain native code. If this error also happens in our integration demo apps, it may indicate a bug in the SDK that manifests on a specific device. Please report that to our support team.
Make sure that after adding your callback to `MetadataCallbacks` you have applied the changes to `RecognizerRunnerView` or `RecognizerRunner`, as described in this section.
I've removed my callback from the `MetadataCallbacks` object, and now the app is crashing with `NullPointerException`

Make sure that after removing your callback from `MetadataCallbacks` you have applied the changes to `RecognizerRunnerView` or `RecognizerRunner`, as described in this section.
In my `onScanningDone` callback I have the result inside my `Recognizer`, but when the scanning activity finishes, the result is gone

This usually happens when using `RecognizerRunnerView` and forgetting to pause the `RecognizerRunnerView` in your `onScanningDone` callback. Then, as soon as `onScanningDone` happens, the result is mutated or reset by additional processing that the `Recognizer` performs in the time between the end of your `onScanningDone` callback and the actual finishing of the scanning activity. For more information about the statefulness of `Recognizer` objects, check this section.
I am using a built-in activity to perform scanning, and after scanning finishes my app crashes with `IllegalStateException` stating `Data cannot be saved to intent because its size exceeds intent limit`.

This usually happens when you use a `Recognizer` that produces an image or a similar large object inside its `Result`, and that object exceeds the Android intent transaction limit. You should enable a different intent data transfer mode. For more information about this, check this section. Also, instead of using a built-in activity, you can use a `RecognizerRunnerFragment` with a built-in scanning overlay.
This usually happens when you attempt to transfer a standalone `Result` that contains images or similar large objects via Intent, and the size of the object exceeds the Android intent transaction limit. Depending on the device, you will either get a `TransactionTooLargeException`, a simple `BINDER TRANSACTION FAILED` message in the log while your app freezes, or your app will get into a restart loop. We recommend that you use `RecognizerBundle` and its API for sending `Recognizer` objects via Intent in a safer manner (check this section for more information). However, if you really need to transfer a standalone `Result` object (e.g. a `Result` object obtained by cloning the `Result` object owned by a specific `Recognizer` object), you need to do that using global variables or singletons within your application. Sending large objects via Intent is not supported by Android.
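One way to hand such a standalone `Result` to another activity is a small in-process singleton holder. The sketch below is plain Java with an illustrative class name (`ScanResultHolder` is not part of the BlinkID API) and stores the payload as `Object` for brevity:

```java
// Illustrative in-process holder for passing a large, cloned scan result
// between activities without serializing it into an Intent.
public final class ScanResultHolder {
    private static final ScanResultHolder INSTANCE = new ScanResultHolder();
    private Object result; // in a real app this would be the cloned Result

    private ScanResultHolder() {}

    public static ScanResultHolder getInstance() {
        return INSTANCE;
    }

    // store the result before starting the next activity
    public synchronized void put(Object r) {
        result = r;
    }

    // retrieve and clear, so the large object can be garbage collected
    public synchronized Object take() {
        Object r = result;
        result = null;
        return r;
    }
}
```

The receiving activity would call `take()` in its `onCreate`; clearing the stored reference on retrieval avoids keeping large images in memory longer than necessary.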
The `onOcrResult()` method in my `OcrCallback` is never invoked, and all `Result` objects always return `null` from their OCR result getters

In order to obtain the raw OCR result, which contains the locations of each character, its value and its alternatives, you need a license that allows it. By default, licenses do not allow exposing raw OCR results in the public API. If you really need that, please contact us and explain your use case.
The complete API reference can be found in the Javadoc.
For any other questions, feel free to contact us at help.microblink.com.