Extraction of machine-readable zone information from passports, visas and ID cards via OCR
.. image:: https://travis-ci.org/konstantint/PassportEye.svg?branch=master
   :target: https://travis-ci.org/konstantint/PassportEye
The package provides tools for recognizing machine-readable zones (MRZ) from scanned identification documents. The documents may be located rather arbitrarily on the page - the code tries to find anything resembling an MRZ and parse it from there.
The recognition procedure may be rather slow - around 10 seconds or more for some documents. Its precision is not perfect, but it seemed decent on the test documents available to the developer: in around 80% of the cases where a clearly visible MRZ is present on a page, the system will recognize it and extract the text to the best of the abilities of the underlying OCR engine (Google Tesseract).
The failed examples most often involve either badly scanned documents, where the text is far too blurred, or, more problematically, certain types of IDs (Romanian being one example) where the MRZ lies too close to the rest of the card - a situation the current algorithm does not handle well.
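For context, an MRZ itself is a fixed-layout text block. In the common TD3 (passport) format it is two 44-character lines, with every field at a fixed offset, as specified by ICAO Doc 9303. The following sketch, independent of PassportEye, slices the standard fields out of the ICAO specimen document:

```python
# TD3 (passport) MRZ: two 44-character lines with fields at fixed
# offsets, per ICAO Doc 9303. The data below is the ICAO specimen document.
line1 = "P<UTOERIKSSON<<ANNA<MARIA" + "<" * 19
line2 = "L898902C36UTO7408122F1204159ZE184226B<<<<<10"

fields = {
    "type":        line1[0:2],        # document type, e.g. "P<"
    "country":     line1[2:5],        # issuing state
    "names":       line1[5:44],       # surname<<given<names, padded with "<"
    "number":      line2[0:9],        # document number
    "nationality": line2[10:13],
    "birth_date":  line2[13:19],      # YYMMDD
    "sex":         line2[20],
    "expiry_date": line2[21:27],      # YYMMDD
}
print(fields["number"], fields["nationality"], fields["birth_date"])
```

The OCR step only has to produce these two lines of text; everything else is fixed-offset slicing plus check-digit validation.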
The simplest way to install the package is via::

    $ pip install PassportEye
Note that the package depends on ``scikit-image``, among other things. The installation of those requirements, although automatic, may take time or occasionally fail for various reasons (e.g. lack of necessary libraries). If this happens, consider installing the dependencies explicitly from binary packages, such as those provided by the OS distribution or the "wheel" packages. Another convenient option is to use a Python distribution with pre-packaged ``matplotlib`` binaries (Anaconda Python being a great choice at the moment).
In addition, you must have `Tesseract OCR <https://github.com/tesseract-ocr/tesseract>`__ installed and added to the system path: the ``tesseract`` tool must be accessible at the command line.
PassportEye requires Python version 3.6 or higher.
On installation, the package installs a standalone tool ``mrz`` into your Python scripts path. Running::

    $ mrz <filename>
will process the given filename, extracting the MRZ information it finds and printing it out in tabular form. Running ``mrz --json`` will output the same information in JSON. Running ``mrz --save-roi`` will, in addition, extract the detected MRZ ("region of interest") into a separate ``.png`` file for further exploration. Note that the tool provides limited support for PDF files -- it attempts to extract the first DCT-encoded image from the PDF and applies the recognition to it. This seems to work fine with most scanner-produced one-page PDFs, but has not been tested extensively.
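The PDF handling mentioned above boils down to locating a DCT-encoded (i.e. JPEG) stream inside the file. This is not the package's exact code, but the idea can be sketched by scanning for the JPEG start/end markers:

```python
def extract_first_jpeg(data: bytes):
    """Return the first JPEG-looking byte range in `data`, or None.

    Rough sketch: JPEG streams start with the SOI marker FF D8 FF
    and end with the EOI marker FF D9.
    """
    start = data.find(b"\xff\xd8\xff")
    if start < 0:
        return None
    end = data.find(b"\xff\xd9", start)
    if end < 0:
        return None
    return data[start:end + 2]

# Tiny fake "PDF" with an embedded JPEG-like stream for illustration:
fake_pdf = b"%PDF-1.4 header " + b"\xff\xd8\xff\xe0<image data>\xff\xd9" + b" trailer"
print(extract_first_jpeg(fake_pdf))
```

A real implementation would walk the PDF object structure and check for the ``/DCTDecode`` filter, but for typical scanner-produced single-page PDFs a marker scan like this finds the same stream.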
If your Tesseract installation has the "legacy" ``*.traineddata`` models installed (in its ``tessdata`` directory), consider running::

    $ mrz --legacy
This will enable the "legacy" recognizer which, despite the name, seems to work better for MRZ recognition. If you do not know whether you have the relevant files, just try running the command above and see whether you get an error.
In order to use the recognition function in Python code, simply do::
    >> from passporteye import read_mrz
    >> mrz = read_mrz(image_file)
where ``image_file`` can be either a path to a file on disk or a byte stream containing image data.
The returned object (unless it is None, which means no ROI was detected) contains the fields extracted from the MRZ along with some meta-information. For a description of the available fields, see the docstring of the ``passporteye.mrz.text.MRZ`` class. Note that you can convert the object to a dictionary using its ``to_dict()`` method.
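Among the extracted fields are the MRZ check digits, which follow the ICAO Doc 9303 scheme: each character maps to a value (digits as-is, A-Z as 10-35, the filler ``<`` as 0), the values are weighted cyclically by 7, 3, 1, and the sum is taken modulo 10. A minimal standalone sketch of that computation:

```python
def mrz_check_digit(value: str) -> int:
    """ICAO 9303 check digit: weights 7, 3, 1 cycled over the characters."""
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(value):
        if ch.isdigit():
            v = int(ch)
        elif ch == "<":                # filler character counts as 0
            v = 0
        else:                          # letters A-Z map to 10-35
            v = ord(ch) - ord("A") + 10
        total += v * weights[i % 3]
    return total % 10

# Document number from the ICAO 9303 specimen passport:
print(mrz_check_digit("L898902C3"))  # -> 6
```

Comparing such computed digits against the ones read by the OCR is what lets the library flag likely misreads.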
If you want to have the ROI reported alongside the MRZ, call the ``read_mrz`` function as follows::

    >> mrz = read_mrz(image_file, save_roi=True)
The ROI can then be accessed as ``mrz.aux['roi']`` -- it is a ``numpy`` ndarray representing the (grayscale) image region where the OCR was applied.
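Since the ROI is a plain grayscale ndarray (typically floating-point), rescaling it to 8-bit is all that is needed before saving or displaying it with the usual imaging tools. A sketch, with a small synthetic array standing in for ``mrz.aux['roi']``:

```python
import numpy as np

# Synthetic stand-in for mrz.aux['roi']: a small float grayscale image.
roi = np.linspace(0.0, 1.0, 12).reshape(3, 4)

def to_uint8(img):
    """Rescale a float image of any value range to 8-bit grayscale."""
    lo, hi = float(img.min()), float(img.max())
    if hi == lo:                       # constant image: avoid division by zero
        return np.zeros(img.shape, dtype=np.uint8)
    return np.round((img - lo) / (hi - lo) * 255).astype(np.uint8)

img8 = to_uint8(roi)
print(img8.dtype, img8.min(), img8.max())  # -> uint8 0 255
```

The resulting ``uint8`` array can be passed directly to e.g. ``imageio.imwrite`` or ``PIL.Image.fromarray`` for saving.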
Finally, in order to use the "legacy recognizer", pass the ``--oem 0`` extra command-line argument to Tesseract as follows::

    >> mrz = read_mrz(image_file, extra_cmdline_params='--oem 0')
For more flexibility, you may instead use an ``MRZPipeline`` object, which provides access to all intermediate computations, as follows::

    >> from passporteye.mrz.image import MRZPipeline
    >> p = MRZPipeline(file, extra_cmdline_params='--oem 0')
    >> mrz = p.result
The "pipeline" object stores the intermediate computations in its ``data`` dictionary. Although you need to understand the underlying algorithm to make sense of it, it can sometimes provide insightful visualizations. This code, for example, plots the binarized version of the original image, which the algorithm uses to extract ROIs, alongside the boxes corresponding to the extracted ROIs::

    >> imshow(p['img_binary'])
    >> for b in p['boxes']:
    ..     plot(b.points[:,1], b.points[:,0], c='b')
    ..     b.plot()
If you plan to develop or debug the package, consider installing it by running::

    $ pip install -e .[dev]
This will install the package in "editable" mode and add a couple of useful extras (such as ``pytest``). You can then run the tests by typing::

    $ pytest

at the root of the source distribution.
The command-line script ``evaluate_mrz`` can be used to assess the performance of the current recognition pipeline on a set of sample images: this is useful if you want to see the effects of changes to the code. Just run::

    $ evaluate_mrz -j 4

(the ``-j 4`` option requests the use of 4 cores in parallel). The same script may be used to run the recognition pipeline on a given directory of images, sorting successes and failures; see ``evaluate_mrz -h`` for options.
Feel free to contribute or report issues via GitHub: https://github.com/konstantint/PassportEye
Copyright: 2016, Konstantin Tretyakov. License: MIT