Layout Parser

A unified toolkit for Deep Learning Based Document Image Analysis

What is LayoutParser


LayoutParser provides a wide range of tools that streamline Document Image Analysis (DIA) tasks. Please check the LayoutParser demo video (1 min) or the full talk (15 min) for details. Here are some key features:

  • LayoutParser provides a rich repository of deep learning models for layout detection as well as a set of unified APIs for using them. For example,

Perform DL layout detection in 4 lines of code

  import layoutparser as lp
  model = lp.AutoLayoutModel('lp://EfficientDet/PubLayNet')
  # image = Image.open("path/to/image")  # load the page image with PIL first
  layout = model.detect(image)
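
If the Detectron2 backend is installed, a specific model can also be loaded by its config path rather than through AutoLayoutModel. A minimal sketch based on the PubLayNet Faster R-CNN configuration from the documentation (the score threshold and label map below are illustrative):

  model = lp.Detectron2LayoutModel(
      'lp://PubLayNet/faster_rcnn_R_50_FPN_3x/config',
      extra_config=["MODEL.ROI_HEADS.SCORE_THRESH_TEST", 0.8],
      label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
  )
  layout = model.detect(image)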

  • LayoutParser comes with a set of layout data structures with carefully designed APIs that are optimized for document image analysis tasks. For example,

Selecting layout/textual elements in the left column of a page

  image_width = image.size[0]
  left_column = lp.Interval(0, image_width/2, axis='x')
  layout.filter_by(left_column, center=True) # select objects in the left column 
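
Blocks can also be filtered by their predicted type and ordered by position; a short sketch assuming PubLayNet-style labels and that each block exposes type and coordinates attributes:

  # keep only the text regions, then read them top-to-bottom by their y-coordinate
  text_blocks = lp.Layout([b for b in layout if b.type == 'Text'])
  text_blocks = lp.Layout(sorted(text_blocks, key=lambda b: b.coordinates[1]))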

Performing OCR for each detected Layout Region

  ocr_agent = lp.TesseractAgent()
  for layout_region in layout: 
      image_segment = layout_region.crop(image)
      text = ocr_agent.detect(image_segment)
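      # Optionally attach the recognized text to its region so it travels with the
      # layout object (TextBlock.set, as used in the project's example notebooks)
      layout_region.set(text=text, inplace=True)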

Flexible APIs for visualizing the detected layouts

  lp.draw_box(image, layout, box_width=1, show_element_id=True, box_alpha=0.25)
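
There is also a companion draw_text helper for previewing the recognized text next to each region; the parameters below are illustrative:

  lp.draw_text(image, layout, font_size=12)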

Loading layout data stored in json, csv, and even PDFs

  layout = lp.load_json("path/to/json")
  layout = lp.load_csv("path/to/csv")
  pdf_layout = lp.load_pdf("path/to/pdf")
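
A detected or loaded layout can also be exported for downstream analysis; a minimal sketch, assuming the Layout.to_dataframe converter described in the API reference:

  df = layout.to_dataframe()        # one row per layout element
  df.to_csv("path/to/output.csv")   # persist for later analysis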

  • LayoutParser is also an open platform that enables the sharing of layout detection models and DIA pipelines among the community.
    Check the LayoutParser open platform

Submit your models/pipelines to LayoutParser

Installation

After several major updates, layoutparser provides a wide range of functionalities and deep learning models from different backends. It is still easy to install layoutparser: the installation is designed so that you can choose to install only the dependencies your project needs:

pip install layoutparser # Install the base layoutparser library
pip install "layoutparser[layoutmodels]" # Install DL layout model toolkit 
pip install "layoutparser[ocr]" # Install OCR toolkit

Extra steps are needed if you want to use Detectron2-based models. Please check installation.md for additional details on layoutparser installation.
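
For example, on Linux/macOS Detectron2 is typically installed directly from its GitHub repository after PyTorch is in place (see installation.md and the Detectron2 documentation for the exact command for your platform and CUDA version):

pip install torch torchvision  # pick the build matching your CUDA setup
pip install "git+https://github.com/facebookresearch/detectron2.git"  # Detectron2 itself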

Examples

We provide a series of examples to help you start using the layoutparser library:

  1. Table OCR and Results Parsing: layoutparser can be used to conveniently OCR documents and convert the output into structured data (a rough sketch of this workflow follows the list).
  2. Deep Layout Parsing Example: With the help of deep learning, layoutparser supports the analysis of very complex documents and the processing of hierarchical structure in their layouts.
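
As a rough sketch (not the notebooks' exact code), the table OCR workflow can be pieced together from the snippets above, assuming PubLayNet-style labels where table regions are tagged as "Table":

  layout = model.detect(image)                                      # detect the page layout
  tables = lp.Layout([b for b in layout if b.type == 'Table'])      # keep only table regions
  table_texts = [ocr_agent.detect(b.crop(image)) for b in tables]   # OCR each table crop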

Contributing

We encourage you to contribute to Layout Parser! Please check out the Contributing guidelines for details on how to proceed. Join us!

Citing layoutparser

If you find layoutparser helpful to your work, please consider citing our tool and paper using the following BibTeX entry.

@article{shen2021layoutparser,
  title={LayoutParser: A Unified Toolkit for Deep Learning Based Document Image Analysis},
  author={Shen, Zejiang and Zhang, Ruochen and Dell, Melissa and Lee, Benjamin Charles Germain and Carlson, Jacob and Li, Weining},
  journal={arXiv preprint arXiv:2103.15348},
  year={2021}
}
