17th ICDAR 2023: San José, CA, USA - Workshops Part II
- Mickaël Coustaty, Alicia Fornés:
Document Analysis and Recognition - ICDAR 2023 Workshops - San José, CA, USA, August 24-26, 2023, Proceedings, Part II. Lecture Notes in Computer Science 14194, Springer 2023, ISBN 978-3-031-41500-5
VINALDO
- Omar Alhubaiti, Irfan Ahmad:
Typefaces and Ligatures in Printed Arabic Text: A Deep Learning-Based OCR Perspective. 5-18
- Fréjus A. A. Laleye, Loïc Rakotoson, Sylvain Massip:
Leveraging Knowledge Graph Embeddings to Enhance Contextual Representations for Relation Extraction. 19-31
- Eliott Thomas, Dipendra Sharma Kafle, Ibrahim Souleiman Mahamoud, Aurélie Joseph, Mickaël Coustaty, Vincent Poulain D'Andecy:
Extracting Key-Value Pairs in Business Documents. 32-46
- Thibault Douzon, Stefan Duffner, Christophe Garcia, Jérémy Espinas:
Long-Range Transformer Architectures for Document Understanding. 47-64
- Ibrahim Souleiman Mahamoud, Mickaël Coustaty, Aurélie Joseph, Vincent Poulain D'Andecy, Jean-Marc Ogier:
KAP: Pre-training Transformers for Corporate Documents Understanding. 65-79
- Nehal Yasin, Imran Siddiqi, Momina Moetesum, Sadaf Abdul-Rauf:
Transformer-Based Neural Machine Translation for Post-OCR Error Correction in Cursive Text. 80-93
- Karolina Konopka, Michal Turski, Filip Gralinski:
Arxiv Tables: Document Understanding Challenge Linking Texts and Tables. 94-107
- Dipendra Sharma Kafle, Eliott Thomas, Mickaël Coustaty, Aurélie Joseph, Antoine Doucet, Vincent Poulain D'Andecy:
Subgraph-Induced Extraction Technique for Information (SETI) from Administrative Documents. 108-122
- Alejandro Peña, Aythami Morales, Julian Fiérrez, Javier Ortega-Garcia, Marcos Grande, Iñigo Puente, Jorge Córdova, Gonzalo Cordova:
Document Layout Annotation: Database and Benchmark in the Domain of Public Affairs. 123-138
- Karima Boutalbi, Visar Sylejmani, Pierre Dardouillet, Olivier Le Van, Kavé Salamatian, Hervé Verjus, Faiza Loukil, David Telisson:
A Clustering Approach Combining Lines and Text Detection for Table Extraction. 139-145
WML
- Mohamed Trabelsi, Hüseyin Uzunalioglu:
Absformer: Transformer-Based Model for Unsupervised Multi-Document Abstractive Summarization. 151-166
- Fahimeh Alaei, Alireza Alaei:
A Comparison of Demographic Attributes Detection from Handwriting Based on Traditional and Deep Learning Methods. 167-179
- Omid Motamedisedeh, Faranak Zagia, Alireza Alaei:
A New Optimization Approach to Improve an Ensemble Learning Model: Application to Persian/Arabic Handwritten Character Recognition. 180-194
- Sheikh Mohammad Jubaer, Nazifa Tabassum, Md. Ataur Rahman, Mohammad Khairul Islam:
BN-DRISHTI: Bangla Document Recognition Through Instance-Level Segmentation of Handwritten Text Images. 195-212
- Panagiotis Kaddas, Basilis Gatos, Konstantinos Palaiologos, Katerina Christopoulou, Konstantinos Kritsis:
Text Line Detection and Recognition of Greek Polytonic Documents. 213-225
- Lalita Kumari, Sukhdeep Singh, Vaibhav Varish Singh Rathore, Anuj Sharma:
A Comprehensive Handwritten Paragraph Text Recognition System: LexiconNet. 226-241
- Daichi Haraguchi, Seiichi Uchida:
Local Style Awareness of Font Images. 242-256
- Ayush Roy, Palaiahnakote Shivakumara, Umapada Pal, Hamam Mokayed, Marcus Liwicki:
Fourier Feature-based CBAM and Vision Transformer for Text Detection in Drone Images. 257-271
- Giorgos Sfikas, George Retsinas, Basilis Gatos:
Document Binarization with Quaternionic Double Discriminator Generative Adversarial Network. 272-284
- Chun Chieh Chang, Leibny Paola García-Perera, Sanjeev Khudanpur:
Crosslingual Handwritten Text Generation Using GANs. 285-301
- Timothée Neitthoffer, Aurélie Lemaitre, Bertrand Coüasnon, Yann Soullard, Ahmad Montaser Awal:
Knowledge Integration Inside Multitask Network for Analysis of Unseen ID Types. 302-317