Most existing OCR methods focus on alphanumeric characters, owing to the popularity of English and numbers and the availability of corresponding datasets. When extending recognition to more languages, recent methods have shown that training different scripts with separate recognition heads greatly improves end-to-end recognition accuracy compared to combining characters from all languages in a single recognition head. However, we postulate that similarities between some languages could allow model parameters to be shared and benefit from joint training; determining the language groupings, though, is not immediately obvious. To this end, we propose an automatic method for multilingual text recognition with a task grouping and assignment module based on Gumbel-Softmax, introducing a task grouping loss and a weighted recognition loss that allow the recognition models and the grouping module to be trained simultaneously. Experiments on MLT19 support our hypothesis that there is a middle ground between grouping all tasks together and separating them all, which achieves a better configuration of task grouping and separation.
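The grouping idea above can be illustrated with the Gumbel-Softmax trick: each task (script) holds learnable logits over candidate groups, from which a differentiable soft assignment is sampled and hardened straight-through at the forward pass. The sketch below is a minimal, hypothetical illustration in NumPy — the group count, logit values, and script labels are assumptions for demonstration, not the paper's actual module or trained parameters.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Sample a soft assignment over groups via the Gumbel-Softmax trick.

    Each row of `logits` holds one task's unnormalized preference over
    candidate groups; lower temperature `tau` gives sharper samples.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    gumbel = -np.log(-np.log(u))                      # Gumbel(0, 1) noise
    y = (logits + gumbel) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True))     # stable softmax
    return y / y.sum(axis=-1, keepdims=True)

# Hypothetical example: 4 script-recognition tasks, 2 candidate groups.
rng = np.random.default_rng(0)
task_logits = np.array([[2.0, 0.1],   # e.g. a Latin-script task
                        [1.8, 0.2],   # a similar Latin-script task
                        [0.1, 2.2],   # e.g. a CJK task
                        [0.2, 2.0]])  # another CJK-like task
soft = gumbel_softmax(task_logits, tau=0.5, rng=rng)
# Straight-through hard assignment: one-hot group choice per task.
hard = np.eye(task_logits.shape[1])[soft.argmax(axis=-1)]
```

In a full model, `hard` would route each script's features to its group's shared recognition head, while gradients flow through `soft`, letting the grouping be learned jointly with the recognition losses.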
|Title of host publication||Computer Vision – ECCV 2022 Workshops, Proceedings|
|Editors||Leonid Karlinsky, Tomer Michaeli, Ko Nishino|
|Publisher||Springer Science and Business Media Deutschland GmbH|
|Number of pages||17|
|State||Published - 2023|
|Event||17th European Conference on Computer Vision, ECCV 2022 - Tel Aviv, Israel|
|Duration||23 Oct 2022 → 27 Oct 2022|
|Name||Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)|
|Conference||17th European Conference on Computer Vision, ECCV 2022|
|Period||23/10/22 → 27/10/22|
|Bibliographical note||Publisher Copyright: © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.|
- Multilingual text recognition
- Task grouping