MIDV-550
Existing public benchmarks (e.g., [1], IDDoc [2], SROIE [3]) either contain a limited number of document classes, provide only coarse bounding‑box annotations, or lack realistic mobile acquisition conditions. Consequently, progress in robust MIV systems has been hindered by a mismatch between training data and real‑world deployment scenarios.
A composite score is reported for overall ranking.

5. Experimental Results

5.1 Document Detection

| Model | mAP@0.5 | Inference (ms / img) |
|-------|---------|----------------------|
| Faster R‑CNN (ResNet‑101) | 0.89 | 128 |
| EfficientDet‑D4 | 0.92 | 71 |
| YOLOv8‑x (baseline) | 0.95 | 38 |
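To make the detection metric concrete, the sketch below shows the box IoU computation and the IoU ≥ 0.5 matching rule that mAP@0.5 is built on. It is an illustrative implementation under simple assumptions (axis-aligned boxes, one ground-truth box per image), not the benchmark's evaluation code, and the helper names `iou` and `is_true_positive` are hypothetical.

```python
# Illustrative sketch only: axis-aligned IoU and the IoU >= 0.5 rule behind mAP@0.5.
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2) in pixels."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def is_true_positive(pred_box, gt_box, thresh=0.5):
    # A predicted document box counts toward mAP@0.5 only if it overlaps
    # the ground-truth box with IoU at or above the 0.5 threshold.
    return iou(pred_box, gt_box) >= thresh

# Example: a box shifted by 10 px still overlaps well enough to match.
print(iou((0, 0, 100, 100), (10, 10, 110, 110)))  # ~0.68, counts at IoU 0.5
```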
Geometric refinement (enforcing known field layout) reduces out‑of‑order predictions by 12 % and improves the MRZ IoU substantially.

| OCR Model | Avg. CER (all fields) | MRZ CER | Name‑field CER |
|-----------|-----------------------|---------|----------------|
| CRNN (ResNet‑34) | 0.074 | 0.058 | 0.089 |
| TrOCR‑large | 0.058 | 0.042 | 0.074 |
| TrOCR‑large + Data Aug (baseline) | 0.045 | 0.032 | 0.058 |
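For reference, the sketch below computes the character error rate (CER) used in the table above: the character-level edit distance between a predicted string and its reference, divided by the reference length. It is a minimal illustration rather than the benchmark's scoring script, and the function names `levenshtein` and `cer` are hypothetical.

```python
# Minimal CER sketch: CER = edit_distance(prediction, reference) / len(reference).
def levenshtein(a: str, b: str) -> int:
    """Edit distance (insertions, deletions, substitutions) between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def cer(prediction: str, reference: str) -> float:
    """Character error rate relative to the reference transcription."""
    return levenshtein(prediction, reference) / max(len(reference), 1)

# Example: one substituted character in a 10-character name field -> CER = 0.1
print(cer("JOHN SM1TH", "JOHN SMITH"))
```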