Multimodal ML: Quantifying the Improvement of Calorie Estimation Through Image-Text Pairs

Main: 7 pages
4 figures
Bibliography: 3 pages
1 table
Appendix: 2 pages
Abstract

This paper determines the extent to which short textual inputs (in this case, dish names) can improve calorie estimation over an image-only baseline model, and whether any improvements are statistically significant. We use the TensorFlow library and the Nutrition5k dataset (curated by Google) to train both an image-only CNN and a multimodal CNN that accepts both an image and text as input. The multimodal model reduced the MAE of calorie estimates by 1.06 kcal, from 84.76 kcal to 83.70 kcal (a 1.25% improvement).
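The multimodal setup described above can be sketched as a two-branch Keras model: a CNN branch over the dish image and an embedding branch over the tokenized dish name, fused by concatenation before a single-value calorie regression head. This is a minimal illustrative sketch, not the authors' actual architecture; all layer sizes, the vocabulary size, and the token length are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 5000   # assumed dish-name vocabulary size (not from the paper)
MAX_TOKENS = 8      # assumed maximum tokens per dish name (not from the paper)

def build_multimodal_model():
    # Image branch: a small CNN over 224x224 RGB inputs.
    img_in = layers.Input(shape=(224, 224, 3), name="image")
    x = layers.Conv2D(32, 3, activation="relu")(img_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)

    # Text branch: embed the tokenized dish name, then average-pool
    # the token embeddings into a single vector.
    txt_in = layers.Input(shape=(MAX_TOKENS,), dtype="int32", name="dish_name")
    t = layers.Embedding(VOCAB_SIZE, 32)(txt_in)
    t = layers.GlobalAveragePooling1D()(t)

    # Late fusion: concatenate both branches, then regress calories.
    h = layers.Concatenate()([x, t])
    h = layers.Dense(64, activation="relu")(h)
    out = layers.Dense(1, name="kcal")(h)  # single calorie estimate

    model = Model(inputs=[img_in, txt_in], outputs=out)
    # MAE as the loss matches the metric reported in the abstract.
    model.compile(optimizer="adam", loss="mae")
    return model
```

Dropping the text branch (and the `Concatenate`) recovers the image-only baseline, so the two models differ only in the fused dish-name features.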
