Author:
Chen Xiang, Johnson Evelyn, Kulkarni Aditya, Ding Caiwen, Ranelli Natalie, Chen Yanyan, Xu Ran
Abstract
Deep learning models can recognize food items in an image and derive their nutrition information, including calories, macronutrients (carbohydrates, fats, and proteins), and micronutrients (vitamins and minerals). This technology has yet to be implemented for the nutrition assessment of restaurant food. In this paper, we crowdsource 15,908 food images of 470 restaurants in the Greater Hartford region from Tripadvisor and Google Places. These food images are fed into a proprietary deep learning model (Calorie Mama) for nutrition assessment. We employ manual coding to validate the model accuracy based on the Food and Nutrient Database for Dietary Studies. The derived nutrition information is visualized at both the restaurant level and the census tract level. The deep learning model achieves 75.1% accuracy when compared with manual coding. It produces more accurate labels for ethnic foods but cannot identify portion sizes, certain food items (e.g., specialty burgers and salads), or multiple food items in an image. The restaurant nutrition (RN) index is further proposed based on the derived nutrition information. By identifying the nutrition information of restaurant food through crowdsourced food images and a deep learning model, the study provides a pilot approach for large-scale nutrition assessment of the community food environment.
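The sketch below illustrates the image-to-nutrition workflow summarized in the abstract: classify a crowdsourced food image, look up its nutrient profile in an FNDDS-style table, and score label accuracy against manual coding. It is a minimal sketch under stated assumptions; the classifier call, the lookup table, and its nutrient values are placeholders rather than the proprietary Calorie Mama API or the study's actual data.

```python
# Minimal sketch of the image-to-nutrition workflow described in the abstract.
# The classifier and the nutrient lookup below are illustrative placeholders,
# not the proprietary Calorie Mama API or the study data.

from dataclasses import dataclass


@dataclass
class NutritionFacts:
    calories: float   # kcal per serving
    carbs_g: float    # carbohydrates (g)
    fat_g: float      # fats (g)
    protein_g: float  # proteins (g)


# Hypothetical lookup keyed by predicted food label; nutrient values are
# illustrative stand-ins for an FNDDS-style database entry.
FNDDS_LOOKUP = {
    "cheeseburger": NutritionFacts(535, 40.0, 28.0, 30.0),
    "caesar salad": NutritionFacts(190, 10.0, 14.0, 7.0),
}


def classify_food_image(image_path: str) -> str:
    """Placeholder for a deep learning food classifier (e.g., a proprietary API)."""
    raise NotImplementedError("Swap in the actual model or API call here.")


def assess_image(image_path: str):
    """Predict the food label for one crowdsourced image and look up its nutrients."""
    label = classify_food_image(image_path)
    return FNDDS_LOOKUP.get(label)


def label_accuracy(predicted: list, manual: list) -> float:
    """Share of images whose model label matches the manual (ground-truth) code,
    mirroring the 75.1% agreement figure reported in the abstract."""
    matches = sum(p == m for p, m in zip(predicted, manual))
    return matches / len(manual)
```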
Funder
University of Connecticut
Subject
Food Science, Nutrition and Dietetics
Cited by
6 articles.