
Man or machine? Will the digital transition be able to automatize dietary intake data collection?

Published online by Cambridge University Press:  26 April 2019

Bent Egberg Mikkelsen*
Affiliation: Aalborg University, AC Meyersvænge 15, DK-2450 Copenhagen SV, Denmark. Email: [email protected]

Type: Editorial
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author 2019

Data collection in studies of dietary intake has traditionally relied on labour-intensive methods requiring inputs from the individual studied. As a result, such studies are costly and time-consuming. But the emergence of digital technologies has sparked a new interest in methods and approaches that can be used for automated collection and processing of data on food intake. New sensor technologies, smarter interfaces and progress in artificial intelligence (AI) research seem to be bringing the field forward.

Information and communication technology (ICT) assistance for dietary data collection has been available for a number of years, including both web- and application (app)-based solutions where participants can enter dietary information directly into a database. Most replace a simple paper-and-pen approach with a keyboard or a smart screen but do not radically change the workload required of the participant. What the digital transition offers is the smart application of new sensors and devices that can ‘think and sense’ themselves and thus remove, partly or fully, the workload and responsibility for data acquisition from the participant. As such, we can distinguish between (i) simpler ICT-assisted self-reported dietary data acquisition and (ii) automated or semi-automated dietary data acquisition. The first type covers ICT-based solutions where paper-based methods are replaced with screen-based ones, examples of which have been reported(1,2). The second type covers methods that automatize data entry by using sensors and/or AI to reduce workload, such as the chest-worn micro camera computer called eButton(3) and the mini-suitcase format Dietary Intake Monitoring System (DIMS) used for bedside food intake monitoring in hospitals(4).

Scholars are testing the reach of new technologies, sensors and devices for the study of food intake in experimental laboratory settings such as the Foodscape lab at Aalborg University(5), the Fake Food Buffet lab at the Technical University of Zürich(6) and the Restaurant of the Future at Wageningen University(7). Along with the progress made in the Technology Assisted Dietary Assessment (TADA) project(7,8), the efforts to develop automatic and semi-automatic approaches have brought about devices such as the Mandometer technology(9), the eButton(10), the DIMS(11) and the SPLENDID approach(12).

This issue of Public Health Nutrition presents eight studies on semi-automated dietary data acquisition and ICT-assisted self-reported dietary data acquisition. They fall into two groups: (i) applications for use in research settings and (ii) applications meant to be used by consumers in real-life food environments. They deal with important aspects ranging from proof of principle to validity, reliability, user-friendliness and feasibility.

Applications for use in research settings

Four of the papers deal with studies carried out in lab or health-care settings and explore applications using advanced imaging and language processing relying on AI.

Jia et al.(13) set out to develop an AI-based algorithm that can automatically detect food items in images acquired by an egocentric wearable camera for dietary assessment, the chest-worn eButton. The eButton first classifies pictures as food-related or not and then uses AI to create tags for the food-related ones. The study tested the algorithm on two data sets, and the results suggest that the approach has the potential to identify foods automatically from low-quality, wearable-camera-acquired images with reasonable accuracy. That in turn seems able to reduce both the burden of data processing and privacy concerns. However, the estimation of volume/weight remains an issue if results are to be translated into nutritional values.

The reliability and validity of the eButton are studied in the paper by Beltran et al.(14), which examines the use of a digital screen-based wire mesh procedure to determine food amounts. The eButton wire mesh is a method that attempts to solve the issue of portion size estimation. The estimation is done on screen images of foods by an assessor who fits a mesh from a mesh library on to each food. The authors found good reliability and validity when comparing size estimations from two types of practitioners, although a difference between dietitians and engineers was found. The mesh approach seems promising for estimating amounts, and it could be anticipated that training a machine to perform this task is a possible way to automatize the process.

The study by Ofei et al.(15) examines the validity of the DIMS technology, developed for monitoring food intake among hospital patients, by comparing it with the weighed food method. The DIMS has previously been shown to reduce workload and offers an easy way to estimate pre- and post-serving portion sizes and food types, although the post-serving conditions have been more challenging since leftovers are often mixed together(16). However, the results of the study showed a significant correlation between the two methods and suggest that the DIMS is a valid, novel, easy-to-use alternative for monitoring dietary intake in hospital settings.

Mezgec et al.(17) present a study using a two-step process to estimate food intake. The authors use a deep learning approach to recognize and tag images representing choices from the ‘fake food buffet’, followed by a language processing approach to match the tags with entries in a food composition database. The results show that the accuracy both of the deep learning model trained to tag fake-food images and of the matching of the resulting words with the food composition database was satisfactory. The study shows the feasibility of semi-automatically creating a description of food items that links them to, for instance, the FoodEx2 classification system offered by the European Food Safety Authority. This means that a direct link can be created to relevant food composition databases.
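The second step of such a pipeline — matching free-text tags produced by an image recognizer to entries in a food composition database — can be illustrated with a small sketch. This is not the authors' implementation: the miniature database, the tag strings and the use of Python's difflib for fuzzy string matching are all illustrative assumptions.

```python
from difflib import get_close_matches
from typing import Optional

# Hypothetical miniature food composition database: entry name -> energy (kJ/100 g).
# A real system would match against a full table, e.g. one coded with FoodEx2.
COMPOSITION_DB = {
    "apple, raw": 218,
    "bread, whole wheat": 1046,
    "cheese, cheddar": 1690,
    "chicken breast, grilled": 690,
}

def match_tag(tag: str, cutoff: float = 0.4) -> Optional[str]:
    """Return the database entry name closest to a recognizer-produced tag."""
    hits = get_close_matches(tag.lower(), COMPOSITION_DB.keys(), n=1, cutoff=cutoff)
    return hits[0] if hits else None

# Tags as they might come out of an image-recognition model.
for tag in ["raw apple", "whole wheat bread", "grilled chicken breast"]:
    print(tag, "->", match_tag(tag))
```

Production systems would use richer language processing (synonym lists, embeddings, a standardized food vocabulary) rather than raw character similarity, but the shape of the problem is the same: noisy tags in, controlled database entries out.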

Applications for use in consumer settings

Four of the papers relate to studies of smartphone-based applications. Two of them deal with approaches that use the camera to collect pictures for further processing in a research context, whereas the other two examine applications meant for consumer-targeted nutrition information that display information on foods sourced from databases.

Yang et al.(18) present a picture-based approach and explore the utility of a new smartphone-based imaging method that works without a fiducial marker. Instead, it applies virtual reality, automated training and a standard food unit to estimate portion size. The imaging approach estimates food volumes that need to be converted to amounts after processing. The method was found to be valid, and a training sequence improved estimation accuracy significantly.

The study by Prinz et al.(19) examines the validity of a mobile camera method by comparing it with weighed methods, and also evaluates user satisfaction. The results show a significant correlation between the two methods in the estimation of energy, macronutrient and fibre intakes, although the authors found a systematic bias with increasing levels of intake. With respect to user-friendliness, a large majority of participants were satisfied with the photo-based method and had few technical challenges.

Braz and Baena de Moraes Lopes’ study(20) aims to verify the reliability of information, the sources of information used and the user opinions of sixteen free mobile apps with nutritional information available in Brazil. They found that accuracy, in the case of energy, ranged from 0 to ~57 %. Not surprisingly, the authors concluded that the apps are not useful for nutritional guidance.

Food composition data are increasingly available for branded products, and calculating nutritional values through barcodes and Universal Product Codes has become technically feasible, along with app-based solutions. Maringer et al.(21) study the option of getting in-shop access to nutritional values through a scanning approach. More specifically, they examine the quality of the results obtained from labelled food product databases when using a barcode scanner to link a food to its database value. The results of the study showed that energy values could be retrieved in almost all cases. For nutrients, however, availability and accuracy varied greatly across the apps. Since open access to data about branded foods is increasing, this approach to converting food intake to energy intake is a promising avenue to explore further.
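The barcode-lookup principle this paragraph describes can be sketched in a few lines. The barcodes, product names and nutrient values below are invented, and a real app would query a labelled food product database service rather than an in-memory dict; the sketch only shows the scan-to-nutrients mapping.

```python
from typing import Optional

# Hypothetical excerpt of a labelled food product database, keyed by EAN-13 barcode.
PRODUCT_DB = {
    "5701234567890": {"name": "Rye bread", "energy_kj_100g": 980, "protein_g_100g": 8.4},
    "4009876543210": {"name": "Yoghurt, plain", "energy_kj_100g": 260, "protein_g_100g": 4.7},
}

def nutrition_for_scan(barcode: str, grams: float) -> Optional[dict]:
    """Scale a product's per-100 g label values to the scanned portion."""
    record = PRODUCT_DB.get(barcode)
    if record is None:
        return None  # barcode not covered by the database
    factor = grams / 100.0
    return {
        "name": record["name"],
        "energy_kj": record["energy_kj_100g"] * factor,
        "protein_g": record["protein_g_100g"] * factor,
    }

print(nutrition_for_scan("5701234567890", 50))  # half of the per-100 g label values
```

The `None` branch corresponds to the coverage problem Maringer et al. measure: energy is usually retrievable, but individual nutrients are often missing from the underlying database.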

Discussion

Taken together, the papers clearly illustrate the breadth and scope of current research efforts at the intersection of food and digital technology. They cover new applications of smartphones that can be used in a consumer environment as well as cutting-edge technologies and hardware. The studies cover important aspects of validity, reliability, convenience and feasibility.

First and foremost, imaging and language processing technologies are an important part of the solutions reported in this issue. The papers suggest that computer-based tools that help us make sense of how we see and how we speak about food are at the core of these scientific efforts. The task of dietary assessment can be described in simple terms as ‘measuring the types and amounts of foods in a dietary record and linking them to an authoritative table that can return the energy and nutrient content’. But in practice it represents a huge technological and scientific challenge.
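The ‘simple terms’ description above can be written down directly — everything hard in this issue's papers lies in producing the inputs to a function like this one. The foods and per-100 g energy values below are invented for illustration.

```python
# Minimal form of the dietary-assessment task described above: a record of
# (food, grams) pairs is looked up in a composition table (kJ per 100 g) and summed.
COMPOSITION = {
    "oatmeal": 1540,
    "milk, semi-skimmed": 195,
    "banana": 371,
}

def energy_of_record(record: list[tuple[str, float]]) -> float:
    """Total energy (kJ) of a dietary record of (food, grams) pairs."""
    return sum(COMPOSITION[food] * grams / 100.0 for food, grams in record)

breakfast = [("oatmeal", 60), ("milk, semi-skimmed", 200), ("banana", 120)]
print(round(energy_of_record(breakfast)))  # prints 1759
```

The research challenge is that neither the food names nor the gram amounts arrive for free: the imaging, sensor and language-processing work described in this issue exists to supply them automatically.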

Besides images and language, current research is investigating the use of proxies and indicators of intake in order to find specialized hardware solutions that can assist in specifying types and amounts of food. These include scales, jawbone motion sensors, chewing and swallowing audio sensors, fork motion sensors, hand movement sensors, spectrophotometry, electromyography sensors and near-field communication(16,22–27). These research directions can advance the science of automated dietary assessment, assuming the necessary cross-disciplinarity across nutrition, food and ICT is established. At Aalborg University we have established a strategic interdepartmental cooperation called the Digital Foodscape Lab that aims to bridge the gap across studies of food and nutrition, mobile devices, communication, robotics and mediology. Such cooperation makes it possible to develop and test prototypes of applications in real-world environments under realistic conditions, with the assistance of both students and early career researchers, while keeping costs at an acceptable level.

A synthesis of the eight papers in this issue points to directions and priorities for future research, including:

  • automatic or semi-automatic portion size estimation;

  • correct estimation of leftovers and plate waste;

  • creation of more direct links to foods for which nutritional contents are already known, especially since they might appear in a branded food composition table;

  • better computer vision technologies and algorithms for classifying foods;

  • smarter ways to directly monitor foods purchased at the point of sale;

  • research on how to address privacy concerns and full compliance with the General Data Protection Regulation(28); and

  • closer cooperation internationally across research groups.

Digital solutions for better health care are a trending topic at regional, national and EU levels. The European Strategy Forum on Research Infrastructures (ESFRI) is currently addressing some of the challenges specifically related to food, and food, nutrition and health is expected to be a topic in the next ESFRI roadmap. One of the contexts that might provide infrastructure to facilitate such work is the Food, Nutrition and Health Research Infrastructure (FNHRI) suggested by the EU Richfields design study(29). The FNHRI will try to improve access to data and strengthen the infrastructure between groups and labs working with digitally assisted solutions.

Acknowledgements

Financial support: This work received no specific grant from any funding agency in the public, commercial or not-for-profit sectors. Conflict of interest: The author is the co-developer of the DIMS technology. Authorship: B.E.M. is the sole author of this manuscript. Ethics of human subject participation: Not applicable.

References

1. Moulos, I, Maramis, C, Ioakimidis, I et al. (2015) Objective and subjective meal registration via a smartphone application. In New Trends in Image Analysis and Processing – ICIAP 2015 Workshops. Lecture Notes in Computer Science no. 9281, pp. 409–416. Cham: Springer.
2. Timon, CM, van den Barg, R, Blain, RJ et al. (2016) A review of the design and validation of web- and computer-based 24-h dietary recall tools. Nutr Res Rev 29, 268–280. doi:10.1017/S0954422416000172.
3. Jia, HC, Chen, Y, Yue, Z et al. (2014) Accuracy of food portion size estimation from digital pictures acquired by a chest-worn camera. Public Health Nutr 17, 1671–1681.
4. Ofei, KT, Holst, M, Rasmussen, HH et al. (2015) Effect of meal portion size choice on plate waste generation among patients with different nutritional status. An investigation using Dietary Intake Monitoring System (DIMS). Appetite 91, 157–164.
5. Mikkelsen, BE & Ofei, KO (2017) Measuring food behaviour the smart way – case insights from the implementation of Foodscapelab. In Exploring Future Foodscapes, Proceedings of 10th International Conference on Culinary Arts and Sciences, Aalborg University, Copenhagen, Denmark, 5–7 July 2017, pp. 268–278 [BE Mikkelsen, KT Ofei, TDO Tvedebrink et al., editors]. Copenhagen: AAU Captive Food Studies Group.
6. Bucher, T, van der Horst, K & Siegrist, M (2012) The fake food buffet – a new method in nutrition behaviour research. Br J Nutr 107, 1553–1560.
7. Hinton, EC, Brunstom, JF, Faya, SH et al. (2013) Using photography in ‘The Restaurant of the Future’. A useful way to assess portion selection and plate cleaning? Appetite 63, 31–35.
8. Zhu, F, Mariappan, A, Boushey, CJ et al. (2008) Technology-assisted dietary assessment. Proc SPIE Int Soc Opt Eng 6814, 681411.
9. Sabin, MA, Bergh, C, Olofsson, B et al. (2006) A novel treatment for childhood obesity using Mandometer® technology. Int J Obes (Lond), S203.
10. Sun, M, Fernstrom, JD, Jia, W et al. (2010) A wearable electronic system for objective dietary assessment. J Am Diet Assoc 110, 45–47.
11. Ofei, KT, Dobroczynsky, M, Holst, M et al. (2014) The Dietary Intake Monitoring System (DIMS) – an innovative device for capturing patient’s food choice, food intake and plate waste in a hospital setting. In Proceedings of Measuring Behavior 2014: 9th International Conference on Methods and Techniques in Behavioral Research (Wageningen, The Netherlands, 27–29 August 2014), pp. 94–98 [AJ Spink, EL van den Broek, L Loijens et al., editors]. Wageningen: Noldus Information Technology BV.
12. Papapanagiotou, V, Diou, C, Zhou, L et al. (2017) The SPLENDID chewing detection challenge. In Proceedings of 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Seogwipo, South Korea, 11–15 July 2017, pp. 817–820. New York: Institute of Electrical and Electronics Engineers.
13. Jia, W, Li, Y, Qu, R et al. (2019) Automatic food detection in egocentric images using artificial intelligence technology. Public Health Nutr 22, 000–000.
14. Beltran, A, Dadabhoy, H, Ryan, C et al. (2019) Reliability and validity of food portion size estimation from images using manual flexible digital virtual meshes. Public Health Nutr 22, 000–000.
15. Ofei, KT, Mikkelsen, BE & Scheller, RA (2019) Validation of a novel image-weighed technique for monitoring food intake and estimation of portion size in hospital settings: a pilot study. Public Health Nutr 22, 000–000.
16. Ofei, KT, Dobroczynski, MT & Mikkelsen, BE (2016) Using DIMS for real-time monitoring of patient dietary intake and plate waste: a pilot study at Herlev Hospital. In Proceedings of Measuring Behavior 2016: 10th International Conference on Methods and Techniques in Behavioral Research (Dublin, Ireland, 25–27 May 2016), pp. 103–104 [A Spink, G Riedel, L Zhoo et al., editors]. Dublin, Aberdeen and Wageningen: School of Computing, Dublin City University, The Insight Centre for Data Analytics, University of Aberdeen and Noldus.
17. Mezgec, S, Eftimov, T, Bucher, T et al. (2019) Mixed deep learning and natural language processing method for fake-food image recognition and standardization to help automated dietary assessment. Public Health Nutr 22, 000–000.
18. Yang, Y, Jia, W, Bucher, T et al. (2019) Image based food portion size estimation using a smartphone without a fiducial marker. Public Health Nutr 22, 000–000.
19. Prinz, N, Bohn, B, Kern, A et al. (2019) Feasibility and relative validity of a digital photo-based dietary assessment: results from the Nutris-Phone study. Public Health Nutr 22, 000–000.
20. Braz, VN & Baena de Moraes Lopes, MHB (2019) Evaluation of mobile applications related to nutrition. Public Health Nutr 22, 000–000.
21. Maringer, M, Wisse-Voorwinden, N, van ’t Veer et al. (2019) Food identification by barcode scanning in the Netherlands: a quality assessment of labelled food product databases underlying popular nutrition applications. Public Health Nutr 22, 000–000.
22. Amft, O & Troster, G (2009) On-body sensing solutions for automatic dietary monitoring. IEEE Pervasive Comput 8, 62–70.
23. Sazonov, ES, Schuckers, SAC, Lopez-Meyer, P et al. (2009) Toward objective monitoring of ingestive behavior in free-living population. Obesity (Silver Spring) 17, 1971–1975.
24. Jasper, PW, James, MT, Hoover, AW et al. (2016) Effects of bite count feedback from a wearable device and goal setting on consumption in young adults. J Acad Nutr Diet 116, 1785–1793.
25. Huang, Q, Wang, W & Zhang, Q (2017) Your glasses know your diet: dietary monitoring using electromyography sensors. IEEE Internet Things J 4, 705–712. doi:10.1109/JIOT.2017.2656151.
26. Burrows, TL, Rollo, M, Williams, R et al. (2017) A systematic review of technology-based dietary intake assessment validation studies that include carotenoid biomarkers. Nutrients 9, E140.
27. Kyritsis, K, Tatli, CL, Diou, C et al. (2017) Automated analysis of in meal eating behavior using a commercial wristband IMU sensor. Conf Proc IEEE Eng Med Biol Soc 2017, 2843–2846.
28. EU Commission (2016) Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679 (accessed January 2019).
29. Bogaardt, M-J, Geelen, A, Zimmermann, K et al. (2018) Designing a research infrastructure on dietary intake and its determinants. Nutr Bull 43, 301–309.