Article on “Monitoring Dietary Behavior with a Smart Dining Tray”

In a recent article in IEEE Pervasive Computing, Bo Zhou and colleagues take a broader view of the idea of monitoring personal diet, or, as they term it, “Monitoring Dietary Behavior” [1]. (This article is one of several interesting pieces in an issue of Pervasive Computing dedicated to “Pervasive Food”.)

The authors note that “eating” can be measured in many ways, including “spotting diet-related gestures with motion sensors; detecting chewing and swallowing with in-ear or neck-worn microphones, textile capacitive collar sensors, or neck-mounted EMG electrodes; using instrumented cutlery; and using camera-based food analysis.” (p. 47) These technologies capture different information, and each has its own limitations.

Zhou’s group is experimenting with another technique: an array of pressure sensors in a “smart tablecloth”, and another array under a tray. The arrays can detect when a person touches the food or the tray, as when cutting or scooping food.

From the readings, the investigators were able to reliably determine which objects (cup, bowl, plate) were present and where they were located. The signal stream from the sensors is used to recognize gestures (“dietary behaviors”), including “stir”, “scoop”, “cut”, “poke”, and picking up or setting down a drinking glass. (p. 49)
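To make the general idea concrete, here is a toy sketch (my own, not the authors’ actual pipeline) of how a single frame from a pressure-sensor grid might be summarized into features that a gesture classifier could use. The grid size, the feature set, and the example “poke” and “stir” patterns are all my invented assumptions.

```python
import numpy as np

def frame_features(frame):
    """Summarize one pressure frame (a 2D grid of sensor readings)
    into simple features: total force, center of pressure, spread."""
    total = frame.sum()
    if total == 0:
        return np.array([0.0, 0.0, 0.0, 0.0])
    ys, xs = np.indices(frame.shape)
    cy = (ys * frame).sum() / total   # center of pressure (row)
    cx = (xs * frame).sum() / total   # center of pressure (column)
    spread = np.sqrt((((ys - cy) ** 2 + (xs - cx) ** 2) * frame).sum() / total)
    return np.array([total, cy, cx, spread])

# Toy example: a "poke" concentrates force on one cell,
# while a "stir" spreads lighter force over several cells.
poke = np.zeros((10, 10)); poke[4, 4] = 9.0
stir = np.zeros((10, 10)); stir[3:7, 3:7] = 0.5

f_poke = frame_features(poke)   # high force, zero spread
f_stir = frame_features(stir)   # lower force, wider spread
print(f_poke)
print(f_stir)
```

Features like these, computed over a sliding window of frames, are the sort of input a machine-learning classifier would consume; the actual features and classifiers in the paper are of course more sophisticated.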

The article discusses the machine learning methods used. A decade ago I was supporting projects to do similar kinds of gesture recognition, so I can tell you that this is not trivial to do.

The investigators went further, exploring heuristics that could infer from these gestures how much of each dish was eaten. This inference is less precise than directly weighing the food or imaging what is eaten, but it might be done less obtrusively.
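As an illustration of the kind of heuristic involved (again my own toy version, not the authors’ method): if each recognized gesture removes roughly a fixed “bite” from a dish, counting gestures gives a crude estimate of the fraction eaten. The per-gesture bite fractions below are invented numbers.

```python
# Hypothetical heuristic: estimate the fraction of a dish eaten
# by counting recognized gestures, assuming each gesture type
# removes roughly a fixed bite. These fractions are made up.
BITE_FRACTION = {"scoop": 0.08, "cut": 0.05, "poke": 0.03}

def fraction_eaten(gestures):
    """gestures: list of recognized gesture labels for one dish."""
    total = sum(BITE_FRACTION.get(g, 0.0) for g in gestures)
    return min(total, 1.0)  # cap at "all of it"

meal = ["scoop", "scoop", "cut", "poke", "scoop"]
print(round(fraction_eaten(meal), 2))  # 0.32
```

A heuristic this naive would obviously be fooled by stirring, picking at food, and so on, which is exactly why it is less precise than weighing, as noted above.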

The article is careful to explain limitations. For example, the 1 cm grid is good for localizing objects in 2D, but not accurate for estimating weight; for a given array, there is a trade-off between these two measurements. They also tell us that a classifier trained for an individual person is highly accurate, but generic classifiers are less so. In other words, off-the-shelf software will probably not be very accurate, but if a person is willing to teach the tray his or her own style, then it can be very effective.

I would note that there may be quite a bit more variability in behavior than indicated in this study. Some people eat some foods with their fingers (e.g., pizza), some people feed others, and there may be considerable age (children?) and even cultural differences. All this means that the accuracy of a “generic” recognizer may be less than satisfactory.

Obviously, this particular technology could be combined with other sensors to provide more accurate data. It also would be a natural technology to integrate with diet-monitoring software, to automatically enter an estimate of what was consumed.

In earlier posts I commented on ‘Yumit’, which I tagged as “another bad idea”. My gripe with Yumit was its limited information (it mainly measures the weight of the food) and, above all, the associated mobile app and “game”, which I did not like.

Zhou et al. have done a good, solid technical study. But I am less convinced about the imagined uses of the techniques, which the authors do not really examine. They aim to “automatically monitor eating habits to improve dietary tracking” (p. 46), but the paper does not consider what this tracking would be used for. As in the case of Yumit, I think a lot of thought is still needed about how to use this technology in ways that empower people, but do not subject them to bullying.

I’m particularly concerned about prospective uses by authorities to enforce “correct” eating. For example, what if your insurance company demands to monitor what you eat, potentially adjusting coverage if you do not follow its behavioral guidelines? Or suppose children in school are monitored, and graded down for incorrect eating? The point is, of course: where does the data go, and who decides how it is used?

Considering participants’ comments about potential privacy issues, the authors note that snooping on diet is relatively innocuous compared to all the other intrusions we face. Maybe. They also suggest that the lower accuracy of the “generic” recognizer might actually be a good feature from the point of view of privacy, because it yields less accurate data about the individual. Again, maybe.


  1. B. Zhou, J. Cheng, P. Lukowicz, A. Reiss, and O. Amft, “Monitoring Dietary Behavior with a Smart Dining Tray,” IEEE Pervasive Computing, 14(4):46–56, 2015.
