Category Archives: Psychology

Reconstructing the “First Flower”

If there is anything I love as much as birds, butterflies, and bees, it must be flowers. They are everywhere, and they are beautiful. (They are, after all, all about s*x.)

Flowers and flowering plants emerged over 130 million years ago, during the Cretaceous period, at the height of the dinosaur era, though it isn’t certain how dinosaurs and flowers may have co-evolved. I like to think that dinosaur predation shaped the evolution of flowering plants, but who knows?

One of the great mysteries of evolution is how flowers began. There are so many flowers, with so many diverse features and designs. There must have been a “first flower”, but what was it like?

This summer the eFLOWER project (“A framework for understanding the evolution and diversification of flowers”) published a new study that examines this question [2]. This large group of collaborators augments studies of fossil remains and genetic patterns among living plants with a mathematical model of the evolution of flowers.

The study is based on a large dataset of current and fossil flowers, comprising over 13,000 trait observations. Using information about molecular dating and fossils, they examine possible evolutionary trees. Many, many possible trees.

If I understand the method correctly, the analysis generated possible ‘ancestors’ based on the relationships among current and fossil flowers, and then tested candidates by running thousands to millions of simulated generations of evolution. (I don’t fully understand these computations.)
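To make the flavor of such an analysis concrete, here is a toy sketch of the core technique, model-based ancestral state reconstruction. It scores a single binary trait on a tiny hand-made tree under a symmetric two-state Markov model, using Felsenstein’s pruning algorithm. The real eFLOWER analysis is vastly more elaborate (many traits, hundreds of species, uncertainty over the trees themselves), and everything in this snippet (the tree, branch lengths, and rate) is invented for illustration:

```python
# Toy sketch of model-based ancestral state reconstruction (NOT the
# eFLOWER pipeline): one binary trait on a tiny hand-made tree, scored
# under a symmetric two-state Markov ("Mk") model with Felsenstein's
# pruning algorithm. The tree, branch lengths, and rate are invented.
import math

RATE = 0.5  # assumed trait flip rate per unit of branch length

def transition(i, j, t):
    """P(state j at the end of a branch of length t | state i at start)."""
    p_same = 0.5 + 0.5 * math.exp(-2 * RATE * t)
    return p_same if i == j else 1.0 - p_same

def partial_likelihoods(node):
    """Likelihood of the tip data below `node`, for each state at `node`."""
    if "state" in node:  # a tip with an observed trait value
        return [1.0 if s == node["state"] else 0.0 for s in (0, 1)]
    like = [1.0, 1.0]
    for child, brlen in node["children"]:
        child_like = partial_likelihoods(child)
        for s in (0, 1):
            like[s] *= sum(transition(s, c, brlen) * child_like[c]
                           for c in (0, 1))
    return like

# Hypothetical mini-tree; say trait 1 = "whorled perianth", 0 = "spiral".
tree = {"children": [
    ({"children": [({"state": 1}, 1.0), ({"state": 1}, 1.0)]}, 0.5),
    ({"state": 0}, 1.5),
]}

root = partial_likelihoods(tree)
for s in (0, 1):
    print(f"P(root state {s} | tips) ~ {root[s] / sum(root):.3f}")
```

Scale this up to dozens of traits, hundreds of species, and uncertainty over the tree itself, and you get a sense of why the full computation is so heavy.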

This is a large computation!

The result of this heroic effort is a reconstruction of the ‘first flower’, which is bisexual, with its petal-like organs arranged in whorls.

Image caption: 3D model of the ancestral flower reconstructed by the new study, showing multiple whorls of petal-like organs, in sets of three.

This finding is interesting because the fossil record shows a radiation of different flowers that share some, but not all these features. In other words, the adaptations would amount to losing features of the ancestral flower.

“Our results suggest two different evolutionary pathways for the reduction in number of whorls in early angiosperm evolution.” [2]

The authors speculate on possible advantages in such reductions, perhaps supporting increased specialization.

This idea would answer questions about how one kind of flower could evolve into a radically different structure (they all evolved from a common ‘super’ flower). Of course, we now want to know how this first flower might have evolved from ‘pre-flower’ plants.

I’m sure this will be a controversial conclusion.

For one thing, it’s a gigantic amount of math, based on data and assumptions that must be examined carefully. I imagine that it will be difficult to independently replicate this computation.

This result calls into question generally held theories based on other methods. Reexamination of the earlier work may or may not yield a new consensus.

It will be interesting to see if additional fossil evidence can be found that documents more of the actual flowers of that period.

It is worth pointing out that this study has generated a visualization of a completely hypothetical flower, which has never existed as far as we know. The wonders of computational science!


  1. eFLOWER: A framework for understanding the evolution and diversification of flowers. 2017. http://eflower.myspecies.info/.
  2. Hervé Sauquet, Maria von Balthazar, Susana Magallón, James A. Doyle, Peter K. Endress, Emily J. Bailes, Erica Barroso de Morais, Kester Bull-Hereñu, Laetitia Carrive, Marion Chartier, Guillaume Chomicki, Mario Coiro, Raphaël Cornette, Juliana H. L. El Ottra, Cyril Epicoco, Charles S. P. Foster, Florian Jabbour, Agathe Haevermans, Thomas Haevermans, Rebeca Hernández, Stefan A. Little, Stefan Löfstrand, Javier A. Luna, Julien Massoni, Sophie Nadot, Susanne Pamperl, Charlotte Prieu, Elisabeth Reyes, Patrícia dos Santos, Kristel M. Schoonderwoerd, Susanne Sontag, Anaëlle Soulebeau, Yannick Staedler, Georg F. Tschan, Amy Wing-Sze Leung, and Jürg Schönenberger, The ancestral flower of angiosperms and its early diversification. Nature Communications, August 1 2017. https://www.nature.com/articles/ncomms16047

 

Close Reading Apps: Brilliantly Executed BS

One of the maddening things about the contemporary Internet is the vast array of junk apps—hundreds of thousands, if not many millions—that do nothing at all, but look great. Some of them are flat-out parodies, some are atrocities, many are just for show (no one will take us seriously if we don’t have our own app). But some are pure nonsense in a pretty package. (I blame my own profession for creating such excellent software development environments.)

The only cure for this plague is careful and public analysis of apps, looking deeply into not only the shiny surface, but the underlying logic and metalogic of the enterprise. This is a sort of “close reading” of software, analogous to what they do over there in the humanities buildings.  Where does the app come from? What does it really do, compared to what they say it does? Whose interests are served?

Today’s examples are two apps that pretend to do social psychology: Crystal (“Become a better communicator”) and Knack (“for unlocking the world’s potential”).


“The technology of touch”

I have frequently blogged about haptics (notably prematurely declaring 2014 “the year of remote haptics”), which is certainly a coming thing, though I don’t think anyone really knows what to do with it yet.

A recent BBC report  “From yoga pants to smart shoes: The technology of touch”  brought my attention to a new product from down under, “Nadi X”, “fitness tights designed to correct your form”. Evidently, these yoga pants are programmed to monitor your pose, and offer subtle guidance toward ideal position via vibrations in the “smart pants”.

(I can’t help but recall a very early study on activity tracking, with the enchanting title, “What shall we teach our pants?” [2]  Apparently, this year the answer is, “yoga”.)

Source: Wearable Experiments Inc.

It’s not totally clear how this works, but it is easy to imagine that the garment detects your pose, computes corrections, and issues guidance in the form of vibrations. Given the static nature of yoga, detecting and training toward the target pose will probably work, at least for beginners. I’d be surprised if even moderately experienced practitioners found this much help, because I don’t know how refined the sensing and feedback really are. (I’m prepared to be surprised, should they choose to publish solid evidence about how well this actually works.)
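To make the speculation concrete, here is a purely hypothetical sketch of the kind of sense-compare-buzz loop such a garment might run. None of this is Nadi X’s actual design; the joint names, target angles, tolerances, and vibration interface are all invented:

```python
# Purely hypothetical sketch of a pose-correction loop for a "smart"
# yoga garment. Joint names, target angles, tolerances, and the
# vibration interface are all invented for illustration.
from dataclasses import dataclass

@dataclass
class JointReading:
    name: str      # e.g. "left_knee"
    angle: float   # degrees, from an imagined stretch/IMU sensor

TARGET_POSE = {"left_knee": 90.0, "right_knee": 175.0, "hips": 120.0}
TOLERANCE_DEG = 8.0   # assumed dead band before any feedback fires

def corrections(readings):
    """Compare measured joint angles to the target pose and yield
    (joint, error) pairs that exceed the tolerance."""
    for r in readings:
        target = TARGET_POSE.get(r.name)
        if target is None:
            continue
        error = r.angle - target
        if abs(error) > TOLERANCE_DEG:
            yield r.name, error

def feedback_tick(readings, buzz):
    """One cycle: buzz near each out-of-tolerance joint, with
    intensity scaled to the size of the error."""
    for joint, error in corrections(readings):
        intensity = min(1.0, abs(error) / 45.0)  # cap at full power
        buzz(joint, intensity)

# Example cycle with fake readings and a stand-in motor driver.
fake = [JointReading("left_knee", 72.0), JointReading("hips", 121.0)]
feedback_tick(fake, buzz=lambda joint, power:
              print(f"vibrate near {joint} at {power:.0%}"))
```

Even this cartoon version shows where the hard questions are: how fine-grained the sensing is, and how a handful of vibration motors can communicate a nuanced correction.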

Beyond the “surface” use as a tutor, the company suggests a deeper effect: it may be that this clothing not only guides posture but can create “a deeper connection with yourself”. I would interpret this idea to mean, at least in part, that the active garment can promote self-awareness, especially awareness of your body.

I wonder about this claim. For one thing, there will certainly be individual differences in perception and experience. Some people will get more out of a few tickles in their trousers than others do. Other people may be distracted or pulled away from sensing their body by the awareness of their garment groping them.

Inevitably, touch is sensual, and quickly leads to, well, sex. I’m too old not to be creeped out by the idea of my clothing actively touching me, especially under computer control. Even worse when the computer (your phone) is online, so we can remotely touch each other via the Internet.

Indeed, the same company that created Nadi X created a product called “fundawear” which they say is, “the future of foreplay” (as of 2013).  Sigh. (This app is probably even more distracting than texting while driving….)

Connecting your underwear to the Internet—what could possibly go wrong? I mean, everything is private on your phone, right?  No one can see, or will ever know. Sure.

I’m pretty sure fundawear will “work”, though I’m less certain of the psychological effects of this kind of “remote intimacy”. Clearly, this touching is to physical touching as video chat is to face-to-face conversation. Better than nothing, perhaps, but most people will prefer to be in person.

Looking at the videos, it is apparent that the haptics have pretty limited variations. Only a few areas can buzz you, and the interface is pretty limited, so there are only so many “tunes” you can play. The stimulation will no doubt feel mechanical and repetitive, and probably won’t wear very well. Sex can be many things, but it shouldn’t become boring.

(As a historical note, I’ll point out that, despite their advertising claims, this is scarcely the first time this idea has ever been done. The same basic idea was demonstrated by MIT students no later than 2009 [1], and I’ll bet there have been many variations on this theme.  And the technology is improving rapidly.)


This is a very challenging and interesting area to explore. After following developments for the last decade and more, I remain skeptical about how well any sensor system can really communicate body movement beyond the most trivial aspects of posture.

My own observation is that an interesting source of ideas comes from the intersection of art and wearable technology. In this case, I argue that, if you want to learn about “embodied” computing, you really should work with trained dancers.

For example, you could do far worse than considering the works of Sensei Thecla Schiphorst, a trained computer scientist and dancer, whose experiments are extremely creative and very well documented [4].

One of the interesting points that I have learned from Sensei Thecla and other dancers and choreographers is how much of the experience of movement is “inside”, and not easily visible to the computer (or an observer). I.e., the “right” movement is defined by how it feels, not by the pose or path of the body. Designers of “embodied” systems need to think “from the inside out”, to quote Schiphorst.

In her work, Schiphorst has explored various “smart garments” which reveal and augment the body and movement of one person, or connect to the body of another person.

Since those early days, these concepts have appeared in many forms, some interesting, and many not as well thought out as Sensei Thecla’s work.


  1. Keywon Chung, Carnaven Chiu, Xiao Xiao, and Pei-Yu Chi, Stress outsourced: a haptic social network via crowdsourcing, in CHI ’09 Extended Abstracts on Human Factors in Computing Systems. 2009, ACM: Boston, MA, USA. p. 2439-2448. http://tmg-trackr.media.mit.edu/publishedmedia/Papers/390-Stress%20OutSourced%20A%20Haptic/Published/PDF
  2. Kristof Van Laerhoven and Ozan Cakmakci, What shall we teach our pants?, in Digest of Papers, Fourth International Symposium on Wearable Computers. 2000. p. 77-83.
  3. Thecla Schiphorst, soft(n): toward a somaesthetics of touch, in Proceedings of the 27th international conference extended abstracts on Human factors in computing systems. 2009, ACM: Boston, MA, USA. http://www.sfu.ca/~tschipho/softn_alt_chi.pdf
  4. Thecla Henrietta Helena Maria Schiphorst, The Varieties of User Experience: Bridging Embodied Methodologies from Somatics and Performance to Human Computer Interaction, Ph.D. thesis, Center for Advanced Inquiry in the Integrative Arts (CAiiA). 2009, University of Plymouth: Plymouth. http://www.sfu.ca/~tschipho/PhD/PhD_thesis.html

Bonus video: Sensei Thecla’s ‘soft(n)’ [3].  Exceptionally cool!

 

Cliff Kuang on UX Design for Self-Driving Cars

With news every week about yet more self-driving cars (not to mention Uber’s repeated robotic middle finger to the whole world), it is interesting to read Cliff Kuang’s article in FastCo-Design about “The Secret UX Issues That Will Make (Or Break) Self-Driving Cars” (originally published in February).

The main point of the piece is that, for driverless cars to succeed, not getting lost and not killing people isn’t enough. People must want to use them and, most importantly, feel safe and relaxed.

The goal isn’t to replace the unpleasantness of driving with the unpleasantness of riding in a robot car; it is to replace driving with having a nice ride. Current efforts fall short on this.

Illustrating this point, Kuang describes a video of a man trying out his self-driving car.

“He hasn’t replaced driving with, say, watching a movie or relaxing—instead, he’s replaced the stress of driving with something worse. He looks at the road, he looks at the wheel, he looks at his hands. He’s scared. And he’s smart to be scared.”

This is a horrible experience, even if the technology works flawlessly.  And there are many such videos on YouTube.

And, as Kuang says, this is a design problem.

In contrast to the YouTube horror shows, he recounts an experience with a self-driving Audi: “The car, by design, was calming me before any worries could surface.”

Kuang interviewed Brian Lathrop, who leads Audi’s development effort, about how they are designing the experience of operating a car that drives itself. The Audi group is striving to design a self-driving car that you can trust.

He boils down the design philosophy to ‘3+1’ things that the human rider/operator needs to know (sketched as explicit program state just after the list):

  • Who is driving (me or the car)?
  • What is it going to do next?
  • What is the car seeing?
  • When does control transition between me and the car?
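Kuang’s article stays at the level of experience design, but the ‘3+1’ questions map naturally onto explicit program state. Here is a minimal toy sketch; the states and transition rules are my own illustration, not Audi’s design:

```python
# Minimal sketch of the '3+1' questions as explicit program state.
# States and transition rules are invented for illustration; they are
# not Audi's design. The point: the system must always be able to
# answer who is driving, what comes next, what is seen, and when
# control transitions.
from enum import Enum, auto

class Driver(Enum):
    HUMAN = auto()
    CAR = auto()
    TRANSITIONING = auto()  # handover in progress; must be explicit!

class CarState:
    def __init__(self):
        self.driver = Driver.HUMAN   # 1. who is driving?
        self.next_maneuver = "none"  # 2. what will it do next?
        self.seen = []               # 3. what is the car seeing?

    def request_handover(self, to):
        """+1: a control transition is a distinct, visible state,
        never an instantaneous, silent flip."""
        self.driver = Driver.TRANSITIONING
        # ...confirm driver attention / grip before completing...
        self.driver = to

    def display(self):
        return (f"driving: {self.driver.name}, "
                f"next: {self.next_maneuver}, "
                f"sees: {', '.join(self.seen) or 'clear road'}")

car = CarState()
car.seen = ["cyclist ahead", "car in left lane"]
car.next_maneuver = "slow for cyclist"
car.request_handover(Driver.CAR)
print(car.display())
```

Modeling the handover as its own state is exactly Lathrop’s point: control transitions must be visible and deliberate, never a silent flip.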

The article describes the careful design that tells you what is going on, what is coming soon, and what is possible for you to do. The controls and feedback are prominent and designed to be calming. (They eschew red or green lights, which unconsciously signal ‘right’ or ‘wrong’.) The experience is said to quickly become “boring”—which is actually what they are shooting for.

Another theme in their design is to “retrofit” familiar technology, rather than make up completely new metaphors. For example, one concept uses the familiar steering wheel, which pulls away and forward to signal automatic driving; the human can grab the wheel and pull it back to take control. The idea is to feel comfortable and “obvious”.

Lathrop’s training in psychology and his experience designing aircraft cockpit controls have taught him to be concerned above all that the human user not be confused about the state of the system. This is what causes air disasters, and it will also cause car crashes.

A person operating an automated car needs to clearly understand the state of the car at all times (this is what the 3+1 principles are about). Following this principle, the Audi has displays that show a diagram of the nearby traffic—to show that the car sees what you see—and indications that a turn is going to happen.

Part of the challenge is to manage human expectations for the technology, both as operators and, in the case of cars, as pedestrians faced with automated vehicles. Expectations are conditioned by a combination of personal experience, subtle behavior, and messages about the capabilities of the system. For example, Tesla’s decision to call its system ‘Autopilot’ sets expectations far beyond the capability of the current technology. And a robot car that behaves “politely” enjoys the confidence of pedestrians (rightly or wrongly).

I think it is instructive that this group at Audi has been working for more than four years, patiently learning how to do it right, and how to make the ride “boring”. This contrasts with “the Silicon Valley mindset of just dropping beta tests upon an unsuspecting populace” (and in the case of Uber, shoving them down the throat of the populace). This “beta dropping” is, as Kuang says, “not only naive, but also counterproductive.”

My own reaction as I read this article was “phew!” It’s a relief that some grown-ups are working on the problem.


  1. Cliff Kuang, The Secret UX Issues That Will Make (Or Break) Self-Driving Cars, in Co.design. 2016. https://www.fastcodesign.com/3054330/innovation-by-design/the-secret-ux-issues-that-will-make-or-break-autonomous-cars

 

(PS.  Wouldn’t “Just Dropping Beta” be a good name for a band?)

Robot Wednesday

Google Translate Advances Toward Interlingual Model

When I was young and learning Anthropology and Psycholinguistics, we learned that computer translation, like speech understanding, was essentially impossible, if only because we hadn’t the foggiest notion of how people do these things—so how could we program a computer to do it?

In my lifetime, this certainty has disappeared, though we still haven’t a clue how people do it. First, speech generation and then recognition became extremely reliable, based on probabilistic computational models that successfully mimic human behavior without mimicking humans. It shouldn’t work, but it does.

By the way, similar advances are happening in the analysis of emotions and other non-verbal behavior.  Computers are getting very good at inferring human emotion from sensors, even though we humans have little clue how they do it.

In the past two decades, language translation has become feasible for computers, via various learning processes. Again, these computer translations are very good, but the method has nothing to do with how people understand language (as far as we know). Again, it shouldn’t work, but it does.

This fall researchers at Google deployed Google’s Neural Machine Translation, a system that not only learns to translate between two languages, but learns to translate between many languages at the same time [1]. A side effect of this process is the ability to, at least sort of, translate between pairs of languages for which there is no sample data (termed “zero-shot” translation).

In this sense, the system is learning something about “human language” in general, which linguists and psychologists have been seeking to understand for centuries, without clear success. Wow!

The basic idea is to use large samples of translations (i.e., from humans), to learn enough to translate examples not in the training data. Interestingly, the system works from sentences, i.e., the data is a collection of sentences with corresponding sentences in the second language. Given that a sentence is neither an atomic unit of meaning, nor a complete context for the meaning, it is interesting that the learning works so well from this data. For that matter, there isn’t always a one-to-one translation between sentences in two languages. Theoretically, there isn’t any a priori reason this method should work at all, but it does!

This approach works for one pair of languages, e.g., English to Spanish, but doing it one pair at a time means that you need on the order of N² translators for N languages. There are thousands of human languages, and Google currently translates about 100.

The new system gloms all of that into a single model, tagging each example with the target language. (This aspect of the method is trivial!) With these tags, the model learns to translate everything into everything. Cool!
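The tagging mechanism in the paper really is that simple: an artificial token naming the target language is prepended to each source sentence (the paper writes them in the style “<2es>”), and one shared model trains on the union of all the pairwise corpora. Here is a sketch of that data preparation, with toy sentences standing in for the enormous parallel corpora actually used:

```python
# Sketch of the data preparation for multilingual NMT as described in
# Johnson et al. [1]: a single artificial token naming the TARGET
# language is prepended to each source sentence, and one shared model
# trains on all pairs at once. One shared model replaces on the order
# of N*(N-1) pairwise systems. The sentences here are toy examples.

def tag_example(src_sentence, tgt_lang):
    """Prepend the target-language token, e.g. '<2es>' for Spanish."""
    return f"<2{tgt_lang}> {src_sentence}"

# Training data: (source, target, target-language code) triples drawn
# from whatever parallel corpora exist. Note there is NO pt->es data.
corpus = [
    ("How are you?", "¿Cómo estás?", "es"),     # en -> es
    ("Como você está?", "How are you?", "en"),  # pt -> en
]

training_pairs = [(tag_example(src, lang), tgt)
                  for src, tgt, lang in corpus]
for src, tgt in training_pairs:
    print(f"{src!r}  ->  {tgt!r}")

# At inference time the same tag steers the shared model, even toward
# a pair it never saw directly ("zero-shot", e.g. pt -> es):
zero_shot_input = tag_example("Como você está?", "es")
print(f"zero-shot query: {zero_shot_input!r}")
```

All of the hard work happens inside the shared neural model; the tag is just a steering signal, which is what makes the zero-shot result so striking.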

Why wasn’t this done before now? Scale. The combined model and dataset is absurdly large, and takes corresponding computing resources to handle it. The training step for the experiment reported in [1] takes weeks to run on 100 GPUs, which means it would have been impossible even a decade ago.

While the scale is impressive, and the notion of doing many-to-many learning in a single model is cool, the big headline is that this method seems to (somehow) learn to translate between languages for which it has no direct examples. So, when it learns from an English to Spanish sample and a Portuguese to English sample, the resulting neural model can also do a “transitive” Portuguese to Spanish translation about as well as a model trained directly on those two languages.

This is cool and remarkable for “the pleasant fact that zero-shot translation works at all”; it is also “the first demonstration of true multilingual zero-shot translation” ([1], p. 8).

This unprecedented result leads to pretty basic questions about just what is going on here. How does this “zero-shot” translation work? In particular, we wonder if the model is actually learning some kind of abstract, general meta-language, an “interlingua”. And if so, how can we understand this interlingua?

The paper offers only the first look at these questions, with some data that offers “hints at a universal interlingua representation”. My own view is that the data suggests that the answer may be complicated, in that the model is likely learning more than one kind of translation. But there is certainly much to study here!

Considering that this sort of machine translation was generally considered to be flat out impossible a few decades ago, and considering that linguists have been fruitlessly searching for an interlingua for centuries, this work is truly remarkable.

As I commented above, it is yet another case where computational methods have achieved performance roughly equivalent to human cognition, even though it is obviously not a model of how human cognition and language works.

When you think about it, this is one of the most remarkable areas of intellectual advance of the early twenty-first century. I suspect that, as dinosaurs like me die off, there will be a remarkable synthesis of the immense, laboriously hand-made legacy of language theory and neurolinguistics with these empirically derived computational models. The result will be an elegant meta-theory of what “human language” actually is, with an understanding of the “design decisions” that are incorporated into human nervous systems (and specific computer models), and a concomitant story about how these evolved and relate to our kindred species on Earth.


  1. Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean, Google’s Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. Google, 2016. https://arxiv.org/abs/1611.04558
  2. Sam Wong, Google Translate AI invents its own language to translate with. New Scientist, 2016. https://www.newscientist.com/article/2114748-google-translate-ai-invents-its-own-language-to-translate-with/

 

“Nature” Walks Change Your Brain?

This summer Gregory Bratman and colleagues at Stanford published results from a study of brain activity in urban settings [1]. They compared measurements in two conditions: after a 90-minute walk in a “natural” setting and after a 90-minute walk in an urban setting.

They found less self-reported “rumination” (often a flag for depression or other mental distress), and used neuroimaging to measure regional cerebral blood flow (rCBF). They conclude that “Our results indicate that nature experience reduced rumination and sgPFC activation.” (p. 8568)

With the important implication that:

“Given the documented link between rumination and risk for depression and other psychological illnesses, the reduction in rumination among those with the nature experience suggests one possible mechanism by which urbanization—which reduces opportunities for nature experience—may be linked to mental illness.” (p. 8568)

The authors note that this study fits into a body of literature that finds mental benefits in “natural” environments (e.g., ‘Blue Mind‘) and detriments from urban environments, and also a growing belief by some that these benefits reflect long-term changes in brain function.

This is hyped in the media, with bogus headlines such as, “How Walking in Nature Changes the Brain“.

I remain skeptical.

Let me be clear: I have no doubt at all that living in cities is stressful, and most likely psychologically harmful for many people. And I have no doubt that there are many non-urban environments that people like, and which are probably beneficial for people in many ways. (If nothing else, getting away from the city, well, gets you away from the city.)

But I have problems with this study and similar ones that look at single mechanisms—not coincidentally, mechanisms we can measure with current technology—and tell stories about how these mediate between “urban life” and mental disorder.

With all of the things going on in a city (visual stimulation, noise, smells, crowds of people, and massive amounts of social interaction), we know that there are lots of “stressors”, dangers, traffic, and social signals that require attention and arousal almost constantly. There is little doubt that all parts of the brain are really active at all times.

So it is shooting fish in a barrel to find high levels of brain activity, whatever that means. For that matter, it’s not especially difficult to find patterns in self-report questions either.

So problem number one is that the variables measured are only a very limited slice of whatever is going on in people’s heads, and there is no particular reason that these are especially important.

But I have bigger problems with the study.

First of all, I have trouble with the terminology, which consistently refers to urban green space as a “natural” setting. This is clearly a loaded phrase, and one that is not only arguable but conceals a lack of clarity. What is the definition of a “natural”, or at least non-urban, environment? If you are going to claim that there is a psychological difference between these two environments, then what is it that distinguishes them?

The choice is arbitrary, and both of the settings are urban, to my eyes. Actually, the “urban” setting is hellishly hyper-urban, while we have little clear picture of the “natural” setting. Are they walking on trails? Are there paved roads with cars passing? Can you hear traffic? Just how “non-urban” is this “natural” area?

A third issue is the subjects. In particular, other than the note that they are all city dwellers, there is no control for their own past experience and perception of “urban” or “natural” environments. This is an important point because it is well known that people habituate to environments, and have dramatically different concepts of what is a “natural” environment depending on their personal history.

It is extremely likely that such history could be a major factor in the enjoyment and benefits of urban green spaces in several ways. Past good experiences could predispose one to quickly relax and attend to nature. Past bad experiences could induce anxiety, perhaps different from the everyday tension, but far from calm. The situation could be familiar, or it could be novel, either of which could create a short term pleasant experience. (Similar arguments apply to the “urban” setting.)

For that matter, what would be the comparable effect of any 90-minute break from ordinary routine? A game of golf? A walk through a quiet museum? A nap? And what were the participants taken away from? Were they getting time off from work, or were they giving up their own leisure time? There is so much more context to this walk besides the setting.

Regardless, I have to reserve judgment about the supposed effects of any 90-minute walk. Certainly, I would wonder whether any such effects last long, or persist once the person returns to normal activities.

I also note that the participants were given phones and instructed to take pictures. In other words, this was a “tourism” task, and not unstructured strolling. The participants must have been thinking about taking pictures, which raises questions about what they were attending to in the environment.

It isn’t clear whether the participants had their own phones or were connected to the Internet. I assume not—that would really throw a lot of confounding variables into the game. But even if they only had a phone to take pictures, they were attending to a small mobile screen. We know this is psychologically powerful, pulling attention away from the environment, natural or not. And screen use itself may be linked to behavioral and neural changes.

In the end, I’m not especially convinced of the broader conclusions. The purported links between mental illness and these measures are no more than suggestive. The link between the environment and the measurements is very weak because of the numerous confounding factors mentioned above. I’m particularly skeptical of the entire notion that nebulously defined mental illnesses with some correlation to “living in a city” are mediated by specific neurological changes. “Living in a city” is a complicated and lifelong activity, and there are many, many things going on in the body and brain, all at once.

All that said, I certainly expect that spending time in urban green spaces (let’s avoid the loose term “natural”) makes people happier and healthier. We don’t really need to have this kind of alleged neurological connection to want more green space and nicer urban spaces, or to want to get out of the city for our own sanity.


  1. Gregory N. Bratman, J. Paul Hamilton, Kevin S. Hahn, Gretchen C. Daily, and James J. Gross, Nature experience reduces rumination and subgenual prefrontal cortex activation. Proceedings of the National Academy of Sciences, 112 (28):8567-8572, July 14, 2015. http://www.pnas.org/content/112/28/8567.abstract

Brain Games

Last month Cyrus Foroughi and colleagues at George Mason University published a small but provocative study, “Placebo effects in cognitive training” [1]. The basic experiment tested the effects of “cognitive training” on “fluid intelligence”. This is, of course, “brain training”, which is a multi-million dollar industry that claims to have scientific support.

The manipulation Foroughi et al. used was different recruitment materials, which deliberately manipulated placebo effects by attracting believers. The idea is that if you strongly believe that the “brain training” will work, then there will be at least temporary gains, simply from this belief. If the materials promise results, and attract people who believe in the promise, then this might have a major effect on the observed results.

The results show that simply by using alternative advertising flyers, a short-term effect can be seen. The people who responded to a flyer about “brain training”, with the assertion that studies have shown that it works, showed improved intelligence scores after training. Other people who responded to a neutral advertisement showed no gains from the same training.

Uh oh! It only works if you tell them confidently that it will work. A classic placebo effect.
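To see how recruitment alone can manufacture an apparent training gain, here is a toy simulation of the confound. Every number in it (baseline scores, noise, the size of the expectation bump) is invented purely for illustration:

```python
# Toy simulation of the recruitment confound: the training itself is
# inert in BOTH groups, but the group recruited with an "it works!"
# flyer gets an expectation-driven bump at post-test. All numbers
# here are invented for illustration.
import random
import statistics

random.seed(1)
N = 50
TRUE_TRAINING_EFFECT = 0.0  # the training itself does nothing
PLACEBO_BUMP = 5.0          # assumed expectation effect for believers

def run_group(placebo_bump):
    """Return the mean pre-to-post 'IQ' gain for one recruited group."""
    pre = [random.gauss(100, 15) for _ in range(N)]
    post = [score + TRUE_TRAINING_EFFECT + placebo_bump
            + random.gauss(0, 5) for score in pre]
    return statistics.mean(p2 - p1 for p1, p2 in zip(pre, post))

gain_hyped = run_group(PLACEBO_BUMP)  # "brain training works!" flyer
gain_neutral = run_group(0.0)         # neutral flyer

print(f"mean gain, hyped flyer:   {gain_hyped:+.1f} points")
print(f"mean gain, neutral flyer: {gain_neutral:+.1f} points")
# A naive reading credits the training; the only real difference
# between the groups is who showed up and what they expected.
```

The point is not the specific numbers but the shape of the result: a study that does not control recruitment cannot tell these two explanations apart.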

The authors note that the published literature does not control for this potential effect, and generally does not report how subjects were recruited. This calls into question how much, if any, of the reported gains in “fluid intelligence” should be attributed to the training, and how much to expectations and selective recruitment.

This study gathered some buzz, as it is (quite rightly) seen as a serious challenge to the claims of the “brain training” industry.

I’m not too surprised by this finding, or by the strong probability that cognitive training is mostly bogus. Anyone who has tried to do experimental social psychology knows how tricky and pervasive placebo effects are. Any study that doesn’t take care will have serious problems.

In any case, I don’t really think that anyone knows much about “intelligence”, “fluid” or otherwise. And we certainly don’t have any detailed understanding of how “cognition” works in the brain. Any advertisements talking about “brain training” or “brain health” are obviously bogus on their face.

Do cognitive activities improve cognitive abilities? Sure. It’s called “practice” and “learning” and so on.

Are there magical games you can play for a few minutes that will change general cognitive abilities for long periods of time? I doubt it.

In particular, I’m very skeptical that little games on the web or a mobile device will have deep effects on your “intelligence”. Of course, spending lots of time with a screen can have significant effects on vision, attention, and social life among other things.  So if there are cognitive gains (which I doubt), they should be considered against the potential damage that computer use may be causing.


  1. Cyrus K. Foroughi, Samuel S. Monfort, Martin Paczynski, Patrick E. McKnight, and P. M. Greenwood, Placebo effects in cognitive training. Proceedings of the National Academy of Sciences, 113 (27):7470-7474, July 5, 2016. http://www.pnas.org/content/113/27/7470.abstract