When is a purported consensus not in fact a consensus?
On October 15, 2014, the Stanford Institute for Longevity and the Max Planck Institute for Human Development released a purported “Consensus on the Brain Training Industry from the Scientific Community”, which broadly asserted that there was no evidence that any cognitive training regimen can improve cognitive function, and was signed by 75 academics, mostly psychologists.
Then on December 17, 2014, a second group of doctors and scientists published a responding open letter, “COGNITIVE TRAINING DATA”  (henceforth, CTD), which stated: “Given our significant reservations with the [Stanford] statement, we strongly disagree with your assertion that it is a “consensus” from the scientific community.” As the press release issued in connection with that response  (referenced in [3a]) notes: “The [responding] letter is signed by 127 doctors and scientists, many of whom are luminaries in the field of neuroplasticity – the discipline that examines the brain’s ability to change. Signatories include members of the National Academy of Sciences, members of the Institute of Medicine, department chairs and directors of programs and institutes, as well as scientists who are founders of neuroscience companies. The signatories include scientists from 18 countries around the world.”
That answers the opening question: the purported consensus in the Stanford letter is not a consensus of the scientific community, despite its claim to be such.
Given the depth of the disagreement, and its significance to ordinary people concerned with their own aging, it’s worth looking into this a bit.
First, who are the authorities? From  we can obtain a breakdown of the scientists constituting each group:
Of the 75 scientists who signed the “anti” brain-training statement, 54 are behavioural researchers while only 11 are neuroscientific/medical researchers. This means the majority of scientists (72%) who argue brain training does not work have explored this topic from a behavioural performance point of view (for example, using explicit tests to measure memory, learning, comprehension). A minority (~15%) of “anti” brain-training scientists have explored this topic from a physiological point of view (for example, using brain scans to measure brain function, structure, connectivity).
Conversely, of the 131 scientists who signed the “pro” brain-training statement, only 29 are behavioural researchers while 88 are neuroscientific/medical researchers. This means the majority of scientists (67%) who argue brain training does work have explored this topic from a physiological point of view, while the minority (22%) have explored this topic from a behavioral performance point of view.
Second, let’s look at the character of the Stanford open letter. The focus of concern, stated at the beginning of the Stanford letter, is:
Computer-based cognitive-training software — popularly known as brain games
while the final summary statement is:
We object to the claim that brain games offer consumers a scientifically grounded avenue to reduce or reverse cognitive decline when there is no compelling scientific evidence to date that they do.
From first to last, the Stanford letter discusses only “brain games”, and admits no distinction whatsoever between possibly different implementations of “computer-based cognitive-training software”. That’s an extremely broad brush, indeed, covering everything from systems like Cogmed and Lumosity to the many sites with crossword puzzles or arithmetic practice, or combinations of such (e.g. Strong Brain), to sites like Posit Science/BrainHQ, founded by major neuroscientists and boasting extensive scientific studies (cf  and [5a]). Moreover, such a broad brush would tar second-language learning sites as well. Yet  and related papers demonstrate the power of second languages in cognition.
Rather sloppy for presumed serious scientists.
The Stanford letter neither reviews the scientific evidence claimed by some of the sites, nor does it dig into the details of the games. It is indeed likely that some of the “brain games” provided by some of the brain training sites do not in fact have direct, specific scientific studies validating that particular brain game. It does not follow that such brain games could not be scientifically validated, only that they have not yet been so validated. Other games on the sites may well have scientific backing.
Again, quite sloppy for presumed serious scientists.
Third, let’s look at the presence or absence of data. The Stanford letter’s concluding statement is extremely strong:
there is no compelling scientific evidence to date that they [brain-games] do [reduce or reverse cognitive decline]
Wow! In court a judge will tell you that “Ignorance of the law is no defense.” And in every science classroom, you will be told that “Ignorance of the literature is no defense!” The Stanford letter lists only seven references, and three of them are not concerned with brain-games (The effects of cardiovascular exercise on human memory; Aerobic exercise and neurocognitive performance; Bridging animal and human models of exercise-induced brain plasticity), leaving only four references concerned with the topic of the “Consensus”. On the other hand, the CTD site  has a link  to a partial list of 132 published studies on cognitive training benefits. Now, to establish the assertion
there is no compelling scientific evidence to date that they [brain-games] do [reduce or reverse cognitive decline]
the Stanford letter would have to refute substantially all of the CTD 132 studies (and more), which it in no way even attempts to do.
Moreover, the Stanford letter would need to refute the work of Kawashima and his group (cf ), also not addressed. One can make the case that Kawashima’s 2003 publication of Train Your Brain: 60 Days to a Better Brain in Japan (cf  for the English language version) and its subsequent implementation on Nintendo DS as Brain Age: Train Your Brain in Minutes a Day! kicked off the entire worldwide brain-training phenomenon. It is worth observing that Kawashima’s original research leading to Train Your Brain, as well as the current work (cf ), relies on two non-computerized tasks: elementary mathematical calculations and reading aloud. Both of these, of course, can be implemented on computers in a wide variety of ways.
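As a purely illustrative sketch (not Kawashima’s actual protocol — the function name and parameters here are hypothetical), a computerized version of the elementary-calculation task could be as simple as generating a sheet of timed one-digit problems:

```python
import operator
import random

# Map a display symbol to the corresponding arithmetic operation.
OPS = {"+": operator.add, "-": operator.sub, "x": operator.mul}

def arithmetic_drill(n_problems=20, seed=None):
    """Generate a sheet of elementary one-digit calculations, in the
    spirit of Kawashima's paper-based drills (a hypothetical sketch,
    not the Train Your Brain protocol itself)."""
    rng = random.Random(seed)
    sheet = []
    for _ in range(n_problems):
        a, b = rng.randint(1, 9), rng.randint(1, 9)
        sym = rng.choice(sorted(OPS))  # pick +, -, or x at random
        sheet.append((f"{a} {sym} {b}", OPS[sym](a, b)))
    return sheet
```

The point is not this particular implementation, but that the same underlying task admits countless computerized variants — exactly the breadth of possibility the Stanford letter’s broad brush glosses over.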
Establishing negative vs positive study results. In a setting in which there are potentially many (or unlimited) ways of accomplishing a given task, how does one prove that there is no method of achieving that task? One must explicitly or implicitly examine every possible method, and show that it will not work. When there are only finitely many conceivable methods, it is potentially possible to enumerate them and demonstrate that each does not achieve the task. But when there are an unlimited number of methods — as there are in brain training settings — much more is required.
The classic gold standard for such arguments is found in mathematics and computer science, wherein it is proved that certain algorithms cannot exist (e.g. Gödel’s incompleteness theorem for arithmetic  and Turing’s proof of the unsolvability of the halting problem ). In these settings, an infinite number of possible algorithms for the problem exist, and it must be shown that each fails. The core of the arguments is to assume that solutions do exist, and then derive a contradiction. The bedrock of the arguments is that the underlying concepts — formal arithmetic (Gödel’s theorem) or computer programs and machines (halting problem) — are given precise definitions, enabling contradictions to be derived.
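The shape of Turing’s diagonal argument can be sketched in a few lines. Suppose a correct decider existed; one then builds a program d that does the opposite of whatever the decider predicts about d itself. The helper below (names hypothetical, purely a model of the argument, not real computability machinery) shows that every possible prediction about d comes out wrong:

```python
def d_behavior(predicted_to_halt: bool) -> bool:
    """Model the diagonal program d, which consults the hypothetical
    halting decider about itself and then does the opposite: it loops
    forever if predicted to halt, and halts if predicted to loop.
    Returns whether d actually halts, given the prediction."""
    return not predicted_to_halt

# Whatever the supposed decider answers about d, reality contradicts it,
# so no total, correct halting decider can exist.
for prediction in (True, False):
    assert d_behavior(prediction) != prediction
```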
Of course, nothing in the fields of human psychology or neuroscience even approaches such gold standards of precise definition and proof. But the principle remains: if one is to assert that no method can achieve a given task, one must at least create sound arguments attempting to enumerate and deal with all possible solutions, or to argue that no such method could possibly exist, even if not with the precision of mathematics.
The four brain-game-related references cited by the Stanford letter are, to one degree or another, concerned with the transferability of training effects of particular tasks to other (presumably related) cognitive areas such as fluid intelligence; in most cases, the task trained was working memory. All four papers performed an analysis of related studies, as well as direct experiments. Broadly, the results leaned towards finding some (but not many) short-term transfer effects, with no long-term transfer effects being observed. However, all that can be inferred from these is that the single memory training method employed in these experiments does not produce any long-lasting transference.
Implicit in the language of these papers and in their being cited in the Stanford letter is the conclusion that no training method for short term memory would transfer to other brain systems. But as noted above, all that follows is that the particular training method described in each of the papers does not provoke long-lasting transfers.
The basic question here is this: Does there exist a sufficiently precise definition of “working memory” (or other cognitive subsystems) to support negative inferences as described above? The evidence would suggest: No. “Working memory” is typically defined in terms of, or in contrast to, “short-term memory”, which is typically defined in terms such as “faculties of the human mind that can hold a limited amount of information in a very accessible state temporarily.” (cf. )
Even granting the belief that working memory is deeply entwined with many cognitive systems, why or how would the training of a subsystem such as working memory have transfer effects on other brain subsystems? It might be the case for certain subsystems. But without established neurological theories of the activity of the subsystems and their neurological interaction, it seems like guesswork to assert that training one subsystem will or will not produce long-lasting transfer effects on another, much less to be able to quantify the extent of such transfer.
The implicit assertion in these four papers is that since the training methods used did not produce long-lasting transfer effects, no other training methods would either. This, of course, is suspect.
One other (somewhat simple) criticism (which may not apply to all the papers) concerns the measurement of long-term effects some time after cessation of the training. That is, looking to see if the training effects “stick” without maintenance. This seems silly, rather like giving someone reasonable athletic training for some months, then letting them stop training, and after six months of being a couch potato, measuring the effects of the athletic training.
Charitably, it would seem that the Stanford/Max Planck letter was somewhat ill-considered, and that at least some, if not many, of the signatories did not give it serious consideration before signing. That there always has been hype and hucksterism, if not outright fraud, around human development and medicine is obvious to everyone. Certainly, most of the Stanford letter signatories must have been concerned that the brain-game hype is getting overheated, and wanted to try to cool it off. However, to err as badly as shown above was just not wise. Much better to have carefully studied all the literature, developed a truly broad world-wide consortium of researchers and clinicians, and worked with regulatory authorities to develop and enforce standards of evaluation.
 Dr. Ryuta Kawashima, Train Your Brain: 60 Days to a Better Brain, Kumon Publishing North America, 172 pp.
 Halting problem
 Nelson Cowan, “What are the differences between long-term, short-term, and working memory?”, Progress in Brain Research, Volume 169, 2008, pp. 323–338.