Can a neuroscientist read your mind?

Are the contents of your mind really 'confidential' or will your thoughts one day be accessible to others?

Media reports on recent research have claimed that neuroscientists are now effectively able to perform ‘mind reading’. Such reporting inevitably raises ethical questions about the applications to which such research might eventually be put and, judging by some of the comments that the online versions of these articles have provoked, has alarmed some people about the eventual path such research might take. But how accurate is the claim that neuroscientific techniques can read minds?

Earlier this year an article in the Guardian (http://www.guardian.co.uk/science/2012/jan/31/mind-reading-program-brain-words) reported that:

‘Scientists have picked up fragments of people’s thoughts by decoding the brain activity caused by words that they hear.’

Reporting on the same experiment, the Daily Mail (http://www.dailymail.co.uk/sciencetech/article-2095214/As-scientists-discover-translate-brainwaves-words–Could-machine-read-innermost-thoughts.html) claimed:

‘It’s a staggering development that could have tremendous implications… judges could use mind-reading machines to find out if murder suspects are telling the truth… mind reading devices might be used to eavesdrop covertly on the most private thoughts and dreams.’

The experiment in question, conducted by Dr Brian Pasley and colleagues (1), involved the recruitment of patients who were about to undergo brain surgery. The researchers placed electrodes on the auditory areas of the brain while the patients’ skulls were open and their cerebral cortex exposed. They then played the patients a sequence of different words and recorded the electrical activity generated by the auditory cortex in response to this speech. Using complex modelling procedures they were able to reconstruct the spoken words solely from the neural signals recorded by the electrodes. Furthermore, they were able to apply this model successfully to the electrical responses generated by a separate set of words that had not been used in creating the model (i.e. words that were in effect ‘novel’ to the model), suggesting that the model could in principle be applied to reconstruct any speech heard by the patient.
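
The decoding step can be illustrated with a toy simulation. This is a minimal sketch using synthetic data and ordinary ridge regression, not Pasley et al.'s actual stimulus-reconstruction model; all array sizes and the noise level are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the experiment: at each time point a
# stimulus feature vector (think: one spectrogram slice) evokes a
# noisy, roughly linear response across a set of electrodes.
n_train, n_test, n_feat, n_elec = 200, 50, 5, 20
W_true = rng.normal(size=(n_feat, n_elec))            # unknown encoding
S_train = rng.normal(size=(n_train, n_feat))          # training stimuli
R_train = S_train @ W_true + 0.1 * rng.normal(size=(n_train, n_elec))

# Fit a linear *decoding* model (neural activity -> stimulus)
# by ridge regression: D = (R'R + lambda*I)^-1 R'S
lam = 1.0
D = np.linalg.solve(R_train.T @ R_train + lam * np.eye(n_elec),
                    R_train.T @ S_train)

# Apply the fitted decoder to responses evoked by 'novel' stimuli
# that played no part in building the model.
S_test = rng.normal(size=(n_test, n_feat))
R_test = S_test @ W_true + 0.1 * rng.normal(size=(n_test, n_elec))
S_hat = R_test @ D

# Reconstruction quality: correlation between true and decoded features
corr = np.corrcoef(S_test.ravel(), S_hat.ravel())[0, 1]
```

The point mirrored from the study is that the decoder, once fitted, generalises to stimuli it was never trained on; nothing in the procedure involves internally generated thought.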

While these results are undoubtedly impressive, has the media coverage of them been accurate? In terms of the Guardian’s report, the claim that this represents a decoding of ‘fragments of thoughts’ seems to depend on a rather broad definition of the term ‘thoughts’. What the research did was to reconstruct auditory stimuli that the auditory cortex was in the process of analysing. What has been achieved, therefore, is the decoding, at a detailed level, of the perceptual process, NOT the reading of internally generated thoughts. This is a significant step away from ‘decoding thoughts’, as the process being decoded is entirely dependent on the presentation of an external stimulus. It doesn’t therefore represent ‘mind reading’, because the same result could theoretically be achieved without reference to the brain, e.g. by taking measurements from the relevant sensory organ or by just observing the sensory stimulus itself (2). Even if the research did represent mind reading, there seems little justification for the Daily Mail’s claim that it could lead to ‘covert eavesdropping’. It should be obvious that the methodology required not only the opening up of the participants’ skulls, but also their co-operation in allowing data to be taken for the construction of the model. Furthermore, what neither article mentions is that the reconstructed words were not actually intelligible to a human listener, but had to be ‘recognised’ via a speech recognition algorithm (an example of the reconstructed speech can be heard here: http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001251#s5).

Actual Mind Reading?

While the results of Dr Pasley’s study required the participants’ brains to be exposed, other neuroimaging methods are not so intrusive, and could therefore be considered closer to the covert mind reading reported by the Mail. Magnetic Resonance Imaging (MRI) allows brain activity to be measured non-invasively, so that no surgery of any kind is required (although lying down in a scanner that costs millions of pounds and is the size of a small boat is still required, making it far from ‘covert’!). MRI studies have produced results equivalent to those of Pasley’s study, but using visual stimuli, with images (3) and short movies (4) having been reconstructed purely from data obtained from MRI scans. Of course such results don’t represent mind reading any more than Dr Pasley’s study does, since they reflect a reconstruction of external sensory information. However, other MRI studies have produced results that allow scientists to infer processes occurring within a participant’s brain that are not directly tied to the characteristics of external stimuli. A couple of studies by Yukiyasu Kamitani and Frank Tong (5,6) have shown that models can be created that allow an observer to identify which stimulus a participant is (covertly) attending to. In effect these studies, and others like them, use the output from the perceptual processing mechanisms of the brain to identify how ‘top-down’ influences (such as expectation and attention) are driving perception. Strictly speaking these studies do represent mind reading because, although the mental processes in question are still involved in analysing external stimuli, it is not necessarily possible to garner the information provided by the MRI data in any other way (short of asking the person themselves). This is because the ‘top-down’ influences in question arise internally from the brain, rather than being a function of the external stimulus.
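
The kind of classification used in such decoding studies can be sketched with a toy multi-voxel pattern analysis. The voxel patterns below are synthetic, and the nearest-centroid decoder is a deliberately simple stand-in for the classifiers actually used in studies like Kamitani and Tong’s:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic voxel data: two attention conditions each evoke a weak,
# distinctive multi-voxel pattern buried in trial-to-trial noise.
n_trials, n_vox = 100, 50
pattern_a = rng.normal(size=n_vox)     # hypothetical 'attend A' pattern
pattern_b = rng.normal(size=n_vox)     # hypothetical 'attend B' pattern
labels = rng.integers(0, 2, size=n_trials)
X = (0.5 * np.where(labels[:, None] == 0, pattern_a, pattern_b)
     + rng.normal(size=(n_trials, n_vox)))

# Leave-one-out cross-validated nearest-centroid decoder: classify
# each held-out trial by which class's mean pattern it sits closer to.
correct = 0
for i in range(n_trials):
    held_out = np.arange(n_trials) == i
    c0 = X[~held_out & (labels == 0)].mean(axis=0)
    c1 = X[~held_out & (labels == 1)].mean(axis=0)
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += pred == labels[i]

accuracy = correct / n_trials   # well above the 50% chance level here
```

The classifier never sees the stimulus itself, only the distributed pattern of (simulated) voxel responses, which is what lets such methods pick up top-down states like attention.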
Neuroimaging has, however, enabled the concept of mind reading to be taken further, into the realm of decoding mental events that don’t rely on any external stimulation at all. Recent studies have found that it is possible to decode what broad category of object someone is imagining, in the absence of any coincident external stimulation (7), although the performance of the model is reasonably modest (~50%). Similarly, it appears that the results of basic decision-making processes can be identified from brain activity, with decisions about which button to press and when to press it (8), and about whether a participant is lying (9), being decipherable using models constructed in a similar way to those already described. Interestingly, the neural information that allows these decisions to be decoded occurs many seconds BEFORE the participant reports having made the decision, highlighting how conscious actions are likely driven by brain processes that are outside conscious awareness, rather than being the result of conscious ‘free will’. Most recently such work has been extended to more complex scenarios, with MRI data being used to predict what point in solving an algebraic problem a child has reached, and whether they are performing the calculation correctly (10).
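
A decoding accuracy of ~50% sounds unimpressive, but what matters is how it compares with chance. A quick check makes the point; the trial count and a four-category chance level of 25% below are assumed for illustration, not taken from the cited study:

```python
from math import comb

# Hypothetical numbers: 100 imagery trials, 4 possible categories
# (so guessing succeeds 25% of the time), and 50 correct decodings.
n, k, chance = 100, 50, 0.25

# One-sided binomial probability of doing at least this well by guessing
p_value = sum(comb(n, i) * chance**i * (1 - chance)**(n - i)
              for i in range(k, n + 1))

above_chance = p_value < 0.001   # True: ~50% is modest but far from random
```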

The possibility of covert mind reading?

Clearly the aforementioned examples reflect mind reading, but do they represent the top of a ‘slippery slope’ that will lead to the sort of covert eavesdropping technology envisioned by the Daily Mail? The first impediment to such technology is the process of neuroimaging itself. MRI scanners are far from portable enough to allow forced or covert brain scanning. Furthermore, MRI scanning involves generating a large magnetic field and firing electromagnetic pulses at the object being imaged, both of which would be totally impractical outside a controlled, isolated environment. Other neuroimaging methods, such as EEG, record the electrical remnants of brain activity from outside the skull, and are therefore cheaper and more portable than MRI. However, they lack the spatial resolution that any sophisticated mind-reading application would require, and in any case they are extremely sensitive to external noise, again making them unsuitable for use outside controlled environments.

Even if we assume that future technological advances will allow systems that enable the covert collection of brain-activity data, would such technology enable your innermost thoughts to be deciphered? There are a number of reasons to doubt it. Current mind-reading models are only able to distinguish between very broad categories of thoughts, or between very coarse categories of decisions (e.g. lie/truth, attending to one or another stimulus). To read the specific details of an individual’s thoughts you would need models that distinguished between the literally billions of different things someone could be thinking about, and the multitude of different decisions they could make. Even creating such models would involve the co-operation of individuals in a data-collection process that would take an incalculable length of time. Even if such data were collected, and the level of computation required to create accurate models were possible, generalising such models to the brain activity of other individuals would rely on the assumption that every person’s brain is identical in terms of where different individual thoughts and memories are stored. This seems extremely unlikely, and is in fact counter to what we know about individual differences in brain anatomy and function. Thus while it is possible to aggregate data across participants to produce mind reading for coarse decisions, it would be impossible to replicate such a method to distinguish between more subtle categories of thought. Even in situations where the co-operation of the participant is attained, and only a coarse distinction between different psychological states is required, such mind-reading techniques are problematic.
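
The generalisation problem can be made concrete with a simulation: a decoder trained on one ‘brain’ fails on a second ‘brain’ in which the same signal is carried by different voxels. Everything here is synthetic, and modelling anatomical differences as a simple voxel permutation is a deliberately crude assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic 'subjects' share the same two thought categories, but
# the voxels carrying the signal differ between their brains -- crudely
# modelled here by permuting which voxel plays which role.
n_trials, n_vox = 200, 40
pat_a, pat_b = rng.normal(size=n_vox), rng.normal(size=n_vox)
perm = rng.permutation(n_vox)            # subject 2's rearranged 'anatomy'

def make_subject(pattern_a, pattern_b):
    y = rng.integers(0, 2, size=n_trials)
    X = (np.where(y[:, None] == 0, pattern_a, pattern_b)
         + rng.normal(size=(n_trials, n_vox)))
    return X, y

X1, y1 = make_subject(pat_a, pat_b)               # subject 1 (training)
X2, y2 = make_subject(pat_a[perm], pat_b[perm])   # subject 2 (testing)

# Train a nearest-centroid decoder on subject 1 only
c0, c1 = X1[y1 == 0].mean(axis=0), X1[y1 == 1].mean(axis=0)

def decode(X):
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return (d1 < d0).astype(int)

acc_within = (decode(X1) == y1).mean()   # high: decoder fits this brain
acc_cross = (decode(X2) == y2).mean()    # typically hovers near chance
```

The decoder works well on the brain it was fitted to and collapses on the other, which is the practical obstacle to any ‘one model fits all minds’ technology.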
Taking the example of the mooted ‘MRI lie detector’, such a system will always be somewhat unreliable because, just like current physiological lie detectors, it could easily be deceived if the participant trains themselves to act as if the truth is a lie (or vice versa). This is because the brain activity associated with lying most likely relates to the emotional and cognitive processes involved in creating a false story, rather than to lying per se. It follows that simply engaging in these same emotional and cognitive processes while telling the truth should produce neural activity which mimics that produced by a lie. If even the decoding of simple decisions can be subverted this easily, attempts at more subtle discriminations between thoughts would surely be subject to even greater uncertainty. Finally, it is important to note that all the forms of mind reading reviewed here are the result of probabilistic calculations. The parts of the brain deemed active at a certain point in time are identified by statistical computations as to whether a small signal reflects task-related neural activity or noise. Likewise, the classification of such activity as belonging to one category of thought or decision rather than another is based on probabilistic inference. There is no certainty in such a process; in fact it is fraught with uncertainty.
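
The probabilistic nature of such classification has a further consequence for any proposed lie detector: even high hit rates yield uncertain verdicts once base rates are considered. The sensitivity, specificity and prior below are entirely hypothetical:

```python
# Hypothetical operating figures for a brain-scan 'lie detector';
# none of these numbers come from the studies cited above.
sensitivity = 0.90    # P(flagged as lie | actually lying)
specificity = 0.90    # P(cleared       | telling the truth)
p_lie = 0.05          # assumed base rate: 5% of statements are lies

# Bayes' rule: probability a flagged statement really is a lie
p_flag = sensitivity * p_lie + (1 - specificity) * (1 - p_lie)
p_lie_given_flag = sensitivity * p_lie / p_flag

print(round(p_lie_given_flag, 2))   # 0.32: most flagged statements are true
```

Under these assumptions a ‘90% accurate’ detector would still be wrong about two-thirds of the statements it flags, purely because lies are rare to begin with.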

To conclude, it seems very unlikely that neuroimaging methods will ever be able to perform the sort of mind reading predicted by scare stories in the press. In some cases such methods may not even represent a particular improvement on the mind-reading applications that already exist. What the mind-reading research discussed in this article does allow is a greater understanding of how the brain works, which in turn provides insight into how the brain achieves the myriad feats it performs so frequently with apparent ease. The most fruitful practical application of such knowledge is likely to be in the treatment of patients with brain damage. For example, the limited mind-reading functions possible with existing neuroimaging methods may allow the development of technology that would give patients whose brain damage prevents them from communicating via their peripheral nervous system some primitive form of communication through their brain activity. Your private thoughts and memories, in contrast, are likely to remain safe from the prying eyes of neuroscientists!

Image (top right) courtesy of Idea Go:  http://www.freedigitalphotos.net/images/view_photog.php?photogid=809

References

(1) Pasley BN, David SV, Mesgarani N, Flinker A, Shamma SA, et al. (2012) Reconstructing Speech from Human Auditory Cortex. PLoS Biol 10(1): e1001251. doi:10.1371/journal.pbio.1001251 http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001251

(2) Tong, F. & Pratte, M.S. (2012) Decoding Patterns of Human Brain Activity. Annual Review of Psychology, 63: 483-509.  http://www.ncbi.nlm.nih.gov/pubmed/21943172

(3) Miyawaki, Y., Uchida, H., et al. (2008) Visual Image Reconstruction from Human Brain Activity using a Combination of Multi-scale Local Image Decoders. Neuron 60, 915–929. http://iopscience.iop.org/1742-6596/197/1/012021

(4)  Nishimoto, S., Vu, A.T., et al (2011) Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies. Current Biology 21, 1641–1646 http://www.sciencedirect.com/science/article/pii/S0960982211009377

(5) Kamitani Y, Tong F. 2005. Decoding the visual and subjective contents of the human brain. Nat. Neurosci. 8:679–85  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1808230/

(6) Kamitani Y, Tong F. 2006. Decoding seen and attended motion directions from activity in the human visual cortex. Curr. Biol. 16:1096–102 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1635016/

(7) Reddy, L., Tsuchiya, N. & Serre, T. (2010). Reading the mind’s eye: Decoding category information during mental imagery. Neuroimage. 50(2) 818-825  http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2823980/

(8) Soon CS, Brass M, Heinze HJ, Haynes JD. 2008. Unconscious determinants of free decisions in the human brain. Nat. Neurosci. 11:543–45  http://www.nature.com/neuro/journal/v11/n5/full/nn.2112.html

(9) Davatzikos C, Ruparel K, Fan Y, Shen DG, Acharyya M, et al. 2005. Classifying spatial patterns of brain activity with machine learning methods: application to lie detection. NeuroImage 28:663–68  http://www.sciencedirect.com/science/article/pii/S1053811905005914

(10) Anderson, J.R. (2012) Tracking Problem Solving by Multivariate Pattern Analysis and Hidden Markov Model algorithms. Neuropsychologia, 50(4) 487-498. http://www.sciencedirect.com/science/article/pii/S0028393211003605

 

Rob Hoskin

Received a PhD from the Neuroscience Department of Sheffield University. Views expressed in blog posts do not necessarily represent the views of the Science Brainwaves organisation. https://twitter.com/Hoskin_R

109 thoughts to “Can a neuroscientist read your mind?”

  2. Hello Rob, thank you for this very informative blog post. I agree that the press has been overstating what the researchers can do with their technology. However, some of the researchers themselves have been explicitly saying that they are close to being able to read the more specific contents of people’s thoughts. For example, Marcel Just, a researcher from Carnegie Mellon University, claims that scientists will soon be able to read more specific thoughts such as “I hate so and so” just by decoding brain activity. How much better do you think this technology will get at reading more specific thoughts?

    1. Colin

      I think it’s very unlikely that the technology will be able to read specific thoughts, certainly in any way that would be useable. It might become possible, with the extensive co-operation of a participant, to eventually create an algorithm that could identify what that particular participant was thinking. However this algorithm would never be generalisable to anyone else because at the neural level required to reconstruct specific thoughts, each person differs both anatomically and psychologically.

          1. Also, if we all differ anatomically in the brain, would you be unable to generalize data if it was collected by electrodes as well? (like in the Pasley experiment, except decoding something like “internal monologue” rather than auditory perception and perhaps even the visual content of a thought)

          2. Hi, yeah in my opinion the same applies regardless of the method of ‘mind reading’ or the thing being read. Mind reading might be generalisable for tasks that are very general (i.e. is a person thinking about a place or a person?). Anything remotely specific is going to run into severe problems regarding differences in anatomy and psychological perspectives.

          3. So, in your view, the more specific the thought is, the less these decoding methods can be generalized from person to person. I assume this would apply to specific words or sentences that a person imagines, correct?

          4. Yes that’s correct. Even if you could map from an individual’s brain to a ‘template’ brain at a neuron-to-neuron level (very unlikely) the same neurons would perform different functions for different people. This is based on the fact that most mental processes are at least partially ‘learnt’ rather than being fully inherent (i.e. functional at birth).

            The paper you linked to was looking at the breakdown of visual feature detection (i.e. how visual perception is achieved through grouping together simpler visual percepts) – a mental process that is presumably much more inherent than most conceptual thought.

  6. Hi again Rob. I was just reading a blog post talking about “mind reading” and it made me think of this article. Basically the author was stating that using encoding rather than decoding models can allow accurate mind reading of individuals (using neural networks). This is beyond my personal understanding, but would any of your criticisms here apply to an encoding approach to mind reading as well?
    Here is the link to the blog post: http://neuwritesd.org/2015/10/22/deep-neural-networks-help-us-read-your-mind/

    1. Hi, the paper they are referring to basically reports an improved computational method of predicting brain responses. The same caveats regarding ‘mind reading’ I make in my post still apply to it. I think the blog post author is probably trying to make the research sound exciting by introducing the concept of ‘mind reading’, but this is a bit misleading if you take the common-usage meaning of the term (note that the journal article itself makes no mention of mind reading).

      I’m not saying that the research itself isn’t interesting or useful. They are trying to improve our understanding of visual processing in the brain, and also of how in general processing is achieved via communication between different brain areas.

      1. Rob, I apologize for filling your comment section with questions like this. I’m a philosophy major with a keen interest in bioethics and this is one of those kinds of topics that raise huge red flags for me. I’ve listened to a few experts on this topic and I never really get answers that are consistent. On one hand, I’ve heard similar responses to yours. On the other, I’ve heard respected researchers say that we should seriously be worried (for example, Jack Gallant, one of the leading forces in the decoding field, has stated that he can see a prototype for a universal language decoder being developed in the next 10-15 years since language is something that is much more statistically constrained than something like visual imagery and hence, fairly easy to decode, though I wonder if a device like that would even be possible given your point that our own neurobiology is unique at a certain level). As someone with a somewhat limited understanding of neuroscience, I’m not sure what to make of all of this…

        1. Colin

          I’m not sure what a Universal Language decoder has to do with mind reading – is that not just something for translating spoken language? If there are specific ‘red flags’ you are worried about then it might be best to ask the people who are claiming such things will be possible exactly HOW they will be possible. What sort of things were you thinking might be possible?
          As some reassurance, consider this:
          Let’s ignore the fact that people’s brains are different anatomically
          Let’s ignore the fact that getting even statistically uncertain brain data at the moment requires a massive magnet, cooled with liquid helium, that makes a deafening sound, costs millions of pounds, and requires the participant to be still to work (i.e. it is not very covert).

          Let’s instead just consider the word ‘tree’. We can reasonably assume that we do not have an inherent knowledge of what a tree is – it is a learnt concept. How you and I learnt the word ‘tree’ and its meaning will have differed significantly. The age at which we learnt it, the existing knowledge we had when we learnt it, and the information we were exposed to in order to learn it will all have differed between us. Likewise how the concept of a tree develops in our knowledge will be shaped by our unique experience. Thus the brain signal for the word ‘tree’ (or for any other word) will differ between people. Thus a machine that could read people’s specific thoughts is impossible, because the signals that generate the same thought (even when that thought is just one word) will vary between people (and most likely within the same person depending on context).

          You will notice that the applications that scientists actually mention involve making very general categorisations of thoughts (i.e. ‘reading’ what broad emotion someone is experiencing, whether someone is lying or not). Anything much more specific runs into the problem outlined above. All also involve the co-operation of the individual having their mind ‘read’.

          1. I guess “universal language decoder” was the wrong term. He was referring to a device that could translate internal monologue into text or something of that nature. But you thoroughly answered my questions about this! Thank you for the explanation!

          2. It’s possible that they could come up with something to help communication for individual people with ‘locked-in syndrome’, although it would require extensive co-operation from the patient given the issues I’ve mentioned previously (which are also mentioned in the article). I’m not sure, however, whether it would be possible to gain meaningful consent from such a patient for such a procedure.
            The sentence (presumably inserted by the journalist) “One more benign use would see brain activity used to assess whether political messages have been effectively communicated to the public” is nonsense, unless they are talking about running controlled experiments rather than randomly ‘scanning’ unwitting members of the public. If they are referring to running controlled experiments I’m not sure how neuroimaging would be better at assessing the communication of political messages than just asking people directly.

          3. Additionally, given the vast amount of words in the English language alone, couldn’t multiple words activate the same part of a person’s brain? That seems like a rather large barrier to developing an accurate decoding technology.

          4. There’s something that is confusing to me though. The authors state that they found that the semantic maps were similar across all of the participants. It seems that the researchers at this point are only able to distinguish between semantic categories like “colors” and “emotions” rather than very specific words, but if the representation of semantic information in the brain is very similar across people from a similar culture, why do you think there would be fundamental barriers to eventually creating technology that could be generalized from person to person from a similar background? You previously told me that due to our own unique experiences, the brain signal for something like a specific word will differ from person to person, but do these findings suggest that generalization could be possible between people from a similar culture? For example, for four American women from different families who were born and raised in an upper-class neighborhood in New York City, would generalization be an issue, or would even the unique experiences of each of those women prevent generalization? I’m sure I’m misunderstanding several things here…

          5. The key word is ‘similar’ rather than ‘same’ – also they will be talking about the broad ‘blobs’ of significant activation(s) you see on MRI images, not anywhere near being down to individual neuron level. All we have is broad categories (‘colour’, ‘anger’, ‘faces’) etc assigned to relatively large areas of the cortex – there’s no real basis for generalisability in any detail. Also there is the issue of individual differences in brain anatomy (which are surprisingly large). Finally it is wrong to think that just because people are from the same ‘culture’ it will mean that their experiences will be similar enough to produce exact replica brain structures. Psychology is an extremely murky science, and the complexities are frequently skated over by both researchers and journalists in order to ‘sell’ the science to others.

          6. Hi

            It doesn’t look like there’s much that’s generalisable in that study, because they used the same participants to create the algorithm and to test it – so it shows you can somewhat interpret someone’s brain responses to faces based on their own previous data from very similar stimuli, but not that you can predict other people’s responses from that data. Nevertheless I would imagine that very gross characteristics (e.g. gender and race) would probably be generalisable.

          7. Not sure how it would be useful in a court case. Just asking someone for a description would be more accurate than trying to reconstruct a face from someone’s thoughts.

  7. Hi Rob.
    If you could respond to this post as soon as you get it, that would mean the world to me.

    For the past 6 months or so I have been suffering from intense paranoia (I have anxiety) regarding these mind reading studies, many of which you have already mentioned in the article.

    I have recently come across something that seems to contradict the statement of everyone’s brain functioning differently. In this story: http://www.cbsnews.com/news/how-technology-may-soon-read-your-mind/ (which I’m sure was already featured above) it claims that when someone thinks of an object, say, a hammer, that it is similar enough neurologically that when the thought of a hammer occurs in someone else’s brain, it can be identified.

    I will quote from the article:

    “Are you saying that if you think of a hammer, that your brain is identical to my brain when I think of a hammer?” Stahl asked.

    “Not identical. We have idiosyncrasies. Maybe I’ve had a bad experience with a hammer and you haven’t, but it’s close enough to identify each other’s thoughts.”

    One thing I have noticed in all of these studies, however, is that a personal algorithm is needed for each individual in order for their minds to be ‘read’. Although this story so conveniently left out that detail.

    So, my question is, if I were to have my brain scanned in an fmri, but have never done so while thinking of a hammer, and I thought of one anyways, would they be like “oh, this person’s thinking of a hammer! I know because someone else came in here a couple of weeks ago and thought of one too.” Or, contrary to the article, would my brain be different enough that they couldn’t tell I was thinking of a hammer, unless I had had it already recorded?

    A. No, they would not be able to tell that you were thinking of a hammer based on the data of someone else’s brain, unless you got your own brain scanned first.

    B. Yes, they could tell you were thinking of a hammer based on someone else’s brain data.

    Just to let you know, I have asked a different neurologist before, and he responded with B.

    I appreciate your time and consideration. Thank you so much.

    1. Hi
      I’d go with A. There is no way that they could identify with any certainty whether you were thinking of something as specific as a hammer from a brain scan using only other people’s data. Without an algorithm tailored specifically to your brain patterns, I would think they could only identify a very broad category of what you were thinking about, if that (i.e. that you are thinking about a ‘face’, or thinking a sad thought, rather than which face, or what thought). It’s also important to note the following:
      1) Most of these mind reading studies are actually done with people viewing pictures, rather than with them ‘thinking’ about an image.
      2) Most of these studies use a limited number of images, and the algorithm often just has to pick one of those images to match with the participant’s brain pattern, rather than replicate the image wholesale. Where they do try to replicate the image wholesale, the result often doesn’t look very close to the original (it’s impressive from a technical standpoint, but not from the standpoint of how usable such an image would be).
      3) The whole process involves statistical techniques which produce a hit rate which is not 100%.
      I think the idea with these studies is to allow people with medical conditions to ‘communicate’ using thoughts – i.e. people with ‘locked in’ syndrome. The idea that this technology could be used for widespread mind reading of the population is science fiction.

  8. Also, I have a couple other questions, if you don’t mind.

    1. Are neural patterns the same when you think words as when you speak them? In other words, if my brain activity were recorded in an fMRI while I spoke something specific, and I later thought the exact same thing, only in my head, would the data be similar enough to tell what I thought?

    2. In one story, several participants were shown pictures of 200 faces. Later, they were shown another 30. Based on the data of the first 200, they were able to physically reconstruct the images of the last 30. I’m sure this story was already mentioned, but did they do that by analyzing the brain patterns with the picture or what? What about the reconstructed videos in a different story?

    3. Do you think telepathy is possible and will it happen anytime soon?

    4. What about mind control?

    5. Could brain chips record your neural activity and thus be used to ‘read’ your mind covertly?

    If you do end up answering these questions, that would mean so much. Thank you.

    1. 1) I think there is some ‘shared’ activity between speaking a word and thinking it
      2) I think someone might have linked that paper in the thread above (see the conversation between myself and Colin). From memory, what they did was use the data from the first 200 faces to work out what activation was generated by different aspects of the faces (i.e. basic contours, skin tone, gender, eye position etc). They then used that information to try to identify which of these aspects were present in the new faces being viewed, producing ‘reconstructions’ of the new faces by matching the activation pattern identified for each aspect (from the first set of faces) to the activation generated by the second set.
      3) Do you mean machines ‘reading our minds’? Human telepathy isn’t possible. What ‘appears’ to be telepathy is actually achieved through standard senses, even if the person doing it is unaware. This is how magicians ‘appear’ to do telepathy – by reading body language. In terms of machines reading our minds, I don’t think that is possible. What is usually left out of these discussions is that the MRI machines used to produce these brain images are 1) vastly expensive, 2) reliant on the full co-operation of the participant (any substantial movement of the head during an fMRI scan will make the data unusable), 3) not portable, and 4) dependent on statistical techniques that are not necessarily very valid – there has been a recent ‘controversy’ in MRI research about errors in the standard algorithms that fMRI researchers use; see:
      https://www.sciencebasedmedicine.org/new-study-questions-fmri-validity/
      4) See answer to 3). If people are worried about mind control, they should stop worrying about covert methods using MRI machines etc, and be more concerned by overt methods – i.e. political bias in the mainstream media, and the prevalence and increasingly subtle methods of advertising.
      5) How would a brain chip be inserted covertly? Even if someone could somehow insert a brain chip covertly, it would be very difficult to get it to send any information back through the skull without the signal being lost or disrupted by the skull itself or the CSF that surrounds the brain.

  9. Very informative. The reason why I mentioned telepathy and mind control was because (even though I do realize it remains mostly in the realm of science fiction) there have been experiments done to try and replicate the overall premise of both, although I wouldn’t really classify them as either.

    From what I remember (it would take hours to search through my browsing history), there was an experiment where two people were set up to play a game of 20 questions. Both were hooked up to a computer, and prompted to type out questions to ask one another, to which they responded by focusing on a YES or a NO that flashed on the screen. If the answer was yes, then the asker, who was sitting beside a TMS, would see a phosphene in their head. I’m assuming it’s impossible to send detailed words or pictures directly to the brain, correct? Even by directly stimulating the brain?

    In the book, Physics of the Impossible, it mentions an experiment where two people were hooked up to the Internet. One was playing a video game, but instructed not to move his hand in order to press the button. The computer, however, read his motor signals, and it was sent over to the other computer where it activated a TMS that stimulated the brain of the other person, causing him to press down the button. Your opinion?

    1. Hi

      You can use TMS to stimulate brain sites, but this is not telepathy or really mind control in any meaningful sense because it requires a great deal of set up, and a strong magnetic coil to be held over a precise location of the head in order to get the desired effect – in the cases you mention a flash in the visual field, or an involuntary arm movement. It should also be noted that these effects are quite crude because TMS can only be aimed at a relatively large area of the cortex (an area of millions of neurons). You can read more about TMS here: https://en.wikipedia.org/wiki/Transcranial_magnetic_stimulation

  10. Also, I think I read earlier in a conversation with Colin that “colors, emotions, and faces” etc. were generalizable in a very broad sense (I may be wrong). Does that mean those categories can definitely be detected in an fMRI without a personal algorithm? Even if specific details about these categories (and others for that matter) cannot be decoded, I still find this somewhat unnerving, even though I’m not planning on going into an fMRI anytime soon.

    Many famous scientists in the mind reading field, including Jack Gallant and Marcel Just, claim to foresee a future where mind reading such as that portrayed in science fiction (like reconstructing pictures from the imagination, reading out entire sentences from the brain, etc.) is plausible. Both of them are very intelligent, so what prompts them to make such far-fetched claims when there is no possible way for these events to occur? Do you think that some of the statements made by these people (like everyone thinking of a hammer nearly identically) are highly exaggerated?

    Oh, and about the brain chips – bear with me, I have a hyperactive imagination with little knowledge of how the world works. If someone were to develop electrodes or other neural activity recorders small enough to be covertly injected into the optic nerve, the auditory nerve, or the spinal cord, is it possible that the data could be remotely sent to a transceiver where it could be analyzed, thus allowing someone to spy on your neural activity?

    I apologize if I am wasting your time. I am just very curious about how the brain works.

    1. 1) You could potentially decode broad categories like colour / face vs house / happy vs sad, although to be honest, given the amount of noise in fMRI data, you’d need to ensure that the participant was concentrating on thinking exactly what they were supposed to be thinking in order to get decodable data. In most of the studies mentioned the scientists know when during the scan the person is doing the task, and know when decoding that the stimulus being viewed/imagined is from a very narrow range of potential stimuli. This makes decoding brain activity via an algorithm much easier – applying the same to ‘free-form’ thinking would be a lot more difficult. What do you find unnerving about this, given that it relies entirely on the co-operation of the participant, and the ability to decode only lasts for the period the person is in the scanner?

      2) It’s probably worth finding out exactly what they are suggesting. I suspect they are talking about taking a huge number of brain scans from the same individual and getting them during the scans to imagine very specific things, and over time build up an algorithm that would work specifically for that one person, and them alone – I suspect they are thinking about helping people with ‘locked in’ syndrome to communicate via thoughts, although the time and expense required to produce such an algorithm for just one person (which wouldn’t be transferable to other people) would make it pretty impractical as a treatment.

      3) How would you covertly inject something into the optic nerve? How would the device then travel to the correct location in the brain? If the device is so small that it can be injected, how would it be able to take signals from the entire brain? How would any signals it sent back get through the skull and CSF without becoming unreadable? There is no chance that what you suggest could happen.

      1. So, to summarize everything you said, the only form of generalized mind reading that might be foreseeable in the future is over a very broad scope of categories. But even then, an algorithm would still be needed to decode these. So if I were to get an fMRI scan at this very moment, with no prior algorithm and no stimuli being presented to me, all of my thoughts would be safe. Correct?

        Also, even if broadened categories did becomes

        I don’t know, the whole mind reading thing just unnerves me. Now, I highly support neuroscience in general. I get neurofeedback a few times a week and it has worked wonders. As a sufferer of mental illness, it provides me with much hope knowing that there are hardworking men and women trying to find a cause for these ailments. But at the same time, I can also see potential for abuse of this technology in the wrong hands, such as neuromarketing or MRI lie detectors. A lot of the mind reading experiments strike me as such. I do realize they could highly benefit those with locked-in syndrome, and give us a better understanding of the brain, but I can’t seem to shake off the unnatural vibe. As someone else already said, it raises a red flag.

        1. Yes that’s correct. There is no scope for a generalisable algorithm from MRI data that could work on anything other than broad categories. People’s brains are too different, and how specific information is held in the brain will vary between people because of differences in upbringing/experience etc. Also the resolution of MRI is very coarse: the ‘voxels’ which make up MRI images each contain millions of neurons, and MRI isn’t capable of cleanly reading signal from a single voxel – preprocessing is usually performed in which the signal from each voxel is ‘smoothed’ across neighbouring voxels, so the true resolution of MRI is far weaker than one voxel.
          Your thoughts are safe – no-one is going to force you to have an MRI scan, and even if they did, they couldn’t force you to think in a structured way that would make interpreting the resulting data easy, and even if they could, you could disrupt the data collection by moving your head if you wanted to (note: if you actually have an MRI scan of the head as part of a research study or clinical procedure, DON’T move your head, as it genuinely will ruin the data collection)!
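          As a toy illustration of the smoothing point (a 1-D sketch with invented numbers – real preprocessing applies a 3-D Gaussian kernel across whole brain volumes):

```python
import numpy as np

def smooth(signal, kernel=(0.25, 0.5, 0.25)):
    # Blur each voxel's signal into its neighbours, as fMRI
    # preprocessing does with a Gaussian kernel (toy 1-D version).
    return np.convolve(signal, kernel, mode="same")

# A signal confined to a single voxel...
raw = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
smoothed = smooth(raw)  # values: 0, 0.25, 0.5, 0.25, 0
# ...ends up spread across its neighbours, so the effective
# spatial resolution is coarser than one voxel.
```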

          1. One thing I’ve noticed is that for every mind reading experiment, the participant must be shown a picture or told to think a certain thought while their brain activity is recorded. Thus, an algorithm is created so that when the thought is repeated, it can be recognized. I know that this must be done separately for every individual involved in the experiments, since everyone’s brain is so unique. So when you mentioned a generalizable algorithm, did you mean that in order for the broad thoughts to be decoded, they would still have to be recorded a first time, and it would still ONLY work on that particular person? As it would be too difficult to decode free-form thoughts unless stimuli (such as pictures or specific instructions) were involved? If I’ve misunderstood some parts, I apologize.

            And so, speculatively: if a person who got into an fMRI with no previous personal algorithm tailored to their thoughts happened to think randomly of something that was definitely generalizable, but without the influence of pictures or specific instructions, that thought would be unrecognizable?

            What kind of broad categories of thoughts do you think could become generalizable in the future (besides the ones you already mentioned)? Or even now, if there’s any. How long would it take?

          2. When I’ve mentioned generalisable, I mean generalisable across people – i.e. the ability for an algorithm created from data from one group of people to be applied to a different group of people. Broad categories of perception/thought are generalisable across people, because it appears that roughly similar stimuli are processed in roughly the same area in everyone (although this might not be completely true, as it tends to only be Europeans/North Americans who have been scanned, so MRI results achieved with such participants might not be fully generalisable to Africans, for example).
            As an example – the perception of faces seems to be driven by an area called the fusiform gyrus (FG). If someone is thinking of a face I suspect that the FG will activate. As this area is in roughly the same position in all individuals, it MIGHT be possible to identify when a person is thinking of a face during an MRI scan by seeing when their fusiform gyrus increases in activation. Thus it may be possible to produce an algorithm that can identify when a person is thinking of a face during an MRI scan. However the FG also activates during a lot of other cognitive processes, so it would be difficult for an algorithm to identify ‘face thinking’ accurately unless the algorithm 1) knows WHEN during the scan you might have been thinking about a face (this is usually achieved during ‘mind-reading’ experiments by asking the participant to think of a particular thing at a specific time) and 2) can assume that during this time you aren’t thinking about something else that activates the FG. Thus the mind-reading analogy is somewhat false, because the algorithm has to be told manually when in the scan to look for the activation. This is what makes ‘mind-reading’, even of broad categories of thought, so difficult during free-form thinking, when the above constraints on thought aren’t applicable.
            The above applies for broad categories (i.e. faces). When you get to specific thoughts (i.e. a specific face) the mind reading process is impossible without a wholly personalised algorithm (i.e. an algorithm that is NOT generalisable across people at all). Let’s say you wanted to work out during an MRI scan when someone was thinking of President Obama’s face. When someone thinks of the face, the FG will activate – but that activation will be indistinguishable from the activation generated by thinking of any face (or maybe any famous person’s face) because the resolution of MRI is so poor (each voxel is several million neurons). The individual array of neurons that fires when you think of President Obama is not distinguishable from the one that fires for other faces using any brain imaging technique. Thus even with an algorithm designed for just one person, the absolute mind reading of specific details is nigh on impossible. Now let’s consider an attempt to generalise a specific algorithm to someone else. Each person will have the face of President Obama encoded in a different array of neurons (assuming they even know what he looks like). This is because each person will have learnt about Obama at a different time of their life, and will have different life experiences etc., meaning that their brain, and how information is organised within it, will vary significantly. On top of that, people’s brains are different shapes, and have different neuron densities. Thus a generalisable algorithm for specific detail cannot be possible because people’s brains are different. The algorithm for a specific detail in one person (which may not be possible to create with 100% accuracy anyway) will not apply to anyone else.

  11. I may have already asked some of these questions, so if any of them are repeated, it’s for extra clarity. Thus far, your answers have been so helpful. Thank you for your time and consideration.

    1. It is well known that autistic brains are wired quite differently compared to the average person’s. For example, the fusiform gyrus might not even activate when people with autism are shown faces. Assuming that most (if not all) of the people involved in the mind reading experiments were non-autistic, would this create a problem in making a generalizable algorithm that detected broad categories of thought, particularly for people on the spectrum?

    2. In this article, https://en.m.wikipedia.org/wiki/Jos%C3%A9_Manuel_Rodriguez_Delgado it explains that electrodes implanted in the brain can remotely control the actions of living creatures, such as halting a charging bull, or making people’s arms or hands move a certain way. Although I realize that other, more limited methods can be applied to do the same (specifically TMS), the question is: will complete and involuntary control over an individual (i.e. motor or verbal control), as shown in science fiction, ever become a reality?

    3. The first story on this article/page mentions decoding spoken words from an individual’s brain. If I understand this correctly, the scientists had to analyze the brain activity directly WITH the recording of the original words (meaning, the scientists had to be aware of what was being said) in order for the words to be decoded, similar to the face decoding experiment? They didn’t just look at the brain activity and magically decode the words just like that?

    4. Is it a plausible scenario that someone’s sense of hearing could be hijacked or spied on if electrodes were implanted in an individual’s auditory cortex?

    5. Although I still understand that specific thoughts cannot be decoded unless a personal algorithm is created for a particular individual, it continues to bother me that some sources claim that everyone’s brain is close enough to identify specific thoughts, such as the very first link I sent (the one about the hammer). That article claims that it was a new discovery at the time it was published in 2009, so I’m not sure if that means anything.
    https://en.m.wikipedia.org/wiki/Thought_identification There was also a story I read a few months ago that went something like this: a Scottish person and a French person both thought of a horse in their native languages, and apparently their brain activity was close enough for the scientists to tell that they were thinking of a horse. I would assume that they still needed a personal algorithm. Would you happen to know of any explanation for this? Something in these experiments that may have been left out?

    1. 1) People with neurological and psychological disorders are always excluded from being subjects in MRI studies of the brain, unless the study is specifically looking at the disorder they have. I’m not an expert on the autistic brain, but I would imagine that when we are talking about broad categories their brains would respond very similarly to those of non-autistic people, although there may be some categories (e.g. emotion recognition) where the activation is not as strong. Side note: MRI studies of the brain are always performed on right-handed people, as the organisation of left-handed people’s brains is less consistent and therefore threatens the consistency of any findings.
      2) Well, we can already control someone’s movement involuntarily using implanted electrodes. However, as with the rest of mind reading, the results relate to very broad categories of movement (normally just spasmodic movements of a particular large muscle). Basically these studies tend to target an area of the brain that controls one particular (large) muscle group, causing an activation of the brain area and thus a contraction of the muscle group. Repeating the same process to invoke a more delicate movement (i.e. clicking your fingers, walking) would be a lot more difficult, as it would involve multiple brain areas firing in a specific series. This might be possible by carefully inserting many electrodes and testing to ‘find’ the right sequence of firing. Doing it remotely and non-invasively (I presume this is what you mean by ‘science fiction’) would be impossible, because you would never be able to target specific brain areas from a distance given the interference that would be generated by the skull, CSF, hair and whatever else was between the signal generator and the person being targeted.
      3) Yes, they already knew the words being used (in fact the auditory stimuli played to the participants were presumably created by the researchers).
      4) Possibly, although you’d probably need a lot of electrodes to cover all the auditory cortex in enough detail, and even then you’d probably need an algorithm created specifically for the person being implanted. If you want to ‘spy’ on what someone is hearing, a much simpler method would just be to follow them around, or use a concealed microphone to record the external sounds they are hearing. After all, people only ‘hear’ external auditory stimuli, unless they hallucinate.
      5) I’m not aware of the horse study. I suspect that instead of ‘reading’ the thought, the decoding is just selecting one stimulus (a horse) from a very limited group of other stimuli. Indeed, I think the link you provided says the decoding algorithm only had to choose one of 10 images. So it’s just matching the brain activity to 10 predicted patterns for 10 objects (probably created using the same participant’s data) and selecting which one is closest.
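      To make the ‘matching, not reading’ point concrete, here is a minimal sketch (invented numbers, not real fMRI data) of how this kind of decoder works – it can only pick the closest of the patterns it was trained on, and can never answer ‘none of the above’:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical template patterns for 10 known stimuli, built from
# the same participant's earlier scans (10 stimuli x 50 voxels).
templates = rng.normal(size=(10, 50))

def decode(measured, templates):
    # Return the index of whichever template correlates best with
    # the measured pattern -- forced-choice matching, not
    # open-ended mind reading.
    corrs = [np.corrcoef(measured, t)[0, 1] for t in templates]
    return int(np.argmax(corrs))

# A noisy re-measurement of stimulus 3 is matched back to index 3,
# but a pattern from outside the trained set would still be forced
# onto one of these 10 templates.
noisy = templates[3] + rng.normal(scale=0.5, size=50)
print(decode(noisy, templates))  # 3
```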

      1. 1. Would someone who is left-eye dominant be considered a lefty, even if they’re ambidextrous or write primarily with their right hand? Or vice versa?

        2. I understand that it would be difficult to control the more fine motor movements, but I’m guessing that full-blown mind-control (like controlling an avatar in a computer game) would become a possibility if all the right conditions you described were met? Also, to what extent are motor signals generalizable, considering they have mind controlled drones and headsets available to the public?

        3. So, just to confirm the obvious: in both of the experiments mentioned in the very last question, there would still need to be a p.a. (personal algorithm) for specific thoughts to be identified, even if they are ‘similar’ enough to another person’s data that they can be compared and matched to an image from a highly limited selection?

        A. Apparently there was an exhibit at the world fair (don’t know when or which one, sorry) where people could type using their thoughts by looking at letters on a screen, though supposedly one volunteer had a condition where he couldn’t move his eyes. I’m assuming that this would also require a p.a.?

        B. Just wondering, did you say that ‘house’ was a possible generalizable thought in a previous reply, or was that just a random example?

        1. 1) They only use right-handed people. I think the presumption is that (some) left-handed people have brains that are organised the opposite way round to right-handed people’s (i.e. functions in the right hemisphere for right-handers are in the left hemisphere), whereas others have brains organised in the same way as right-handed people. See, for example, findings of left-handed people having different brain organisation.
          2) It would be impossible for the reasons I gave before. What are mind-controlled drones? Why would you want to control a drone with your mind when you could just use a remote handset? Motor organisation is likely to be more generalisable than cognitive functions, because (presumably) more of it is inherent rather than learnt. However, differences in brain anatomy between people would still make generalisation at a very specific level impossible.
          3) I don’t think the link you gave provided details of the methodology for the study, but the basic point still stands. You can’t generalise specific thoughts across people.
          4) Yeah that would be using a P.A. I think one of the goals of this sort of research is to create some sort of mechanism whereby people who have ‘locked in’ syndrome can communicate (crudely) via their patterns of brain activity after a p.a. is created for them.
          5) The general category of ‘buildings’ is generalisable, but not individual, specific buildings without a P.A

          1. 1. Reflecting back on the first story I sent, https://www.google.com/url?sa=t&source=web&cd=1&ved=0ahUKEwjChYPf9qHPAhWBMz4KHeAxBnYQyCkIDDAA&url=http%3A%2F%2Fm.youtube.com%2Fwatch%3Fv%3D8jc8URRxPIg&usg=AFQjCNEjnP42NfbO3X_8_TNPYLAv4VPaug there’s a part towards the end where the speaker mentions that they’re working on a terahertz laser that would shine a couple of millimeters into someone’s head to see if they were lying, say, in an airport. Based on what you’ve previously said about covert mind reading being impossible due to the hair, skull, and CSF blocking the signal, I’m guessing the same would apply to this if it’s ever created?

            2. Can images of what a person is seeing only be reconstructed if that particular person has a P.A.?

  12. 3. It is only possible to reconstruct a picture of what a person is seeing with an external stimulus, correct? Specifically, you can’t reconstruct an image from memory or imagination?

    4. Apparently some people believe that if nanomachines are ever invented, they’ll be used to covertly spy on brain activity. Same with portable MRIs. What’s your opinion on these matters?

    1. 1. That seems a far-fetched claim. He doesn’t mention who “they” are, and there is no detail as to how such a device would work – how would the “receptors” be placed in a way that could read the signal reflected back? How would the reflected light not dissipate (through the skull, CSF or just air)? How would they ensure that the person being ‘read’ didn’t move, or that someone else didn’t get between the laser and the person, or between the person’s reflected light and the sensor? It’s just an unsubstantiated claim.

      2) You could probably create a ‘fuzzy’ image of the general shape of what they were looking at without a PA. To generate an exact replica you’d need a PA.
      3) It would be a lot harder, and probably practically impossible, to reconstruct from memory or imagination even with a PA, unless you could severely limit the range of things the person was imagining. In studies of reconstructing from imagination, they ask the participant to imagine one of a set of stimuli they have just shown the participant – i.e. they already know what the thing the participant is imagining looks like.
      4) Portable MRIs are just MRI machines in big lorries that are driven around to patients rather than the patients having to go to hospital – I think they are used a bit in war zones. They have the same limitations as normal MRI machines, so no, they couldn’t be used. I think I’ve already answered about nanomachines above – it wouldn’t be practically possible.

      1. 1. I don’t know much about radio waves, but I do know that terahertz can penetrate through things with low-water content and through several millimeters of skin and reflect back. It might also be used to make 3D models of teeth in orthodontics. Judging from this information, I’m guessing it probably CAN’T reach through to the brain, but I have no idea.

        2. As with most mind reading studies, the scientists would still have to be aware of the picture(s) that were being seen even without the PA, correct?

        A. Would you have to compare the brain activity WITH the picture in order to reconstruct it, or would you just use the raw brain data?

        3. I believe I actually meant to say handheld MRIs. What’s your opinion on that?

        1. 1) There are optical imaging techniques that use a large scanner to shine light at a particular point on the head and then try to measure the scattering of the reflected light particles to infer brain structure; however, they require the sensors to be very close to the light source and the head to work (for the reasons mentioned in my previous response). They are also subject to the same limitations in terms of generalisability as MRI.
          2) What happens in these studies is they use a scan to create a PA that identifies the brain response to various low-level individual image features (e.g. lines at various angles, curves, colours etc). They then try to use that PA to interpret brain activity for a picture, to see whether the PA can distinguish the response to one image from another. What usually happens is the PA produces a very fuzzy image and then chooses which of the small batch of images used in the experiment the fuzzy image is closest to. If you can’t constrain your search via a small number of images, you are left with the fuzzy image, which won’t necessarily be very detailed.

          3) Handheld MR scanners don’t have the same resolution as full MR scanners, and are just used to perform very broad medical imaging (i.e. identifying blood clots / tumours etc).

  13. 1. I’m slightly confused about the picture reconstruction. You said that without a PA you could create a general fuzzy image, but then you said that WITH a PA you would also have a fuzzy image unless you choose between a set of other images. Would you still have to identify the brain response to the low level features first? Are you referring to the same fuzzy image in both answers, or am I missing something?

    2. If electrodes can pick up brain activity when placed on the scalp (bypassing the hair, skull, and CSF), couldn’t they technically pick up signals from further away if they were made to amplify the signal?

    1. 1) The algorithm creates a fuzzy image from the brain responses and uses that to ‘choose’ which of the small set of images the fuzzy image most resembles – this gives the high success rates you see in these studies (e.g. 90%) because the algorithm simply has to distinguish the brain response to a small set of images, it doesn’t have to reconstruct an image wholesale. Once you remove the constraint of the image only being one of a small number of images, the ‘fuzzy’ reconstructed image is a lot less useful.

      2) Amplification has to come from the original signal, which comes from the brain. The brain doesn't/can't amplify its own signals, so you would need some inserted device within the brain to amplify them, which in turn would 1) not be covert, 2) probably massively disrupt cognition. Also, brain signals are created through the passage of neurotransmitters between neurons. There are a limited number of neurotransmitters in each neural connection, so the scope for 'amplification' would be severely limited.
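
      The 'choose the closest candidate' step described in 1) can be sketched in a few lines. This is a rough illustration only, not the method from any of these studies; the toy 'images' and the fuzzy reconstruction below are invented numbers:

```python
# Hypothetical sketch of the "choose the closest candidate" step: a fuzzy
# reconstruction (a vector of pixel intensities decoded from brain activity)
# is compared against a small candidate set, and the algorithm simply reports
# whichever candidate it is nearest to. All values are invented toy data.

def nearest_candidate(fuzzy, candidates):
    """Return the index of the candidate image closest (squared Euclidean
    distance) to the fuzzy reconstruction."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(candidates)), key=lambda i: dist(fuzzy, candidates[i]))

# Three toy 4-pixel "images" shown in the experiment:
candidates = [
    [1.0, 0.0, 0.0, 0.0],   # image A
    [0.0, 1.0, 1.0, 0.0],   # image B
    [0.0, 0.0, 0.0, 1.0],   # image C
]
# A noisy, fuzzy reconstruction that loosely resembles image B:
fuzzy = [0.2, 0.7, 0.6, 0.1]

print(nearest_candidate(fuzzy, candidates))  # -> 1 (image B)
```

      With only a handful of candidates, even a very noisy reconstruction is enough to pick the right one, which is why these studies can report high accuracy; remove the candidate list and all you are left with is the fuzzy vector itself.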

    1. Hi, when they talk about semantic information, they effectively mean 'conceptual categories' rather than individual concepts (i.e. 'people' vs 'places'; 'actions' vs 'feelings'). It wouldn't be possible to do this sort of thing with specific words/concepts.
      Like all these studies they're using a very limited range of stimuli (60 sentences in this case) which are specially selected to be distinguishable based on the conceptual categories (also note the accuracy rate was 67%, even with that limited stimulus set). They have trained the algorithm to distinguish based on these categories. What it appears to show is that these conceptual categories are (somewhat) held in distinguishable brain networks regardless of the language of the sentence, which isn't that surprising. I was a bit surprised they appear not to be able to distinguish first-language from second-language activation, although it might be that their algorithm wasn't set up to detect that (I can't see the full text of the paper).

      1. “What it appears to show is that these conceptual categories are (somewhat) held in distinguishable brain networks regardless of the language of the sentence.”

        Yeah, that’s what I understood from this. I was just a little confused about how this immediately applies to brain interfaces (other than for something very rudimentary).

  14. 1. I understand that implantable electrodes detect brain signals to a much better degree than the ones you place on the scalp (EEG). For this reason, implants are often used for brain-computer interfaces so people are able to control wheelchairs, prosthetic limbs, etc. However, this study claims to be the first to create an algorithm for controlling a robotic arm using only scalp electrodes http://www.uh.edu/news-events/stories/2015/March/0331BionicHand.php . Since EEG can't pick up brain activity anywhere near as well as neural implants, what do you think are the limits of this approach? What can and can't scalp electrodes detect?

    2. A few studies claim that the large array of invasive electrodes used for epilepsy research aren’t necessary, and that they could be far smaller. http://www.bbc.com/news/science-environment-12990211 . Since patients undergoing these surgeries are often voluntary subjects for mind reading experiments, wouldn’t they still require the electrodes to be spread over the majority of a targeted region if they wanted to study, say, the brain’s reaction to a full range of extensive auditory or visual stimuli?

    3. Apparently a company called Neurosky invented electrodes that can detect brain waves several millimeters from the scalp or through fabric. http://developer.neurosky.com/docs/lib/exe/fetch.php?media=thinkcap:thinkcap_headset_user_manual.pdf . I also read somewhere that they’re thinking of putting brain wave sensors inside the headrests of cars to detect drowsiness. What do you make of this? What do you think is the farthest away that brainwaves can (or ever will) be detected and read accurately?

    1. 1. It depends on the range of movements you'd want to recreate. All you actually need is the ability for the participant to reliably produce as many distinguishable signals as there are commands that you want the prosthetic arm to be able to perform (see the BBC article you linked, where they plan to use just 4 to control a computer mouse). EEG won't be as good at distinguishing different signals as an implantable electrode, but it would still be capable of doing it.
      2. In that study they are looking to pick up just 4 distinct signals, and then use just those 4 signals to allow the person to control a computer mouse. In the example in the article they try to use the brain patterns that result from thinking about 4 phoneme sounds. These patterns would probably be distinguishable from the brain activation generated within a small area, hence 'only' needing to implant electrodes within a 4mm square. The more stimuli you try to distinguish, the more difficult it would be, even if you extensively train a personal algorithm on the individual's brain responses. You'd need a wider area to get (say) a large grouping of auditory thoughts, although you wouldn't necessarily need the whole brain, as auditory stimuli are processed mainly in areas of the temporal cortex.
      3. I’d be very wary about the claims made by private companies about what their EEG machines can do. Different stages of sleep produce characteristic patterns of brain signals that can be picked up by EEG. Not sure you’d be able to reliably pick them up from a car head rest though (besides if people get sleepy their heads often move forward rather than back). I’d have thought video technology / eye tracking might be a more reliable way of detecting drowsiness in drivers.

      1. Once again, I would like to emphasize how much I appreciate the time and effort you've put into your responses. It has helped me through a great deal of confusion trying to decipher everything myself. Thank you very much.

        I found this article claiming that brain connectivity alone can predict brain function. It explains later in the article that it could be used for people who can't undergo functional imaging, i.e. participate in event-related tasks. https://spectrumnews.org/news/brain-imaging-study-links-structure-and-function-in-face-area/ It says that they used both the functional activity and connectivity data from the first group to predict the activity of the second group using only their connectivity data. However, in the official document (check Methods: fMRI analysis), it explains that each individual in both groups underwent specific functional imaging tasks along with obtaining the connectivity data, which were then registered together for each individual, albeit with slightly different stimuli and measuring parameters between the groups. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3267901/#!po=11.6822 I know faces primarily activate the fusiform gyrus, but to actually predict (and with such good accuracy) the literal activation point and shape of a person's response to the stimuli based only on brain anatomy kind of confused me. So I guess my question is, since I can't come to a conclusion: did they have to somehow blend both the task-related functional data AND the connectivity data together to predict the functional activity for every individual (including the second group)? Or did they use only the functional data from the first group to predict activity in the second?

      2. After doing a bit more research, I no longer need the answer to the previous question I recently posted. But in relation to it, I found a different study that used a very similar method: http://mobile.the-scientist.com/article/45762/Toward-Predicting-Personalized-Neural-Responses

        1. Because it turns out that brain connectivity scans can be used to predict functional activity, does that mean they could be used to predict virtually any thought, behavior, or brain activation in an individual, since this method uses functional activity derived earlier from different participants (like a substitute PA) and is then applied to the connectivity data of a separate individual?

        2. Excluding the type of connectivity data used in those studies, what basic things about function can the overall shape or simple outward structure of an individual's brain predict?

        3. Can connectivity mapping be achieved with other brain recording methods such as EEG?

        1. Hi

          The two links are attempts to use a) structural/anatomical data (link 1) and b) 'resting state' functional data (link 2) to predict brain function WITHIN the same individual (e.g. you use their structural data to estimate their functional data). The idea is that previously, if you wanted to know how 'strongly' someone's brain reacted to faces, you'd have to get them to do some sort of face task and collect functional data in the scanner. Now they are saying it might be possible to estimate how strongly someone's brain might react by looking at their structural data or their resting state data (both also collected from an MRI scan). Although this may not seem much of a benefit, as the person will still have to do a scan to get the results, it could be helpful because you might only need 1 resting state / anatomical scan, and then use that to infer a number of different functional properties. If you had to use functional data, you'd probably have to do a separate scan for each functional property you were interested in. They also talk about using this information to estimate differences in ability between people – again saving the time of having to test all the people on all the tasks they are interested in.

          This doesn’t really have anything to do with mind reading because 1) requires a personal scan of the individual (e.g. to get the structural data) 2) it relates to general neural performance on general tasks – e.g. how strongly a person’s brain might respond to faces, and to infer from that how well they might be able to process faces. The example in the link is viewing faces vs viewing scenes. 3) the accuracy of the model is stated as compared to other coarser models (e.g. just a group average). Even when looking at something as general as faces vs scenes the predicted activity is still noticeably different from actual activity (see page 20 of the Saygin paper for images of actual vs predicted.

          As regards your specific questions:
          1) No, as before this is only very general functions. Also note that the algorithms are only predicting functional activity, which is a level of abstraction away from actual behaviour. It doesn't seem that they've reported any relationship between structural connectivity and actual behavioural performance in the tasks – perhaps they tried it and found it didn't predict very well, or perhaps the functional comparisons they are doing are so broad that you wouldn't expect them to predict behaviour.
          2) I don't think you would be able to tell much from the overall shape or outward structure unless there was a large lesion or something like that. You'd need the detailed information from an anatomical MRI scan to get anything useful out.
          3) You can’t do any structural/anatomical scanning with EEG or MEG. You might be able to do some sort of resting state scan, but the data wouldn’t be anywhere near as good as you’d get with MRI.

    1. Apparently the brains of 17 different people showed neural signatures that were “nearly identical” when recalling the events of a movie they watched. Does this mean that “mind reading” can be generalized after all? Or is there something I’m not getting here? (The study was published in “Nature”, but I’m not subscribed so I can’t read the whole thing for myself)

      1. No, all the points I made before still stand. I can't get to a full text view of the paper either – I'd be interested to know how they statistically demonstrated 'similarity' between the different participants, and at what spatial level this was done. It looks like they are just saying that distinct scenes produce distinct patterns of brain activation that are similar across people. This isn't surprising, particularly as I assume people tend to remember the same small things about particular scenes. As the article says, it's a process of "whittling down to the gist of what happened" in the scene. Those 'gists' will be general rather than specific – mirroring the ability to localise general visual features between people, but not specific ones.

        The language in the New Scientist article is, I think, quite misleading. With all media reports you are better off reading the abstract of the paper instead (http://www.nature.com/neuro/journal/vaop/ncurrent/full/nn.4450.html) for a clearer view of what was done/found. I don't understand the relevance of the results to a 'universal recording network' or to the evolution of human communication. Why would the neural location of memory processes be important for the success of human communication? The brains of left-handed people are often arranged roughly as a 'mirror image' of right-handed people's brains, but this has no detrimental effect on the communication between left- and right-handed people!

    1. Something more limited than that I imagine. I suspect they would plan to train an algorithm to identify a limited number of phrases / commands that a person with ‘locked in’ syndrome could imagine speech of, and then use some sort of brain implant to try and identify when those trained phrases are being imagined.

  15. Hi Rob, I have another couple of questions. Remember the "movie reconstruction" study from 2011? It has been found (unsurprisingly) that there is significant overlap in brain area activation between visual perception and visual imagination. My question is, how accurate can this machine learning-based decoding eventually be for reconstructing imagined images or "movies" from a person's brain activity? Could we ever get a crystal-clear playback of the person's thoughts? Or is there some kind of fundamental barrier to producing that?

    My second question is, could telepathy (artificial, not of the "woo" variety) be achieved between two people using a brain-computer interface somehow with brain activity data? (i.e. a brain implant in one person would record the brain activity of a particular visual thought and send the info to an implant in another person that would then somehow feed the information into the other person's brain.) I've heard mixed opinions about this idea for the most part. What do you think? Is this possible? If so, are there practical reasons why it wouldn't work?

    1. 1) No, you wouldn’t be able to get anything like a crystal clear playback, even if you had an algorithm that was ‘trained’ on the individual’s data.
      2) I suppose it might be possible, but only to communicate very general messages, and you'd need some way of passing the data from one implant to the other (from one person to the other). Without using a direct physical transmission system (i.e. a cable), any signal would most likely get degraded between the two people.

      1. To what extent do you think fMRI (or any foreseeable brain-reading technology) could predict a person's future actions? Having listened to some scientists speculate, I've heard some predictions that sound like they are straight out of the movie Minority Report.

        1. Minority Report is science fiction; nothing like that would be remotely possible.
          I think there are some studies where they use neuroimaging to predict which answer someone is going to give on a fixed-choice task (i.e. one with a small number of pre-set answers), but that relies on the structure of the task being such that 1) there is a fixed, pre-set (and small) number of answers, 2) the algorithm used to predict knows pretty much exactly when the decision will be made, 3) the algorithm used to predict the answer is 'trained' on the specific task, and 4) the task is performed in an environment which allows the brain scanning to take place.
          If you move away from those 4 prerequisites, any sort of behaviour prediction using neuroimaging becomes impossible.

  16. 1. In both stories that I linked to pertaining to the prediction of brain function based only on the structural/functional connectivity in an individual, they mentioned that they used data from a separate group of individuals who had both their stimulus-induced fMRI scan and their connectivity scan correlated with each other to determine their relationship. They then used that data to try and predict functional activity in a second group of new individuals with only their connectivity scan. In order to do this, did they have to identify category-selective regions that were in relatively the same location for each person (being generalizable) and then measure the connectivity between them? Do they use a similar method for functional activity?

    2. Just how generalizable is the connectivity method? All of the studies used broad categories for making their predictions, such as faces, objects, etc. However, this one cercor.oxfordjournals.org/content/early/2015/01/26cercor.bhu303.full.pdf+html mentions how they might one day try to predict activation for more specific categories like individual faces, objects, etc. (see Discussion section). But say an individual has never heard of or even been exposed to that specific category (i.e. a specific face) – then how would those specific connections even exist? Besides, each individual has their own idiosyncrasies based on different life experiences, correct? So would the connectivity method work only for general categories?

    3. This article seems to support that connectivity is predictive of behavior and emotions: http://journal.frontiersin.org/article/10.3389/fnhum.2015.00253/full . I know I already asked you a similar question regarding this, but if the connections for a certain category are connected to an area where a specific emotion is known to activate, would they be able to tell how you feel about a certain thing based on the connectivity strength to the emotional area? This somewhat ties into question 2, but could connectivity be used to find out how someone might behave, make decisions, their likes or dislikes, or whether they know certain information?

    1. 1) I think they use the anatomy to make an assumption about where the category specific region is. So if from the functional data the category-specific region is near a particular white matter tract, they assume it is in the same place for the group where they only have the anatomical data.
      2) Neuroimaging isn’t capable of going to the level of individual connections so you wouldn’t be able to do it for a specific face even if the participant had seen the face before – the link to the paper you put in your comment doesn’t work so I can’t check what the authors mean.
      3) The article you link isn't about using connectivity data to predict functional activity; it's looking at understanding how different brain areas integrate (through connectivity) to produce perception. The category-specific areas we are talking about here are very general (e.g. an area that responds to faces). I suppose if you found that the areas of the brain that produce the fear response were more strongly connected to the face area in person A vs person B, you might be able to predict that person A would be more fearful of people than person B. That sort of conclusion is not particularly useful from a psychological perspective (and may not even be particularly valid given the multitude of different factors that can modulate the fear response). You wouldn't be able to tell people's likes and dislikes – the information we are talking about is really too general to make any sort of conclusions like that.

  17. I have been under investigation by the Federal Government for over six years now and I KNOW for a fact that if someone in the private sector just came up with a way to decode brainwaves then the Federal Government has had it for years. In the six-year investigation the Feds have been conducting on me (in which I have nothing to worry about) I have gathered information on the details of how the investigation is conducted with this new-found technology. What they do to "turn the suspect's mind into an interrogation room" and the discomfort inflicted in order to see the reaction of the "suspect". In the years following 9/11, Americans gave money that wasn't TAXES to the Government to combat terrorism. This new technology is capable of reading the minds of people without that person being "aware", to the extent that when they investigate, it isn't a one-question highway looking for an answer. It's an assault that preoccupies the mind to the extent that the person isn't really aware of the investigation. This introduction of an energy is just way too much to explain; it takes years to come to grips with, and then the investigators are easier to spot for what they really are. Imagine the time you were stumped by a question where you had no insight whatsoever. But you wanted to. It's almost like you lose intelligence in the "stumped" position. Imagine a flashlight that could be pointed at the brain and cause this stumped feeling to incapacitate the mind. THAT is the feeling you have while under investigation.
    Just wanted to make the curious even more curious and the ones with answers question those answers.
    Signed;
    A Faithful American.
    Rocky

  18. 1. In a previous response to one of my questions, I remember that you said it would be difficult to decode a general thought unless:
    a. the participant was attending to a stimulus
    b. the fmri & researchers had some way of assuming if or when in the scan the participant may have been thinking that generalizable thought (e.g. vs some other general thought)
    Does the same apply for specific thoughts? If an individual trained a personal algorithm for a bunch of specific thoughts in a prior scanning session, would it still have to meet one of the two criteria in order for a thought to be recognized in a new session as opposed to it just randomly passing through unsupervised in resting state? Why or why not?

    2. How does dream decoding work? In all the studies I’ve read it explains how they had to wake up the participants numerous times while in the scanner so they could ask them exactly what they were dreaming about. They then found pictures that were similar to those descriptions and had the participants look at them while simultaneously recording their neural activity to create a personal algorithm. Then when the person fell asleep and dreamed again, the researchers used the p.a. to determine what they were dreaming about. In some studies they could even reconstruct what the person was seeing in their dreams. This somewhat ties into question #1, but how were they able to make sure the person was dreaming about what they wanted them to and recognize the precise activity vs the billions of other things they could’ve dreamt about?

    1. 1) Yes, in experiments the researchers control the stimulus delivery program and therefore know when a stimulus is delivered. In experiments looking at thoughts there is usually some sort of signal to the participant that they have to imagine a specific thing at that time. The algorithm then knows when to look for the activation. In effect they are reliant on the participant attending to the cue AND to them following the instructions to think of x when the cue appears. Real life thought is of course spontaneous to a large degree, so the algorithms trained in MRI experiments would struggle to work in ‘real life’.

      2) I presume you mean the Horikawa study?
      http://zoology.ou.edu/pdf_documents/Neuromunch/Horikawa_et_al_2013.pdf
      What the researchers did was continually wake people up while they were sleeping and get them to report what they were dreaming. They repeated this until they had 200 'awakenings' for each participant. They then grouped the reported content into broad categories (e.g. book, car, character) so that each dream report from each awakening was assigned to one or more categories. They then showed participants images of the categories while they were awake, to get MRI data of their perception of the categories. They created an algorithm to match the category (during perception) to the brain activation, and then used this algorithm to see if it could distinguish which of two potential categories was in the dream report. For each dream report they took the MRI data for the 15s before the awakening. Some things to note:
      1) As the algorithm is only choosing between two categories, it has a 50% chance of success; even random performance/guessing would lead to 50% correct. The algorithm's performance was 60% – better than chance, but not particularly impressive when it only has a choice of two alternatives. Also, of course, the researchers already know what categories are in the dream (because they asked the participant). Even when they limited the categories to ones that produced very consistent patterns (when comparing sleep to actual perception), the performance only improved to 70%. They then did a separate analysis where they attempted to see if the algorithm could guess whether the category was present in each period for which they had sleep data. Of the 60 categories, the decoder was only able to identify 18 at above-chance levels. Hopefully you can see that this is NOT recognising 'the precise activity' of a dream.
      2) The experiment relies on a huge amount of co-operation from the participants – getting them to go to sleep in a scanner, constantly waking them up and asking them what they were dreaming, and getting them to view pictures while recording MRI data for that.
      3) As before, they are reliant on knowing exactly when to look for the dream content (they use just the 15s before each awakening, and they know what category is in that data) – they are not scanning the data for the entire sleep period looking for the categories.
      4) They did NOT 'reconstruct what the person was seeing in their dreams' in an exact way. What they did was take the general categories the algorithm predicted, and use random images from the internet which matched those categories (i.e. a picture of a car when the category was car) to create a movie. Clearly this is not a recording of their actual dream. As they already had the dream reports from the participants, they could have done exactly the same from that data, and it would have been more accurate.
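
      The chance-level point in 1) is easy to check for yourself: with two alternatives, guessing alone gives 50%, and the probability of a guessing decoder reaching 60% is just a binomial tail. The trial count below (n = 100) is invented for illustration; the real study's numbers differ:

```python
# Rough sketch: how likely is a pure-guessing two-alternative decoder to
# score 60% or better? The trial count is made up for illustration.
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of k or more successes in n trials by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 60 correct out of 100 two-alternative trials:
print(round(p_at_least(60, 100), 3))  # -> 0.028
```

      So at 100 trials, pure guessing would hit 60% roughly 3% of the time – statistically above chance, but a long way from 'reading' a dream.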

    1. I think probably when they refer to ‘mind reading’ they mean the broad sort of ‘category’ mind reading we already discussed, not getting a running commentary on thoughts. The practical limitations discussed above make that impossible.

  19. 1. Concerning the brain connectivity methods, even if individual connections were somehow accessible in neuroimaging data, wouldn’t it still be impossible to determine exactly which connections were for which certain category, such as a specific face or object?

    2. Regarding Colin’s question on the possibility of ‘telepathy’, and if it were actually feasible, how would they be able to transfer information from one implant to the other? If you wanted someone to think something in particular, wouldn’t you have to stimulate the neurons of that particular area? Would that indicate that information could one day be ‘downloaded’ into someone’s brain, like the Matrix?

    3. How are event-related potentials measured by EEG? When having someone attend to a specific stimulus, how do they manage to find the correct signal corresponding to that task amidst all the other brainwaves and noise? Does it depend on the number of electrodes and the hertz measuring range?

    4. This study focuses on the detection of self-generated thoughts. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4019564/ . They used a questionnaire to find out what people thought during the scan. However, I'm confused as to whether they were actually able to determine the specific activity related to those thoughts and when the participant may have been thinking them. Your opinion?

    1. 1) It isn’t possible to see individual connections (i.e. neuron to neuron) because they are so small and the resolution of brain imaging techniques is many order of magnitude too large. I think even if you somehow had that information there would be too many connections in the human brain to make sense of any of the data
      2) It’s not really possible or feasible. You’d definitely need a physical cable, any ‘remote’ electrical signal between two brains would just lost because of the skull/csf/skin and whatever else was physically between the two individuals. Note: telepathy is the passing of information, not ‘thought insertion’. Stimulating the neurons of a particular area wouldn’t produce a specific thought – at ‘best’ if would just produce a vague feeling.
      3) ERPs are recorded with EEG/MEG rather than MRI. I'm not an expert on the mathematics of it, but they rely on knowing the time of the stimulus, and use that to try and pick out the ERP. Also, the experimental designs are usually constructed so no interfering stimulus is present at or near the same time as the stimulus of interest, making the data 'cleaner' than it would be in real life. Finally, they'll know the general area of the brain that will be producing the ERP (and therefore which electrodes should be picking it up) from data gained from animal experiments, where someone has actually opened up the skull of a live animal and recorded directly from the cortex. It's worth noting that most ERPs are generated by very broad, stereotyped types of stimuli – e.g. light flashes, bursts of white noise, specific motor movements. They're not really anything to do with 'mind reading', even in the sense it is meant in MRI research; they're mainly used in relation to understanding perception.
      4) No they weren’t looking a specific activity relating to specific thoughts. They were broadly characterising the ‘types’ of thoughts that people reported having (e.g. future related, past related, positive, negative, and social) and then connecting them with general characteristics of the ‘resting state’ scan. They didn’t ask the participants for the specific content of any thoughts, or when they happened. The conclusions they can draw are things like, people who tend to have social thoughts during mind wandering, showed more fluctuations in area A at rest, than people who didn’t.

  20. Hi Rob, do you have any comments on the feasibility of this?
    http://spectrum.ieee.org/the-human-os/biomedical/imaging/why-mary-lou-jepsen-left-facebook-to-transform-heath-care-and-invent-consumer-telepathy

    The article talks about Gallant's work, but I am confused as to how this could be used for a "consumer telepathy device". The article describes a small device with resolution on par with fMRI, and somehow this could allow "telepathy" (what exactly is transferred?). Even ignoring the limitations of fMRI decoding, how could "telepathy" even be feasible with this technology? It doesn't really make sense to me, but then again, I'm not a neuroscientist. Is there something I'm missing here?

    1. Hi I can’t seem to log onto my account, but a short answer is that I think the imprecision of her language belies the fact that she doesn’t really understand the scale of the problem she’s claims (or ‘imagines’) she can solve. The claim that you could ‘dump out what you are thinking’ isn’t really viable with any technology, let alone one stuck in a ski hat!

      Rob

    1. Hi, I don’t really have much more to add than what I’ve already said. The dictionary would have to be created for the specific individual if you wanted to get anything like an accurate decoding ability at more than the level of reasonably broad categories. Even when created at the individual level, with technology significantly better than we have at the moment, it would never be 100% accurate, and would almost certainly be quite inaccurate when trying to distinguish concepts that are closed related.

  21. This seems interesting:

    https://www.nature.com/articles/ncomms15037

    “Our approach allows the decoding of arbitrary object categories not limited to those used for decoder training. This new method has a range of potential practical applications, particularly in situations where what kind of objects should be decoded is unknown. ”

    Any thoughts on this?

    1. The study mentioned in the article is from 1982, and relates to transferring a small bit of the brain that is involved in releasing hormones to regulate sexual maturation. I think their work basically says that the hormone function still worked when they transferred the brain area into a new mouse. I don’t think the brain area has any cognitive function, or if it does, they didn’t measure its performance in the study. I don’t think the same would apply to cognitive functions, or at least anything remotely complex like ‘thinking’, because these things rely on a large network of regions and connections that are partly unique to each individual. A hormone release function works (I presume) via a much simpler mechanism, so can possibly be transplanted from one brain to the next with some success.

      1. I believe I once read a study in an article a while back that had to do with a monkey taking part in a memory task. After it completed it once, they made a lesion in the monkey’s memory section. Over several weeks they monitored its condition by giving it an extra series of tasks. They then transferred unused hippocampus cells from a separate monkey (in an embryonic state, if I recall) to the one with the lesion. Apparently, its memory improved with the cell transplant. I can’t find the link, so I may not recall every detail correctly.

        But even if individual cells were transplanted to improve cognitive function in a different individual, wouldn’t it have to mold in a way unique to that person, so as to form new brain connections? Or, like you said, most likely be non-functional in the first place?

        1. To expand upon the previous question, wouldn’t neuroplasticity make it so that the connections from the original owner might adjust themselves in order to fit the new brain? And if that were the case, would it function differently for the new person? Or still contain remnants (like memories) from the previous brain?

          1. Hi

            I think the experiments work by providing extra cells for the area involved in the cognitive task (in this case the hippocampus). The extra cells don’t transfer any new information to the monkey; they just provide extra ‘capacity’ in the brain area so that it can process more information.
            The new cells would have to grow or connect to the existing cells. Presumably this is possible; brain cells do show plasticity throughout their lifetime, I believe. I don’t think actual memories or anything would transfer from the old brain; these memories would rely on connections to other parts of the original brain, and that information would be lost in the transplant.

  22. Hi Rob, here’s another new study, this one claiming to have found “what thoughts are built of”.

    https://www.cmu.edu/dietrich/news/news-stories/2017/june/brain-decoding-complex-thoughts.html

    Would what you have already explained to me apply to this study as well? Also, there was a study I linked to above that claims to demonstrate that fmri decoding can be generalized to different types of thoughts (the “generic decoding” one). Do the caveats you mentioned apply to this study?

    1. The drawbacks I mentioned relate to the nature of the brain and individual differences. There are also drawbacks/limitations relating to the scanning methodologies used. They therefore apply to all brain imaging studies of ‘mind reading’.

  23. Based only on one’s own anatomical connections, would categories such as gender, personality, or skin color be too specific to try to predict the function of? If the functional area pertaining to one of those categories has distinct connections to a functional area relating to a specific emotion (fear, anger, happiness), would someone be able to conclude how that person feels towards others belonging to one of the said categories?

    1. If you mean the perception of gender/skin colour from viewing an image/video of a person, then I suspect there is probably a way, via scanning, of identifying the part(s) of the brain that decode that. Such areas wouldn’t have ‘distinct’ connections with emotion areas. I think brain imaging papers relating to individual differences look to identify the ‘strength’ of the connections between areas (either via functional activation during tasks, or maybe using a measure of the volume of white matter in the connections between the areas). It’s debatable how useful those measures of ‘connection strength’ are from a psychological perspective. They’ve been used to distinguish between ‘normal’ and clinical groups, on metrics that we know the clinical groups differ at, but I doubt whether they’re much good at predicting differences between people who are much closer together behaviourally.
      If you’re thinking of a scanning test that could be used to detect a proxy for racism (e.g. a stronger connection between the perception of a black face and fear), then I don’t think scanning would be much use for that. The concept/manifestation of racism from a psychological perspective is far too complex for the sort of crude comparisons neuroimaging can perform.

  24. I’m guessing the same idea would apply if something was connected to the memory location. If some general area pertaining to a broad category of perception had white matter tracts connected to the memory center, would they be able to identify that someone had a memory of that general thing? Or, because the connections in any given brain area can be used in perceiving multiple categories, would they not be able to tell the difference from one memory to the next?

    1. Again, some information could possibly be determined from looking at anatomical connections, but I don’t see what practical use it would have outside diagnosing specific brain disorders. In healthy populations you might be able to see that a particular person has a strong connection to the area that deals with memories of faces, or a weaker than usual connection with an area that tends to store memories for concepts, but I’m not sure what practical use that information would be. Even if you were willing to infer that this meant the person had a ‘strong memory for faces’/’weak memory for concepts’, I’m not sure what that really tells you about the person. Also, a far simpler behavioural test (e.g. a memory test) would almost certainly give you a better indication of a person’s memory performance than trying to infer it from brain images.

  25. The reason I ask is because the category of ‘cars’ seems rather specific. Did they actually find the exact areas pertaining to those two categories using only two major fiber bundles, or were they just trying to find the overall general relationship between the behavioral tasks and the brain scan?

    1. Hi

      The paper doesn’t say anything about the cortical processing of cars. I’m not sure where you got that impression from. They just used a ‘car discrimination’ task as a control for the face discrimination task, enabling them to control for the general cognitive processing required for visual discrimination tasks.
      Their hypotheses regarding which white matter tracts are important for face processing (which is the focus of the article) come from a large literature of mainly functional (rather than anatomical) imaging which shows what areas activate to faces.
      I did notice in passing that their main behavioural effect (older people struggling with face discrimination more than car discrimination) didn’t actually reach the standard statistical significance threshold of p<.05, and their post-hoc tests on the (non-significant) interaction don’t seem to match the conclusions they drew. In the anatomical analysis they also do an awful lot of comparisons (72, plus a whole-brain analysis), which is pretty problematic from a statistical perspective (see https://en.wikipedia.org/wiki/Multiple_comparisons_problem).

      Finally it's worth noting that the idea that there are 'exact areas' for the processing of certain types of stimulus isn't really correct. Any type of cognitive processing is, in reality, going to be subserved by a network of areas, rather than just one area. Quoting from the paper you linked to:

      The conclusion from these studies, increasingly supported by theoretical accounts that favor more distributed networks mediating face perception (Gobbini & Haxby, 2006; Ishai et al., 2005; Rossion et al., 2003; Haxby, Petit, Ungerleider, & Courtney, 2000), is that normal face processing requires the integrated function of multiple regions, including ‘‘core’’ areas such as the fusiform gyrus and superior temporal sulcus, but also more far-flung or extended regions such as the frontal cortex.
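      The multiple-comparisons worry mentioned above can be made concrete with a quick simulation. This is a minimal sketch, not a reanalysis of the paper: the figure of 72 comes from the comparison count quoted in the reply, and everything else (the uniform p-values under a true null, the 0.05 threshold) is standard illustrative assumption. Running 72 independent tests at p<.05 when no real effect exists almost guarantees at least one "significant" result by chance alone.

```python
import random

random.seed(0)

N_COMPARISONS = 72   # number of anatomical comparisons in the paper
ALPHA = 0.05
N_SIMULATIONS = 10_000

# Simulate running 72 tests where the null hypothesis is TRUE for all of
# them: each test then yields a uniformly distributed p-value, so it comes
# up "significant" with probability ALPHA purely by chance.
false_positive_runs = 0
for _ in range(N_SIMULATIONS):
    p_values = [random.random() for _ in range(N_COMPARISONS)]
    if any(p < ALPHA for p in p_values):
        false_positive_runs += 1

family_wise_rate = false_positive_runs / N_SIMULATIONS

# Analytic family-wise error rate: 1 - (1 - alpha)^72, roughly 0.975
expected = 1 - (1 - ALPHA) ** N_COMPARISONS
print(f"Simulated chance of >=1 false positive: {family_wise_rate:.3f}")
print(f"Analytic value: {expected:.3f}")

# A Bonferroni correction divides alpha by the number of tests, which
# pulls the family-wise error rate back down to roughly alpha.
bonferroni_threshold = ALPHA / N_COMPARISONS
```

      In other words, with 72 uncorrected comparisons a "significant" anatomical finding is the expected outcome even if nothing is going on, which is why corrections like Bonferroni (or a false-discovery-rate procedure) matter.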

  26. I recall some time ago hearing that regular old anatomical MRI scans could possibly predict whether or not someone might be at risk for substance abuse or criminal activity. I can’t remember the specific source, or even if it was a valid one for that matter. I do know that some forms of MRI can be used to detect mental illness. Do you believe brain scanning could potentially be predictive of someone’s future actions?

    1. I think all you could do from brain scans is possibly say which people might be more at risk of addiction (based, perhaps, on having lower anatomical volume / activation in control areas of the brain that might help regulate consumptive behaviour). You couldn’t predict the future or people’s future actions from a brain scan (i.e. whether someone is going to become an addict), because those things rely on interactions with the environment, which are independent of the brain.

  27. Hi Rob,

    I could very well have schizophrenia or bipolar disorder, although I function normally aside from believing that my thoughts and dreams are being decoded covertly. I believe I have been under surveillance for about 5 years now due to sending messages to a website created by the actor, Rainn Wilson, called Soul Pancake. In those 5 years, my brain waves have been collected and analyzed in order to form a type of Thought Dictionary that enables the people involved to know my specific thoughts and dreams using my brain waves. I think I may be proof that thought and dream reading is indeed possible.

    Diane Pyzik
    Dianepyz@gmail.com

  28. There’s a product that’s been on the market for quite a while called the Emotiv EPOC, designed as a consumer device for detecting brainwaves. It claims to detect emotions, facial expressions, and trained tasks that can be used for playing games. However, it appears that it can distinguish between different emotions and facial expressions (smiling, winking, frowning, etc.) without directly being trained for them. Is this only because it has a specific list of variables for the device to choose between, possibly even using a process of elimination to guess the expression or emotion made by the player?

    1. It reads the facial expressions from the electrical signals from the actual face muscles, rather than from the brain. This is much more straightforward to accomplish because everyone uses (roughly) the same muscles to smile/wink etc.
      The 6 emotions it claims to be able to distinguish are (according to their website):

      EMOTIV currently measures 6 different emotional and sub-conscious dimensions in real time – Excitement (Arousal), Interest (Valence), Stress (Frustration), Engagement/Boredom, Attention (Focus) and Meditation (Relaxation).

      There are some very fuzzy definitions here – is stress the same as frustration? I’d say not; they overlap somewhat. Is interest the same as valence, but different to ‘attention (focus)’? This is a common problem with emotion research – defining exactly what emotions are occurring and distinguishing between them. It looks to me like they have 6 different signals that they can distinguish, but they aren’t really sure what emotions they relate to. The signals you get from EEG are very broad in scope (i.e. they relate to the synchronisation/desynchronisation of firing across large arrays of neurons), so assigning a specific cause to them is difficult. Of course in this case you also have the more general problem of how you categorise emotions.

      I notice the article was written in 2008, and yet the technology doesn’t seem to have advanced much into computer games, possibly because the success rate of the algorithms isn’t good enough to be usable there. In research, where you tend to average responses over a large number of trials, you don’t need a good, distinguishable signal on every trial in order to get a significant effect. In contrast, with a control system for a computer game you’d want close to 100% accuracy, otherwise the game would become unplayable.
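      The trial-averaging point can be sketched with a toy simulation. This is illustrative only: the "evoked response" amplitude and noise level are made-up numbers chosen so the signal is much smaller than single-trial noise (roughly the EEG situation), not values from any actual study.

```python
import random

random.seed(1)

SIGNAL = 0.3    # weak evoked response amplitude (arbitrary units)
NOISE_SD = 1.0  # single-trial noise dwarfs the signal

def trial():
    """One noisy measurement of the evoked response."""
    return SIGNAL + random.gauss(0.0, NOISE_SD)

def detection_rate(n_trials, n_experiments=2000):
    """Fraction of experiments where the trial-averaged response
    correctly comes out positive (a crude 'signal present' decision)."""
    hits = 0
    for _ in range(n_experiments):
        mean = sum(trial() for _ in range(n_trials)) / n_trials
        hits += mean > 0
    return hits / n_experiments

# A single trial is barely better than a coin flip, but averaging 100
# trials shrinks the noise by sqrt(100) and detection becomes near-certain.
single = detection_rate(1)
averaged = detection_rate(100)
print(f"single trial: {single:.2f}, 100-trial average: {averaged:.3f}")
```

      That gap is the difference between a research finding (averaged over many trials) and a real-time game controller, which would have to get each individual trial right.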

  29. Hi Rob,
    I would once again like to thank you for all the helpful answers you’ve posted over the past year and a half.

    I found a story that raises some pretty interesting questions: https://www.psychologytoday.com/blog/mindmelding/201207/conjoined-twins-conjoined-brains-conjoined-minds This article, based around a pair of conjoined twins who each share a part of the brain, seems to suggest that they can experience each other’s sensations, such as what the other is seeing or doing. It even hints that they might be able to read each other’s thoughts. The author goes on to say that if two separate individuals were to somehow hook their brains/consciousnesses together, they would be able to achieve a form of ‘mind melding’. Thus, they would be able to perceive each other’s thoughts, experiences, and memories. Based on what you’ve said in the past, wouldn’t the unique connections in each separate individual’s brain be too different from each other to connect in a way as to read each other’s thoughts?

    1. Hi
      The article quotes a number of neuroscientists who say that another person’s consciousness cannot be accessed. His counter-assertion is (quoting from the article):

      A large body of evidence points to our conscious states themselves residing in the back of the brain’s cortex. If we connected one person’s conscious states to another person’s sense of self—which I argue resides in the front of the brain, in the prefrontal lobes—we would have achieved true mindmelding.

      There is no agreed conclusion as to where ‘consciousness’ resides in the brain. Even if we ignore the (huge) problem of defining consciousness, the general consensus is that a distributed network of areas contributes to consciousness (e.g. http://www.sciencedirect.com/science/article/pii/S0959438813002298 ), not just one area. The localisation of the ‘sense of self’ has similar definitional issues to that of consciousness. To state that they definitively exist in one brain area is massively oversimplifying both the philosophical and neurological aspects. I also note that the terms he uses for the neural locations are hopelessly vague (i.e. ‘front of the brain’).
      Above that, he doesn’t specify how any such connection between two people would work. As I’ve discussed in earlier posts, each individual’s brain is both unique and massively complex. There would simply be no way of connecting two people’s brains together so that consciousness could be passed between the two.

  30. I recently read an article about a study where the researchers were able to predict what song a songbird will sing based on its neural activity. I didn’t read the study itself, only the MIT article.
    The article states that this is the next step towards humans “typing with their thoughts”, but would this fall under “mind reading” in your view? They measured and decoded neural activity in the sensory-motor nucleus.

    Here is the article (which also contains a link to the study): https://www.technologyreview.com/s/609032/scientists-can-read-a-birds-brain-and-predict-its-next-song/amp/

    1. I think they are mapping the sensory-motor areas that are involved in actually producing the sounds. Presumably (relatively) predictable areas of the bird’s brain are involved in producing particular frequencies of sound. Needless to say, they aren’t reading the ‘meaning’ of the song. Listening to the actual vs the predicted (by the algorithm) bird songs, they sound slightly different to me, and no doubt a bird’s ears are better tuned to identifying differences in their songs than human ears, so I’m not sure how well we can really say that we have replicated the bird’s song.

      1. I recently read this article about a new study where AI is used to decode what a person was seeing.

        https://www.inverse.com/article/37682-mind-reading-ai-brain

        Apparently, the AI can use the data from one person to predict what another person is seeing. The study is pay-walled, so I can’t read it. According to your view, these predictions would never be very specific because of individual differences, correct?

        Also, one of the authors of the study says that decoding mental images could be helpful for “treating mental health issues”. I don’t really understand how (other than helping someone communicate). Any thoughts?
