[HN Gopher] Brain-Computer Interface User Types 90 Characters pe...
       ___________________________________________________________________
        
       Brain-Computer Interface User Types 90 Characters per Minute with
       Mind
        
       Author : pessimizer
       Score  : 105 points
       Date   : 2021-05-14 17:49 UTC (5 hours ago)
        
 (HTM) web link (www.the-scientist.com)
 (TXT) w3m dump (www.the-scientist.com)
        
       | pvg wrote:
       | Recently:
       | 
       | https://news.ycombinator.com/item?id=27134049
        
       | gentleman11 wrote:
        | Very cool, until 50 years in the future when Facebook and
        | governments start researching ways to use advanced versions of
        | this sort of tech.
        
         | yonaguska wrote:
          | I think you're looking too far into the future. They'll be on
          | this sort of tech much sooner than 50 years from now.
        
           | cobertos wrote:
            | They already are. They bought CTRL-Labs, which made a
            | prototype wristband for reading motor neuron activation. They
            | have a demo on YouTube showing it used for keyboarding as
            | well as for surprisingly accurate 3D hand pose estimation.
           | 
           | CTRL-labs is the one behind https://tech.fb.com/inside-
           | facebook-reality-labs-wrist-based...
        
       | [deleted]
        
       | scrubs wrote:
       | I could really use such a device for my youngest kid. Hoping to
       | see more on this.
        
       | carapace wrote:
        | It seems like a direct brain-to-speech approach would make more
        | sense for this application (communication from paralyzed people).
        | This is sort of like trying to write with a theremin, eh?
       | 
        | I'm still on the far side with this, I know, but whenever these
        | BCI stories go by I am bemused. You connect a brain to a computer
        | and then "train a neural network" on the computer side; that
        | makes no sense. The _sophisticated_ NN is on the brain side, yes?
        | With an integrated subjective UI already, yes? You do not need
        | fancy implants or hardware to "talk" to your machine: galvanic
        | response and (now) micro-IMUs (inertial measurement units, one on
        | each finger) are _plenty_ of bits-per-second for a subjective
        | Think-to-Type experience. Use hypnosis, it's easy, to program
        | your brain to output a byte by changing the angle of your
        | fingertips and then twitching the thumb to "clock" the data. Or
        | just use standard stenotype, which would likely be easier, eh?
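        | 
        | (If it helps, here's a rough sketch of that byte-clocking idea in
        | Python. The two sensor-reading functions are hypothetical stand-
        | ins for whatever IMU/galvanic driver you'd actually use.)
        | 
        |     # Sketch: 8 finger-mounted IMUs give one bit each (fingertip
        |     # angle above/below a threshold); a deliberate thumb twitch
        |     # "clocks" the assembled byte out to the host.
        | 
        |     def read_finger_angles():  # hypothetical driver call
        |         """Return 8 fingertip pitch angles in degrees."""
        |         raise NotImplementedError
        | 
        |     def thumb_twitched():  # hypothetical driver call
        |         """Return True once per deliberate thumb twitch."""
        |         raise NotImplementedError
        | 
        |     def next_byte(threshold=30.0):
        |         while not thumb_twitched():  # wait for the "clock"
        |             pass
        |         value = 0
        |         for angle in read_finger_angles():  # MSB first
        |             value = (value << 1) | (1 if angle > threshold else 0)
        |         return value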
       | 
       | You don't need fancy hardware or NN software (and especially
       | surgery to stick electrodes in your brain) to "talk" to your
       | machine with your mind, you already have all the technology you
       | need "built in" to your nervous system. It's not even difficult
       | to use. Literally the first thing I learned to do when I was
       | studying (self-)hypnosis was setting up a binary "yes/no" signal
       | (a finger twitch) from my unconscious mind to facilitate
       | communication.
       | 
        | Anyhow, not to go on and on about it, the bottom line is: when
        | you connect a computer to a brain, please remember that the
        | _brain_ is the fancier, more powerful "device", okay? You will
        | waste less time. (I am not sure why I am not able or willing to
        | try to make something out of this. I can talk to my computer well
        | enough that I don't bother with BCI. And somehow the plight of
        | these poor folks doesn't move me enough to do something. Am I a
        | moral cretin? I'm serious, you guys. Should I try (harder) to get
        | something going with simple hypnotic BCI systems? HN?)
        
         | milkey_mouse wrote:
         | I totally agree. There is more than enough "unmapped" area on
          | the motor & sensory homunculi[1] (creepy image warning) to be
         | repurposed as an I/O channel by retraining the brain. In
         | general, brains are plastic enough to remap senses like this[2]
         | given a handful of months' work/practice. Adults are capable of
         | learning ASL, for example, which uses completely different
         | "output channels" from spoken language. Even when learning to
         | drive, the car can start to feel like an extension of one's
         | self (and I'm not a car person at all). Famously, people who
         | become blind begin to develop better acuity in their other
         | senses because of all the "freed up space" in the visual cortex
         | (which I'm sure is an oversimplification, I'm not a
         | neuroscientist).
         | 
         | We have so much raw data coming into our brains--via sight and
         | sound especially, but surprising amounts via other senses as
         | well--that I see little reason to add "additional senses" or
         | outputs when we have rather high bandwidth already. Heck, my
         | thoughts run ahead of my speech or typing fairly often, and
         | both of those are pretty low-bandwidth channels.
         | 
         | One-hand chorded keyboards[3] are perennially reinvented by
         | someone with an Arduino and a few buttons (not to belittle that
         | work!) but you can't just pick up a chorded keyboard and type
         | as fast as a regular one, so they are generally treated as a
         | novelty. But if one spent as much time learning to use a
         | chorded keyboard as they did a regular one, I'd bet they'd be
         | just as fast. Perhaps the reason fluent computer users show
         | little desire for better human-computer interfaces is that they
         | are afflicted by a generalized form of baby duck syndrome[4].
         | To mix metaphors in a really confusing way, interacting with a
         | computer with a keyboard, mouse, and monitor feels like using
         | GNU nano: shallow learning curve, but it tapers off quickly. I
         | wish I knew the Vim of HCI: I'd be willing to practice for
         | years to adapt to it, to compute more efficiently for the rest
         | of my life.
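          | 
          | (As a toy illustration of how little hardware a chorded layout
          | needs: a chord is just a bitmask of which buttons are down,
          | mapped through a lookup table. The mapping below is invented
          | for illustration, not any real chording scheme.)
          | 
          |     # Toy chorded-keyboard decoder: each of 5 buttons
          |     # contributes one bit, and the chord is looked up in a
          |     # table. The table is made up, not a real chording scheme.
          |     CHORDS = {
          |         0b00001: "e",
          |         0b00010: "t",
          |         0b00011: "a",
          |         0b00101: "o",
          |         0b10001: " ",
          |     }
          | 
          |     def decode_chord(buttons_down):
          |         """buttons_down: iterable of pressed button indices."""
          |         mask = 0
          |         for i in buttons_down:
          |             mask |= 1 << i
          |         return CHORDS.get(mask, "")  # unknown chords: nothing
          | 
          |     # e.g. decode_chord([0, 1]) -> "a"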
         | 
         | [1]: https://en.wikipedia.org/wiki/Cortical_homunculus [2]:
         | https://en.wikipedia.org/wiki/Cortical_remapping#Plasticity
         | [3]: https://www.stavros.io/posts/keyyyyyyyys/ [4]:
         | https://en.wikipedia.org/wiki/Imprinting_(psychology)#Baby_d...
        
         | simonh wrote:
          | All of that involves consciously causing physiological changes
          | in your body for the computer to sense. Suppose this were a
          | thought interface for an aircraft pilot. Physically they're
          | already fully engaged in a demanding activity; what we need is
          | to open up a new channel of communication from them to the
          | computer in addition to their physical input. If they're
          | focusing on twitching or changing the resistivity of their
          | fingertips, that's going to interfere with using their fingers
          | to manipulate the controls.
          | 
          | There are other similar situations, such as people in a coma
          | who can't cause physiological changes at all. The point is to
          | offload as much of the processing as we can onto the computer,
          | to make the act of communication as simple and direct for us
          | as possible. Training ourselves to jump through mental and
          | physiological hoops is the opposite of the point.
        
         | lowdose wrote:
          | Maybe the connections are really electronic stimulants that
          | release dopamine, serotonin, endorphins, GABA, et al. into the
          | mix of your cocktail without using pharmaceuticals. Neuralink
          | is a big-pharma killer dressed as a pig. Use an NN in the
          | feedback loop trained on your autocomplete output.
        
       | neatze wrote:
        | How well could a subject play QWOP with such an interface?
        
       | texasbigdata wrote:
        | So 10-ish words per minute? Not blazing fast, but for leading
        | research pretty darn good! At 40 WPM it would probably beat the
        | average human and most cell phone applications.
        
         | spoonjim wrote:
         | For someone who previously couldn't type, 10WPM probably feels
         | like being able to leap over Mt. Everest.
        
         | sonograph wrote:
         | The NN in the article is based on handwriting, so comparing WPM
         | to handwriting is a better apples-to-apples comparison than
         | typing:
         | 
         | > Studies compiled by Amundson (1995) show that copying rates
         | using handwriting at the 1st grade level are about 5 words per
         | minute (WPM) on average, but by the end of elementary school at
         | the 5th and 6th grade level are about 10 to 12 WPM.
         | 
         | Source
         | https://www.montgomeryschoolsmd.org/departments/hiat/resourc...
        
         | cortesoft wrote:
         | 90 characters per minute is 18 words per minute.
         | 
          | A 'word' in WPM is standardized to 5 characters:
         | 
         | https://en.wikipedia.org/wiki/Words_per_minute
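          | 
          | As a quick back-of-the-envelope check (a trivial Python snippet
          | using that 5-characters-per-word convention):
          | 
          |     CHARS_PER_WORD = 5      # standard WPM convention
          |     chars_per_minute = 90   # figure from the article
          |     print(chars_per_minute / CHARS_PER_WORD)  # 18.0 WPM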
        
           | geoelectric wrote:
           | Word length in characters:
           | 
            | Zilog Z80: 1
           | 
           | Intel 8086: 2
           | 
           | PowerPC: 4
           | 
           | Apple M1: 8
           | 
           | IBM Selectric: 5
        
         | whiddershins wrote:
          | Yeah, as another comment said, autocomplete alone could get
          | WPM way up.
        
         | tobyjsullivan wrote:
         | I guess the typer's experience could be a factor. I would
         | expect someone to type around 10WPM - or even less - the first
         | time they were placed in front of a keyboard. Getting to 40WPM
          | usually takes years of practice. There's no evidence one way or
          | another whether this method could achieve the same with just
          | practice and no changes to the technology.
        
           | Jtsummers wrote:
           | > Getting to 40WPM usually takes years of practice.
           | 
            | 30-40 WPM was the target for high school computer typing
            | courses in the 90s (my experience; the target may have been
            | similar earlier, and I don't know the target for courses
            | built around typewriters), and this was a time when computers
            | weren't ubiquitous like they are today. It's very feasible to
            | hit that target in a few months, especially if you learn
            | proper hand placement (hunt-and-peck/two-finger typists will
            | have a longer ramp-up time but can often still hit those
            | speed targets).
        
         | kordlessagain wrote:
          | With about a word or two of errors every few minutes... maybe
          | they could make the thought of rage trigger the backspace key.
        
           | visarga wrote:
           | We're going to complain about neural interface design like we
           | do about web design. What are the best mappings?
        
         | [deleted]
        
         | melling wrote:
         | Throw in some eye tracking software, autocompletion,
         | specialized writing or programming software and you can do much
         | better.
         | 
         | https://www.tabnine.com/
        
       | oblak wrote:
        | The purely medical implications of such tech are more profound
        | than I can imagine. And people have been imagining them for
        | decades.
        | 
        | Can you even imagine StarCraft or Quake matches, or better yet,
        | some completely new games developed for brain-computer
        | interfaces? Man, it's VERY exciting.
        
         | neatze wrote:
          | I highly doubt such interfaces will outperform natural human
          | eyes and hands.
        
           | anonporridge wrote:
           | At first...
        
       | Nihilartikel wrote:
        | As exciting as this is now, I'm very eager to see how brain-
        | machine interfaces mature as we learn how best to meet the
        | machine halfway.
       | 
        | Decoding handwriting accurately is an approximate thing even
        | when it's spoon-fed to the machine via a digitizer, to say
        | nothing of interpreting the movements from neural activity. What
        | could be accomplished if we abandoned the familiarity of writing
        | and existing glyphs and established a novel set of learnable
        | 'brain semaphores' (e.g. visualize this symbol, play this sound
        | on your mental speakers) that were easy enough for the user to
        | practice and master with feedback, and that presented blazingly
        | unambiguous markers to the probes and recognition algorithm?
        | It's not hard to imagine that with deliberate practice and
        | immediate feedback, the brain could develop connections for the
        | sole purpose of the interface that no longer even piggyback on
        | imagining motor/sensory events. Sending a keystroke would be no
        | different than lifting a finger.
       | 
       | I'm sure this isn't a novel idea, but my armchair observer
       | prediction is that the field will start to move in the direction
       | of purposefully designed mind-sign schemes and away from mimicry
       | of physical movement as the technology becomes ubiquitous and
       | less invasive.
        
         | Hammershaft wrote:
         | This concept made my sleep deprived morning just a little
         | better, so thanks.
        
         | nitred wrote:
          | You've put into words what I've been thinking about for many
          | years. I'm one of those people who find languages difficult and
          | often find it hard to convert my thoughts into words and
          | sentences. From my perspective, 75 percent of my mental effort
          | goes into sentence formation, and this process often derails
          | my train of thought.
         | 
          | When I discovered NLP models like Word2Vec and thought vectors,
          | which assign vectors to words or even to whole
          | sentences/concepts, it intuitively felt like that was exactly
          | what was happening in my mind. It might be an illusion, but I
          | do not think in sentences or words; I only think in concepts
          | and images that appear in my mind's eye in an instant. To form
          | a larger idea, I build a chain of concepts. I am sure that
          | eventually probes will be able to pick up a clear, distinct
          | vector that uniquely summarizes the entire concept of what I
          | was thinking or visualizing.
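          | 
          | (A toy illustration of what "a vector per concept" means, with
          | made-up numbers rather than a trained model: vectors that point
          | in similar directions stand for related concepts.)
          | 
          |     import numpy as np
          | 
          |     # invented 4-d "concept vectors", purely for illustration
          |     concepts = {
          |         "dog":  np.array([0.9, 0.1, 0.0, 0.2]),
          |         "wolf": np.array([0.8, 0.2, 0.1, 0.3]),
          |         "car":  np.array([0.0, 0.9, 0.8, 0.1]),
          |     }
          | 
          |     def similarity(a, b):
          |         """Cosine similarity between two concept vectors."""
          |         na, nb = np.linalg.norm(a), np.linalg.norm(b)
          |         return float(a @ b / (na * nb))
          | 
          |     print(similarity(concepts["dog"], concepts["wolf"]))  # high
          |     print(similarity(concepts["dog"], concepts["car"]))   # low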
         | 
          | And, as you suggested, we could start off with a small subset
          | of semaphores or concepts or visualizations that are at first
          | the easiest signals to pick up via probes. With practice, those
          | signals would only get crisper.
        
           | rblatz wrote:
           | Thanks for putting words to something I've been living with
           | but unable to express.
        
           | corndoge wrote:
           | Can I ask if/how much you read books? I have a hypothesis I'm
           | gathering data on.
        
         | alanbernstein wrote:
         | That's an interesting idea, but what makes you confident that
         | any concept/image/sound you hold in your mind can be less
         | squishy and vague than traditional sounds or characters?
        
       | narwally wrote:
       | Since it uses the neural signals for handwriting, I'm not sure
       | this is going to allow me to achieve my dream of using emacs with
       | my mind.
        
         | WesolyKubeczek wrote:
         | You could probably put in some custom strokes for special keys.
         | 
         | I'd like those interfaces to be less invasive. Maybe one day
         | they are going to be less Neo and more Case.
        
           | peterkos wrote:
            | From what my friends who studied neuroscience tell me, the
            | hardest part is extracting any signal from the huge amounts
            | of noise. Maybe with the amount of ML research being poured
            | into that space we can get one of those nice cross-
            | disciplinary breakthroughs that have come from similar tech
            | (Bluetooth/wireless signal processing, audio, etc.).
        
       | doener wrote:
       | See also: https://news.ycombinator.com/item?id=27152734
        
       | harveywi wrote:
       | Now it is possible for one to literally mind one's p's and q's.
        
       ___________________________________________________________________
       (page generated 2021-05-14 23:00 UTC)