[HN Gopher] Artificial intelligence gave a paralyzed woman her voice back
___________________________________________________________________
Artificial intelligence gave a paralyzed woman her voice back
Author : gehwartzen
Score : 109 points
Date : 2023-08-24 17:36 UTC (5 hours ago)
(HTM) web link (www.ucsf.edu)
(TXT) w3m dump (www.ucsf.edu)
| FollowingTheDao wrote:
| You know what these article titles, to me, are equivalent to?
|
| "Adolf Hitler rescued a lost kitten he found on the side of the
| road."
| c7DJTLrn wrote:
| I noticed that she selects characters by using her glasses as a
| pointing device and moving her head. Surely they could use an eye
| tracking device like Tobii instead?
| jrootabega wrote:
| Maybe there is some medical reason not to for her. But as a
| healthy user of head tracking for gaming, I would rather have
| head tracking, so I can move my eyes without interacting with
| the screen.
|
| Maybe it also doesn't work well with multiple people watching?
| danuker wrote:
| Also, instead of the button interface, she should check out
| Dasher text input:
|
| http://www.inference.org.uk/dasher/
| 5440 wrote:
| I have several of these devices in front of the FDA on behalf
| of clients. I recommend anyone interested read the FDA
| guidance on the matter.
|
| Implanted Brain-Computer Interface (BCI) Devices for Patients
| with Paralysis or Amputation: Non-clinical Testing and
| Clinical Considerations
|
| https://www.fda.gov/media/120362/download
| apaprocki wrote:
| Interesting thought experiment... does the right to remain silent
| prevent a USB device from being placed on your head?
| StarterPro wrote:
| Currently, they can't compel you to give up your password, but
| they can force a fingerprint. So I think thoughts would count
| as a generated phrase rather than a biologic in that respect.
| intrasight wrote:
| I'm not sure that right will protect you. The right applies to
| "something you know" not "something you have". You have a
| brain.
|
| A good analogy is smartphone passwords. Authorities can't make
| you share your password (something you know), but they can make
| you unlock your phone with a fingerprint (something you have).
| birdyrooster wrote:
| You are forgetting "something you are"
| vkou wrote:
| Yes, just like it prevents the prosecution from bringing in a
| psychic that will speak your thoughts to the court.
| bitdivision wrote:
| The 5th amendment protects you from being a witness against
| yourself [0]. So to me it seems pretty clear that in the US
| this would not be allowed.
|
| But then again, it seems as though being forced to reveal your
| password is not necessarily a violation of the 5th amendment
| [1]. I can't quite understand why the Supreme Court hasn't
| ruled on this one yet; there are a lot of conflicting
| decisions now.
|
| 0: `nor shall be compelled in any criminal case to be a witness
| against himself`
|
| 1: https://www.aclu.org/news/privacy-technology/police-
| should-n...
| daveguy wrote:
| That is an interesting thought experiment. IANAL, but I'm
| pretty sure putting something on your head to essentially
| coerce information would be a violation of the right to remain
| silent. I don't know whether it would fall in the fingerprint
| vs spoken password space in terms of subject-to-search-warrant.
|
| Fortunately this tech is currently very person-specific and
| has to be trained to the person. So to thwart it you'd just
| have to think "applesauce" over and over.
| PcChip wrote:
| Wasn't it trained to her brain specifically?
| etrautmann wrote:
| A similar study, also in Nature, came out the same day from a
| team at Stanford:
|
| paper: https://www.nature.com/articles/s41586-023-06377-x
|
| one press writeup: https://spectrum.ieee.org/brain-implant-speech
| padolsey wrote:
| > Rather than train the AI to recognize whole words, the
| researchers created a system that decodes words from smaller
| components called phonemes. These are the sub-units of speech
| that form spoken words in the same way that letters form written
| words. "Hello," for example, contains four phonemes: "HH," "AH,"
| "L" and "OW."
|
| This is nifty. Also I'm oddly comforted by the fact that this
| system doesn't "read thoughts". It just maps slightly upstream
| from actual speech to the relevant speech/motor production
| regions of the brain. So no immediate concern for thought
| hacking...
|
| Separately, this makes me wonder what such a system would be for
| deaf people (with signing ability) who have lost their ability to
| move their arms. I imagine, optimistically, that one could just
| attach the electrodes to a slightly different area in the motor
| cortex and then once again train an AI to decode intent to signs
| (and speech). So basically the same system?
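|
| A minimal sketch of the phoneme-to-word idea in Python
| (hypothetical mini-lexicon and greedy segmentation; nothing
| from the actual paper's pipeline):
|
|     # Hypothetical mini-lexicon mapping phoneme sequences to words.
|     LEXICON = {
|         ("HH", "AH", "L", "OW"): "hello",
|         ("W", "ER", "L", "D"): "world",
|     }
|
|     def decode(phonemes):
|         """Greedily segment a phoneme stream into known words."""
|         words, i = [], 0
|         while i < len(phonemes):
|             # Try the longest possible match first.
|             for n in range(len(phonemes) - i, 0, -1):
|                 chunk = tuple(phonemes[i:i + n])
|                 if chunk in LEXICON:
|                     words.append(LEXICON[chunk])
|                     i += n
|                     break
|             else:
|                 i += 1  # skip an unrecognized phoneme
|         return " ".join(words)
|
|     print(decode(["HH", "AH", "L", "OW", "W", "ER", "L", "D"]))
|     # -> "hello world"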
| idiotsecant wrote:
| I think not all deaf people share the same mapping from words
| and their precise phonemes to the expected muscle movements.
| If this mapping differs from a typical speaker's, this system
| would not be that useful on the speech-interpreting side. On
| the other hand, I think we've had pretty good gesture
| recognition for a while. I bet it's possible to decode
| individual signs right now, but sign language also has a
| different grammar from typical spoken English and a lot of
| meaning is context-based, so it might be tricky in that way,
| more of a translation problem.
| padolsey wrote:
| Oh yeah, definitely. I meant more specifically: might it be
| possible to capture the electrical signals (much like this
| current system) from the parts of the motor cortex that
| create the series of muscle movements forming a 'sign', then
| create a 2d projection/display of those muscle movements, and
| then ... downstream, apply a gesture recognition solution as
| you mention (a big downstream challenge).
|
| It sounds like a lot. It was just a thought experiment about
| how such spinal blocks/paralysis would affect deaf people and
| how they'd be able to keep communicating with their deaf
| partners. Definitely niche but nonetheless interesting, and I
| think possible using the same general approach as the OP
| article.
|
| But yeah, to then translate gestures to speech is a distinct
| and incredibly challenging problem on its own, as you allude
| to. Perhaps in the future they can tap into signing/speaking-
| translators' brains and have AI learn those mappings in a
| fuzzy way.
| retrac wrote:
| > I think not all deaf people share the same mapping from
| words and their precise phonemes to the expected muscle
| movements
|
| In fluent sign language, there is something analogous to
| phonemes. In linguistics these days, they're just called
| phonemes, and considered equivalent to spoken language
| phonemes. They're a fixed class of shapes and locations. They
| combine in certain ways that make up morphemes, which then
| make up words. It does work very similarly, perhaps
| identically, to spoken language.
|
| The distribution of handshapes and the way they interact
| resemble spoken language. For example, it's somewhat hard to
| say "strengths" and people often produce a slurred
| "strengfs". The way it slurs together is rather predictable.
| It's very hard to say "klftggt", and so it just doesn't occur
| in natural language. Same with sign languages and hard-to-
| sign combinations.
|
| Phonemes have an exact realization, but they also exist
| relative to each other; the distance and direction between
| them matter. This is probably part of why an American
| can fairly easily understand New Zealand English, despite
| nearly all of the vowels being different. Another analogy: in
| tonal languages, if there's a low flat tone, then 3 rising
| tones, then a low flat tone, that final low tone may be quite
| a bit higher than the first low tone -- but it will be
| interpreted as a low tone, as it is judged relative to the
| previous rising tone. Vowel qualities besides tone work the
| same way. And so do hand gestures.
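|
| A toy illustration of that relative judgment (made-up pitch
| values; not from any corpus):
|
|     # Label each tone relative to a running local baseline
|     # rather than by absolute pitch.
|     def classify_tones(pitches, window=3):
|         labels = []
|         for i, p in enumerate(pitches):
|             context = pitches[max(0, i - window):i] or [p]
|             baseline = sum(context) / len(context)
|             labels.append("high" if p > baseline else "low")
|         return labels
|
|     # The final tone (140) is higher in absolute pitch than the
|     # first low (110), but still reads as "low" in context.
|     print(classify_tones([110, 150, 170, 190, 140]))
|     # -> ['low', 'high', 'high', 'high', 'low']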
|
| There _is_ a lot of variation by dialect/region/community in
| sign languages. More than in a language like English. This
| makes it more complicated, but it shouldn't be
| insurmountable. And of course, not all deaf people speak sign
| languages as their native language. They would struggle just
| as people who learn any other language later in life do.
| notahacker wrote:
| An ML system that skipped the brain and just read the physical
| movements, converting them to voice, would go a long way (I
| understand there are camera- and glove-based apps that can do
| this, but I'm not sure what the accuracy is like).
| cf100clunk wrote:
| The woman from the title is from Regina, Saskatchewan, Canada,
| and the CBC did a feature on her story. Her husband is pictured
| at her side in a Saskatchewan Roughriders tee shirt and Toronto
| Blue Jays ball cap, having dressed with his Canuckness set to 11:
|
| https://www.cbc.ca/news/health/paralysis-brain-speech-1.6943...
|
| My hope is that she'll be able to cheer on their teams.
| swayvil wrote:
| This kind of brain-reading certainly seems to be in the same
| general species as lie detection.
|
| So it must exist: an AI device that takes a thousand data
| points off your brain and tells you whether it's a lie or not.
|
| What would we do if we had a ~100% accurate lie detector? How
| would that go down, socially?
| xwdv wrote:
| It would make trials much more straightforward and allow us
| to deliver justice at scale, since we would now have a
| digital engine of truth.
| swayvil wrote:
| And then we demand that all our police and lawmakers and
| public servants get hooked up to the machine and then the
| next day it's illegal tech.
| sharikous wrote:
| We don't have lie detectors even for AIs, and we can look at
| every single bit inside them.
| burkaman wrote:
| It is not in the same species: these systems have to be
| trained on every individual brain; they are not reading some
| objective signal that is the same for everyone.
|
| > For weeks, Ann worked with the team to train the system's
| artificial intelligence algorithms to recognize her unique
| brain signals for speech. This involved repeating different
| phrases from a 1,024-word conversational vocabulary over and
| over again until the computer recognized the brain activity
| patterns associated with all the basic sounds of speech.
|
| To use this type of system for lie detection, if such a thing
| is possible, you'd have to get each subject to give you
| thousands of example statements with truth/lie labels. This
| obviously defeats the purpose, and also doesn't really seem
| possible - does lying for a training exercise produce the same
| brain patterns as lying to actually cover something up?
| Probably not.
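|
| To make the per-subject point concrete, here's a toy sketch
| (synthetic features and invented shapes; nothing from the
| actual study). The decoder below is meaningless for any brain
| other than the one that generated its training data:
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     PHONEMES = ["HH", "AH", "L", "OW"]
|
|     # Synthetic stand-in for one subject's neural features:
|     # each phoneme gets its own arbitrary cluster.
|     centers = {p: rng.normal(size=16) for p in PHONEMES}
|
|     def record_trial(phoneme):
|         """Fake one noisy feature vector for an attempted phoneme."""
|         return centers[phoneme] + rng.normal(scale=0.3, size=16)
|
|     # "Training": average many repetitions per phoneme, akin to
|     # the weeks of repeated phrases described in the article.
|     templates = {p: np.mean([record_trial(p) for _ in range(200)],
|                             axis=0) for p in PHONEMES}
|
|     def decode(features):
|         """Nearest-template match -- valid only for this subject."""
|         return min(templates,
|                    key=lambda p: np.linalg.norm(features - templates[p]))
|
|     print(decode(record_trial("OW")))  # -> "OW"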
| swayvil wrote:
| Your objections are that training is required and that where
| and how the lie is uttered might matter.
|
| Mere technical hurdles IMO.
| bagels wrote:
| I would presume that someone being subject to a lie
| detector may have different incentives than those running
| the lie detector, and they may intentionally taint the
| data.
| TuringTest wrote:
| I think the point is, who would deliberately train a system
| to detect their own lies?
| consumer451 wrote:
| Maybe an employee required to do so as part of on-
| boarding to a corporation or government org?
|
| Or maybe someone in the process of a passport or visa
| application?
|
| This podcast episode with Sean Carroll and Nita Farahany
| scared the crap out of me on this topic; it seems
| inevitable.
|
| This is the time to regulate neuro access, before it becomes
| big business and part of the TSA screening process.
|
| https://www.preposterousuniverse.com/podcast/2023/03/13/2
| 29-...
| swayvil wrote:
| I was thinking it might be something everybody does in high
| school or something. Every morning you spend an hour
| training your AI shadow. Everybody gets one. So useful,
| like a cell phone.
| Vecr wrote:
| If that's the sensitivity, what's the specificity? How well
| does it translate from the population it's trained on to other
| populations? In what contexts is the type of lying it detects
| useful to detect? I would assume using it on someone in a
| criminal investigation context without their permission would
| be a 5th and 6th amendment violation (as it would almost
| entirely subvert the usefulness of legal representation).
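|
| (For reference, the two quantities, computed from a made-up
| confusion matrix:)
|
|     # Hypothetical counts for a lie detector's confusion matrix.
|     tp, fn = 90, 10   # actual lies: caught vs. missed
|     fp, tn = 20, 80   # actual truths: falsely flagged vs. cleared
|
|     sensitivity = tp / (tp + fn)  # share of lies detected
|     specificity = tn / (tn + fp)  # share of truths cleared
|
|     print(f"sensitivity={sensitivity:.2f}, "
|           f"specificity={specificity:.2f}")
|     # -> sensitivity=0.90, specificity=0.80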
| etrautmann wrote:
| No, this is not conceptually related to lie detection. Yes, it
| uses ML to decode something, but that's where the similarity
| ends. This is decoding the patterns of brain activity used to
| generate speech.
| criley2 wrote:
| There can never be a 100% accurate lie detector, only a 100%
| "thinks they're telling the truth detector". Ultimately human
| memory is deeply flawed and we're capable of having entirely
| false memories and swearing on our lives to things that never
| occurred. A machine which can perfectly read our brains can
| only get this messy imperfect mix.
|
| Even in the sense of political intrigue, is it so hard to
| imagine someone so brainwashed they truly believe the lie they
| are telling you?
| circuit10 wrote:
| I would only call something a lie if it's conscious and
| intentional; otherwise it's a mistake.
| nerdponx wrote:
| That seems like a pretty big leap. This doesn't require
| language understanding, just a translation between muscle
| movements and sounds. Lie detection is way more complicated.
| swayvil wrote:
| It could detect intent to deceive. Or the brain-mode specific
| to lie-crafting.
| causi wrote:
| Unless someone figures out a way to do this without surgery
| and from ten feet away, the answer is that it won't.
| swayvil wrote:
| Ya I was thinking the same thing. Requiring a brain operation
| to make it work would be a deal breaker. Need an MRI or
| something.
| egoregorov wrote:
| I am very happy that this woman is able to communicate.
|
| That being said, why can't "AI" help me complete my code!
___________________________________________________________________
(page generated 2023-08-24 23:00 UTC)