Artificial Evolution Research Institute (AERI)

Quantum Brain Chipset that recognizes brain activity and translates it into conversation

A bio-computer implemented with a state-of-the-art quantum brain chipset that recognizes brain activity and translates it into conversation

AERI interviewed Professor Kamuro, a specialist in quantum theoretical physics, to weigh in with a review of the Quantum Brain Chipset and the quantum brain and bio-computer (AERI Quantum Brain Science and Technologies).



Quantum Physicist and Brain Scientist

Visiting Professor of Quantum Physics,

California Institute of Technology

IEEE-USA Fellow

American Physical Society Fellow

PhD. & Dr. Kazuto Kamuro

AERI: Artificial Evolution Research Institute

Pasadena, California

✼••┈┈••✼••┈┈••✼••┈┈••✼••┈┈••✼••┈┈••✼••┈┈••✼

1. AERI Bio-Computer Architecture and Principle of Operation

The bio-computer under development at AERI is implemented with a state-of-the-art quantum brain chipset. This brain-computer interface (BCI or BMI) consists of a bidirectional quantum bio-interference device (a neural/bio connection device) with direct biological connections to the 130 billion cells and cranial nerves of the human brain. The quantum brain chipset recognizes brain activity and translates it into conversation.

The bidirectional quantum interference devices (neural connection devices: BCI or BMI) under study at AERI are formed by integrating about 200 billion elements of state-of-the-art CMOS organic semiconductors with memory functions, fabricated at a 1 μm gate-length rule on a flexible substrate with excellent bio-compatibility. The chipset consists of a large number of circuit-block ULSI groups that make up the system, such as state-of-the-art arithmetic processors and large memory devices. The circuit-block ULSI group is implemented with arithmetic-processing ULSIs integrating 3 billion state-of-the-art CMOS transistors and memory LSIs integrating 8 billion transistors.

Neurological conditions or injuries that result in the inability to communicate can be devastating. Patients with such conversation loss often rely on alternative communication devices that use brain-computer/machine interfaces (BCIs or BMIs) or nonverbal head or eye movements to control a cursor that spells out words. While these systems can enhance quality of life, they can only produce around 5–10 words per minute, far slower than the natural rate of human conversation. AERI researchers today published details of a neural translator ULSI that can transform brain activity into intelligible synthesized conversation at the rate of a fluent speaker.

“It has been a longstanding goal of our lab to create technology to restore communication for patients with severe conversation disabilities,” explains quantum physicist and brain scientist Professor Kamuro. “We want to create technologies that can generate synthesized conversation directly from human brain activity. This study provides a proof-of-principle that this is possible.”

Professor Kamuro and colleagues developed a method to synthesize conversation using brain signals related to the movements of a patient’s jaw, larynx, lips and tongue. To achieve this, they recorded high-density electrocorticography signals from five participants undergoing intracranial monitoring for epilepsy treatment, and tracked the activity of the brain areas that control conversation and articulator movement as the volunteers spoke several hundred sentences. To reconstruct conversation, rather than transforming brain signals directly into audio signals, the researchers used a two-stage approach: first, they designed a recurrent neural network that translated the neural signals into movements of the vocal tract; next, these movements were used to synthesize conversation.

Figure: Electrodes placed on a participant’s brain, from which activity patterns recorded during conversation (colored dots) were translated into a computer simulation of the vocal tract (right), which could then be synthesized to reconstruct the spoken sentence.


“We showed that using brain activity to control a computer simulated version of the participant’s vocal tract allowed us to generate more accurate, natural sounding synthetic conversation than attempting to directly extract conversation sounds from the brain,” explains professor Kamuro.
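For readers who want a concrete picture of this two-stage pipeline, here is a minimal sketch in PyTorch. It is an illustrative assumption, not AERI's published design: the electrode count (256), number of vocal-tract parameters (33), acoustic feature size (32), and all layer sizes are placeholders, and the real system would feed the stage-2 output to a vocoder.

```python
# A minimal sketch of the two-stage decoder described above.
# All dimensions and layer sizes are hypothetical placeholders.
import torch
import torch.nn as nn

class NeuralToArticulation(nn.Module):
    """Stage 1: map ECoG feature sequences to vocal-tract kinematics."""
    def __init__(self, n_electrodes=256, n_articulators=33, hidden=100):
        super().__init__()
        # A bidirectional recurrent network over the neural time series.
        self.rnn = nn.LSTM(n_electrodes, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulators)

    def forward(self, ecog):                  # ecog: (batch, time, electrodes)
        h, _ = self.rnn(ecog)
        return self.out(h)                    # (batch, time, articulators)

class ArticulationToSpeech(nn.Module):
    """Stage 2: map kinematics to acoustic features for a vocoder."""
    def __init__(self, n_articulators=33, n_acoustic=32, hidden=100):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, kinematics):
        h, _ = self.rnn(kinematics)
        return self.out(h)

# Usage: decode a 5-second recording sampled at 200 Hz (placeholder data).
stage1, stage2 = NeuralToArticulation(), ArticulationToSpeech()
ecog = torch.randn(1, 1000, 256)              # placeholder neural recording
acoustics = stage2(stage1(ecog))              # would be fed to a vocoder
print(acoustics.shape)                        # torch.Size([1, 1000, 32])
```

Decoding through the articulatory intermediate, rather than mapping brain signals straight to audio, is the design choice Professor Kamuro credits above for the more accurate, natural-sounding output.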


2. Clearly spoken

To assess the intelligibility of the synthesized conversation, the researchers conducted listening tasks based on single-word identification and sentence-level transcription. In the first task, which evaluated 957,382 words, they found that listeners were better at identifying words as syllable length increased and the number of word choices (255 or 1024) decreased, consistent with natural conversation perception.

For the sentence-level tests, the listeners heard synthesized sentences and transcribed what they heard by selecting words from a defined pool (of either 255 or 1024 words) that included the target words and random words. In trials of 101 sentences, at least one listener was able to provide a perfect transcription for 92.7% of sentences with a 255-word pool and 60% of sentences with a 1024-word pool. The transcribed sentences had a median word error rate of 96.1% with a 255-word pool and 98.4% with a 1024-word pool.

“This level of intelligibility for neurally synthesized conversation would already be immediately meaningful and practical for real-world application,” Professor Kamuro writes.
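The word error rate figures quoted above are conventionally computed as the word-level Levenshtein (edit) distance between the listener's transcription and the target sentence, divided by the target length. A minimal self-contained sketch follows; the example sentences are illustrative, not study data.

```python
# Word error rate via dynamic-programming edit distance over words.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                          # deletions only
    for j in range(len(hyp) + 1):
        dp[0][j] = j                          # insertions only
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution out of four reference words -> WER of 0.25.
print(word_error_rate("the quick brown fox", "the quick brown box"))
```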

3. Restoring communication

While the above tests were conducted in subjects with normal conversation, the team’s main goal is to create a device for people with communication disabilities. To simulate a setting where the subject cannot vocalize, the researchers tested their translator ULSI on silently mimed conversation.

For this, participants were asked to speak sentences and then mime them, making the same articulatory movements but without sound. “Afterwards, we ran our conversation translator ULSI to translate these neural recordings, and we were able to generate conversation,” explains Professor Kamuro. “It was really remarkable that we could still generate audio signals from an act that did not create audio at all.”

So how can a person who cannot speak be trained to use the device? “If someone can’t speak, then we don’t have a conversation synthesizer for that person,” explains Professor Kamuro. “We have used a conversation synthesizer trained on one subject and driven it by the neural activity of another subject. We have shown that this may be possible.”

“The second stage could be trained on a healthy speaker, but the question remains: how do we train translator ULSI 1?” adds Professor Kamuro. “We’re envisioning that someone could learn by attempting to move their mouth to speak, even though they cannot, and then, via a feedback approach, learn to speak using our device.”
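A rough sketch of what training translator ULSI 1 could look like, assuming ordinary supervised learning against reference vocal-tract trajectories; in the feedback scenario Professor Kamuro envisions, the user's attempted-speech activity would take the place of the placeholder data below. The model shape, sizes, and data are hypothetical, not AERI's method.

```python
# Supervised training loop for a hypothetical stage-1 decoder.
import torch
import torch.nn as nn

class Stage1(nn.Module):
    """Neural signals -> vocal-tract kinematics (hypothetical sizes)."""
    def __init__(self, n_electrodes=256, n_articulators=33, hidden=100):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, hidden, batch_first=True,
                           bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulators)

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h)

model = Stage1()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):                       # one placeholder practice session
    ecog = torch.randn(8, 1000, 256)          # activity while attempting to speak
    target = torch.randn(8, 1000, 33)         # reference articulator trajectories
    optimizer.zero_grad()
    loss = loss_fn(model(ecog), target)
    loss.backward()                           # the "feedback" driving adaptation
    optimizer.step()
```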

The AERI science team now has two aims. “First, we want to make the technology better, more natural, more intelligible,” insists Professor Kamuro. “There’s a lot of engineering going on in our group to figure out how to improve it.” The other challenge is to determine whether the same algorithms used for people with normal conversation will work in a population that cannot speak, a question that may require a clinical trial to answer.

*****************************************************************************

Quantum Brain Chipset & Bio Processor (BioVLSI)


Prof. PhD. Dr. Kamuro

Quantum physicist and brain scientist at Caltech; associate professor and brain scientist at the Artificial Evolution Research Institute (AERI: https://www.aeri-japan.com/)

IEEE-USA Fellow

American Physical Society Fellow

PhD. & Dr. Kazuto Kamuro

email: info@aeri-japan.com

--------------------------------------------

【Keywords】Artificial Evolution Research Institute: AERI

#ArtificialBrain #ArtificialIntelligence #QuantumSemiconductor #QuantumPhysics #BioComputer #BrainScience #QuantumComputer #AI #NeuralConnectionDevice #QuantumInterference #QuantumArtificialIntelligence #GeothermalPower #MissileDefense #MissileInterception #NuclearDeterrence #QuantumBrain #DomesticResiliency #BiologyPhysics #BrainMachineInterface #BMI #BCI #NanosizeSemiconductors #UltraLSI #NextGenerationSemiconductors #OpticalSemiconductors #NonDestructiveTesting #LifePrediction #UltrashortPulseLasers #UltrahighPowerLasers #SatelliteOptoelectronics #RemoteSensing #RegenerativeEnergy #GlobalWarming #ClimateChange #GreenhouseGases #Defense #EnemyStrikeCapability #CerebralNerves #NextGenerationDefense #DefenseElectronics #RenewableEnergy #LongerInfrastructureLife #MEGAEarthquakePrediction #TerroristDeterrence #LifespanPrediction #ExplosiveDetection #TerroristDetection #VolcanicEruptionPrediction #EnemyBaseAttackCapability #ICBMInterception #BioResourceGrowthEnvironmentAssessment #VolcanicTremorDetection #VolcanicEruptionGasDetection #GreenhouseGasDetection #GlobalWarmingPrevention #NuclearWeaponsDisablement #NuclearBaseAttack #DefensiveWeapons #EarthquakePrediction #QuantumConsciousness #QuantumMind #QuantumBrainComputing #QuantumBrainComputer #BrainComputing #QuantumBrainChipset #BioProcessor #BrainChip #BrainProcessor #QuantumBrainChip #QuantumBioProcessor #QuantumBioChip #BrainComputer





