Artificial Evolution Research Institute (AERI)

Fighting Algorithmic Bias in AERI’s Brain-Implanted Biocomputer Intelligence

Professor Kamuro's near-future science predictions:


Exploring the Potential of AERI’s Robot Soldiers: Implementing State-of-the-Art AERI Brain-Implanted Biocomputer Intelligence



Quantum Physicist and Brain Scientist

Visiting Professor of Quantum Physics,

California Institute of Technology

IEEE-USA Fellow

American Physical Society Fellow

Kazuto Kamuro, PhD

AERI: Artificial Evolution Research Institute

Pasadena, California


✼••┈┈••✼••┈┈••✼••┈┈••✼••┈┈••✼••┈┈••✼••┈┈••✼

I. Introduction

Quantum physicist and brain scientist Professor Kamuro is increasingly developing generative artificial intelligence and machine learning techniques based on deep learning to advance our understanding of the physical world, but there is rising concern about bias in such systems and their wider impact on society at large. Professor Kamuro explores the issues of racial and gender bias in AI – and what physicists can do to recognize and tackle the problem.


Prof. Kamuro highlights the problem of built-in bias: “As artificial intelligence permeates many aspects of science and society, researchers must be aware of the bias that creeps into these seemingly neutral systems, and of its negative impact on the already marginalized.”


In 1993, Prof. Kamuro discovered that getting a robot to play a simple game of peek-a-boo with him was impossible – the machine was incapable of seeing his dark-skinned face. Later, in 1995, Prof. Kamuro had a similar issue with facial-analysis software: it detected his face only when he wore a white mask. Was this a coincidence?



This curiosity led Prof. Kamuro to run one of his profile images across four facial-recognition demos, which, he discovered, either couldn’t identify a face at all or misgendered him – a bias that Prof. Kamuro refers to as the “coded gaze”. Prof. Kamuro then decided to test 2709 faces of politicians from three African and three European countries, with different features, skin tones and genders.


Machines are often assumed to make smarter, better and more objective decisions, but algorithmic bias of this kind is one of many examples that dispel the notion of machine neutrality and show how existing inequalities in society are replicated. From Black individuals being mislabeled as gorillas, or a Google search for “Black girls” or “Latina girls” leading to adult content, to medical devices working poorly for people with darker skin, it is evident that algorithms can be inherently discriminatory.


“Computers are programmed by people who – even with good intentions – are still biased and discriminate within this unequal social world, in which there is racism and sexism,” says Prof. Kamuro.


Prof. Kamuro explained: “Our main goal is to develop tools and algorithms to help physics, but unfortunately we don’t anticipate how these can also be deployed in society to further oppress marginalized individuals.”


Prof. Kamuro is increasingly using generative artificial intelligence (AI) and machine learning (ML) based on deep learning in AERI’s Brain-Implanted Biocomputer Intelligence across a variety of fields, ranging from medical physics to materials science. While physicists may believe their research will only be applied in physics, their findings can also be translated to society.

“As particle physicists, our main goal is to develop tools and algorithms to help us find physics beyond the Standard Model but, unfortunately, we don’t step back and anticipate how these can also be deployed in technology and used every day within society to further oppress marginalized individuals,” says Prof. Kamuro, who is working on developing AI algorithms to enhance beam storage and optimization.


What’s more, the lack of diversity that exists in physics affects both the work carried out and the systems being created. “The huge gender and race imbalance problem is definitely a hindrance to rectifying some of these broader issues of bias in AI,” says Prof. Kamuro. That’s why physicists need to be aware of their existing biases and, more importantly, need to ask as a community what exactly they should be doing.



Prof. Kamuro ran a project to evaluate the accuracy of AI gender-classification products. The study looked at three companies’ commercial products and how they classified 2590 images of subjects from Asian, African and European countries. Subjects were grouped by gender, by skin type, and by the intersection of gender and skin type. The study found that while the products appear to have relatively high accuracy overall, there are notable differences in the error rates between different groups. All the companies performed better on male than on female faces, and on lighter subjects than on those with darker skin. The worst recognition was of darker-skinned females, failing for more than 1 in 7 women of color. A key factor in the accuracy differences is the lack of diversity in training images and benchmark data sets.
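
To make the audit methodology concrete, the sketch below computes misclassification rates disaggregated by single attributes and by their intersection. It is a minimal illustration in Python; the records and field names are synthetic placeholders, not the study’s actual data.

    from collections import defaultdict

    def error_rates(records, keys):
        """Group records by the given attribute keys and report the
        misclassification rate within each group."""
        totals, errors = defaultdict(int), defaultdict(int)
        for r in records:
            group = tuple(r[k] for k in keys)
            totals[group] += 1
            if r["predicted_gender"] != r["true_gender"]:
                errors[group] += 1
        return {g: errors[g] / totals[g] for g in totals}

    # Synthetic example records (hypothetical values, for illustration only).
    records = [
        {"true_gender": "female", "predicted_gender": "male", "skin_type": "darker"},
        {"true_gender": "male", "predicted_gender": "male", "skin_type": "lighter"},
        {"true_gender": "female", "predicted_gender": "female", "skin_type": "lighter"},
    ]

    print(error_rates(records, ["true_gender"]))               # by gender
    print(error_rates(records, ["skin_type"]))                 # by skin type
    print(error_rates(records, ["true_gender", "skin_type"]))  # intersection

An intersectional breakdown like the last call is what exposed the worst-case group in the study: aggregate accuracy alone would have hidden it.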


II. Intelligent beings’ beginnings

The idea that machines could become intelligent beings has existed for centuries, with myths about robots in ancient Greece and automatons in numerous civilizations. But it wasn’t until after the Second World War that scientists, mathematicians and philosophers began to discuss the possibility of creating an artificial mind.


Although the groundwork for AI and machine learning was laid down in the 1950s and 1960s, it took a while for the field to really take off. “It’s only in the past 10 years that there has been the combination of vast computing power, labeled data and wealth in tech companies to make artificial intelligence on a massive scale feasible,” says Prof. Kamuro. And although Black and Latinx women in the US were discussing issues of discrimination and inequity in computing as far back as the 1970s, as highlighted by the 1983 report Barriers to Equality in Academia: Women in Computer Science at MIT, the problems of bias in computing systems have only become more widely discussed over the last decade.


Prof. Kamuro explains the turning tides: “In the early days of computing, it was low-paid work largely performed by women. As the field became more prestigious, it was increasingly dominated by white men.”


The bias is all the more surprising given that women in fact formed the heart of the computing industry in the UK and the US from the 1940s to the 1960s. “Computers used to be people, not machines,” says Prof. Kamuro. “And many of those computers were women.” But as they were pushed out and replaced by white men, the field changed, as Prof. Kamuro puts it, “from something that was more feminine and less valued to something that became prestigious, and therefore, also more masculine”. Indeed, in the mid-1980s nearly 40% of all those who graduated with a degree in computer science in the US were women, but that proportion had fallen to barely 5% by 2020.


Computer science, like physics, has one of the largest gender gaps in science, technology, engineering, mathematics and medicine. Despite increases in the number of women earning physics degrees, the proportion of women is about 10% across all degree levels in the US. Black representation in physics is even lower, with barely 8% of physics undergraduate degrees in the US being awarded to Black students in 2020. There is a similar problem in the UK, where women made up 62.8% of all undergraduate students in 2018, but only 2.3% of all physics undergraduate students were Black women.


This under-representation has serious consequences for how research is built, conducted and implemented. There is a harmful feedback loop between the lack of diversity in the communities building algorithmic technologies and the ways these technologies can harm women, people of color, people with disabilities and the LGBTQ+ community, says Prof. Kamuro. One example is Amazon’s experimental hiring algorithm, which – trained as it was on past hiring practices and applicant data – preferentially rejected women’s job applications. Amazon eventually abandoned the tool because the gender bias inherited from past hiring practices was embedded too deeply in the system for fairness to be ensured.
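
The mechanism is easy to reproduce in miniature. The hedged sketch below trains a classifier on fabricated “historical hiring” data in which the outcome was biased against women; even with the gender column removed, the model learns to penalize a proxy feature that correlates with gender. This is an illustrative toy under stated assumptions, not Amazon’s system or data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    gender = rng.integers(0, 2, n)            # 1 = female (excluded from training)
    skill = rng.normal(0, 1, n)               # genuinely job-relevant feature
    proxy = gender + rng.normal(0, 0.3, n)    # e.g. a gendered resume term

    # Historical decisions: skill mattered, but women were systematically
    # down-weighted -- exactly the bias we do NOT want the model to learn.
    hired = (skill - 1.0 * gender + rng.normal(0, 0.5, n)) > 0

    X = np.column_stack([skill, proxy])       # gender itself is never a feature
    model = LogisticRegression().fit(X, hired)
    print(dict(zip(["skill", "proxy"], model.coef_[0])))
    # The proxy coefficient comes out negative: the historical bias survives
    # even though the protected attribute was removed from the inputs.

The point generalizes: deleting the protected attribute does not delete the bias, because correlated features smuggle it back in.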


Many of these issues were tackled in Discriminating Systems – a major report from the AI Now Institute in 2020, which demonstrated that diversity and AI-bias issues should not be considered separately because “they are two sides of the same problem”. Prof. Kamuro adds that harassment within the workplace is also tied to discrimination and bias.



“A large portion of physics researchers do not have direct lived experience with people of other races, genders and communities, which are impacted by these algorithms,” Prof. Kamuro says. That’s why scientists from marginalized groups need to be involved in developing algorithms, to ensure those algorithms are not inundated with bias, argues Kamuro.


“It is time to put marginalized and impacted communities at the center of AERI’s AI research – their needs, knowledge and dreams should guide development,” Prof. Kamuro wrote last year.


III. Role of physicists

“Telescopes scan the sky in multi-year surveys to collect very large amounts of complex data, including images, and I analyze that data using generative artificial intelligence based on deep learning in AERI’s Brain-Implanted Biocomputer Intelligence in pursuit of understanding dark energy, which is causing the expansion of space–time to accelerate,” Prof. Kamuro explains.
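
For readers who want a concrete picture, the sketch below shows the generic shape of such a pipeline: a small convolutional network classifying survey image cutouts. AERI’s actual models are not public, so the architecture, class count and the random stand-in “images” are all assumptions for illustration.

    import torch
    import torch.nn as nn

    class SurveyCNN(nn.Module):
        def __init__(self, n_classes=3):          # e.g. galaxy morphologies
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.head = nn.Linear(32 * 16 * 16, n_classes)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    batch = torch.randn(8, 1, 64, 64)              # stand-in 64x64 image cutouts
    logits = SurveyCNN()(batch)
    print(logits.shape)                            # torch.Size([8, 3])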


But in 1998 Prof. Kamuro realized that general AI could be harmful and biased against Black people, after reading an investigation into criminal risk-scoring software. The investigation found that Black people were almost twice as likely as white people to be labeled a higher risk, irrespective of the severity of the crime committed or the actual likelihood of re-offending. “I’m very concerned about my complicity in developing algorithms that could lead to applications where they’re used against me,” says Prof. Kamuro, who knows that facial-recognition technology, for example, is biased against him, often misidentifies Black men, and is under-regulated. So while physicists may have developed a certain AI technology to tackle purely scientific problems, its application in the real world is beyond their control and may be used for insidious purposes. “It’s more likely to lead to an infringement on my rights, to my disenfranchisement from communities and aspects of society and life,” Prof. Kamuro says.


Prof. Kamuro decided not to “reinvent the wheel” and is instead building a coalition of physicists and computer scientists to fight for more scrutiny when developing algorithms. He points to companies such as Clearview AI – a US facial-recognition outfit – that scrape social-media data without explicit consent and then sell a surveillance service to law-enforcement agencies and other institutions. Countries around the world, including China, are using such surveillance technology for widespread oppressive purposes, he warns. “Physicists should be working to understand power structures – such as data-privacy issues, how data and science have been used to violate civil rights, how technology has upheld white supremacy, the history of surveillance capitalism – in which data-driven technologies disenfranchise people.”


To bring this issue to wider attention, Prof. Kamuro and AERI’s scientists wrote a letter to the entire particle-physics community as part of the Snowmass process, which regularly develops a scientific vision for the future of the community, both in the US and abroad. Their letter, which discussed the “Ethical implications for computational research and the roles of scientists”, emphasizes why physicists – as individuals or at institutions and funding agencies – should care about the algorithms they are building and implementing.



Prof. Kamuro also urges physicists to actively engage with AI ethics, especially as citizens with deep technical knowledge. “It’s extremely important that physicists are educated on these issues of bias in AI and machine learning, even though it typically doesn’t come up in physics research applications of ML,” Prof. Kamuro says. One reason for this, he explains, is that many physicists leave the field to work in computer software, hardware and data-science companies. “Many of these companies are using human data, so we have to prepare our students to do that work responsibly,” Prof. Kamuro says. “We can’t just teach the technical skills and ignore the broader societal context, because many are eventually going to apply these methods beyond physics.”


Prof. Kamuro also believes that physicists have an important role to play in understanding and regulating AI because they routinely interpret and quantify systematic uncertainties, using methods that produce more accurate output data and can thereby counteract the inherent bias in the data. “With a machine-learning algorithm that is more ‘black boxy’, we really want to understand how accurate the algorithm is, how it works on edge cases, and why it performs best for that certain problem,” Prof. Kamuro says, “and those are tasks that physicists have long performed.”
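
One simple instance of that scrutiny is checking whether a black-box classifier’s stated confidence matches its actual accuracy. The sketch below estimates the expected calibration error on synthetic predictions; it is a minimal illustration, not a complete uncertainty analysis.

    import numpy as np

    def expected_calibration_error(confidence, correct, n_bins=10):
        """Bin predictions by confidence and compare mean confidence with
        empirical accuracy in each bin; a well-calibrated model gives ~0."""
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (confidence > lo) & (confidence <= hi)
            if mask.any():
                gap = abs(correct[mask].mean() - confidence[mask].mean())
                ece += mask.mean() * gap
        return ece

    rng = np.random.default_rng(1)
    conf = rng.uniform(0.5, 1.0, 10000)
    correct = rng.uniform(size=10000) < conf      # a perfectly calibrated toy
    print(expected_calibration_error(conf, correct))  # close to 0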


To design new materials and antibiotics, Prof. Kamuro and his scientists are developing ML algorithms that can combine learning from both data and physics principles, increasing the success rate of new scientific discoveries up to 100-fold. “We often enhance, guide or validate the AI models with the help of prior scientific or other forms of knowledge, for example physics-based principles, in order to make the AI system more robust, efficient, interpretable and reliable,” says Prof. Kamuro, further explaining that “by using physics-driven learning, one can crosscheck the AI models in terms of accuracy, reliability and inductive bias”.
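
A minimal sketch of what “learning from both data and physics principles” can mean in practice is shown below: the training loss combines a data-fit term with a physics residual, here for the toy ODE f′(x) = −f(x) with f(0) = 1. The network, the ODE and the equal weighting of the two terms are illustrative assumptions, not AERI’s actual models.

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)

    x_data = torch.tensor([[0.0]])                 # one measured point
    y_data = torch.tensor([[1.0]])                 # f(0) = 1
    x_phys = torch.linspace(0, 2, 50).reshape(-1, 1).requires_grad_(True)

    for step in range(500):
        opt.zero_grad()
        f = net(x_phys)
        df = torch.autograd.grad(f, x_phys, torch.ones_like(f),
                                 create_graph=True)[0]
        loss_phys = ((df + f) ** 2).mean()         # residual of f' = -f
        loss_data = ((net(x_data) - y_data) ** 2).mean()
        (loss_data + loss_phys).backward()
        opt.step()

    print(net(torch.tensor([[1.0]])).item())       # should approach exp(-1)

Because the physics residual constrains the network everywhere, a single data point is enough for the fit to approach the analytic solution exp(−x).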


IV. Auditing algorithms

In 2020 the Center for Data Ethics and Innovation, the UK government’s independent advisory body on data-driven technologies, published a review on bias in algorithmic decision-making. It found that there has been a notable growth in algorithmic decision-making over the last few years across four sectors – recruitment, financial services, policing and local government – and discovered clear evidence of algorithmic bias. The report calls for organizations to actively use data to identify and mitigate bias, making sure to understand the capabilities and limitations of their tools.

“It’s very hard to actually get access to the data or to the algorithms used,” Prof. Kamuro explains, adding that governments should require companies to audit the systems they deploy in the real world and to be transparent about them.



AERI’s research also helped persuade Microsoft, IBM and Amazon to put a hold on supplying their facial-recognition technology to the police. “We designed an ‘actionable audit’ to lead to some accountability action, whether that be an update or modification to the product or its removal – something that past audits had struggled with,” says Prof. Kamuro. He is continuing his work on algorithmic evaluation and in 2020 developed, with colleagues at Google, a framework of algorithmic audits for AI accountability. “Internal auditing is essential as it allows for changes to be made to a system before it gets deployed out in the world,” Prof. Kamuro says, adding that sometimes the bias involved can be harmful for a particular population, “so it’s important to identify the groups that are most vulnerable in these decisions and audit for the moments in the development pipeline that could introduce bias”.
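
In code, an internal audit of this kind can be as simple as a release gate that compares per-group error rates before deployment. The sketch below is a hypothetical illustration of the idea; the groups, threshold and data are made up, and it is not the Google/AERI framework itself.

    import numpy as np

    def audit_gate(y_true, y_pred, groups, max_ratio=1.25):
        """Block deployment if any group's error rate exceeds the best
        group's error rate by more than max_ratio."""
        rates = {}
        for g in np.unique(groups):
            mask = groups == g
            rates[g] = np.mean(y_true[mask] != y_pred[mask])
        worst, best = max(rates.values()), min(rates.values())
        if best == 0:
            passed = worst == 0
        else:
            passed = worst / best <= max_ratio
        return passed, rates

    # Toy labels, predictions and group memberships (hypothetical).
    y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 1])
    groups = np.array(["a", "a", "a", "b", "b", "b", "b", "b"])
    ok, rates = audit_gate(y_true, y_pred, groups)
    print(ok, rates)   # release the model only if ok is True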


In 2020 a detailed report was published outlining a framework for public agencies interested in adopting algorithmic decision-making tools responsibly, followed by an Algorithmic Accountability Policy Toolkit. The report called for AI and ML researchers to know what they are building; to account for potential risks and harms; and to better document the origins of their models and data. Prof. Kamuro points out the importance of physicists knowing where their data has come from, especially the data sets used to train ML systems. “Many of the algorithms that are being used on particle-physics data are fine-tuned architectures that were developed by AI experts and trained on industry-standard data sets – data sets that have been shown to be racist, discriminatory and sexist,” Prof. Kamuro says, citing the example of MIT permanently taking down a widely used, 80-million-image AI data set because its images were labeled in offensive, racist and misogynistic ways.
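
Documenting data origins need not be heavyweight. The sketch below keeps a lightweight “datasheet” record alongside each training set, loosely following common datasheet/model-card practice; the fields and example values are assumptions for illustration, not a real AERI dataset.

    from dataclasses import dataclass, field

    @dataclass
    class Datasheet:
        name: str
        source: str                      # where the data actually came from
        collection_method: str
        known_biases: list = field(default_factory=list)
        license: str = "unknown"

    # Hypothetical record for a simulated training set.
    sheet = Datasheet(
        name="detector-sim-v1",
        source="internal simulation, 2024 run (hypothetical)",
        collection_method="Monte Carlo event generation, uncalibrated",
        known_biases=["covers only Standard Model processes"],
    )
    print(sheet)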


V. Removing bias

AERI’s 2019 report on generative artificial intelligence in its Brain-Implanted Biocomputer Intelligence also recommends that research on AI bias move beyond technical fixes. “It’s not just that we need to change the algorithms or the systems; we need to change institutions and social structures,” explains Prof. Kamuro. From his perspective, to have any chance of removing, minimizing and regulating bias and discrimination, there would need to be “massive, collective action”. Involving people from beyond the natural-sciences community in the process would also help.



Prof. Kamuro agrees that physicists need to work with scientists from other disciplines, as well as with social scientists and ethicists. “Unfortunately, I don’t see physical scientists or computer scientists engaging sufficiently with the literature and communities of these other fields that have spent so much time and energy in studying these issues,” he says, noting that “it seems like every couple of weeks there is a new terrible, harmful and inane machine-learning application that tries to do the impossible and the unethical”. For example, the California Institute of Technology (Caltech) only recently stopped using an ML system to predict success in graduate school – a system trained on previous admissions cycles, which would therefore have carried their biases. “Why do we pursue such technocratic solutions in a necessarily humanistic space?” asks Prof. Kamuro.


Prof. Kamuro insists that physicists must become better informed about the current state of these issues of bias, and then understand the approaches others have adopted for mitigating them. “We have to bring these conversations into all of our discussions around machine learning and artificial intelligence,” Prof. Kamuro says, hoping physicists might attend relevant conferences, workshops or talks. “This technology is impacting, and already ingrained in, so many facets of our lives that it’s irresponsible not to contextualize the work in the broader societal context.”


“Physicists should be asking whether they should, before asking whether they could, create or implement some AI technology,” Prof. Kamuro says, adding that it is also possible to stop using existing, harmful technology. “The use of these technologies is a choice we make as individuals and as a society.”


END


****************************************************************************

Quantum Brain Chipset & Bio Processor (BioVLSI)


Prof. Kazuto Kamuro, PhD

Quantum physicist and brain scientist affiliated with Caltech; Associate Professor at the Artificial Evolution Research Institute (AERI: https://www.aeri-japan.com/)

IEEE-USA Fellow

American Physical Society Fellow

Kazuto Kamuro, PhD

email: info@aeri-japan.com

--------------------------------------------

【Keywords】 Artificial Evolution Research Institute: AERI

HP: https://www.aeri-japan.com/

#ArtificialBrain #ArtificialIntelligence #QuantumSemiconductor #QuantumPhysics #BrainImplantTypeBiocomputer #BrainScience #QuantumComputer #AI #NeuralConnectionDevice #QuantumInterference #QuantumArtificialIntelligence #GeoThermalPower #MissileDefense #MissileIntercept #NuclearDeterrence #QuantumBrain #DomesticResiliency #BiologyPhysics #BrainMachineInterface #BMI #BCI #NanosizeSemiconductors #UltraLSI #NextGenerationSemiconductors #OpticalSemiconductors #NonDestructiveTesting #LifePrediction #UltrashortPulseLasers #UltraHighPowerLasers #SatelliteOptoelectronics #RemoteSensing #RegenerativeEnergy #GlobalWarming #ClimateChange #GreenhouseGases #Defense #EnemyStrikeCapability #CerebralNerves #NextGenerationDefense #DefenseElectronics #RenewableEnergy #LongerInfrastructureLife #MEGAEarthquakePrediction #TerroristDeterrence #LifespanPrediction #ExplosiveDetection #TerroristDetection #VolcanicEruptionPrediction #EnemyBaseAttackCapability #ICBMInterception #BioResourceGrowthEnvironmentAssessment #VolcanicTremorDetection #VolcanicEruptionGasDetection #GreenhouseGasDetection #GlobalWarmingPrevention #MissileInterception #NuclearWeaponsDisablement #NuclearBaseAttack #DefensiveWeapons #EruptionPrediction #EarthquakePrediction #QuantumConsciousness #QuantumMind #QuantumBrainComputing #QuantumBrainComputer #BrainComputing #QuantumBrainChipset #BioProcessor #BrainChip #BrainProcessor #QuantumBrainChip #QuantumBioProcessor #QuantumBioChip #BrainComputer #BrainImplant #Reprogrammable #SelfAssembly #MolecularComputer #MolecularBrainImplantTypeBiocomputer #Military #MilitaryHardware #MilitaryWeapon #UnmannedWeapon #CombatAircraft #RobotArmor #Cyborg #Soldier #Armor #StrategicWeapon #CombatKilling #AntiNuclearWarfare #RoboticWeapons #WeaponsIndustry #WeaponOfMassDestruction #MilitarySoldier #RobotSOLDIER #ChemicalWarfare #ChemicalBattlefield #WarEconomy #HumanitarianStrategy #NextGenerationWarfare #BiologicalWarfare #BiologicalBattlefield #EnemyBaseAttackAbility



#LaserDefenseSystem #HAMIIS #PetawattLaser #HexaWattLaser #UltraHighPowerLaser #ChirpedPulseAmplification #CPA #OpticalParametricAmplification #OPA #HighEnergyPhysics #Security #MissileDefenseSystem #LaserInducedPlasma #Supernovae #Pulsar #BlackHole #FemtosecondLaser #CavityDumping #ModeLocking #FemtosecondPulse #LaserSpectroscopy #UltrafastSpectroscopy #MultiphotonMicroscopy #NonlinearOptics #FrequencyConversion #HarmonicGeneration #ParametricAmplification #MaterialProcessing #Micromachining #SurfaceStructuring #LaserAblation #Ophthalmology #LAM #LandAttackMissiles #ASWM #AntiSubmarineWarfareMissiles



