Artificial intelligence

Artificial intelligence (AI) is the ability of a computer to perform tasks commonly associated with intelligent beings, such as reasoning, discovering meaning, generalizing, and learning from past experience. AI is now used in applications such as medical diagnosis, computer search engines, and facial, voice, and handwriting recognition. Supporters claim AI has the potential to transform every sector of the economy and society by powering the information economy, fostering better-informed decisions, and helping unlock answers to questions that are currently unanswerable. However, many people are worried about the future of AI, including those involved in creating that very future. Surveys show that more than 57% of respondents rate the societal risks of AI as high, compared with 25% who say the benefits of AI are high. One recent line of criticism has focused on ChatGPT, which has been widely attacked for being inaccurate, biased, and almost human: “the bot can become aggressive, condescending, threatening, committed to political goals, clingy, creepy, and a liar.”
By analyzing patterns in people's online activities and social media interactions, AI algorithms can predict what a person is likely to do next. Some of the biggest risks today include threats to consumer privacy, biased programming, danger to humans, job displacement, and unclear legal regulation. Some estimates suggest 300 million full-time jobs could be affected by AI automation globally by 2040. With the acceptance of autonomous robots and generative AI, artificial intelligence will eventually transform virtually every existing industry. Cyberattacks that employ AI techniques have also become more prevalent: cybercriminals use AI-enhanced tools such as deepfake videos, chatbots, and fake audio to deceive and manipulate individuals or systems.
Critics say top AI labs acknowledge that extensive research has shown AI systems with human-competitive intelligence can pose profound risks to society and humanity. They say advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. They also worry that governments around the world will use AI to develop weapons before anything else, and claim AI could one day become self-aware, with feelings and emotions that mimic those of humans. Some estimate that AI’s point of singularity, the hypothetical moment when machine cognitive capacity equals that of humans, will occur as soon as 2030; after that point, machine intelligence would exceed that of humans.
Prominent AI pioneer Geoffrey Hinton has been particularly vocal about the need for advanced AI systems to be programmed not to harm humans. Hinton is widely known as the "Godfather of AI" for his foundational work on neural networks. Since leaving his position at Google in 2023, he has publicly spoken out about the potential risks of AI. Hinton and other experts have raised concerns that AI could be used maliciously to create harmful tools like lethal autonomous weapons or biological agents. Hinton has stressed the need for urgent research into AI safety to understand how to control systems that surpass human intelligence, noting that profit motives may not sufficiently drive large companies to prioritize safety. He has suggested programming advanced AI with "maternal instincts" to ensure the protection of humans and has supported government regulation, such as a California AI safety bill, arguing it is necessary to encourage tech companies to invest more in safety research. These concerns relate to the broader "AI alignment problem," which is the challenge of ensuring that AI goals align with human values.
Pending Resolution: H.R.4223 - National AI Commission Act
Sponsor: Rep. Ted Lieu (CA)
Status: House Committee on Science, Space, and Technology
Chair: Rep. Brian Babin (TX)
Poll Opening Date: November 3, 2025
Poll Closing Date: November 9, 2025