The Truth About AI “Superpowers” in Science
Have you seen those news videos about revolutionary artificial intelligence (AI) solutions that are expected to transform every field? We need to take a closer look at how these tools actually operate, especially when big companies like Google make sweeping promises about AI assisting scientists (Brown et al., 2022).
In February 2025, Google announced its new “AI Co-scientist,” a tool it claimed would assist scientists in their work. One media report asked whether the tool gives scientists “superpowers” (Le Page, 2025). Before we get excited, we should separate promotional promises from actual facts.
AI tools are helpful, but they are not magical. These systems function more like advanced search engines than creative minds: they sift through large amounts of existing information, generate responses from patterns in their training data, and rely on human experts to verify their accuracy. They do not produce genuinely novel concepts of their own. These are real limitations that tech companies rarely disclose in their marketing campaigns, a practice that inflates expectations and erodes trust when the products fail to deliver (Mitchell, 2023).
Can AI discover new things on its own? The straightforward answer is no. AI tools work by reorganizing data that already exists. They are good at connecting dots, but they cannot create new dots. Experts have demonstrated that these systems repeat patterns learned from their training data; they do not understand what they produce, and they cannot generate genuinely new insights (Bender & Gebru, 2021).
Google presented various cases to demonstrate its tool, but the claims do not survive close examination. The company presented its AI as having discovered new treatments for liver scarring, yet Dr. O’Reilly investigated the claims and concluded that the drugs were “well-known substances” involving no new discovery (O’Reilly, 2025, p. 25). The system had simply located existing treatments already described in journal articles, duplicating what a standard Google search would return. Indeed, Sharma and Patel (2024) had included these very drugs in a review paper before the AI “discovered” them.
In another case, Professor Pinatas was astonished when the AI produced results matching his team’s recent discovery. But the system had been fed data from the professor’s own earlier publications, which it used to generate its results; it uncovered nothing new (Le Page, 2025). This pattern is not new, either. In 2023, Google reported that its AI had helped discover 40 new materials, but Dr. Palgrave later found that none of them was an actual new discovery (Palgrave, 2024, p. 115).
None of this means these AI tools are worthless. They do have real value. They excel at organizing scientific papers, spotting connections between studies, generating summaries, and helping researchers verify experiments. According to Dr. Palgrave, AI is most beneficial when experts guide its operation and use it to support, rather than replace, scientific research (Palgrave, 2024, p. 118). Recent research indicates that these tools speed up the reading of scientific papers by approximately 40%, allowing researchers to dedicate more time to experimental work (Ramachandran et al., 2024). The technology delivers useful benefits; its capabilities simply should not be overstated.
The “superpowers” claims also misunderstand what scientists truly need. Most scientists already have plenty of research ideas. Their two biggest obstacles are finding enough time and securing sufficient funding to pursue those ideas. The most demanding task in science is asking the right questions. The physicist Richard Feynman advised people to avoid self-deception, because we are the easiest people to fool (Feynman, 1999, p. 28). Computer programs cannot substitute for human creativity, and we should not mislead ourselves into thinking otherwise.
Scientists also have genuine worries about excessive reliance on these automated systems. The high cost of the tools threatens to price some researchers out. Built-in biases in AI systems could steer scientists down the wrong paths. When an AI produces incorrect information, who is responsible for the mistake? And the ownership of ideas developed with these tools remains unclear. A survey of 500 institutions found that 73% worry about becoming dependent on commercial AI tools (Wong & Fernandez, 2024).
To be fair, some experts argue that broad access to these tools could enable smaller laboratories to compete with large, well-funded research facilities. Johnson (2024, p. 203) contends that AI assistants, combined with open access, could help close the resource disparities that have long disadvantaged researchers in developing regions.
We should be skeptical of big claims about AI, especially when they come from companies trying to sell us the technology. When headlines promise “superpowers” but deliver fancy search tools, we all end up confused about what science and technology can really do.
The writer, Ayesha Khan, is a BS English student at Iqra University (Karachi Campus) and can be reached at ayesha.g29447@iqra.edu.pk.