Opinion

Artificial Intelligence at a crossroads: An entanglement of progress, power and risk

AI systems have become increasingly autonomous, powerful and deeply integrated into society. Large Language Models (LLMs) can now reason across a wide range of subjects, write software code, review legal documents and even help scientists make discoveries.

Artificial Intelligence (AI) has made a dramatic leap from research labs into everyday life. What once took decades to commercialise now reaches the real world in months or even days; AI is advancing faster than almost any technology before it. Its applications range from conversational systems built into smartphones to autonomous vehicles and agents that run business operations. This momentum promises substantial benefits, but it also raises an important question: are people still fully in control of the systems they are building?
AI systems have become increasingly autonomous, powerful and deeply integrated into society over the past few years. Large Language Models (LLMs) can now reason across a wide range of subjects, write software code, review legal documents and even help scientists make discoveries. AI algorithms already assist physicians in interpreting X-rays and in triaging emergency patients who need immediate treatment. In finance, algorithmic methods assess credit risk and detect fraud in real time.
The advent of agentic AI, systems that can plan, execute and adapt tasks on their own without constant human supervision, is perhaps the most worrying development. AI agents are already being used to manage cloud infrastructure, run marketing campaigns and execute financial trades autonomously. This shift from 'assistive AI' to 'autonomous AI' represents a significant leap in how people use technology.
AI today can analyse large volumes of data far faster than humans, generate language that sounds convincingly human, produce images and videos that appear real, recreate environments and optimise the performance of complex systems. AI-generated synthetic media is a clear example: it can now depict people saying things they never said, which poses a significant threat to trust, journalism and democracy.
In manufacturing, AI-powered robots use sensor data to adjust production lines in real time. This makes processes more efficient but also leaves fewer people directly in charge. In cybersecurity, AI systems can defend against attacks and, ironically, also craft sophisticated malware. These dual-use capabilities demonstrate that AI itself is neutral; its effects depend entirely on how it is designed, deployed and governed.
Even today, developers sometimes struggle to explain why complex models produce specific results, the so-called 'black box' problem. And when systems continually learn and change, they may stop behaving as intended. Experimental trading algorithms, for example, have triggered flash crashes in financial markets within seconds, before humans could intervene. Recommendation algorithms on social media platforms have likewise amplified false information and extremist content, not intentionally, but because doing so increased the likelihood that users would engage. These instances show that misaligned objectives can lead to adverse outcomes at scale, even when there is no genuine intellect or intent behind them.
Strong protections are needed to preserve meaningful human control. First, AI systems, especially high-risk ones, need human-in-the-loop safeguards that ensure important decisions cannot be made without human consent. Second, AI models should be subject to verification and explanation, particularly in sensitive domains such as healthcare, law enforcement and finance. Companies also need to follow clear rules for testing and deployment: before being released to the public, powerful AI systems should be stress-tested for unforeseen effects, much as new drugs are. AI should not be a 'set-and-forget' technology; it needs continuous monitoring.
Governments worldwide are beginning to act. For instance, the European Union's AI Act classifies AI systems by risk and imposes stringent restrictions on high-risk uses, such as biometric surveillance. Similarly, several governments require AI developers to submit transparency reports that explain how they train, test and govern their models.
AI's explosive growth is both exhilarating and alarming. AI strategy cannot be left unregulated, and no single stakeholder should have sole decision-making power. Responsible AI is essential and must be a collaborative effort; otherwise, it will serve profit without people.

Dr Mythili Kolluru
KV Ch Madhu Sudhana Rao
The writer is an AI researcher based in the USA.