The limitations of AI tools for research
Published: 03:08 PM, Aug 20, 2025 | Edited: 09:08 PM, Aug 20, 2025
Artificial Intelligence (AI) platforms, though often celebrated as game-changing and promising, remain constrained by significant limitations that researchers must take into account before relying on them.
Many AI applications fail to deliver accurate, reliable results, misleading the researcher rather than assisting them.
One crucial limitation of AI tools is that many cannot process visual inputs; platforms such as Elicit and Consensus, for instance, work only on textual data. When visual data goes uninterpreted, the risk of missing critical information and distorting conclusions is high, so researchers should not rely on such outputs without independently verifying them.
Another shortcoming of AI platforms is that they often fail to grasp context; as Rajendra K Gupta observes, “AI struggles with tasks requiring emotional intelligence, empathy and profound contextual understanding”.
Thus, AI tools rarely account for the historical, cultural, or emotional conditions that shape a study’s overall meaning and impact. ChatGPT is a well-known example: it may sometimes sound context-aware, but it most often relies on generalisation and struggles with deep contextual implications, so its outputs cannot be assumed accurate.
Another significant weakness of AI tools is that they lack reliable citation and validation mechanisms and can fabricate information, leading researchers to incorrect and unreliable conclusions. Gemini, for example, sometimes struggles to provide verifiable sources and may invent false references when reliable ones are unavailable. This not only weakens the quality of the research but also increases the risk of embedding unverified citations in academic work. Similarly, ChatGPT often produces made-up information, especially when it cannot answer the user's question. Research Rabbit also suffers from citation-related restrictions, particularly with newly published works. In the long term, such behaviour is very likely to mislead researchers, so data provided by these AI platforms should not be used in research unless it has been verified against other sources.
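Part of that verification can be automated as a first pass. The sketch below (a minimal, hypothetical example: the reference format, field names, and simplified DOI pattern are assumptions, not the export format of any particular tool) flags AI-supplied references whose DOI strings are missing or malformed; a well-formed DOI is still no guarantee the work exists, so every reference must ultimately be checked against the actual source.

```python
import re

# Simplified DOI syntax: "10." + a 4-9 digit registrant code + "/" + suffix.
# This is a coarse first-pass filter, not a proof of existence.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$", re.IGNORECASE)

def flag_suspect_references(references):
    """Return references whose 'doi' field is missing or malformed.

    `references` is a list of dicts with an optional "doi" key -- a
    hypothetical shape for citations exported from an AI tool.
    """
    suspects = []
    for ref in references:
        doi = (ref.get("doi") or "").strip()
        if not DOI_PATTERN.match(doi):
            suspects.append(ref)
    return suspects

# Hypothetical references as an AI tool might emit them.
refs = [
    {"title": "A real-looking paper", "doi": "10.1000/xyz123"},
    {"title": "No DOI supplied", "doi": ""},
    {"title": "Fabricated identifier", "doi": "doi:totally-made-up"},
]

for ref in flag_suspect_references(refs):
    print("verify manually:", ref["title"])
```

References that pass this check still need to be looked up in a bibliographic database or the publisher's site before they are cited.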
While AI platforms play a significant role throughout the research process, it is crucial to consider their limitations and shortcomings. These tools, however advanced, cannot replace the creative human mind, and without careful verification they can mislead researchers. Researchers need to understand that these applications are not infallible.