Opinion

India speaks over 100 languages. Microsoft wants AI to bridge its linguistic gaps

The government is also building language datasets through Bhashini, an AI-led language translation system that is producing open-source datasets in local languages for developing AI tools

Depending on how you count, India has at least 120 languages, and another 1,300 'mother tongues,' an Indian term that refers to local dialects. The country’s government recognizes 22 languages but primarily operates in just two: Hindi, mostly spoken in India’s north, and English. That excludes tens of thousands of Indians who speak neither.

For a few weeks this year, villagers in Karnataka read out dozens of sentences in their native Kannada language into an app as part of a project to build the country's first AI-based chatbot for tuberculosis.

There are more than 40 million native Kannada speakers in India. It is one of the country's 22 official languages, and among the more than 121 languages spoken by 10,000 people or more in the world's most populous nation.

But few of these languages are covered by natural language processing (NLP), the branch of artificial intelligence that enables computers to understand text and spoken words.

Hundreds of millions of Indians are thus excluded from useful information and many economic opportunities.

'For AI tools to work for everyone, they need to also cater to people who don't speak English or French or Spanish,' said Kalika Bali, principal researcher at Microsoft Research India.

'But if we had to collect as much data in Indian languages as went into a large language model like GPT, we'd be waiting another 10 years. So what we can do is create layers on top of generative AI models such as ChatGPT or Llama,' Bali told the Thomson Reuters Foundation.

The villagers in Karnataka are among thousands of speakers of different Indian languages generating speech data for tech firm Karya, which is building datasets for firms such as Microsoft and Google to use in AI models for education, healthcare and other services.

The government, which aims to deliver more services digitally, is also building language datasets through Bhashini, an AI-led language translation system that is producing open-source datasets in local languages for developing AI tools.

The platform includes a crowdsourcing initiative for people to contribute sentences in various languages, validate audio or text transcribed by others, translate texts and label images.

Tens of thousands of Indians have contributed to Bhashini.

'The government is pushing very strongly to create datasets to train large language models in Indian languages, and these are already in use in translation tools for education, tourism and in the courts,' said Pushpak Bhattacharyya, head of the Computation for Indian Language Technology Lab in Mumbai.

'But there are many challenges: Indian languages mainly have an oral tradition, electronic records are not plentiful, and there is a lot of code mixing. Also, to collect data in less common languages is hard, and requires a special effort.'

Of the more than 7,000 living languages in the world, fewer than 100 are covered by major NLP systems, with English the most advanced.

ChatGPT - whose launch last year triggered a wave of interest in generative AI - is trained primarily on English. Google's Bard is limited to English, and of the nine languages that Amazon's Alexa can respond to, only three are non-European: Arabic, Hindi and Japanese.

Governments and startups are trying to bridge this gap.

Grassroots organisation Masakhane aims to strengthen NLP research in African languages, while in the United Arab Emirates, a new large language model called Jais can power generative AI applications in Arabic.

For a country like India, crowdsourcing is an effective way to collect speech and language data, said Bali, who was named among the 100 most influential people in AI by Time magazine in September.

'Crowdsourcing also helps to capture linguistic, cultural and socio-economic nuances,' said Bali.

'But there has to be awareness of gender, ethnic and socio-economic bias, and it has to be done ethically, by educating the workers, paying them, and making a specific effort to collect smaller languages,' she said. 'Otherwise it doesn't scale.'

With the rapid growth of AI, there is demand for languages 'we haven't even heard of', including from academics looking to preserve them, said Karya co-founder Safiya Husain.

Karya works with non-profit organisations to identify workers who are below the poverty line, or with an annual income of less than $325, and pays them about $5 an hour to generate data - well above the minimum wage in India.

Workers own a part of the data they generate so they can earn royalties, and there is potential to build AI products for the community with that data, in areas such as healthcare and farming, Husain said.

'We see huge potential for adding economic value with speech data - an hour of Odia speech data used to cost about $3-$4, now it's $40,' she said, referring to the language of eastern Odisha state.

Fewer than 11 per cent of India's 1.4 billion people speak English. Much of the population is not comfortable reading and writing, so several AI models focus on speech and speech recognition.

Google-funded Project Vaani, or 'voice', is collecting speech data from about 1 million Indians and open-sourcing it for use in automatic speech recognition and speech-to-speech translation.

Bengaluru-based EkStep Foundation's AI-based translation tools are used at the supreme courts of India and Bangladesh, while the government-backed AI4Bharat centre has launched Jugalbandi, an AI-based chatbot that can answer questions on welfare schemes in several Indian languages. - The New York Times

Nicholas Gordon

The writer is a Hong Kong-based associate editor