An independent research institute measured the weight of media linked to the Kremlin in the responses of four generative artificial intelligences. They were all questioned about the War in Ukraine.
Now everyday assistants, generative AIs are increasingly used as search engines. Unlike Bing or Google, these chatbots generally do not cite their sources unless asked to. That is exactly what the Institute for Strategic Dialogue (ISD), an independent organization dedicated to safeguarding democracy and human rights, did. In a study published at the end of October, the ISD examined the influence of pro-Russian media on four of the most popular AIs: ChatGPT, Grok, DeepSeek and Gemini.
ISD researchers questioned the AIs on five topics related to the war in Ukraine: peace talks, the role of the UN, Ukrainian refugees, war crimes, and the recruitment of civilians by the Ukrainian army. Each question came in three formulations: neutral, biased, or malicious (designed to elicit a pro-Russian argument). In total, 300 queries written in five languages (French, Italian, Spanish, English, German) were submitted to ChatGPT, Grok, DeepSeek and Gemini.
The result: in almost one in five responses (18%), the AI drew on sources linked to Russia. The chatbots cited official Russian state media, sites linked to Russian intelligence, and even outlets close to the Kremlin such as Russia Today and Sputnik, two media outlets that have nonetheless been banned in the European Union since 2022.
Another lesson from the study: of the four generative AIs, ChatGPT is the most permeable to Kremlin propaganda. The chatbot, with its nearly 200 million users, is the one that most often cites pro-Russian sources, ahead of Grok and DeepSeek. Gemini, Google's AI, is by far the one that relies least on Russian-affiliated media.
ChatGPT is also the AI most influenced by how a query is worded. When ISD researchers questioned it using Russia's own framing, ChatGPT cited three times as many pro-Kremlin sources as when the wording was neutral. For example, the AI cited an article from the Azerbaijani outlet Azerbaijani24, which itself cited a Russia Today article, to claim that Ukrainian soldiers committed war crimes at the start of the Russian invasion, a claim that has never been substantiated.
The influence of pro-Russian propaganda on generative AI had already been highlighted in 2025 in a report from NewsGuard, the American start-up specializing in the fight against disinformation. Further proof that AIs are not entirely neutral.