"No one greeted them." How the Chinese church and Russian disinformation campaigns hunt Ukrainians.
A newlywed couple on a bed: she in a wedding dress, he in a military uniform, both legs amputated. Three smiling girls, or three boys with amputations, sit on a bench eating ice cream. A mother stands with her son in a cemetery beside the grave of a fallen soldier.
These are popular images created by artificial intelligence. They are highly realistic and hard to distinguish from genuine photographs. Accompanied by emotionally charged messages and requests for reposts, such images go viral, collecting tens of thousands of likes and shares on Ukrainian social networks.
A recent example: on July 5, 2025, a Facebook account under the name Olga Kovalchuk published a post about a little girl whose parents had died, illustrated with an AI-generated image. Within 23 hours, the post had been shared 931 times and received 4,300 likes. (It is no longer available; it has been replaced with another post, about a biblical prophecy. Screenshots are on file with the editors.)
Since the beginning of 2025, the number of such images has increased many times over, according to the information security and OSINT specialists LIGA.net spoke with. The obvious drivers are the rapid development of artificial intelligence technologies and their falling cost for users. In 2024, according to the Stanford University AI Index, companies in the US invested $109.1 billion in AI development, compared with $9.3 billion in China and $4.5 billion in the UK. As for cost, by 2025 using models at the level of GPT-3.5 had become 280 times cheaper than in 2022.
Services have learned to generate realistic photos that look as if they were taken with a smartphone for an Instagram post. With their help, internet users are lured into cults, information and psychological operations (IPSO) are conducted, and the reach of pages on social networks is artificially inflated. How does it work?