On deepfakes and disinformation

Cornelia Shipindo
Deepfakes are digital fabrications that make individuals appear to say things they never said, with seamlessly synthesised voices and facial features.
These creations are intentionally designed to mislead and deceive, posing a targeted threat to the truth. Deepfakes use Artificial Intelligence (AI) to produce convincing audio, video, and image forgeries, in contrast to cheapfakes, which involve manual or simple digital manipulation of existing recordings.
Through the use of AI, media can be created or altered automatically, allowing faces, speech, and gestures to be changed, and even entire speeches to be invented and attributed to individuals. The realism of these creations makes it increasingly difficult to judge the authenticity and reliability of anything we see or hear.
Deepfakes pose a serious threat to organisations, as their lifelike quality can lead to damaging outcomes. Audio and visual deepfakes have been used for fraud, financial theft, personal harassment, and the dissemination of misinformation.
Various methods have been developed to identify and combat deepfakes, including non-algorithmic techniques, automated detection tools, and cryptographic or blockchain-based solutions. Even so, financial institutions face a major challenge: the personal engagement channels traditionally used to prevent fraud, such as phone and video calls, are now being exploited by criminals to commit fraud through deepfakes.
The rise in deepfake-assisted financial crime includes impersonating account holders to make unauthorised withdrawals and manipulating transactions through voice scams, reflecting how rapidly fraudsters’ tactics are evolving.
Additionally, bad actors exploit the fast-paced, widespread nature of social media platforms, using deepfakes to undermine public trust and interfere with democratic processes. This is often facilitated by automated bots: software agents created to imitate real users on social media.
Micro-targeting
Micro-targeting strategically uses individuals’ online information to personalise misinformation and distribute it to specific, often small, audience groups. These tactics have increased the scope, efficiency and accuracy of disinformation campaigns, which threaten elections and can negatively influence public perception.
Individuals therefore need to be aware of the signs of deepfakes, which may be used for malicious purposes such as harassment, intimidation and the spreading of misinformation.
To avoid falling victim, individuals should scrutinise emails and other communications carefully, question suspicious audio and video content, report suspicious activity, and verify information by cross-referencing it with reputable sources.
On a national and regulatory level, strategies to combat deepfakes include developing laws to prevent the harmful use of AI and deepfake technology, establishing frameworks to minimise the risks these technologies pose, and fostering collaboration among governments, tech companies, and civil society. Multi-stakeholder partnerships should focus on combating synthetic disinformation while balancing that fight with the protection of freedom of expression.
* Cornelia Shipindo is the Manager of Cyber Security at the Communications Regulatory Authority of Namibia (CRAN)
** Opinion pieces and letters by the public do not necessarily reflect the opinion of the editorial team. The editors reserve the right to abridge original texts. All newspapers of Network Media Hub adhere to the Code of Ethics for Namibian Media, a code established jointly with the Media Ombudsman.