15 January 2024 | 09:51 Asia/Singapore

We have weapons against AI-powered deepfakes but fighting truth decay won’t be easy

The first few weeks of 2024 have seen a rise in the misuse of deepfake technology to create false images and videos of public figures such as celebrities and politicians. Yet even with tools available to crack down on these AI-powered abuses, deepfake technologies are becoming increasingly sophisticated, making AI-generated content ever harder to identify.

Professor Simon Chesterman, who is the NUS Vice Provost (Education Innovation), Dean of NUS College and Senior Director of AI Governance at AI Singapore, highlights the challenges governments worldwide face in drawing up regulations that address the harms associated with generative AI without limiting its potential.

He shares how, at NUS, he is working with a team on a broader approach called “digital information resilience”, which emphasises consumer behaviour, examining why people consume fake news and how it affects them, alongside the important role of technology. Prof Chesterman adds that even with all the tools and regulations in place, it remains vital for us as consumers of technology to be discerning and to take responsibility for what we consume and share on the Internet.

Read more here.