Technology such as the deep fake sounds exciting, but it carries deep-rooted challenges and enormous potential for harm.
Experts argue that fake news and doctored images and graphics have already ailed the world; deep fakes generated through generative adversarial networks (GANs) could start a new wave of fabricated media in circulation.
As the example of sp.a, the Belgian political party, makes evident, GANs, and deep fakes in particular, can cause widespread public outrage and cast any public figure, whether a politician or anyone else, in a bad light.
A further challenge, once deep fakes become commonplace, is the denial of genuine videos by culprits, who can contest that the figure in a video is not them and that the entire recording was forged by a GAN. When nothing is verifiably true, the dishonest thrive by declaring that what is true is fake.
Researchers from the University at Albany published a paper in June 2018 explaining how deep fakes can be identified by the lack of blinking in the synthesized faces.
A criticism of the paper, however, is that given enough time, forgers can use more advanced deep learning to circumvent such forensic detection.
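To make the blinking heuristic concrete, here is a minimal sketch in plain Python. It is not the Albany researchers' actual method: real systems estimate per-frame eye openness from facial landmarks, whereas here the scores, thresholds, and blink-rate cutoff are all illustrative assumptions.

```python
def count_blinks(openness, threshold=0.2):
    """Count blinks: distinct runs of frames where the eye-openness
    score drops below the threshold (eyes considered closed)."""
    blinks, closed = 0, False
    for score in openness:
        if score < threshold and not closed:
            blinks += 1
            closed = True
        elif score >= threshold:
            closed = False
    return blinks


def looks_synthetic(openness, fps=30, min_blinks_per_min=6):
    """Flag a clip whose blink rate falls far below human norms
    (people blink roughly 15-20 times per minute at rest).
    The cutoff of 6/min is an illustrative assumption."""
    minutes = len(openness) / (fps * 60)
    return count_blinks(openness) / minutes < min_blinks_per_min


# A 10-second "real" clip at 30 fps: eyes mostly open, three brief blinks.
real = ([0.8] * 90 + [0.1] * 3) * 3 + [0.8] * 21
# A 10-second "fake" clip: the eyes never close.
fake = [0.8] * 300
```

A real clip with three blinks in ten seconds passes the check, while a clip with no blinks at all is flagged, which is exactly the asymmetry the paper exploited.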
A newer variant, the Wasserstein GAN (W-GAN), has since been developed, with which researchers report overcoming various shortcomings such as unnatural blinking and pixelated blur at the edge of the face.
The dissemination of disinformation is a matter of grave concern for human rights. The intent behind disinformation matters as much as its impact on public thinking, politics, et cetera. The concern is compounded by the fact that the true extent of the problem has yet to be identified: little empirical research is available on the matter.
The Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights confer on every individual the right to seek, receive, and impart information of any kind. The catch is that the law does not protect only true information. A careful line must therefore be drawn against over-regulation: many laws need to be updated for the digital space without curbing the right to expression.
Scientists contend that unless proper measures are taken and technology to detect deep fakes is developed, it is only a matter of time before deep fakes exacerbate a networked information environment that already suffers from truth decay.
The author expresses his gratitude and acknowledges the inputs of Harsh Bajpai.