What is the news?
Deepfake videos made with artificial intelligence (AI) are spreading rapidly across the internet. Some are created for fun, while others are used for harmful purposes. In response to this growing problem, YouTube has taken a new step: anyone can now file a complaint about a video that uses their AI-generated face or voice without their permission.
YouTube launches a new complaint system
YouTube has introduced a dedicated process for reporting deepfake videos. Under this system, the affected person must come forward and file the complaint themselves, which sets it apart from YouTube's normal reporting system. The company says that as AI technology improves, the platform must take firm steps to prevent misuse, and this arrangement is an effort in that direction.
How to report AI videos?
If a user finds a video on YouTube that uses an AI-generated image or voice, they can report it through the platform's privacy complaint process, which includes an option to report 'altered or synthetic content'. This applies to videos in which someone's image, voice or identity has been altered using AI tools.
What happens after complaint?
When a deepfake video is reported, YouTube takes action on it. The creator is given 48 hours to remove the video or strip out the identifying information; merely making the video private is not considered sufficient. YouTube generally prioritizes removing the video. This step is seen as a necessary and positive move against the misuse of deepfakes.