
The "Fundamental Law on the Development of Artificial Intelligence and the Formation of a Foundation for Trust" has come into effect in South Korea. This law creates a legal framework to combat misinformation and other threats arising from the development of AI, reports Kazinform, citing information from Yonhap.
The country's Ministry of Science and ICT announced the adoption of comprehensive state-level guidelines for the use of AI. The law's main goal is to make companies and AI developers more accountable for combating deepfakes and other misinformation that may be created with AI. It also grants the government the authority to conduct investigations and impose fines for violations.
The new law introduces the term "high-risk artificial intelligence," covering AI models used to produce content that can significantly affect people's lives and safety, including in employment, credit assessment, and medical consultations.
Organizations deploying such high-risk AI models must inform users that their services are based on AI and are responsible for the safety of those services. All AI-generated content must carry a watermark indicating its artificial origin.
"Labeling AI-generated content is an important precautionary measure to prevent the negative consequences of technology misuse, such as the creation of deepfakes," commented a ministry representative.
Companies providing AI services in South Korea must appoint a local representative if they meet at least one of the following criteria: annual global revenue of 1 trillion won (approximately $681 million), domestic sales of 10 billion won, or at least 1 million daily users in the country. OpenAI and Google are among those that already meet these thresholds.