The Framework Act on the Development of Artificial Intelligence and the Establishment of Trust establishes a regulatory foundation for a safe AI society. It defines categories of systems, including high-impact AI, which affects human safety or fundamental rights, and generative AI, which imitates the structure and characteristics of input data to create diverse outputs. All AI operators must conduct a preliminary review to determine whether their products qualify as high-impact, and they are encouraged to assess those systems' influence on vulnerable groups.
Transparency is a core pillar of the Act: operators must notify users when a service uses high-impact AI or generates content that could be mistaken for reality. Failure to provide such notice can result in administrative fines of up to 30 million won. High-impact AI operators must also implement risk management plans and user protection measures, and they have a duty to retain compliance records for five years. Notably, the Act provides a streamlined compliance path by recognizing existing certifications, such as those under the MVMA, as sufficient proof that these safety and management requirements have been met.