News

  • 2024.10.10

    Report on the SAFE Project Plenary Meeting

    On October 10, experts from GPAI's SAFE (Safety and Assurance of Generative AI) Project convened online for the SAFE Plenary Meeting. The meeting aimed to support the development of practical approaches to ensuring the safety of generative AI as it enters the commercialization phase.

    At the beginning of the meeting, Dr. HARAYAMA Yuko, Secretary General of the GPAI Tokyo Experts Support Center (Tokyo ESC), which identifies the SAFE Project as a core initiative, delivered opening remarks. Co-chairs Inma Martinez and Cyrus Hodes then provided an overview of the project’s activities, which are divided into three tracks:
    1) Technical safety of large language models (LLMs),
    2) Data governance of LLMs, and
    3) AGI (Artificial General Intelligence).
    They emphasized the importance of discussions on AI safety and the necessity of comprehensively mapping AI safety solutions.

    The participating experts raised concerns about the gap between the rapid pace of developments in AI safety and the knowledge of policymakers, as well as the need for communication between GPAI and the AISIs (AI Safety Institutes). In response, Secretary General Harayama explained that collaboration with the AISIs has already begun, with initial efforts focused on coordination among the AISIs, to be followed by partnerships with external organizations such as GPAI.

    The Secretariat of the GPAI Tokyo Experts Support Center also proposed methods for organizing future plenary meetings and track-specific working group sessions to advance the SAFE Project. These proposals were approved.

    Moving forward, the Tokyo ESC plans to present progress on the SAFE Project at the GPAI Serbia Summit, which will be held in Belgrade on December 3-4.

    Participants

    Presentation by Co-chairs