Saturday, September 21, 2024

OpenAI forms Safety Committee as it tests advanced AI model

OpenAI has announced the creation of a new Safety and Security Committee to oversee the company’s safety and security measures. The committee, led by select board members, will evaluate and enhance the processes and safeguards of the San Francisco-based AI firm. Its formation coincides with OpenAI’s testing of its next-generation artificial intelligence (AI) model. The company also recently released its Model Spec document, which outlines its approach to building responsible and ethical AI models.

In a blog post, OpenAI detailed its new committee, which includes directors Bret Taylor (Chair), Adam D’Angelo, Nicole Seligman, and CEO Sam Altman. The committee will make recommendations to the full Board on critical safety and security decisions for OpenAI’s projects and operations.

In addition to the directors, the committee will include OpenAI’s Head of Preparedness Aleksander Madry, Head of Safety Systems John Schulman, Head of Security Matt Knight, and Chief Scientist Jakub Pachocki. Over the next 90 days, the committee will evaluate and develop the firm’s safety processes and safeguards, then present its findings and recommendations to the full Board, which will review them before OpenAI publicly shares the adopted recommendations.

OpenAI’s recent initiatives include testing a new, advanced AI model, referred to as the ‘frontier’ AI model. This large language model (LLM) is expected to bring the company closer to achieving Artificial General Intelligence (AGI). AGI is a form of AI that can understand, learn, and apply knowledge across a wide range of real-world tasks with human-like intelligence. Some definitions also suggest that AGI can function autonomously and develop a degree of self-awareness.

Source: Social Samosa
