Launching a platform to promote safe artificial intelligence
2023-07-28

Major AI makers launched a group dedicated to the "safe and responsible development" of the field.

Founding members of the Frontier Model Forum include Anthropic, Google, Microsoft and OpenAI.

The forum aims to promote and develop a standard for evaluating the safety of artificial intelligence, while helping governments, companies, policymakers and the public understand the technology's risks, limits and potential, according to a statement released by Microsoft on Wednesday.

The forum also seeks to develop best practices to address "society's biggest challenges," including "climate change mitigation and adaptation, early detection and prevention of cancer, and combating cyber threats."

Any organization can join the forum as long as it is involved in developing "frontier models" aimed at making breakthroughs in machine learning technology and is committed to keeping those projects safe. The forum plans to form working groups and partnerships with governments, NGOs and academics.

"The companies that make AI technology have a responsibility to ensure that it is safe, secure, and remains under human control," Microsoft President Brad Smith said in a statement.

AI thought leaders have increasingly called for meaningful regulation of an industry that some fear could wipe out civilization, citing the dangers of runaway development that humans can no longer control. The chief executives of all forum participants, except Microsoft, signed a statement in May urging governments and global agencies to treat mitigating "the risk of extinction from AI" as a priority on the same level as preventing nuclear war.

Anthropic CEO Dario Amodei warned the US Senate on Tuesday that artificial intelligence is much closer to surpassing human intelligence than most people think, urging lawmakers to pass strict regulations to prevent nightmare scenarios such as the use of artificial intelligence to produce biological weapons.

His words echoed those of OpenAI CEO Sam Altman, who testified before the US Congress earlier this year, warning that AI could go "completely wrong".
