
Google fired Blake Lemoine, the engineer who said LaMDA was sentient


Blake Lemoine, the Google engineer who told The Washington Post that the company’s artificial intelligence was sentient, said the company fired him on Friday.

Lemoine said he received a termination email from the company on Friday along with a request for a video conference. He asked to have a third party present at the meeting, but he said Google declined. Lemoine said he is speaking with lawyers about his options.

Lemoine worked for Google’s Responsible AI organization and, as part of his job, began talking to LaMDA, the company’s artificially intelligent chatbot-building system, in the fall. He came to believe the technology was sentient after signing up to test whether the artificial intelligence could use discriminatory or hate speech.

In a statement, Google spokesperson Brian Gabriel said the company takes AI development seriously, noting that it had reviewed LaMDA 11 times and published a research paper detailing its work on responsible development.

“If an employee shares concerns about our work, as Blake did, we review them extensively,” he added. “We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months.”

He attributed the discussions to the company’s open culture.

“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Gabriel added. “We will continue our careful development of language models, and we wish Blake well.”

Lemoine’s firing was first reported in the newsletter Big Technology.

Lemoine’s interviews with LaMDA prompted a wide discussion about recent advances in AI, public misunderstanding of how these systems work, and corporate responsibility. Google previously pushed out the heads of its Ethical AI division, Margaret Mitchell and Timnit Gebru, after they warned about risks associated with this technology.

LaMDA is built on Google’s most advanced large language models, a type of AI that recognizes and generates text. These systems cannot understand language or meaning, researchers say. But they can produce deceptively humanlike speech because they are trained on massive amounts of text crawled from the internet to predict the next most likely word in a sentence.
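To make that next-word idea concrete, here is a minimal, hypothetical sketch in Python using simple bigram counts over a toy corpus. It is an illustration of the general principle only, not LaMDA’s actual architecture: real large language models use neural networks trained on vastly more text, but the objective of predicting a likely next word is the same in spirit.

```python
# Toy next-word prediction via bigram counts.
# Illustrative only -- not how LaMDA is implemented.
from collections import Counter, defaultdict

# A tiny stand-in corpus; real models train on internet-scale text.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    followers = bigrams.get(word)
    if not followers:
        return "<unknown>"
    return followers.most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- follows 'the' most often above
print(predict_next("sat"))  # 'on'
```

Even this crude counting scheme produces locally plausible continuations, which hints at why far larger statistical models can sound humanlike without any understanding of meaning.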

After LaMDA talked to Lemoine about personhood and its rights, he began to investigate further. In April, he shared with top executives a Google Doc titled “Is LaMDA Sentient?” that contained some of his conversations with LaMDA, in which it claimed to be sentient. Two Google executives looked into his claims and dismissed them.

Lemoine was previously put on paid administrative leave in June for violating the company’s confidentiality policy. The engineer, who spent most of his seven years at Google working on proactive search, including personalization algorithms, said he is considering starting his own AI company focused on collaborative storytelling video games.
