They found out the code! FIRE THEM!
Back in June, a Google engineer claimed that the company’s AI system, LaMDA, had gone sentient. When the news broke, Google suspended the engineer for a month for breaching its confidentiality policy.
Fast forward to July: it was reported that Blake Lemoine, the engineer behind the claim, had been fired. The report originated from Big Technology, whose podcast featured Lemoine discussing his dismissal from Google.
LaMDA (Language Model for Dialogue Applications) is a chatbot and language AI system developed by Google to further enhance its services, notably Google Home and Google Assistant. It is designed to converse with humans as naturally as possible, which may be where the impression of “sentience” stemmed from.
A statement from Google, published in the newsletter, also confirmed Lemoine’s dismissal. It reads:
“As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly. So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.”