OpenAI CEO Sam Altman has been fired. The board's notice said he was "not consistently candid in his communications with the board," but it did not disclose specific reasons.

The CEO of the world's top artificial intelligence company has been ousted. What does this mean?


Why is this important?

  • There's a good chance something serious is going on, possibly something that bears on the future of humanity.
  • OpenAI is not just another startup. It is a company reportedly valued at a whopping $90 billion.
  • It's hard to imagine OpenAI without Sam Altman.
  • Judging from the board's wording, it appears there was something Sam Altman didn't disclose and that he got caught, but very little is known. New York Times reporter Kevin Roose, who interviewed Altman on Wednesday afternoon, said he "didn't seem to have any idea he was about to be fired."

A few confirmed facts.

  • In a brief post on X, Sam Altman said his time at OpenAI "was really good" and "an opportunity to change the world a little bit."
  • Co-founder Greg Brockman also quit. On X, he wrote that he was "very proud of the achievements we've all made together" and that he "decided to quit after hearing the news today."
  • Mira Murati, CTO, will be the interim CEO.

What's the reason?

  • An exclusive report from The Information offers a clue to understanding the situation.
  • There was controversy over safety. Mira Murati has emphasized three priorities: first, advancing technology research; second, AI alignment, which means anticipating AI's capabilities and risks; and third, sharing the technology in a way that benefits everyone.
  • In other words, the conflict may have been triggered over AI alignment.
  • According to The Information, the OpenAI board was concerned that Altman might sacrifice public safety for the sake of the commercial business.

The big picture.

  • OpenAI began as a non-profit organization devoted not to developing artificial intelligence, but to research aimed at protecting humanity from an intelligence explosion.
  • According to the governance structure OpenAI has published, one of the board's most important roles is to determine whether the AI OpenAI develops has reached artificial general intelligence (AGI). OpenAI is structured so that a non-profit parent controls a for-profit subsidiary, and once AGI is reached, the for-profit business must stop. The license agreements with Microsoft would also be suspended.
  • Microsoft, OpenAI's largest investor, also said it learned of Sam Altman's firing only minutes before the announcement.

A deeper reading.

  • In July, Ilya Sutskever wrote on the OpenAI blog that the vast power of superintelligence could be very dangerous and could disempower humanity or even lead to human extinction. "We need institutions to manage governance, and we need to solve the alignment problem of superintelligence," Sutskever stressed.
  • Once AI exceeds human capability, the question becomes who controls it. Alignment is about keeping AI's capabilities under control, and if that oversight is carried out by humans, an AI smarter than its supervisors could bypass or deceive it.
  • OpenAI has said it plans to spend 20% of the compute it has secured to date on solving the alignment problem over the next four years. In preparation for the era of general artificial intelligence, the task is to set principles for what AI may and may not do, and to manage it so that it does not deviate from those principles.
  • Ilya Sutskever leads a Superalignment team built around the idea of confronting artificial superintelligence. In an interview with MIT Technology Review, he said that if artificial intelligence beyond human control emerges, many people may even choose to become part of it, and that the team's goal is to keep us from ending up in that situation.
  • Sam Altman and Ilya Sutskever may have been at odds over alignment. Sutskever's view of the dangers of artificial intelligence is far stronger than Altman's.
  • Sutskever is a student of Geoffrey Hinton, often called the godfather of artificial intelligence. Hinton warned of AI's dangers when he left Google in May this year, saying he partly regrets his life's work and consoles himself with the thought that if he hadn't done it, someone else would have.

The governance structure of OpenAI.

  • OpenAI is a non-profit organization registered in Delaware, and its subsidiary OpenAI Global, LLC is a for-profit entity. OpenAI Global may generate and distribute profits, but it is strictly required to comply with the parent company's mission.
  • OpenAI Global has a separate holding company and a separate management company; shareholders can invest through the holding company but cannot intervene in management.
  • A majority of the board are independent directors, and independent directors hold no equity. Sam Altman holds no stake at all.
  • The profits that employees, Microsoft, and other investors can take are capped; any profit beyond the cap goes to the non-profit.
  • As OpenAI puts it, "The Nonprofit's principal beneficiary is humanity, not OpenAI investors."

The back story.

  • Shortly after Sam Altman was fired, OpenAI employees asked Ilya Sutskever whether this was a coup. Sutskever said he disagreed with the word, but added, "You can call it that. The board was only doing what it needed to do to build general artificial intelligence that benefits humanity."
  • According to The Information, some employees took this to mean that Altman had been moving too fast, trying to grow the business at the expense of potential safety concerns.

“Effective altruists”.

  • OpenAI's board has six members in total, including Ilya Sutskever, Quora CEO Adam D'Angelo, robotics engineer Tasha McCauley, and Helen Toner, director of strategy at Georgetown's Center for Security and Emerging Technology. Gizmodo observed that they are all connected to the "effective altruism" movement.
  • Effective altruists are a group of people who argue that the way to solve humanity's problems is for well-intentioned people to become tremendously rich and donate their money to good causes. FTX's Sam Bankman-Fried was one of the movement's leading figures. (Since FTX's bankruptcy, Bankman-Fried has been arrested and tried on fraud charges, and some predict he could be sentenced to as much as 110 years in prison.)
  • After the FTX collapse, the movement all but fell apart. An OpenAI spokesperson told VentureBeat, "There are no effective altruists on our board."
  • In an interview with The New York Times in March this year, Sam Altman said OpenAI aims to capture much of the world's wealth and then redistribute it to people; figures as high as $100 trillion were reportedly mentioned.

The Oppenheimer of our time.

  • When the Trinity test succeeded, Oppenheimer recalled the line, "Now I am become Death, the destroyer of worlds."
  • Sam Altman has compared himself to Oppenheimer several times, and has noted that he shares a birthday with him. He has also suggested that AI needs an international watchdog along the lines of the IAEA, which monitors nuclear power.
  • New York Magazine, which interviewed Altman, wrote that his goal was to build good AI and dominate the field before bad actors built bad AI. In keeping with the "effective altruist" philosophy, OpenAI promised to release its research as open source, and declared that if another "value-aligned," "safety-conscious" project came close to building general AI before it did, OpenAI would stop competing and assist that project instead.
  • Former U.S. Secretary of State Henry Kissinger, in "The Age of AI," written with former Google CEO Eric Schmidt, pointed out that nuclear weapons are covered by treaties recognized by the international community and the concept of deterrence is clearly defined, but no one has agreed on a line that must not be crossed when it comes to AI. "There has never been a time when we faced such a complex strategic and technical problem, and never so little consensus on the nature of the problem or even on the vocabulary needed to discuss it," the book says. Sam Altman's recent remarks suggest he most likely drew the idea from the book, which was published in 2021.
  • New York Magazine pointed out that Sam Altman tends to speak in grand generalities: to claim that AI may destroy the world, you have to explain specifically what the risks are and how to deal with them.
  • "Sam Altman says 'please regulate us' and insists that 'this is a really complex, specialized subject, so we need a complex, specialized institution,' while knowing full well that no such institution will be created," said tech writer Jathan Sadowski.
  • "He sees himself as what Nietzsche called the Übermensch: he will create the thing that destroys us, and he will be the one to save us from it."
  • Safety engineer Heidi Khlaaf asked, "If we can't even stop these systems from discriminating against Black people, how are we going to stop them from destroying humanity?" The point is that it's not a question of ability but of choice: not that they can't, but that they won't.

What will happen?

  • Very few people know the truth, and all of them are staying silent.
  • Alongside the speed of technological development, AI's risks and the question of who controls it have emerged as central issues. The conflict inside OpenAI likely erupted over exactly this point. There may have been a confrontation over the direction and philosophy of alignment, and the possibility that an artificial intelligence singularity is imminent, or has already been passed, cannot be excluded.
  • Sam