OpenAI
OpenAI is an American artificial intelligence (AI) research laboratory consisting of the for-profit OpenAI LP and its non-profit parent, OpenAI Inc. The company conducts AI research with the stated aim of developing safe AI in a way that benefits all of humanity. Elon Musk, Sam Altman, and others founded the organisation in San Francisco in late 2015, pledging a combined US$1 billion. Musk resigned from the board in February 2018 but remained a donor. Microsoft invested US$1 billion in OpenAI LP in 2019. OpenAI is headquartered in the Pioneer Building in San Francisco's Mission District.
History:
Sam Altman, Elon Musk, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research announced the creation of OpenAI in December 2015, pledging over US$1 billion to the venture. The organisation said it would "freely interact" with other organisations and researchers by making its patents and research open to the public. OpenAI is based in the Pioneer Building in San Francisco's Mission District.
In April 2016, OpenAI released a public beta of "OpenAI Gym," a platform for reinforcement learning research. In December 2016, it introduced "Universe," a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites, and other applications.
Musk resigned from the board in 2018, citing "a possible future conflict (of interest)" with Tesla's AI development for self-driving vehicles, but remained a donor.
In 2019, OpenAI changed its status from non-profit to "capped" for-profit, with profits on any investment capped at 100 times. The company distributed equity to its employees and partnered with Microsoft, which announced a US$1 billion investment package in the company. OpenAI then announced its intention to commercially license its technologies.
In 2020, OpenAI unveiled GPT-3, a language model trained on large volumes of text from the Internet. It also announced that an associated API, simply called "the API", would form the basis of its first commercial product. GPT-3 is designed to answer questions posed in natural language, but it can also translate between languages and coherently generate improvised text.
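As a rough illustration of how such an API is typically used, the sketch below requests a completion through the openai Python package's legacy interface; the model name, parameters, and response fields shown here are illustrative and have changed across API versions.

```python
import openai

# Illustrative only: model names and parameters have varied over time.
openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    engine="davinci",  # a GPT-3 model family name used around launch
    prompt="Translate to French: Good morning, everyone.",
    max_tokens=32,
    temperature=0.7,
)

# The generated text is returned in the first choice of the response.
print(response.choices[0].text.strip())
```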
DALL-E, a deep learning model that can produce digital images from descriptions in natural language, was unveiled by OpenAI in 2021.
OpenAI attracted widespread media attention in December 2022 after releasing a free preview of ChatGPT, its new AI chatbot built on GPT-3.5. According to OpenAI, more than a million people signed up for the preview within the first five days. In December 2022, Reuters cited unnamed sources saying that OpenAI projected US$200 million in revenue for 2023 and US$1 billion for 2024. As of January 2023, the company was in funding talks that would value the business at US$29 billion.
Participants:
- CEO and co-founder: Sam Altman, former president of the startup accelerator Y Combinator
- President and co-founder: Greg Brockman, former CTO and third employee of Stripe
- Chief Scientist and co-founder: Ilya Sutskever, a former Google expert on machine learning
- Chief Technology Officer: Mira Murati, previously at Leap Motion and Tesla, Inc.
- Chief Operating Officer: Brad Lightcap, previously at Y Combinator and JPMorgan Chase
Companies:
- Microsoft
- Khosla Ventures
- Infosys
In early January 2016, the team comprised nine researchers. According to Wired, Brockman met with Yoshua Bengio, one of the "founding fathers" of the deep learning movement, and compiled a list of the "top researchers in the field." According to Microsoft's Peter Lee, a top AI researcher costs more than a top NFL quarterback prospect. Although OpenAI pays corporate-level salaries (rather than nonprofit-level ones), it does not yet pay AI researchers as much as Facebook or Google do. Sutskever said he was willing to leave Google for OpenAI "partly because of the extremely powerful group of individuals and, to a very big degree, because of its objective." Brockman described "moving mankind closer to constructing actual AI in a safe manner" as "the finest thing that I could imagine doing." OpenAI researcher Wojciech Zaremba said that he turned down "borderline ridiculous" offers of two to three times his market worth in order to join OpenAI.
Motives:
Scientists such as Stephen Hawking and Stuart Russell have voiced fears that if powerful AI one day becomes able to redesign itself at an ever-increasing rate, an uncontrollable "intelligence explosion" could lead to human extinction. Elon Musk has called AI the "greatest existential danger" to civilization. OpenAI's founders structured it as a non-profit organisation so that they could focus their research on creating a positive long-term benefit for humanity.
Musk and Altman have said that concerns about the existential risk posed by artificial general intelligence are among their motivations. According to OpenAI, it is hard to fathom "how much human-level AI may assist civilization" or "how much it could hurt society if constructed or deployed poorly." Safety research cannot safely be postponed, because "it's impossible to forecast when human-level AI could come within reach," given AI's history of surprises. OpenAI states that AI "should be an extension of individual human wills and, in the spirit of liberty, as widely and equitably disseminated as feasible." Co-chair Sam Altman expects the decades-long project to surpass human intelligence.
Former Infosys CEO Vishal Sikka said that for him to support an initiative, it had to be "open" and "deliver outcomes generally for the better good of mankind," and that OpenAI "aligns very neatly with our long-held principles" and Infosys's "endeavour to conduct meaningful work." Cade Metz of Wired suggested that companies such as Amazon may be motivated by a desire to use open-source software and data to compete on an equal footing with companies such as Google and Facebook, which own vast amounts of proprietary data. According to Altman, Y Combinator companies would share their data with OpenAI.
Originally a 501(c)(3) nonprofit organisation, OpenAI restructured in 2019, creating the for-profit corporation OpenAI LP to raise money while remaining under the control of the non-profit OpenAI Inc.
Strategy:
Musk asked: "What can we do to assure that the future is brightest? We could watch from the sidelines, we could support regulatory monitoring, or we could actively engage with those who are passionately committed to the development of AI in a manner that is secure and good to mankind." He acknowledged that "there is always a chance that by genuinely striving to improve (friendly) AI, we may end up creating the thing we are worried about," but argued that the greatest defence is to "enable as many people as possible to have AI. If everyone possesses AI abilities, then no one person or small group of people can possess an AI superpower."
Musk and Altman's counterintuitive strategy of trying to reduce the existential danger of AI by giving AI to everyone is disputed by others who share that concern. The philosopher Nick Bostrom has expressed scepticism about Musk's approach, saying that "you don't want to give it to everyone if you have a button that may do horrible things to the planet." In a 2016 conversation about the technological singularity, Altman said, "We don't expect to share all of our source code," and mentioned a proposal to "enable vast swathes of the globe to elect members to a new governing board." Greg Brockman said, "Our current objective is to do the best action possible. It's not really clear."
Conversely, proponents of openness criticised OpenAI's original decision to withhold GPT-2, made out of a desire to "err on the side of caution" in the face of possible misuse. Text generation specialist Delip Rao said, "I don't believe [OpenAI] spent enough work establishing [GPT-2] was genuinely harmful." Other critics argued that open publication is necessary to reproduce the research and to develop countermeasures.
In the 2017 tax year, OpenAI spent US$7.9 million, or 25% of its functional expenses, on cloud computing. By comparison, DeepMind's total expenses in 2017 were US$442 million. In the summer of 2018, simply training OpenAI's Dota 2 bots required renting 128,000 CPUs and 256 GPUs from Google for several weeks. According to OpenAI, the capped-profit model adopted in March 2019 allows OpenAI LP to legally attract investment from venture funds and to give employees stakes in the company, so that a worker can say, "I'm moving to Open AI, but in the long run, it's not going to be unfavourable to us as a family." Many of the best researchers work for Google Brain, DeepMind, or Facebook, Inc., which can offer stock options that a charity cannot. In June 2019, OpenAI LP received US$1 billion from Microsoft, which OpenAI intends to spend "within five years, and maybe considerably sooner." Altman has said that even a billion dollars may prove insufficient and that the lab may ultimately need "more resources than any non-profit has ever received" to achieve artificial general intelligence.
Oren Etzioni of the Allen Institute for AI viewed the transition from a nonprofit to a capped-profit company with scepticism. He acknowledged that it is challenging to attract top researchers to a nonprofit, but argued, "I disagree with the notion that a nonprofit can't compete," citing successful low-budget projects by OpenAI and others: "IBM would still be number one if larger and better financed was always better." After the change, OpenAI LP's senior executives are no longer obligated to publicly disclose their salaries. The nonprofit OpenAI Inc. is the sole controlling shareholder of OpenAI LP, and despite being a for-profit business, OpenAI LP has a legal fiduciary duty to uphold the nonprofit's charter. A majority of OpenAI Inc.'s board is prohibited from owning shares in OpenAI LP, and minority members who do own stock are barred from certain votes owing to the conflict of interest. Some academics have argued that OpenAI LP's switch to for-profit status contradicts its claims to be "democratising" AI. A Vice News writer remarked that, generally speaking, we have "never been able to depend on venture capitalists to improve mankind."
Products and applications:
The majority of OpenAI's research focuses on reinforcement learning (RL), and OpenAI is regarded as DeepMind's main competitor.
Gym aims to standardise how environments are defined in AI research publications, so that published research becomes more easily reproducible. It is similar to, but broader than, the ImageNet Large Scale Visual Recognition Challenge used in supervised learning research. Gym provides an easy-to-set-up general-intelligence benchmark with a wide variety of environments.
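As a rough sketch of the interface Gym standardises, the loop below runs one episode of a classic control environment with random actions; the method signatures follow older Gym releases and have changed in more recent versions.

```python
import gym

# Create one of Gym's classic control environments.
env = gym.make("CartPole-v1")

obs = env.reset()          # initial observation
done = False
total_reward = 0.0

while not done:
    action = env.action_space.sample()          # random policy, for illustration only
    obs, reward, done, info = env.step(action)  # the standardised step interface
    total_reward += reward

env.close()
print("Episode return:", total_reward)
```

Because every environment exposes the same reset/step/action-space interface, the same agent code can be benchmarked across many tasks, which is the point of the standardisation.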
In "RoboSumo," virtual humanoid "meta learning" robots are assigned the tasks of learning to move about and shoving the opposing agent out of the ring even though they are initially incapable of walking. When an agent is removed from this virtual environment and placed in a new virtual environment with strong winds, the agent braces to stay upright, indicating it has learned how to balance in a generalised way through this adversarial learning process. The agents learn how to adapt to changing conditions in this way.
The Debate Game, developed by OpenAI in 2018, trains machines to debate toy problems in front of a judge. The goal is to investigate whether such an approach can help with auditing AI decisions and with developing explainable AI.
Dactyl uses machine learning to train a Shadow Hand, a robotic hand shaped like a human hand, to manipulate physical objects. It learns entirely in simulation using the same RL algorithms and training code as OpenAI Five. OpenAI addressed the object-orientation problem with domain randomization, a simulation approach that exposes the learner to a variety of experiences rather than trying to match the simulation exactly to reality. Dactyl's setup includes RGB cameras in addition to motion-tracking cameras, so that the robot can manipulate an arbitrary object simply by seeing it. In 2018, OpenAI demonstrated that the system could manipulate a cube and an octagonal prism.
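A minimal sketch of the domain-randomization idea, assuming a generic simulator interface (the sim and policy objects below are hypothetical, not OpenAI's actual Dactyl code): physical and visual parameters are re-sampled before every episode, so a policy trained only in simulation must cope with a whole family of dynamics and therefore transfers better to the real robot.

```python
import random

def randomize_simulation(sim):
    """Re-sample simulator parameters so each episode presents a different 'domain'."""
    sim.set_friction(random.uniform(0.5, 1.5))          # surface friction coefficient
    sim.set_object_mass(random.uniform(0.05, 0.5))      # object mass in kg
    sim.set_actuator_gain(random.uniform(0.8, 1.2))     # motor strength multiplier
    sim.set_camera_offset(random.uniform(-0.01, 0.01))  # slight visual misalignment

def train(policy, sim, episodes=100_000):
    for _ in range(episodes):
        randomize_simulation(sim)          # new random domain every episode
        trajectory = sim.run_episode(policy)
        policy.update(trajectory)          # any RL update rule can be plugged in here
```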