When OpenAI was founded in 2015, its primary objective was to “advance digital intelligence in the way that is most likely to benefit humanity as a whole.” The path it chose was to build large language models, a computationally intensive exercise that had only just become achievable at scale thanks to recent advances in chip design. This would require significant investment, and to raise those funds while staying true to its prime objective, OpenAI put in place a complex corporate structure that separated ownership from control.
Financial investors had to put their money into a for-profit company over whose governance they had no control. Governance rested instead with a separate not-for-profit entity that was required to place human safety above all else, even at the cost of profits or shareholder value.
In November 2023, the OpenAI board sacked Chief Executive Officer Sam Altman for, among other things, failing to give the board advance information on significant corporate developments such as the launch of ChatGPT; at least one board member later claimed she had first learnt of the launch through social media. If this was true, and Altman had in fact not disclosed critical business information to the board before releasing it to the world, then the board’s ability to prioritize human safety had clearly been severely compromised.
By sacking him, it would seem the board was doing exactly what it was supposed to. Good governance is about placing the core values of an organisation ahead of short-term commercial imperatives.