This book review was written by Eugene Kernes
“I could see that the experiment in
idealistic governance was unraveling.
OpenAI had grown competitive, secretive, and insular, even fearful of
the outside world under the intoxicating power of controlling such a paramount
technology. Gone were notions of
transparency and democracy, of self-sacrifice and collaboration. OpenAI executives had a singular obsession:
to be the first to reach artificial general intelligence, to make it in their
own image.” – Karen Hao, Prologue: A Run for the Throne, Page 24
“It was this fundamental assumption – the need to be first
or perish – that set in motion all of OpenAI’s actions and their far-reaching
consequences. It put a ticking clock on
each of OpenAI’s research advancements, based not on the timescale of careful
deliberation but on the relentless pace required to cross the finish line
before anyone else. It justified
OpenAI’s consumption of an unfathomable amount of resources: both compute,
regardless of its impact on the environment; and data, the amassing of which
couldn’t be slowed by getting consent or abiding by regulations.” – Karen Hao,
Chapter 3: Nerve Center, Page 95
“These two features of technology revolutions – their
promise to deliver progress and their tendency instead to reverse it for people
out of power, especially the most vulnerable – are perhaps truer than ever for
the moment we now find ourselves in with artificial intelligence. Since its conception, the development and use
of AI has been propelled by tantalizing dreams of modernity and shaped by a
narrow elite with the money and influence to bring forth their conception of
the technology.” – Karen Hao, Chapter 4: Dreams of Modernity, Page 98
Is This An Overview?
Some actors are attempting to develop Artificial Intelligence to do harm; there is already a record of automated software that enabled misinformation and harmed humankind. OpenAI wanted to counter these attempts by developing an AI that would benefit humankind, an AI aligned with human values. To do that, the development would need to be open: research and decisions would be collaborative, transparent, and democratic, and safety precautions would be a priority. These ideals influenced many to support the development of OpenAI, but the commitments eroded quickly.
Being open meant potentially sharing the technology with those of malicious intent, so the technology became less transparent. Fulfilling the commitments required funding, funding that would come from commercial products, and OpenAI developed a for-profit arm to obtain it. In pursuit of profit, OpenAI became competitive, secretive, and insular; transparency, democracy, and collaboration fell away. Internal dissenters, such as those calling for safety precautions, were silenced. Competition, and the assumption that being first matters, overrode safety concerns, careful deliberation, environmental impact, regulations, and the potentially exploitative use of the technology on society. The result is the kind of technology OpenAI originally wanted to prevent.
What Is The Effect Of AI Technology?
Those who develop new technologies claim that the benefits will be widespread, but in practice the benefits accrue to a small elite. The case of AI is no different. OpenAI considered competition a problem and wanted a monopoly: to control the technology and design it in their own image.
AI was meant to resolve complex human problems, since it could quickly communicate and act on information without an incentive problem. It was supposed to address challenges such as environmental degradation; in practice, the equipment consumes massive amounts of scarce water and energy, exacerbating environmental degradation while also contributing noise pollution.
AI does not provide factual responses, only the most probable ones. The responses depend on the information the AI was trained on, and they can become harmful when that training data is itself harmful. Data full of harmful stereotypes creates responses full of harmful stereotypes; even fringe harmful propaganda somehow ended up shaping responses. To filter out the known extremely harmful responses, the work was outsourced to exploited workers. The models were also trained on artists' work, without the artists' consent, to produce a business that replaced those artists.
Caveats?
To validate the claims about the problems within OpenAI and the industry, diverse research and sources are presented, but the perspectives can become repetitive. The ways in which OpenAI's technology has harmed society are represented, but not the ways in which it has helped. Also missing is research on potential solutions to the reported problems.