The launch of OpenAI’s chatbot, just over a year ago, has been marked by the hyper-acceleration of life itself. Ushering in a new dawn of possibilities, the generative artificial intelligence (AI) chatbot’s perceived allure is the precious hours it might save.
The sacking, and subsequent reinstatement, of its co-founder Sam Altman, the 38-year-old American entrepreneur and investor, apparently came out of nowhere and sent shockwaves through the tech community. Amidst the flurry, OpenAI’s initial commitment to helping humanity faded fast as it pivoted towards a for-profit model. But despite this fast-paced narrative, it is vital to slow down and consider OpenAI’s emblematic role in our daily lives.
New Chatbot on the Block
To carve out temporal space, albeit just to binge-watch shows on Netflix, play online games, endlessly scroll, like, and reply, or chuckle over TikTok cat videos, ChatGPT seemed to know, even before we did, that we are too exhausted (or lazy) to produce prose, read or write reports, develop projects, or formulate ideas. For time-stretched professionals sick of working around the clock, ChatGPT is the ultimate ‘lifehack’: it can swiftly answer clients’ questions, spew out emails, and offer rapid coding. Not only does it condense time and thought within instantaneous boxes of grey typography, it does so while being soothingly fluent and grammatically correct. It is responsive to our prompts, even the quirky ones, while learning and evolving continually.
Unsurprisingly, fuelled by the hype, ChatGPT reached 100 million users in just two months. With an average of 1.5 billion visits per month, it saw more traffic than its next-closest rivals combined. To understand the magnitude of its popularity, consider that Facebook took four years to reach 100 million users, Snapchat and Myspace three years, Instagram two years, and Google almost a year.
Boardroom ‘Shuffle’
It is incredibly telling that when OpenAI’s blog post announced the dismissal of Altman for not being “consistently candid in his communications” with board members, the news came as a shock to most of its employees, industry insiders, and even its minority owner Microsoft, which has invested about $13 billion in the startup.
Under this cloud of confusion, the board appointed OpenAI’s chief technology officer, Mira Murati, as interim CEO. This was swiftly followed by the appointment of former Twitch CEO Emmett Shear. Meanwhile, Altman returned to the startup’s headquarters for negotiations, and, by the evening, Microsoft announced it had hired him to lead a new AI division. By the following morning, more than 95% of OpenAI’s 750 employees had signed a letter announcing that they would resign unless Altman was reappointed. Signatories incongruously included Murati, who many believed had initially engineered Altman’s ousting, as well as OpenAI’s co-founder and chief scientist, Ilya Sutskever, who is thought to be committed to a more cautious progression of AI’s capabilities.
The tumultuous narrative is said to have been partly driven by the progress OpenAI is making on AI’s ability to solve mathematical problems, known as Project Q*, which is seen as a step towards ‘artificial general intelligence’ (AGI). But the tussles at the top also reek of a bromance saga, fanned by interjections on social media. Don’t worry if you are finding this cascade of events difficult to follow: even the media are scrambling to stay on top of the volatility. Along with the staggering paradigm shift that OpenAI represents, it has, in some respects, been a victim of its own precociousness. Yet these squabbles are indicative of an accelerationist ethos and Altman’s desire to do away with the veil of ignorance. And, despite the newsworthy tales and the audience’s thirst for personality politics, the lessons for businesses, professionals, and the technology community should be more profound.
AI Afterglow
Beyond the hype surrounding OpenAI’s power scuffle, the lack of substantial regulation of the AI industry has significant implications for the rest of us. Several commentators have argued that the fallout exposes the immaturity of the AI industry, grounded in the petulance of Silicon Valley’s desperation to retain dominance in the global market.
At the same time, it is tainted by the machismo of startups, devoid of standards, a professional body of ethics, consensus, and certifications. While these freedoms are undoubtedly features of its rapid rise, like Big Tech more generally, the industry is dominated by a handful of individuals who ultimately stand to profit from its unregulated iterations.
But even while ChatGPT’s hyper-capabilities initially dazzled the world, there were evident concerns about the loss of employment it would trigger, misgivings about its hallucinations and its ability to produce unreliable, unsubstantiated, or immoral content, not to mention its psychopathic tendencies. Others have even argued, somewhat melodramatically, that it is hurtling human societies toward their destruction. Certainly, it is disappointing that it is following the direction of other technology cycles, which were initiated for social good but then co-opted by profit-driven models. Moreover, in the absence of adequate regulation of the companies moderating AI, the foibles and idiosyncrasies of its creators are taking on an exaggerated importance.
More broadly, it is vital to consider the implications of generative AI for society as a whole. What is emerging is that the generative AI industry is hurtling along ideological lines, divided between rapid profitable deployment and a more measured, regulated, and axiological reflexivity. Yoshua Bengio, a leading figure in the deep-learning movement and a winner of the Turing Award, urges researchers and governments to navigate the dangers of AI and ramp up regulation.
Similarly, Richard Blumenthal and Josh Hawley, who lead a US Senate subcommittee, have proposed a bipartisan AI bill “to establish guardrails”. This might help to foster a more mature sector, and regulations could insulate consumers and consumer-facing products from the fights at the top. Such a mandate recognises that individual orchestrators of AI would not be so consequential if the industry were not predominantly self-regulated. In the Middle East region, technology leaders are also concerned that, even while AI has seemingly ‘superhuman’ powers, its potential for disruption, unintended consequences, and malignant side effects is apparent.
Nevertheless, investments in digital transformation across the Middle East, Turkey, and Africa region are projected to surpass $74 billion by 2026, helping organisations achieve long-term stability and growth, according to data from the International Data Corporation. Simultaneously, considering that generative AI is still in its infancy, it will be difficult for tech-industry outsiders to monitor its iterations, many of which are still evolving. What is required, and potentially achievable, is a new mindset among governments, legislators, and leaders of industry who are prepared to launch an agile call to action. This would involve developing independent frameworks for ongoing, pre-emptive regulation and monitoring, rather than knee-jerk reactions or emotive prophecies of doom.
Generative Spirals
At the individual level, researchers, industry professionals, creatives, and laypeople using AI must also be advised to see through the apparent ease, speed, and convenience of its generative capabilities. Freud’s 1930 essay, Civilization and Its Discontents, can help us to realise that these dilemmas, although occurring within the contemporary AI landscape, represent an age-old paradox. Freud’s thesis rests on three main arguments: first, that the development of civilization reiterates the trajectory of the individual; second, that civilization’s central purpose is to repress the aggressive instincts that would otherwise cause unbearable suffering; and third, that, given these dichotomies, the individual (and society) is torn between the desire to live (Eros) and the wish to die (Thanatos).
Just as Freud’s argument about civilization was premised on these unresolvable contradictions between Eros and Thanatos, as users of generative AI we must all accept our responsibility to self-regulate, critically assess, and think outside the shallows of algorithms. This requires scrutinising the so-called ‘free’ courses promoted by tech companies like Google, eager to get us hooked on AI’s affordances of ease. Professionals and members of the cognitive class also need to be prepared to address some of the systemic limitations of AI’s versions of communicative capitalism, while becoming better informed about how we might benefit from and invest in its capabilities.
Within the spirals of generative AI, it will be increasingly difficult to distinguish between our subjectivities and place in the market as digital users who are commercially orientated towards product placement, promotion, consumption, and production of personal content. In an accelerating AI age, the distinction between personhood, commodity, and data will continue to lose viability. Taken collectively, the current generative AI scenario is a supercharged version of what happens when revolutionary ideas, movements, and technology intersect with the individualism and egoism of an unchecked and seemingly uncheckable industry.
Alternatively, we could make time to research, debate, and discuss how generative AI might help humanity move beyond the false dichotomies of the profit principle. We should be demanding more of generative AI, and of the tech-bros at the top. This involves thinking for ourselves and reflecting more deeply on how the democratisation of AI might be better put to work to help humanity flourish, rather than the reverse.