Notwithstanding the ballyhoo that surrounded its introduction, I still have not gotten round to using ChatGPT for anything. A big reason for this is that I believe that I write reasonably well. Nor is my readership that critical. I am also convinced that the effort to improve one’s writing is a critical component of the processes associated with improving one’s thought. If the latter conviction has any meaning, therefore, the use of generative AI to improve one’s writing ought to feel at some point like cheating. The intellectual, if not necessarily moral, equivalent of using performance-enhancing drugs at the Olympics.
Even as I agree with most commentators that this new technology will have unpredictable, and to the extent that some of these prove malign, unintended outcomes, my worries about it have never been dystopian. Throughout recorded history, new technologies, from writing through the printing press to the personal computer, have displaced far fewer jobs than they have created. Beyond the material implications of it all, I do not, for example, think that technology, in its many iterations, will be man's undoing. As a species, we are pre-eminently qualified to hurt ourselves in ways that no third party, sentient or otherwise, could possibly conceive of.
Nonetheless, I am sympathetic to the fears around the "technological singularity." The possibility, in other words, that at some point in the future an intelligent agent, able to update both its hardware and especially its software, attains superintelligence is not scary by itself. The peril to be anticipated, and then guarded against, is that following this tech equivalent of Paul's Damascene moment, the new superintelligence is both willing and able to threaten mankind. This potential danger is unfortunately no less bothersome today than is the danger to a free-thinking human in and from any of the increasingly dictatorial regimes with which our world is blessed.
Far more troubling, therefore, is whether and how generative AI could collaborate in the emergence, tomorrow, of an übermensch. An AI-powered exoskeleton with self-repairing, reactive armour was always likely to show up in some battle theatre in the not-too-distant future. But by far the biggest challenge to the existence of our species from this new superman would be a moral one. Cognition at levels unreached by any human or collective thereof, including the prospect of a hive consciousness, would be far more deleterious to mankind's continued existence, were it malign in its intent, than anything the Chinese government is currently doing with facial recognition technology.
Then, a few weeks back, just as I was about to post on my social media timelines, I experienced my AI epiphany. Upon highlighting the text that I had composed, Copilot (Microsoft 365's generative AI) offered to rewrite it. It was as audacious a proposition as it was presumptuous. Still, I was intrigued enough to take up the offer. And I was sufficiently intrigued by the result to read Copilot's suggested replacement several times over. I am still constrained by X's (né Twitter's) 280-character limit, for I find it hard to pay for a service that gratuitously harvests my data, and which may some day use that same data to train its own version of a generative pre-trained transformer. In rewriting my post, however, Copilot stayed comfortably within the 280-character upper bound. It was a thorough rewrite. Not only did it replace words and switch around the ones it left untouched; sentences were restructured root-and-branch, and paragraphs were rearranged. In addition, the suggested new post was easier on both the eye and the head. When, after several re-reads, I settled on the suggestion, I could see how that one-time use of performance-enhancing substances in a bid for a podium place at the Olympics could become a long-term addiction.
I have heard that in this new role, owing to shortcomings or biases in the data on which they have been trained, or in the algorithms that train them, large language models sometimes produce incorrect information. Luckily, a hallucinating generative AI is fixable by human editors. What is not likely to be fixable is when these new models, free of any limitations or biases in their training data or kinks in their training algorithms, knowingly produce untruths. That, incidentally, is mankind's special gift. And only then, not before the technological singularity or after it, will technology begin to threaten mankind's dominance of this big blue marble.