When ChatGPT was first made publicly available, I remember marvelling at the quality of its output and thinking that AI had finally improved to the point where it might be good enough to stand in for me. Wouldn't it be wonderful, I thought, if I could simply describe what I'd like my article to cover and have AI write it for me? So I set about doing just that.
I taught myself to get ChatGPT to generate sentences that I could fashion into paragraphs, which in turn could be woven into full-length articles that, at least at first glance, looked like something I might have written. I even managed to use this process to generate an article that was accepted for publication. While the process was more laborious than I might have liked, it left me feeling that, with just a bit more practice, I'd be able to get AI to write articles for me.
I am sorry to say that this was not how things turned out. Even after training it on every article I have ever written, I simply could not get ChatGPT to generate material that even remotely resembled what readers have come to expect from an Ex Machina column. Not only was the style stilted, but the content was banal: utterly devoid of the sort of insights and nuance that I'd like to think characterize my writing.
As for the one AI-generated article that did get published, almost everyone who read it told me that, by the time they got to the disclosure at the end, they had been wondering why it felt strangely off-key. At its present stage of development, AI can do little more than creatively regurgitate existing content. There is nothing objectively 'new' in the output of large language models, considering the extent to which they remain constrained by what their training datasets contain.