CNET conducted a two-month test using ChatGPT to write articles, without much transparency. The site finally admitted it after being caught.
Will ChatGPT be everywhere in 2023, as predicted by an article published on CNET on Jan. 11? In any case, it seems the powerful chatbot was well on its way to appearing all over CNET itself: it was discovered that the tech news site had been using this new tool to write articles.
This is what the outlet acknowledged in another article published on January 13: CNET had experimented with writing several dozen articles (about 75) using ChatGPT. The pieces were not published raw, however: they were first "proofread, checked and edited by a member of the team specializing in the subject," the site said.

This does not have to be a problem, as long as it is disclosed upfront. A new editorial tool has appeared, and everyone is exploring what artificial intelligence can do in practice, including the media: summarizing, making lists, recognizing images, and so on.
Articles signed “CNET Money”
But in CNET’s case, the test was initially carried out with great discretion: it began in November in CNET’s Money section, with the idea of producing explanatory and, according to CNET, basic articles on topics related to financial services. Yet between November and January 13, the byline on all of these articles was simply “CNET Money.”
According to the site, you had to hover over the byline to find out more. That is no longer the case: a notice displayed directly in the article now indicates that the piece was assisted by an AI engine and has been reviewed, fact-checked, and edited by the editorial team. The name of the editor who validated the article is also listed.
Numerama also had ChatGPT write an article as a test: the assignment was to put the chatbot through its paces by asking it to describe itself. But we made our intentions clear from the headline (“What is ChatGPT? Let’s let ChatGPT answer the question”), added a clarification in the subtitle, and then developed some thoughts beneath the AI-generated article.

CNET’s very discreet test caused a stir across the Atlantic: BuzzFeed amused itself by asking ChatGPT to write a piece about the story (“A tech news site used AI to write articles, so we did the same thing here”). The article is annotated and includes an update on CNET’s January 13 disclosure.
As BuzzFeed’s tongue-in-cheek article ironically points out, CNET’s practice raises serious questions about ethics and transparency. Indeed, carrying out such tests could permanently damage the journalistic pact between the media and the public, at a time when the press already faces a degree of distrust.
CNET’s explanation is certainly defensible: the point was to see whether AI can handle low-value-added production, freeing up editors’ time to focus on higher-quality topics. AI can also help optimize an article’s readability, references, or structure.
It may be for optimizing articles, rather than writing the pieces themselves, that ChatGPT and other chatbots will increasingly be called upon. Instead of relying on teams specializing in SEO, or people dedicated to that task, tools like ChatGPT will undoubtedly become more popular. But will we know?