ChatGPT, which is also built into the new Bing, is an extremely useful tool for generating text or answering quick queries. This and other similar AIs, however, have a downside: their output is so convincing that it is often practically impossible to tell whether a text was produced by ChatGPT or written by a human being, which opens the door to misleading or manipulative content. In fact, many teachers have already come out against AI because students use it to, among other things, do their homework.
How will we detect texts made with ChatGPT and the like?
Nowadays, knowing whether a text was made with ChatGPT or a similar AI model is a very complicated task, above all because of the natural-sounding language these tools generate. One clue is to look for inaccurate or incorrect content: the AI is not perfect and often makes mistakes, especially in texts related to finance.
The problem is that human beings also make mistakes when writing. For now, the only way to know for certain that a text was written by ChatGPT and the like is to find a note somewhere in it stating that it was generated by an AI. Some scientists, for example, have used ChatGPT in their studies and credited the artificial intelligence as an author. Detecting unattributed texts made with ChatGPT and the like, however, could become easier in the future.
There are tools capable of detecting texts made with ChatGPT
Companies that have developed text-generating models, such as OpenAI, are also working on AIs that can detect text made with ChatGPT. These tools are actually very easy to use: the user only has to copy the text, paste it into the platform and wait for it to report whether the text was made with ChatGPT, Bard or another similar model.
These types of tools, however, are not totally reliable. The classifier OpenAI recently announced, for example, correctly identifies only 26% of text written by AI. In addition, 9% of the time it gives false positives: text that was written by a human but that the platform labels as written by an AI.
The tool capable of detecting texts made with ChatGPT or similar can, moreover, show different verdicts depending on how confident it is that something was written by an AI. For example, if it cannot tell whether the text was generated by a human or by an AI model, it will label the result as “possibly” AI-written. If, on the other hand, the platform is fairly sure the text was written by a human, it will label it as “very unlikely” to be AI-generated.
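Internally, this kind of verdict is just a probability score bucketed into labels. The sketch below illustrates the idea; the function name and the cutoff values are invented for illustration, since OpenAI has not published the exact thresholds its classifier uses.

```python
# Illustrative sketch only: the thresholds below are hypothetical, not
# the real cutoffs used by OpenAI's classifier.

def label_ai_likelihood(score: float) -> str:
    """Map a detector's AI-probability score (0.0-1.0) to a verdict label
    similar to the ones such platforms display."""
    if score < 0.1:
        return "very unlikely"   # tool is fairly sure a human wrote it
    elif score < 0.45:
        return "unlikely"
    elif score < 0.9:
        return "possibly"        # tool cannot tell human from AI
    else:
        return "likely"          # tool is fairly sure an AI wrote it

print(label_ai_likelihood(0.05))  # prints "very unlikely"
print(label_ai_likelihood(0.60))  # prints "possibly"
```

The takeaway is that the labels express graded confidence, not a binary human/AI decision.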
Unfortunately, these types of tools are unlikely to detect texts made with ChatGPT when the content is very predictable. For example, it is extremely difficult to know whether a list of the provinces of Spain arranged alphabetically was written by an AI or by a human.
In the future, the only reliable way to detect it may be to somehow incorporate watermarks or metadata, in the same way that images often carry EXIF data. But every law has its loophole: it would be no surprise if ways to remove that associated metadata emerged, just as you can edit the EXIF data of a photograph. That is where authorities and the main players in the field of artificial intelligence will have to keep working in parallel with the advancement of the models.
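To see why an embedded mark can be stripped just like EXIF data, consider this toy example, which hides a watermark in text using invisible zero-width characters and then removes it. Real AI-watermarking proposals work at the level of token statistics, not invisible characters; this sketch, with invented helper names, only illustrates the cat-and-mouse dynamic the article describes.

```python
# Toy watermark: encode bits as invisible zero-width characters appended
# to the text, then strip them. Hypothetical scheme for illustration only.

ZWSP = "\u200b"  # zero-width space   -> bit "0"
ZWNJ = "\u200c"  # zero-width non-joiner -> bit "1"

def embed_watermark(text: str, bits: str) -> str:
    """Append the bit string as invisible characters."""
    return text + "".join(ZWSP if b == "0" else ZWNJ for b in bits)

def strip_watermark(text: str) -> str:
    """Remove every zero-width character, destroying the watermark."""
    return text.replace(ZWSP, "").replace(ZWNJ, "")

original = "Generated paragraph."
marked = embed_watermark(original, "1010")
print(marked == original)                   # prints False: mark is present
print(strip_watermark(marked) == original)  # prints True: mark is gone
```

The watermark survives copy-paste but, as with EXIF data, anyone who knows the scheme can erase it, which is why detection will remain an arms race.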