I just posted this on LinkedIn in response to a survey from a colleague of mine asking whether ChatGPT should be credited as a co-author on papers:
- ChatGPT does not have a conception of what is going on in the world. It is a word-emitter that tricks human minds into thinking it does. In other words, it’s a kind of complex automaton, a marionette. The fact that its behavior is complex enough to fool us into thinking it “knows” something does not mean it does.
- ChatGPT has no internal distinction between true and false statements; it is as likely to emit false information as true information (perhaps more likely; has this actually been assessed?).
- ChatGPT does not have deductive or inductive logical reasoning capabilities, nor does it have any “drive” to follow such principles.
- Human papers are for human writers to communicate with human readers. It seems to me that the only argument in favor of including ChatGPT in this process is a misguided drive to speed up publication even more than publish-or-perish already has. If anything, the process should be slowed down and made more careful.
- The present interest in ChatGPT is almost entirely driven by investor-fueled hype; it is where investors are running after the collapse of cryptocurrency/web3. If you’re curious, there is a nice interview with Timnit Gebru on the Tech Won’t Save Us podcast, titled “Don’t Fall for the AI Hype,” that goes into this. As computer scientists, we should not be chasing trends like this.