Posted by & filed under Consciousness, Industrial Organizational Psychology, Language Development, Language-Thought, Learning, Memory, Persuasion, Social Cognition, Social Influence, Social Perception, Social Psychology.

Description: Is it accurate? Does it plagiarize? Does it lie? Is it dangerous? Can people use it to cheat? Is it alive? Is it sentient? Is it human? Based on these questions, what am I thinking and writing about? I am thinking about ChatGPT and other “bots” that answer questions and write essays in response to submitted requests, almost instantly and with an accuracy and a fluency that is either amazing, disarming, or alarming, depending on your point of view. If you are in school at any level, you may already have been told either that you cannot use things like ChatGPT to ‘write’ assigned essays, or you have been told explicitly how you can use them so as not to pass their work off as your own. I have posted previously on the school-related issues of such bots, but what about the Turing Test or question? A question that has been kicked around in science fiction writing and movies for years is something like: when will we need to acknowledge that such ‘machines’ or bots are sentient (or human)? Turing proposed a test. He suggested we have two long conversations, one with a machine/bot/AI and one with a real person (by typing our part of the conversation and reading theirs on a screen, so that we are not dealing with voice issues or visual cues). If, at the end of both conversations, we were unable to say with certainty which of our conversational partners was a person and which was the AI machine, then we would have to grant the machine person-status. Would this test settle anything (and I suspect ChatGPT might pass it)? Would it raise more questions or issues than it answers or settles? Does it tell us about ChatGPT, or about human gullibility and social/perceptual tendencies and biases? Before we can deploy the research and theory capabilities of Psychology, we need to dive into a bit of Philosophy. So, consider this question: if ChatGPT ‘seems’ human, what do we do with that? What are the next questions to ask?
Think about this for a moment and then read the article linked below for a perspective on this.

Source: ChatGPT has convinced users that it thinks like a person. Unlike humans, it has no sense of the real world, Wayne MacPhail, The Globe and Mail

Date: January 27, 2023

Image by Mohamed Hassan from Pixabay

Article Link:

So, what do you think now? Should we run for the hills before the machines and terminators come after us? Should we ask different questions? Should we start writing more regulations? What? In this area, I think we need to dive deeper, past the “if it quacks, it is probably a duck” level, and think about what is going on with AI like ChatGPT, about how (or whether) we should use it, and about how our world might be changing as a result of the arrival of such AI bots. These are not small questions, and possibly unsettling ones. I think the linked article author’s argument that ChatGPT has no sense of the real world is a very important observation, partly because it has implications for our view of such bots and what they do, and because such bots are the creations of companies that, while they speak about their work as research, are in it for the money. Now THERE is an area that needs more research and critical reflection!

Questions for Discussion:

  1. What is the Turing test?
  2. Does it make any sense to ask if AI bots like ChatGPT are sentient or human?
  3. What do you see as three or four of the most important philosophical or psychological research questions that should be asked about AI bots like ChatGPT and their possible impacts on our experience and world?

References (Read Further):

Mahowald, K., & Ivanova, A. A. (2022). Google’s powerful AI spotlights a human glitch: Mistaking fluent speech for fluent thought. The Conversation. Link

French, R. M. (2000). The Turing Test: The first 50 years. Trends in Cognitive Sciences, 4(3), 115-122. Link

Marcus, G., Rossi, F., & Veloso, M. (2016). Beyond the Turing test. AI Magazine, 37(1), 3-4. Link

Nov, O., Singh, N., & Mann, D. M. (2023). Putting ChatGPT’s Medical Advice to the (Turing) Test. medRxiv, 2023-01. Link

Noever, D., & Ciolino, M. (2022). The Turing Deception. arXiv preprint arXiv:2212.06721. Link

Guo, B., Zhang, X., Wang, Z., Jiang, M., Nie, J., Ding, Y., … & Wu, Y. (2023). How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection. arXiv preprint arXiv:2301.07597. Link

Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2023). Chatting and Cheating: Ensuring academic integrity in the era of ChatGPT. Preprint. https://doi.org/10.35542/osf.io/mrz8h. Link

Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1). Link

Qadir, J. (2022). Engineering Education in the Era of ChatGPT: Promise and Pitfalls of Generative AI for Education. Link

Tate, T., Doroudi, S., Ritchie, D., & Xu, Y. (2023). Educational Research and AI-Generated Writing: Confronting the Coming Tsunami. Link