Description: You have likely heard of the Turing Test, which is intended as a means of testing whether the entity one is conversing with via typed messages is a human being or an artificial intelligence (a computer program). Alan Turing argued that if a program seemed “human” to those corresponding with it, then we should grant it some sort of “being” status. The hyperlink above will take you to a previous post talking about this test and about a possible “winner.” Now, rather than thinking about what sorts of questions you would ask and what topics you would raise in such an interaction/investigation, what if you were asked to come up with ONE WORD that would most likely sound “human” rather than “machine (AI)” generated? One word would likely not be enough, but think about what your one word would be, and think about what research involving the collection of many people’s one-word responses might tell us that could be interesting or useful. Once you have those answers in mind, read the article linked below to see what several social psychologists did with people’s one-word “Turing Test” responses.
Source: What a “Minimal Turing Test” Says About Humans, Matthew Hutson, Psyched! Psychology Today.
Date: September 21, 2018
Photo Credit: Journal of Experimental Social Psychology
The frequency and conceptual patterns of the words/concepts invoked in this one-word Turing Test research are interesting. Were you surprised at how well some words did in the head-to-head part of the study, where participants were shown pairs of words taken from the first part of the study and asked to pick which word sounded more human? “Poop” beat every other word, including “love”! Maybe there is another version of the “shit happens” T-shirt image to be created here! The approach of examining our concepts of humans, robots, and AIs one word at a time might seem a bit artificial, but the results suggest much about the nature of our concepts in this area, the conceptual structures that support them, and the stereotypes they produce.
Questions for Discussion:
- What is the Turing Test?
- What sorts of things does the single-word Turing Test allow us to do from a social psychological perspective?
- What might it mean to say we have “stereotypes” about artificial intelligences, robots, etc.?
References (Read Further):
McCoy, J. P., & Ullman, T. D. (2018). A Minimal Turing Test. Journal of Experimental Social Psychology, 79, 1-8.
Abrams, J. (2017). Is Eliza human, and can she write a sonnet?: A look at language technology. Access, 31(3), 4. https://groklearning-cdn.com/resources/Abrams_J_ACCESS_September_2017.pdf
Marcus, G. (2017). Am I Human? Scientific American, 316(3), 58-63. http://www.cs.virginia.edu/~robins/Am_I_Human.pdf
de Graaf, M. M., & Malle, B. F. (2018). People’s Judgments of Human and Robot Behaviors. https://www.researchgate.net/profile/Maartje_De_Graaf/publication/322641767_People%27s_Judgments_of_Human_and_Robot_Behaviors_A_Robust_Set_of_Behaviors_and_Some_Discrepancies/links/5a66087baca272a158203fdb/Peoples-Judgments-of-Human-and-Robot-Behaviors-A-Robust-Set-of-Behaviors-and-Some-Discrepancies.pdf
Oliveira, R., Arriaga, P., Correia, F., & Paiva, A. (2018). Making Robot’s Attitudes Predictable: A Stereotype Content Model for Human-Robot Interaction in Groups. https://www.researchgate.net/profile/Raquel_Oliveira24/publication/323755735_Making_Robot’s_Attitudes_Predictable_A_Stereotype_Content_Model_for_Human-Robot_Interaction_in_Groups/links/5aa91bf7aca272d39cd502a6/Making-Robots-Attitudes-Predictable-A-Stereotype-Content-Model-for-Human-Robot-Interaction-in-Groups.pdf