Posted by & filed under Consciousness, Industrial Organizational Psychology, Intelligence, Learning.

Description: How do you feel about self-driving cars? Would you trust your safety to them if they started driving through your neighbourhood (perhaps delivering packages or food)? A major challenge for computers (AI) learning to drive, and to drive safely, is recognizing what they “see” so they can respond appropriately. A common position is that computers make mistakes that humans do not, such as failing to recognize objects or animals that should prompt a change in action. This leads to the related common belief that computers do not “think” like we do. The researchers whose work is discussed in the article linked below challenge this by trying to create conditions in which humans would “see things” the same way that computers do. How might our thinking about computer thinking and decision making change if we could see commonalities in AI and human perception and information processing? For an understanding of the problem, give the article linked below a read (or have a look at the original research article linked in the References section).

Source: Researchers get humans to think like computers, Science News, Science Daily.

Date: March 22, 2019

Photo Credit: https://medium.com/udacity/perception-projects-from-the-self-driving-car-nanodegree-program-51fb88a38ff9

Article Link: https://www.sciencedaily.com/releases/2019/03/190322090239.htm

So, when humans are asked to respond to what they are seeing using the same response options available to computers, they appear to be “thinking” in very similar ways. The human ability to look at images and decide what they “look like,” as opposed to what they actually are (as in cloud gazing), may be something humans do that computers are not allowed to do when they are learning how to drive (and other things). Being restricted to deciding what everything you see really is makes sense when learning how to drive, and when we restrict ourselves to making those sorts of decisions, we start to act more like computers do in similar circumstances. What this might mean for self-driving vehicles is not entirely clear, but perhaps it opens an avenue for us to develop a bit more empathy for what the computers in self-driving vehicles are going through!
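As a rough illustration of the forced-choice paradigm described above (not the authors’ actual analysis), the agreement between human choices and a machine’s preferred label can be scored as a simple proportion. The labels and choices below are made up for the sake of the example:

```python
# Hypothetical forced-choice data: for each adversarial image, the label the
# machine assigned, and the label a human picked from the same restricted
# option set (made-up values, for illustration only).
machine_labels = ["armadillo", "baseball", "pretzel", "digital clock"]
human_choices = ["armadillo", "baseball", "bagel", "digital clock"]

# Fraction of images on which the human picked the machine's label.
matches = sum(m == h for m, h in zip(machine_labels, human_choices))
agreement = matches / len(machine_labels)
print(f"Human-machine agreement: {agreement:.0%}")  # prints "Human-machine agreement: 75%"
```

In the actual study, agreement rates like this (computed over many participants and images) are what let the researchers argue that humans, when given only the machine’s options, make machine-like choices.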

Questions for Discussion:

  1. What makes object recognition a major challenge for the AI systems used in self-driving cars?
  2. What did the researchers do to get humans to “see” adversarial images the way computers do, and what did they find?
  3. If humans and computers show commonalities in perception when given the same response options, how might that change how we think about AI “thinking” and decision making?

References (Read Further):

Zhou, Z., & Firestone, C. (2019). Humans can decipher adversarial images. Nature Communications, 10(1). https://www.nature.com/articles/s41467-019-08931-6.pdf

Sivak, M., & Schoettle, B. (2015). Road safety with self-driving vehicles: General limitations and road sharing with conventional vehicles. https://deepblue.lib.umich.edu/bitstream/handle/2027.42/111735/103187.pdf?sequ

Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., … & Zhang, X. (2016). End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316.

Howard, D., & Dai, D. (2014, January). Public perceptions of self-driving cars: The case of Berkeley, California. In Transportation Research Board 93rd Annual Meeting (Vol. 14, No. 4502, pp. 1-16).

 
