Filed under Consciousness, Intelligence, Language-Thought, Learning, Memory, Neuroscience.

Description: Pardon me briefly for doing what older people do and speaking historically for a moment (based on somewhat personal experience, as I was around and an adult back when things that now seem very old actually happened). Now don’t worry, I am not going to start telling stories about what my introductory classes were like just before and shortly after email was invented, or about what Blockbuster sold before video tapes and recorders were available. Anyway…. When computers first arrived (were invented), artificial intelligence was born as a corollary field. Early on, work in AI split into two streams. One, which we can think of as the brute force type of AI, involved using computers and their exponentially expanding computing power to see if they could be programmed to do things that humans do, better than humans do them. Think of the chess-playing program Deep Blue (or Alan Turing’s code-cracking machine). By being able to rapidly generate and evaluate the downstream consequences of a great many decision options, this line of AI used computers’ large processing capacity to do things faster than humans could do them. Importantly, however, in so doing, such brute force AI machines do not approach and solve problems the ways humans do. While advantageous for number crunching, this meant that there were certain problems (like speech recognition) that computers did not handle very well. The other AI stream involved developing expert systems: approaches that would have computers doing things more like the ways humans do them. One hope was that this could lead to computers being able to accomplish passable versions of complex tasks that human experts do well, such as medical diagnosis (for a fantasy version, look up Emergency Medical Hologram Mark I – online, of course). The other potential payoff of this line of inquiry basically gave rise to the Information Processing Theory approach to studying human cognition (remember your intro psych course?), along with a related payoff of a better understanding of how human cognitive experts do what they do.

Ok, so that is the contextual digression into history. Now….. Intel and other computer chip manufacturers have grown their businesses over the past 47 years by virtue of what has come to be called Moore’s law (you can look that up too, though Moore was “real”), which essentially involves the doubling of the processing capacity of core computing chips roughly every two years (that’s why you had to buy a new computer every couple of years to keep up). The problem is that chip developers are now starting to bump up against the physical limits of how much processing capacity can be squeezed onto a chip. One solution is what is referred to as quantum computing (see the link below in Further Reading), which could produce a quantum leap in processing capacity. Another solution is to reverse the historical trend of trying to build computers that can out-think human beings and to start to see if building computers that think (process information) like human beings can produce energy-efficient (distributed) processing models. So, equipped with this thin shaving of decades of general wisdom on computer development, Artificial Intelligence, and information processing, have a read through the article linked below to see where this trend is going. In addition to starting your own (sooner than you will realize) historical reflections, it may also suggest some future investment options as well.

Source: Chips Off the Old Block: Computers Are Taking Design Cues From Human Brains, Cade Metz, Technology, New York Times.

Date: September 17, 2017

Photo Credit:  Minh Uong/The New York Times

Links:  Article Link —

So, can you see how the development pathways laid out in the linked article diverge from previous AI development pathways? As with virtually all technological advances, the impact of this one (the use of human processing models for computer processing strategies and hardware development) will very likely take us in unexpected directions. But whichever way it goes, pay attention, because it is certainly going to be interesting, and this one may tell us (within psychology) more about human information processing than we can imagine.

Questions for Discussion:

  1. Historically, how have computer developers and the AI field in general linked to or reflected upon human information processing?
  2. Can you identify one or two areas of human information processing or brain functioning that appear to reflect the sort of “design” principles being used in new cutting-edge chip and processor developments?
  3. How might these emerging developments in the field of computing (and AI) have impact upon the emerging Psychology sub-field of Cognitive Neuroscience?

References (Read Further):

McCorduck, P., Minsky, M., Selfridge, O. G., & Simon, H. A. (1977, August). History of Artificial Intelligence. In IJCAI (pp. 951-954).

Buchanan, B. G. (2005). A (very) brief history of artificial intelligence. AI Magazine, 26(4). (the download link is OK)

Heckerman, D., Horvitz, E., & Nathwani, B. N. (1992). Toward normative expert systems: Part I. Methods of Information in Medicine, 31.

Sperling, G. (1998). A Century of Human Information-Processing Theory. Perception and Cognition at Century’s End: History, Philosophy, Theory, 199.

Knill, E. (2010). Physics: Quantum computing. Nature, 463(7280), 441–443. (this article is a bit thick!)

Quantum Computing 101, University of Waterloo.