Tuesday, November 5, 2019

The Knowledge Illusion part 10

This post is on the book The Knowledge Illusion by Steven Sloman (cognitive scientist and professor at Brown University) and Philip Fernbach (cognitive scientist and professor of marketing at the University of Colorado's Leeds School of Business).

This post is the tenth in a series of sixteen that address The Knowledge Illusion and unless otherwise noted all quotes are from The Knowledge Illusion. I recommend reading all sixteen posts in order.

I have written on numerous other books on psychology, social psychology, critical thinking, cognitive dissonance theory and related topics already, but discovered this one and feel it plays a complementary and very needed role. It helps to explain a huge number of the "hows" and "whys" behind all of those other subjects.



In the seventh chapter of The Knowledge Illusion, Thinking with Technology, the authors take on technology as an extension of thought.

The authors caution about the possibility of AI reaching the hypothetical singularity. In technology the term describes a point at which AI surpasses human intelligence and then, in a matter of weeks or even hours, goes from our equal or slight superior to hundreds or thousands of times superior to us.

Let us consider technology as an extension of thought.

"According to Ian Tattersall, curator emeritus with the American Museum of Natural History in New York, "cognitive capacity and technology reinforced each other" as civilization developed. Genetic evolution and technological change have run in tandem throughout our evolutionary history. As brains increased in size from one hominid species to its descendants, tools became more sophisticated and more common." (Page 133)

The progress from using rocks with edges to using fire to stone axes and knives then nets, hooks, traps and bows and arrows then eventually farming was accompanied at each step by changes in the culture and genes of our ancestors that made this progress possible.

One thing the authors note is our adaptability to different situations. We can use tools of many different types and styles as comfortably as our hands. We can use a hammer or knife or broom or a hundred different tools and rapidly feel comfortable with them.

One thing that is making modern technology less comfortable is its changing features. If I use a hammer or pen or fork, I reasonably expect it to either work or break, and to hold few surprises.

But if I use a computer, or a device like a modern phone with a computer in it, it may do things that I do not anticipate or even understand. And modern technology may monitor me in ways I do not anticipate, doing things with information from and about me that I would never have dreamed of and didn't give knowing consent to.

 "One consequence of these developments is that we are starting to treat our technology more and more like people, like full participants in the community of knowledge. The Internet is a great example. Just as we store understanding in other people, we store understanding in the internet. We have seen that having knowledge available in other people's heads leads us to overrate our own understanding. Because we live in a community that shares knowledge, each of us individually can fail to distinguish whether knowledge is stored in our own head or someone else's. This leads to the illusion of explanatory depth: I think I understand things better than I do because I incorporate other people's understanding into my assessment of my own understanding." (Page 136)

Two different research groups found we have "confusion at the frontier" regarding searching the internet. Adrian Ward, a psychologist at the University of Texas, found that using internet searches increases our cognitive self-esteem, our sense of being able to remember and process information. Additionally, people who search the internet for facts they don't know can later misremember and report that they knew those facts all along, when they actually had to look them up.

I know that I often rely on a couple of search terms instead of searching my memory. Through practice I have learned that a celebrity's name plus a phrase is often enough to find a song with all its lyrics, or a movie or television show with all the actors, writers, directors, episodes and so on related to it.

In another group of studies, Matthew Fisher, then a PhD student at Yale working with Frank Keil (one of the original discoverers of the illusion of explanatory depth), had students answer questions like "How does a zipper work?" Some people were allowed to use the internet to find information to confirm their answers and some were not.

The people who were allowed to use the internet subsequently felt greater confidence in their abilities to answer other, unrelated questions. The conclusion was that using the internet caused participants to feel they knew the answers to other questions, even though they hadn't researched them yet.

The authors give the example of a person searching the internet to plan out a trip. We bring some ideas and a timeframe, a destination and some priorities but get a lot of information from the internet. At the end we feel like we personally planned the trip and don't usually say "I came up with these seven ideas and got these five from the internet."

I have written about a lot of things and can confirm that sharply separating what I brought to the table from what the internet provided is hard.

 "This has some worrying consequences. The Internet's knowledge is so accessible and so vast that we may be fashioning a society where everyone with a smartphone and a Wi-Fi connection becomes a self-appointed expert in multiple domains." (Page 138)

In one study, the authors and Adrian Ward found that doctors and nurses report that patients who search websites like WebMD don't know much more than other patients, but think they do, and often doubt or reject diagnoses. In another experiment they asked people "What is a stock share?" and had them play an investment game. People who looked the answer up online bet more in the game, yet didn't do better and earned less money.

They identified the problem: looking up medical or financial information for a few minutes is nowhere near equal to a real medical or financial education. Access makes it feel like we have knowledge and understanding, but it is not the same thing.

As of right now machines don't have intentionality. A GPS can map out a route but it doesn't have desires that choose one. It also doesn't independently decide to pass down its information over generations, so it lacks culture.

Without shared intentions and desires we don't truly collaborate with machines; we use them. To share intentions you have to be able to reflect on your desires and the desires of others, determine which are in agreement and which are not, and even work to bring them into agreement.

No machine can do that. We don't know how to program a machine to do that and that is why certain kinds of artificial intelligence simply have not been successfully developed.

We are, as the authors point out, at an awkward moment. We depend on machines for knowledge and see our own knowledge as increased by greater access to the information the machines contain, but they do not think and understand as we do. They primarily store information, like libraries. We act more educated because these vast libraries exist with easy access, but we really don't know or understand the vast majority of information stored and accessed this way.

We now face a paradox of automation: greater dependence on machines, greater ignorance about that dependence, and a greater assumption, or illusion, of knowledge because the machines hold the knowledge.

We feel safer, but because machines don't have our intentions, thoughts, feelings and understanding, when they fail we can be surprised at the outcome, because we would have acted differently.

The authors pointed out several relevant examples. Airplanes stall when the plane's airspeed is not enough to generate lift. It means that the plane needs to maintain a minimum speed or it won't keep flying. Simple.

Pilots learn that a good way to increase speed and save yourself from a very unpleasant landing (a crash) is to point the nose of the plane down and dive, increasing speed quickly. Then, as the plane's airspeed rises enough, you pull out of the dive, and you are back to flying and can hopefully continue safely on your way. It is a very basic idea that pilots learn early in their training, and it has probably saved lives thousands of times.
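As a rough illustration (my own sketch, not from the book), the idea can be put in numbers. In level flight, lift must equal weight, and lift grows with the square of airspeed, so there is a minimum speed below which the wings simply cannot hold the plane up. The figures below are made-up ballpark values, not data for any specific aircraft:

```python
import math

def stall_speed(weight_n, air_density, wing_area, cl_max):
    """Minimum level-flight speed in m/s: below this, maximum lift < weight.

    Derived from the lift equation L = 0.5 * rho * v^2 * S * CL by setting
    lift equal to the aircraft's weight and CL to its maximum value.
    """
    return math.sqrt(2 * weight_n / (air_density * wing_area * cl_max))

# Illustrative numbers roughly in the range of a large airliner:
v_min = stall_speed(weight_n=2.0e6, air_density=1.225, wing_area=360.0, cl_max=1.5)
print(f"Stall speed: about {v_min:.0f} m/s")
```

Note that thinner air at altitude raises the stall speed, which is one reason high-altitude stalls are so dangerous.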

In 2009 Air France Flight 447 crashed, tragically killing 228 people. The Airbus A330 had entered a stall, and the black box revealed that the copilot tried to pull the nose up rather than point it down. The French accident investigators concluded in their report that the pilots were too reliant on the technology and, as a result, lacked basic manual flying skills. The report concluded the flight crew never even recognized that the plane was in a stall (something all kinds of planes have been doing for a very long time) and didn't understand how to interpret the complex signals from the equipment.

They died in a situation in which an older crew, with earlier technology and especially earlier training, would very likely have survived, left with a scary but not catastrophic story to tell.

It makes a difference if you know what to do and live as a result or don't know and die as a consequence.

There have been similar issues with reliance on GPS devices and cars driving off docks or ships getting stuck on shore.

How serious could an error in relying on machines for accurate information be? Believe it or not, the world has been in danger of being destroyed, and only human judgment, a decision not to obey orders, saved it.

A name most people don't know, but probably everyone should, is Stanislav Petrov. In 1983 he was an officer in the Soviet military and got an alert that an American missile had launched; per protocol he was supposed to report it up the chain, which would likely have triggered retaliation. That would have resulted in the Americans and their allies launching everything in response, and the Russians and likely the Chinese launching everything as well. Probably over five thousand nuclear weapons of various types would have been used in the next few hours.

Stanislav Petrov correctly decided it was more likely that his machine had an error than that the Americans had launched only one missile. His system then reported more American missiles, but he still felt it was unlikely the Americans would launch just a few when they had thousands of nuclear weapons and a real attack would involve thousands of them.

Fortunately for anyone who likes the human race surviving past 1983, Stanislav Petrov was suspicious of the new system that gave him notification of the launch and disobeyed his orders, resulting in the survival of the human race.

It is only one of several such incidents and Noam Chomsky has remarked that around a hundred such near launch events have occurred, making our survival up to now a minor miracle.

Maybe, just maybe, our huge stockpiles of nuclear weapons and military personnel instructed to launch everything at a moment's notice are not good ideas. I personally think that treaties limiting every member, for countries like China, Russia and the United States, to say two hundred modern nuclear weapons each would save many billions of dollars on nuclear weapons; for the United States and Russia it is closer to trillions over decades. Billions that could be used on education and healthcare and eradicating poverty. And if they got down to two hundred weapons, a treaty to go down to a hundred could be introduced. A hundred nuclear weapons could still deter any attack, since no country would be sustainable after such an attack. But a hundred is not the thousands we now have, and just that reduction would increase the odds of the human race surviving a nuclear war. Something to consider.



As we can see, computers lack intelligence and cannot truly share knowledge, but there is a way technology helps us to do so. With crowdsourcing applications people help each other and combine knowledge. As the authors point out, crowdsourcing is a critical provider of information to sites that integrate knowledge from different experiences, locations and knowledge bases.

People can share all kinds of information this way and you can answer a question on Reddit or Quora or get a recipe or get a traffic map. Or get a lot of restaurant reviews.

Crowdsourcing works best when it connects people with a need or interest with the right experts. You really need to know about construction or an accident blocking your path when you go to a map or driving app, and you really need someone who knows what they are talking about when you ask about something highly specialized like Scientology.

Crowdsourcing platforms need incentives to attract good experts. Money is one incentive and is sometimes used. Feeling right or important is another. Many people contribute tremendous content to Wikipedia and are never paid. The Oxford English Dictionary also has volunteers contribute content. But importantly, the right experts are needed for this to work.

Several years ago Pallokerho-35, a Finnish soccer club, invited fans to participate in decisions regarding recruiting, training and game tactics by letting them vote. Of course this was a disaster. The team did poorly, the coach was fired and the experiment was halted.

This shows an important lesson: for crowdsourcing to work, people need expertise in the relevant fields. Enthusiasm alone doesn't guarantee success.

Similarly, for certain items the reviews regular people give are actually not that helpful, as experts know better how to rate and compare them. Things like digital cameras and kitchen appliances are better evaluated by experts.

The authors point out something several books on psychology have noted: crowdsourcing for some things has been successful. Francis Galton in 1907 wrote a paper entitled Vox Populi (The Wisdom of Crowds). He described a contest in which 787 people tried to guess the weight of a fat ox and win a prize. All kinds of people entered, including butchers and farmers but also plenty of people with no expertise regarding livestock. The average guess was reported to be within one percent of the 1,198 pounds the ox actually weighed. So, in some circumstances the average of the crowd, even an uneducated crowd, can be accurate.
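Galton's result is easy to reproduce in a quick simulation (my own sketch, not from the book or Galton's data). The key assumption is that individual errors are large but unbiased, so they mostly cancel out when averaged:

```python
import random
import statistics

random.seed(42)

TRUE_WEIGHT = 1198  # pounds, the ox's actual weight in Galton's account

# Simulate 787 independent guesses with substantial individual error.
# The 15% spread is an assumption for illustration, not Galton's data.
guesses = [random.gauss(TRUE_WEIGHT, 0.15 * TRUE_WEIGHT) for _ in range(787)]

crowd_estimate = statistics.mean(guesses)
typical_individual_error = statistics.mean(abs(g - TRUE_WEIGHT) for g in guesses)

print(f"Crowd mean: {crowd_estimate:.0f} lb "
      f"({abs(crowd_estimate - TRUE_WEIGHT) / TRUE_WEIGHT:.2%} off)")
print(f"Typical individual error: {typical_individual_error:.0f} lb")
```

The crowd's average lands far closer to the true weight than a typical individual's guess, which is exactly the effect Galton reported. The effect breaks down when errors share a bias, which is why expertise still matters for things like soccer tactics.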

The evolution of technology continues as web developers are at work trying to create platforms that use experts to solve specific problems. They have to work out how to get the experts and how to get the experts to work together on the right problems but the potential for success has people hard at work.

The authors suggest that crowdsourcing and future collaborative platforms are where actual superintelligence is going to be found. People sharing knowledge and finding better ways to work out problems together will outproduce and outwork the machines of today.

The authors warn that as our systems get more complex, our understanding will become less and less, but our illusion of understanding will become greater.

They warn our dependence on experts will grow, especially when technology and our own meager understanding fails. We are more like cogs in a great machine now than masters of our own domain. They point out crucially that this means we have to be even more vigilant and remind ourselves that we really don't know what is going on. That is the most important lesson to me from this chapter.

They also point out the advantages. They see countless benefits from technology, like increased safety, reduced effort and increased efficiency. And they see greater access to experts as improving our own knowledge.



