2. Putting Yourselves into Other People's Shoes

Professor Ben Polak: Okay, so last time we looked at

and played this game. You had to choose grades,

so you had to choose Alpha and Beta, and this table told us

what outcome would arise. In particular,

what grade you would get and what grade your pair would get.

So, for example, if you had chosen Beta and your

pair had chosen Alpha, then you would get a C and your

pair would get an A. One of the first things we

pointed out, is that this is not quite a game yet.

It's missing something. This has outcomes in it,

it's an outcome matrix, but it isn't a game,

because for a game we need to know payoffs.

Then we looked at some possible payoffs, and now it is a game.

So this is a game, just to give you some more

jargon, this is a normal-form game.

And here we've assumed the payoffs are those that arise if

players only care about their own grades, which I think was

true for a lot of you. It wasn't true for the

gentleman who's sitting there now, but it was true for a lot

of people. We pointed out,

that in this game, Alpha strictly dominates Beta.

What do we mean by that? We mean that if these are your

payoffs, no matter what your pair does, you attain a higher

payoff from choosing Alpha, than you do from choosing Beta.
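That dominance claim can be checked mechanically. A minimal sketch: the grade-game payoffs from the board are not reproduced in this transcript, so the numbers below are standard illustrative Prisoners' Dilemma payoffs of my own, not the lecture's.

```python
# Claim: Alpha strictly dominates Beta -- whatever your pair does, Alpha
# gives you a strictly higher payoff.  Payoff numbers are illustrative
# stand-ins, not the ones from the lecture's board.

my_payoff = {  # (my choice, pair's choice) -> my payoff
    ("Alpha", "Alpha"): 0, ("Alpha", "Beta"): 3,
    ("Beta", "Alpha"): -1, ("Beta", "Beta"): 1,
}

dominates = all(my_payoff[("Alpha", pair)] > my_payoff[("Beta", pair)]
                for pair in ("Alpha", "Beta"))
print(dominates)  # True: 0 > -1 against Alpha, and 3 > 1 against Beta
```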

Let's focus on a couple of lessons of the class before I

come back to this. One lesson was,

do not play a strictly dominated strategy.

Everybody remember that lesson? Then much later on,

when we looked at some more complicated payoffs and a more

complicated game, we looked at a different lesson

which was this: put yourself in others' shoes

to try and figure out what they're going to do.

So in fact, what we learned from that is,

it doesn't just matter what your payoffs are -- that's

obviously important -- it's also important what other people's

payoffs are, because you want to try and

figure out what they're going to do and then respond

appropriately. So we're going to return to

both of these lessons today. Both of these lessons will

reoccur today. Now, a lot of today is going to

be fairly abstract, so I just want to remind you

that Game Theory has some real world relevance.

Again, still in the interest of recapping, this particular game

is called the Prisoners' Dilemma.

It's written there, the Prisoners' Dilemma.

Notice, it's Prisoners, plural. And we mentioned some examples

last time. Let me just reiterate and

mention some more examples which are actually written here,

so they'll find their way into your notes.

So, for example, if you have a joint project

that you're working on, perhaps it's a homework

assignment, or perhaps it's a video project

like these guys, that can turn into a Prisoners'

Dilemma. Why?

Because each individual might have an incentive to shirk.

Price competition -- two firms competing with one another in

prices -- can have a Prisoners' Dilemma aspect about it.

Why? Because no matter how the other

firm, your competitor, prices you might have an

incentive to undercut them. If both firms behave that way,

prices will get driven down towards marginal cost and

industry profits will suffer. In the first case,

if everyone shirks you end up with a bad product.

In the second case, if both firms undercut each

other, you end up with low prices, that's actually good for

consumers but bad for firms. Let me mention a third example.

Suppose there's a common resource out there,

maybe it's a fish stock or maybe it's the atmosphere.

There's a Prisoners' Dilemma aspect to this too.

You might have an incentive to overfish.

Why? Because if the other countries

with this fish stock--let's say the fish stock is the

Atlantic--if the other countries are going to fish as normal,

you may as well fish as normal too.

And if the other countries aren't going to cut down on

their fishing, then you want to catch the fish

now, because there aren't going to

be any there tomorrow. Another example of this would

be global warming and carbon emissions.

Again, leaving aside the science, about which I'm sure

some of you know more than me here, the issue of carbon

emissions is a Prisoners' Dilemma.

Each of us individually has an incentive to emit carbons as

usual. If everyone else is cutting down, I don't have to; I end up using hot water and

driving a big car and so on. In each of these cases we end

up with a bad outcome, so this is socially important.

This is not just some abstract thing going on in a class in

Yale. We need to think about

solutions to this, right from the start of the

class, and we already talked about something.

We pointed out, that this is not just a failure

of communication. Communication per se will not

get you out of a Prisoners' Dilemma.

You can talk about it as much as you like, but as long as

you're going to go home and still drive your Hummer and have

sixteen hot showers a day, we're still going to have high

carbon emissions. You can talk about working hard

on your joint problem sets, but as long as you go home and

you don't work hard, it doesn't help.

In fact, if the other person is working hard,

or is cutting back on their carbon emissions,

you have all the more incentive to not work hard or to

keep high carbon emissions yourself.

So we need something more, and there are several things we can do: we can think about contracts;

we can think about treaties between countries;

we can think about regulation. All of these things work by

changing the payoffs. Not just talking about it,

but actually changing the outcomes and changing

the payoffs, changing the incentives.

Another thing we can do, a very important thing,

is we can think about changing the game into a game of repeated

interaction and seeing how much that helps,

and we'll come back and revisit that later in the class.

One last thing we can think of doing but we have to be a bit

careful here, is we can think about changing

the payoffs by education. I think of that as the "Maoist"

strategy. Lock people up in classrooms

and tell them they should be better people.

That may or may not work -- I'm not optimistic -- but at least

it's the same idea. We're changing payoffs.

So that's enough for recap and I want to move on now.

And in particular, we left you hanging at the end

last time. We played a game at the very

end last time, where each of you chose a

number -- all of you chose a number -- and we said the winner

was going to be the person who gets closest to two-thirds of

the average in the class. Now we've figured that out,

we figured out who the winner is, and I know that all of you

have been trying to see if you won, is that right?

I'm going to leave you in suspense.

I am going to tell you today who won.

We did figure it out, and we'll get there,

but I want to do a little bit of work first.

So we're just going to leave it in suspense.

That'll stop you walking out early if you want to win the

prize. So there's going to be lots of

times in this class when we get to play games,

we get to have classroom discussions and so on,

but there's going to be some times when we have to slow down

and do some work, and the next twenty minutes are

going to be that. So with apologies for being a

bit more boring for twenty minutes, let's do something

we'll call formal stuff. In particular,

I want to develop and make sure we all understand,

what are the ingredients of a game?

So in particular, we need to figure out what

formally makes something into a game.

The formal parts of a game are this.

We need players -- and while we're here let's develop some

notation. So the standard notation for

players, I'm going to use things like little i and little j.

So in that numbers game, the game when all of you wrote

down a number and handed it in at the end of last time,

the players were who? The players were you.

Y'all were the players: a useful Texan expression meaning you plural. In the numbers game, y'all were the players. Second ingredient of the game

are strategies. (There's a good clue here.

If I'm writing you should be writing.) Notation:

so I'm going to use little "s_i" to be a

particular strategy of Player i. So an example in that game

might have been choosing the number 13.

Everyone understand that? Now I need to distinguish this

from the set of possible strategies of Player i,

so I'm going to use capital "S_i" to be what?

To be the set of alternatives. The set of possible strategies

of Player i. So in that game we played at

the end last time, what were the set of

strategies? They were the set {1, 2, 3, …, 100}. Again, we're distinguishing a particular strategy from the set of possible strategies.

While we're here, our third notation for

strategy, I'm going to use little "s" without an "i,"

(no subscripts): little "s" without an "i," to

mean a particular play of the game.

So what do I mean by that? All of you, at the end last

time, wrote down this number and handed them in so we had one

number, one strategy choice for each person in the class.

So here they are, here's my collected in,

sort of strategy choices. Here's the bundle of bits of

paper you handed in last time. This is a particular play of

the game. I've got each person's name and

I've got a number from each person: a strategy from each

person. We actually have it on a

spreadsheet as well: so here it is written out on a

spreadsheet. Each of your names is on this

spreadsheet and the number you chose.

So that's a particular play of the game and that has a

different name. We sometimes call this "a

strategy profile." So in the textbook,

you'll sometimes see the term a strategy profile or a strategy

vector, or a strategy list. It doesn't really matter.

What it's saying is one strategy for each player in the

game. So in the numbers game this is

the spreadsheet -- or an example of this is the spreadsheet.

(I need to make it so you can still see that,

so I'm going to pull down these boards.

And let me clean something.) So you might think we're done

right? We've got players.

We've got the choices they could make: that's their

strategy sets. We've got those individual

strategies. And we've got the choices they

actually did make: that's the strategy profile.

Seems like we've got everything you could possibly want to

describe in a game. What are we missing here?

Shout it out. "Payoffs."

We're missing payoffs. So, to complete the game,

we need payoffs. Again, I need notation for

payoffs. So in this course,

I'll try and use "U" for utile, to be Player i's payoff.

So "U_i" will depend on Player 1's choice, all the way to Player i's own choice, all the way up to Player N's choice: "U_i(s_1, …, s_i, …, s_N)".

So Player i's payoff "U_i," depends on all

the choices in the class, in this case,

including her own choice. Of course, a shorter way of

writing that would be "U_i(s)," it depends

on the profile. So in the numbers game what is

this? In the numbers game

"U_i(s)" can be two things.

It can be 5 dollars minus your error in pennies,

if you won. I guess it could be something

if there was a tie, I won't bother writing that

now. And it's going to be 0

otherwise. So we've now got all of the

ingredients of the game: players, strategies,

payoffs. Now we're going to make an

assumption today and for the next ten weeks or so;

so for almost all the class. We're going to assume that

these are known. We're going to assume that

everybody knows the possible strategies everyone else could

choose and everyone knows everyone else's payoffs.

Now that's not a very realistic assumption and we are going to

come back and challenge it at the end of semester,

but this will be complicated enough to give us a lot of

material in the next ten weeks. I need one more piece of

notation and then we can get back to having some fun.

So one more piece of notation, I'm going to write

"s_-i" to mean what? It's going to mean a strategy

choice for everybody except person "i."

It's going to be useful to have that notation around.

So this is a choice for all except person "i" or Player i.

So, in particular, if you're person 1 and then

"s_-i" would be "s_2,

s_3, s_4" up to

"s_n" but it wouldn't include "s_1."
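The notation so far can be made concrete in a short sketch. Representing a strategy profile as a dictionary from player to choice is my own illustration, not the lecture's:

```python
# A strategy profile s is one choice per player; s_-i is everyone's
# choice except player i's.  (Dict representation is illustrative only.)

def s_minus_i(s, i):
    """Return the profile of strategies for everyone except player i."""
    return {j: s_j for j, s_j in s.items() if j != i}

# Example: a three-player play of the numbers game (numbers invented).
s = {1: 13, 2: 45, 3: 33}   # a strategy profile: one number per player
print(s_minus_i(s, 1))      # s_-1 leaves out player 1's choice
```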

It's useful why? Because sometimes it's useful

to think about the payoffs, as coming from "i's" own choice

and everyone else's choices. It's just a useful way of

thinking about things. Now this is when I want to stop

for a second and I know that some of you, from past

experience, are somewhat math phobic.

You do not have to wave your hands in the air if you're math

phobic, but since some of you are, let me just get you all to

take a deep breath. This goes for people who are

math phobic at home too. So everyone's in a slight panic

now. You came here today.

You thought everything was going to be fine.

And now I'm putting math on the board.

Take a deep breath. It's not that hard,

and in particular, notice that all I'm doing here

is writing down notation. There's actually no math going

on here at all. I'm just developing notation.

I don't want anybody to quit this class because they're

worried about math or math notation.

So if you are in that category of somebody who might quit it

because of that, come and talk to me,

come and talk to the TAs. We will get you through it.

It's fine to be math phobic. I'm phobic of all sorts of

things. Not necessarily math,

but all sorts of things. So a serious thing,

a lot of people get put off by notation, it looks scarier than

it is, there's nothing going on here

except for notation at this point.

So let's have an example to help us fix some ideas.

(And again, I'll have to clean the board, so give me a second.)

I think an example might help those people who are disturbed

by the notation. So here's a game which we're

going to discuss briefly. It involves two players and

we'll call the Players I and II and Player I has two choices,

top and bottom, and Player II has three choices

left, center, and right.

It's just a very simple abstract example for now.

And let's suppose the payoffs are like this.

They're not particularly interesting.

We're just going to do it for the purpose of illustration.

So here are the payoffs:

             Left     Center    Right
  Top       (5,-1)    (11,3)    (0,0)
  Bottom    (6,4)     (0,2)     (2,0)

Let's just map the notation we just developed into this game.

So first of all, who are the players here?

Well there's no secret there, the players are -- let's just

write it down why don't we. The players here in this game

are Player I and Player II. What about the strategy sets or

the strategy alternatives? So here Player I's strategy

set, she has two choices top or bottom, represented by the rows,

which are hopefully the top row and the bottom row.

Player II has three choices, this game is not symmetric,

so they have different number of choices, that's fine.

Player II has three choices left, center,

and right, represented by the left, center,

and right column in the matrix. Just to point out in passing,

up to now, we've been looking mostly at symmetric games.

Notice this game is not symmetric in the payoffs or in

the strategies. There's no particular reason

why games have to be symmetric. Payoffs: again,

this is not rocket science, but let's do it anyway.

So just an example of payoffs. So Player I's payoff,

if she chooses top and Player II chooses center,

we read by looking at the top row and the center column,

and Player I's payoff is the first of these payoffs,

so it's 11. Player II's payoff,

from the same choices, top for Player I,

center for Player II, again we go along the top row

and the center column, but this time we choose Player

II's payoff, which is the second payoff,

so it's 3. So again, I'm hoping this is

calming down the math phobics in the room.

Now how do we think this game is going to be played?

It's not a particularly interesting game,

but while we're here, why don't we just discuss it

for a second. If our mike guys get a little

bit ready here. So how do we think this game

should be played? Well let's ask somebody at

random perhaps. Ale, do you want to ask this

guy in the blue shirt here, does Player I have a dominated

strategy? Student: No,

Player I doesn't have a dominated strategy.

For instance, if Player II picks left then

Player I wants to pick bottom, but if Player II picks center,

Player I wants to pick top. Professor Ben Polak:

Good. Excellent.

Very good. I should have had you stand up.

I forgot that. Never mind.

But that was very clear, thank you.

Was that loud enough so people could hear it?

Did people hear that? People in the back,

did you hear it? So even that wasn't loud enough

okay, so we really need to get people--That was very clear,

very nice, but we need people to stand up and shout,

or these people at the back can't hear.

So your name is? Student: Patrick.

Professor Ben Polak: What Patrick said was:

no, Player I does not have a dominated strategy.

Top is better than bottom against left -- sorry,

bottom is better than top against left because 6 is bigger

than 5, but top is better than bottom

against center because 11 is bigger than 0.

Everyone see that? So it's not the case that top

always beats--it's not the case that top always does better than

bottom, or that bottom always does better than top.

What about, raise hands this time, what about Player II?

Does Player II have a dominated strategy?

Everyone's keeping their hands firmly down so as not to get

spotted here. Ale, can we try this guy in

white? Do you want to stand up and

wait until Ale gets there, and really yell it out now.

Student: I believe right is a dominated strategy because

if Player I chooses top, then Player II will choose

center, and if-- I'm getting confused now,

it looks better on my paper. But yeah, right is never the

best choice. Professor Ben Polak:

Okay, good. Let's be a little bit careful

here. So your name is?

Student: Thomas. Professor Ben Polak: So

Thomas said something which was true, but it doesn't quite match

with the definition of a dominated strategy.

What Thomas said was, right is never a best choice,

that's true. But to be a dominated strategy

we need something else. We need that there's another

strategy of Player II that always does better.

That turns out also to be true in this case,

but let's just see. So in this particular game,

I claim that center dominates right.

So let's just see that. If Player I chose top,

center yields 3, right yields 0:

3 is bigger than 0. And if Player I chooses bottom,

then center yields 2, right yields 0:

2 is bigger than 0 again. So in this game,

center strictly dominates right.

What you said was true, but I wanted something

specifically about domination here.

So what we know here, we know that Player II should

not choose right. Now, in fact,

that's as far as we can get with dominance arguments in this

particular game, but nevertheless,

let's just stick with it a second.

I gave you the definition of strict dominance last time and

it's also in the handout. By the way, the handout on the

web. But let me write that

definition again, using or making use of the

notation from the class. So, definition: Player i's strategy "s'_i" is strictly dominated by Player i's strategy "s_i" if, using our notation, "U_i(s_i, s_-i)" is strictly bigger than "U_i(s'_i, s_-i)", and the key part of the definition is that this holds for all "s_-i."

So to say it in words, Player i's strategy

"s'_i" is strictly dominated by her strategy

"s_i," if "s_i" always does

strictly better -- always yields a higher payoff for Player i --

no matter what the other people do.

So this is the same definition we saw last time,

just being a little bit more nerdy and putting in some

notation. People panicking about that,

people look like deer in the headlamps yet?

No, you look all right: all rightish.
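The definition on the board translates directly into code. This is a sketch for a two-player game stored as a payoff table; the function name and representation are mine, and the numbers are the 2x3 example from the board:

```python
# s'_i is strictly dominated by s_i if
#   u_i(s_i, s_-i) > u_i(s'_i, s_-i)  for ALL s_-i.

payoffs = {  # (row, col) -> (u_1, u_2), the 2x3 example from the board
    ("top", "left"): (5, -1), ("top", "center"): (11, 3), ("top", "right"): (0, 0),
    ("bottom", "left"): (6, 4), ("bottom", "center"): (0, 2), ("bottom", "right"): (2, 0),
}
rows = ["top", "bottom"]
cols = ["left", "center", "right"]

def strictly_dominates(s_i, s_prime_i, player):
    """True if s_i strictly dominates s'_i for the given player (1 or 2)."""
    others = cols if player == 1 else rows   # the opponent's strategy set
    for other in others:
        if player == 1:
            u, u_prime = payoffs[(s_i, other)][0], payoffs[(s_prime_i, other)][0]
        else:
            u, u_prime = payoffs[(other, s_i)][1], payoffs[(other, s_prime_i)][1]
        if not u > u_prime:
            return False                     # fails against this s_-i
    return True

print(strictly_dominates("center", "right", player=2))  # True: 3>0 and 2>0
print(strictly_dominates("top", "bottom", player=1))    # False: 5 < 6 vs left
```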

Let's have a look at another example.

People okay, I can move this? All right, so it's a slightly

more exciting example now. So imagine the following

example, an invader is thinking about invading a country,

and there are two ways -- there are two passes if you like --

through which he can lead his army.

You are the defender of this country and you have to decide

which of these passes or which of these routes into the

country, you're going to choose to

defend. And the catch is,

you can only defend one of these two routes.

If you want a real world example of this,

think about the third Century B.C., someone can correct me

afterwards. I think it's the third Century

B.C. when Hannibal is thinking of

crossing the Alps. Not Hannibal Lecter:

Hannibal the general in the third Century B.C.

The one with the elephants. Okay, so the key here is going

to be that there are two passes. One of these passes is a hard

pass. It goes over the Alps.

And the other one is an easy pass.

It goes along the coast. If the invader chooses the hard

pass he will lose one battalion of his army simply in getting

over the mountains, simply in going through the

hard pass. If he meets your army,

whichever pass he chooses, if he meets your army defending

a pass, then he'll lose another battalion.

I haven't given you--I've given you roughly the choices,

the choice they're going to be for the attacker which pass to

choose, and for the defender which pass

to defend. But let's put down some payoffs

so we can start talking about this.

So in this game, the payoffs for this game are

going to be as follows. It's a simple two by two game.

This is going to be the attacker, this is Hannibal,

and this is going to be the defender,

(and I've forgotten which general was defending and

someone's about to tell me that).

And there are two passes you could defend:

the easy pass or the hard pass. And there's two you could use

to attack through, easy or hard.

(Again, easy pass here just means no mountains,

we're not talking about something on the New Jersey

Turnpike.) So the payoffs here are as follows,

and I'll explain them in a second.

So his payoff, the attacker's payoff,

is how many battalions does he get to bring into your country?

He only has two to start with and for you, it's how many

battalions of his get destroyed? So just to give an example,

if he goes through the hard pass and you defend the hard

pass, he loses one of those

battalions going over the mountains and the other one

because he meets you. So he has none left and you've

managed to destroy two of them. Conversely, if he goes on the

hard pass and you defend the easy pass, he's going to lose

one of those battalions. He'll have one left.

He lost it in the mountains. But that's the only one he's

going to lose because you were defending the wrong pass.

Everyone understand the payoffs of this game?

So now imagine yourself as a Roman general.

This is going to be a little bit of a stretch for

imagination, but imagine yourself as a Roman general,

and let's figure out what you're going to do.

You're the defender. What are you going to do?

So let's have a show of hands. How many of you think you

should defend the easy pass? Raise your hands,

let's raise your hands so Jude can see them.

Keep them up. Wave them in the air with a bit

of motion. Wave them in the air.

We should get you flags okay, because these are the Romans

defending the easy pass. And how many of you think

you're going to defend the hard pass?

We have a huge number of people who don't want to be Roman

generals here. Let's try it again,

no abstentions, right?

I'm not going to penalize you for giving the wrong answer.

So how many of you think you're going to defend the easy pass?

Raise your hands again. And how many think you're going

to defend the hard pass? So we have a majority choosing

easy pass -- had a large majority.

So what's going on here? Is it the case that defending

the easy pass dominates defending the hard pass?

Is that the case? Is it the case that defending

the easy pass dominates defending the hard pass?

You can shout out. No, it's not.

In fact, we could check that if the attacker attacks through the

easy pass, not surprisingly, you do better if you defend the

easy pass than the hard pass: 1 versus 0.

But if the attacker was to attack through the hard pass,

again not surprisingly, you do better if you defend the

hard pass than the easy pass. So that's not an unintuitive

finding. It isn't the case that

defending easy dominates defending hard.

You just want to match with the attacker.

Nevertheless, almost all of you chose easy.

What's going on? Can someone tell me what's

going on? Let's get the mikes going a

second. So can we catch the guy with

the--can we catch this guy with the beard?

Just wait for the mike to get there.

If you could stand up: stand up and shout.

There you go. Student: Because you

want to minimize the amount of enemy soldiers that reach Rome

or whatever location it is. Professor Ben Polak: You

want to minimize the number of soldiers that reach Rome,

that's true. On the other hand,

we've just argued that you don't have a dominant strategy

here; it's not the case that easy

dominates hard. What else could be going on?

While we've got you up, why don't we get the other guy

who's got his hand up there in the middle.

Again, stand up and shout in that mike.

Point your face towards the mike.

Good. Student: It seems as

though while you don't have a dominating strategy,

it seems like Hannibal is better off attacking through--It

seems like he would attack through the easy pass.

Professor Ben Polak: Good, why does it seem like

that? That's right,

we're on the right lines now. Why does it seem like he's

going to attack through the easy pass?

Student: Well if you're not defending the easy pass,

he doesn't lose anyone, and if he attacks through the

hard pass he's going to lose at least one battalion.

Professor Ben Polak: So let's look at it from--Let's do

the exercise--Let's do the second lesson I emphasized at

the beginning. Let's put ourselves in

Hannibal's shoes, they're probably boots or

something. Whatever you do when you're

riding an elephant, whatever you wear.

Let's put ourselves in Hannibal's shoes and try and

figure out what Hannibal's going to do here.

So it could be--From Hannibal's point of view he doesn't know

which pass you're going to defend, but let's have a look at

his payoffs. If you were to defend the easy

pass and he goes through the easy pass, he will get into your

country with one battalion and that's the same as he would have

got if he went through the hard pass.

So if you defend the easy pass, from his point of view,

it doesn't matter whether he chooses the easy pass and gets

one in there or the hard pass, he gets one in there.

But if you were to defend the hard pass, if you were to defend

the mountains, then if he chooses the easy

pass, he gets both battalions in and if he chooses the hard pass,

he gets no battalions in. So in this case, easy is better.

We have to be a little bit careful.

It's not the case that for Hannibal, choosing the easy pass

to attack through, strictly dominates choosing the

hard pass, but it is the case that there's a weak notion of

domination here. It is the case -- to introduce

some jargon -- it is the case that the easy pass for the

attacker, weakly dominates the hard pass for the attacker.

What do I mean by weakly dominate?

It means by choosing the easy pass, he does at least as well,

and sometimes better, than he would have done had he

chosen the hard pass. So here we have a second

definition, a new definition for today, and again we can use our

jargon. Definition: Player i's

strategy, "s'_i" is weakly dominated by her strategy

"s_i" if--now we're going to take advantage of our

notation--if Player i's payoff from choosing "s_i"

against "s_-i" is always at least as big as her payoff from choosing "s'_i" against

"s_-i" and this has to be true for all things that

anyone else could do. And in addition,

Player i's payoff from choosing "s_i" against

"s_-i" is strictly better than her payoff from

choosing "s'_i" against "s_-i,"

for at least one thing that everyone else could do.

Just check, that exactly corresponds to the easy and hard

thing we just had before. I'll say it again,

Player i's strategy "s'_i" is weakly

dominated by her strategy "s_i" if she always

does at least as well by choosing "s_i" than

choosing "s'_i" regardless of what everyone else

does, and sometimes she does strictly

better. It seems a pretty powerful

lesson. Just as we said you should

never choose a strictly dominated strategy,

you're probably never going to choose a weakly dominated

strategy either, but it's a little more subtle.
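Applied to the attacker's side of the Hannibal game, the weak-dominance definition checks out in a few lines. The payoff numbers below are derived from the lecture's verbal description (attacker's payoff = battalions that make it into the country, out of two, losing one to the mountains on the hard pass and one to a defended pass); the encoding is my own sketch:

```python
# s_i weakly dominates s'_i if it is always at least as good, and
# strictly better against at least one s_-i.

attacker_payoff = {  # (attacker_pass, defender_pass) -> battalions that get in
    ("easy", "easy"): 1, ("easy", "hard"): 2,
    ("hard", "easy"): 1, ("hard", "hard"): 0,
}

def weakly_dominates(s_i, s_prime_i):
    """Does attacking via s_i weakly dominate attacking via s'_i?"""
    at_least_as_good = all(
        attacker_payoff[(s_i, d)] >= attacker_payoff[(s_prime_i, d)]
        for d in ("easy", "hard"))
    sometimes_better = any(
        attacker_payoff[(s_i, d)] > attacker_payoff[(s_prime_i, d)]
        for d in ("easy", "hard"))
    return at_least_as_good and sometimes_better

print(weakly_dominates("easy", "hard"))  # True: ties vs easy (1=1), wins vs hard (2>0)
print(weakly_dominates("hard", "easy"))  # False
```

Note that easy does not *strictly* dominate hard (both yield 1 if the defender guards the easy pass), which is exactly why the weaker notion is needed here.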

Now that definition, if you're worried about what

I've written down here and you want to see it in words,

on the handout I've already put on the web that has the summary

of the first class, I included this definition in

words as well. So compare the definition of

words with what's written here in the nerdy notation on the

board. Now since we think that

Hannibal, the attacker, is not going to play a weakly

dominated strategy, we think Hannibal is not going

to choose the hard pass. He's going to attack on the

easy pass. And given that,

what should we defend? We should defend easy which is

what most of you chose. So be honest now:

was that why most of you chose easy?

Yeah, probably was. We're able to read this.

So, by putting ourselves in Hannibal's shoes,

we could figure out that his hard attack strategy was weakly

dominated. He's going to choose easy,

so we should defend easy. Having said that of course,

Hannibal went through the mountains which kind of screws

up the lesson, but too late now.

Now then, I promised you we'd get back to the game from last

time. So where have we got to so far

in this class? We know from last time that you

should not choose a dominated strategy, and we also know we

probably aren't going to choose a weakly dominated strategy,

and we also know that you should put yourself in other

people's shoes and figure out that they're not going to play

strongly or strictly or weakly dominated strategies.

That seems a pretty good way to predict how other people are

going to play. So let's take those ideas and

go back to the numbers game from last time.

Now before I do that, I don't need the people at home

to see this, but how many of you were here last time?

How many of you were not. I asked the wrong question.

How many of you were not here last time?

So we handed out again that game.

We handed out again the game with the numbers,

but just in case, let me just read out the game

you played. This was the game you played.

"Without showing your neighbor what you are doing,

put in the box below a whole number between 1 and 100.

We will (and in fact have) calculated the average number

chosen in the class and the winner of this game is the

person who gets closest to two-thirds times the average

number. They will win five dollars

minus the difference in pennies."

So everybody filled that in last time and I have their

choices here. So before we reveal who won,

let's discuss this a little bit.

Let me come down hazardously off this stage,

and figure out--Let's get the mics up a bit for a second,

we can get some mics ready. So let me find out from people

here and see what people did a second.

You can be honest here since I've got everything in front of

me. So how many of you chose some

number like 32, 33, or 34?

One hand. Actually I can tell you,

nine of you did. So should I read out the names?

Should I embarrass people? We've got Lynette,

Lukucin, we've got Kristin Bargeon;

there's nine of you here. Let's try it again. How many of

you chose numbers between 32 and 34?

Okay, a good number of you. Now we're seeing some hands up.

So keep your hands up a second, those people.

So let me ask people why? Can we get the mike to that

guy? What's your name?

If we can get him to stand up. Stand up a second and shout out

to the class. What's your name?

Student: Chris. Professor Ben Polak:

Chris, you're on this list somewhere.

Maybe you're not on this list somewhere.

Never mind, what did you choose? Student: I think I chose

30. Professor Ben Polak:

Okay 30, so that's pretty close. So why did you choose 30?

Student: Because I thought everyone was going to be

around like the 45 range because 66 is two-thirds,

or right around that, of 100, and they were going to go

two-thirds less than that and I did one less than that one.

Professor Ben Polak: Okay, thank you.

Let's get one of the others. There was another one in here.

Can you just raise your hands again, the people who were

around 33, 34. There's somebody in here.

Can we get you to stand up (and you're between mikes).

So that would be--Yep, go ahead. Shout it out.

What's your name first of all? Student: Ryan.

Professor Ben Polak: Ryan, I must have you here as

well, never mind. What did you choose?

Student: 33, I think. Professor Ben Polak: 33.

Oh you did. You are Ryan Lowe?

Student: Yeah. Professor Ben Polak: You

are Ryan Lowe, okay.

Good, go ahead. Student: I thought

similar to Chris actually and I also thought that if we got

two-thirds and everyone was choosing numbers in between 1

and 100 ends up with 33, would be around the number

(indiscernible). Professor Ben Polak: So

just to repeat the argument that we just heard.

Again, you have to shout it out more because I'm guessing people

didn't hear that in the room. So I'll just repeat it to make

sure everyone hears it. A reason for choosing a number

like 33 might go as follows. If people in the room choose

randomly between 1 and 100, then the average is going to be

around 50 say and two-thirds of 50 is around 33,

33 1/3 actually. So that's a pretty good piece

of reasoning. What's wrong with that

reasoning? What's wrong with that?

Can we get the guy, the woman in the striped shirt

here, sorry. We haven't had a woman for a

while, so let's have a woman. Thank you.

Student: That even if everyone else had the same

reasoning as you, it's still going to be way too

high. Professor Ben Polak: So

in particular, if everyone else had the same

reasoning as you, it's going to be way too high.

So if everyone else reasons that way then everyone in the

room would choose a number like 33 or 34, and in that case,

the average would be what? Sorry, that two-thirds of the

average would be what? Something like 22.

So the flaw in the argument that Chris and Ryan had -- it

isn't a bad argument, it's a good starting point --

but the flaw in the argument, the mistake in the argument was

the first sentence in the argument.

The first sentence in the argument was,

if the people in the room choose at random,

then they will choose around 50.

That's true. The problem is that people in

the room aren't going to choose at random.

Look around the room a second. Look around yourselves.

Do any of you look like a random number generator?

Actually, from here I can see some of the people,

but I'm not going to point. Actually looking at some of

your answers maybe some of you are.

On the whole, Yale students are not random

number generators. They're trying to win the game.

So they're unlikely to choose numbers at random.

As a further argument, if in fact everyone thought

that way, and if you figured out everyone was going to think that

way, then you would expect everyone

to choose a number like 33 and in that case you should choose a

number like 22. How many of you,

raise your hands a second. How many of you chose numbers

in the range 21 through 23? There's way more of you than

that. I'll start reading you out as

well. Actually about twelve of you,

raise your hands. There should be twelve hands

going up somewhere. There's two,

three hands going up, four, five hands going up.

There's actually 12 people who chose exactly 22,

so considerably more if we include 23 and 21.

So those people, I'm guessing,

were thinking this way, is that right?

Let me get one of my 22's up again.

Here's a 22. You want to get this guy?

What's your name sir? Stand up and shout.

Student: Ryan. Professor Ben Polak: You

chose 22? Student: I chose 22

because I thought that most people would play the game

dividing by two-thirds a couple of times,

and give numbers averaging around the low 30's.

Professor Ben Polak: So if you think people are going to

play a particular way, in particular if you think

people are going to choose the strategy of Ryan and Chris,

and choose around 33, then 22 seems a great answer.

But you underestimate your Yale colleagues.

In fact, 22 was way too high. Now, again, let's just iterate

the point here. Let me just repeat the point

here. The point here is when you're

playing a game, you want to think about what

other people are trying to do, to try and predict what they're

trying to do, and it's not necessarily a

great starting point to assume that the people around you are

random number generators. They have aims -- trying to win --

and they have strategies too. Let me take this back to the

board a second. So, in particular,

are there any strategies here we can really rule out?

We said already people are not random.

Are there any choices we can just rule out?

We know people are not going to choose those choices.

Let's have someone here. Can we have the guy in green?

Wait for Ale, there we go. Good.

Stand up. Give me your name.

Student: My name's Nick. Professor Ben Polak:

Shout it out so people can hear. Student: No one is going

to choose a number over 50. Professor Ben Polak: No

one is going to choose a number over 50.

Okay, I was going--okay that's fair enough.

Some people did. That's fair enough.

I was thinking of something a little bit less,

that's fine. I was thinking of something a

little bit less ambitious. Somebody said 66.

So let's start analyzing this.

So, in particular, there's something about these

strategy choices that are greater than 67 at any rate.

Certainly, I mean 66 let's go up a little bit,

so these numbers bigger than 67.

What's wrong with numbers bigger than 67?

What's wrong with--Raise your hands if you have answer.

What's wrong? Can we get the guy in red who's

right close to the mike? Stand up, give me your name.

Stand up. Shout it out to the crowd.

Student: Peter. Professor Ben Polak: Yep.

Student: If everyone chooses 100 it would be 67.

Professor Ben Polak: Good, so even if everyone in the

number--everyone in the room didn't choose randomly but they

all chose 100, a very unlikely circumstance,

but even if everyone had chosen 100, the highest,

the average, sorry, the highest two-thirds

of the average could possibly be is 66 2/3,

hence 67 would be a pretty good choice in that case.

So numbers bigger than 67 seem pretty crazy choices,

but crazy isn't the word I'm looking for here.

What can we say about those choices, those strategies 67 and

above, bigger than 67: 68 and above?

What can we say about those choices?

Somebody right behind you, the woman right behind you,

shout it out. Student: They have no

payoffs for… Professor Ben Polak:

They have no payoffs. What's the jargon here?

Let's use our jargon. Somebody shout it out,

what's the jargon about that? They're dominated.

So these strategies are dominated.

Actually, they're only weakly dominated but that's okay,

they're certainly dominated. In particular,

a strategy like 80 is dominated by choosing 67.

You will always get a higher payoff from choosing 67,

at least as high and sometimes higher,

than the payoff you would have got, had you chosen 80,

no matter what else happened in the room.

So these strategies are dominated.
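The dominance claim here can be checked numerically. Whatever the rest of the room does, the average lies between 1 and 100, so the target -- two-thirds of the average -- never exceeds 66 2/3, and a guess of 67 is therefore always at least as close to the target as a guess of 80. A quick sketch of that check (the helper name is ours, not the course's):

```python
# Check that 67 is always at least as close to the target as 80 is.
# In this game the average lies in [1, 100], so the target
# 2/3 * average lies in [2/3, 66.67] -- always below 67.
def distance(guess, target):
    return abs(guess - target)

# Sweep targets 0.67 .. 66.66 in steps of 0.01.
ok = all(
    distance(67, t / 100) <= distance(80, t / 100)
    for t in range(67, 6667)
)
print(ok)  # True: a guess of 80 can never beat a guess of 67
```

Being closer never hurts, yet the two guesses can still tie in payoff terms (both lose whenever someone else is closer), which is why the dominance is only weak.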

We know, from the very first lesson of the class last time,

that no one should choose these strategies.

They're dominated strategies. So did anyone choose strategies

bigger than 67? Okay, I'm not going to read out

names here, but it turns out four of you did.

I'm not going to make you wave your--okay.

So okay, for the four of you who did, never mind,

but … well mind actually,

yeah. So once we've eliminated the

possibility that anyone in the room is going to choose a

strategy bigger than 67, it's as if those numbers 68

through 100 are irrelevant. It's really as if the game is

being played where the only choices available on the table

are 1 through 67. Is that right?

We know no one's going to choose 68 and above,

so we can just forget them. We can delete those strategies

and once we delete those strategies, all that's left are

choices 1 through 67. So can somebody help me out now?

What can I conclude, now I've concluded that the

strategies 68 through 100 essentially don't exist or have

been deleted. What can I conclude?

Let me see if I can get a mike in here.

Stand up and wait for the mike. And here comes the mike.

Good. Shout out.

Student: That all strategies 45 and above are

hence also ruled out. Professor Ben Polak:

Good, so your name is? Student: Henry.

Professor Ben Polak: So Henry is saying once we've

figured out that no one should choose a strategy bigger than

67, then we can go another step and

say, if those strategies never existed, then the same argument

rules out -- or a similar argument rules out -- strategies

bigger than 45. Let's be careful here.

The strategies that are less than 67 but bigger than 45,

I think these strategies are not, they're not dominated

strategies in the original game. In particular,

we just argued that if everyone in the room chose 100,

then 67 would be a winning strategy.

So it's not the case that the strategies between 45 and 67 are

dominated strategies. But it is the case that they're

dominated once we delete the dominated strategies:

once we delete 67 and above. So these strategies -- let's be

careful here with the word weakly here -- these strategies

are not weakly dominated in the original game.

But they are dominated -- they're weakly dominated -- once

we delete 68 through 100. So all of the strategies 45

through 67 are gone now. So okay, let's have a look.

Did anyone choose -- raise your hands, be brave here.

Did anyone choose a strategy between 45 and 67?

Or between 46 and 67? No one's raising their hand,

but I know some of you did because I got it in front of me,

at least four of you did and I won't read out those names yet,

but I might read them out next time.

So four more people chose those strategies.

Now notice, there's a different part to

this argument. The argument that eliminates

strategies 67 and above, or 68 upwards,

that argument just involves the first lesson of last time:

do not choose a dominated strategy,

admittedly weak here, but still. But the second slice,

strategies 45 through 67, getting rid of those strategies

involves a little bit more. You've got to put yourself in

the shoes of your fellow classmen and figure out,

that they're not going to choose 67 and above.

So the first argument, that's a straightforward

argument, the second argument says,

I put myself in other people's shoes, I realize they're not

going to play a dominated strategy,

and therefore, having realized they're not

going to play a dominated strategy,

I shouldn't play a strategy between 45 and 67.

So this argument is an 'in shoes' argument.

Now what? Where can we go now?

Yeah, so let's have the guy in the beard, but let the mike get

to him. Yell out your name.

Student: You just repeat the same reasoning again and

again, and you eventually get down to 1.

Professor Ben Polak: We'll do that but let's go one

step at a time. So now we've ruled out the

possibility that anyone's going to choose a strategy 68 and

above because they're weakly dominated,

and we've ruled out the possibility that anyone's going

to choose a strategy between 46 and 67,

because those strategies are dominated, once we've ruled out

the dominated strategies. So we know no one's choosing

any strategies above 45. It's as if the numbers 46 and

above don't exist. So we know that the highest

anyone could ever choose is 45, and two-thirds of 45 is roughly

… someone help me out here …

30 right: roughly 30. So we know that all the numbers

between 45 and 30, these strategies were not

dominated. And they weren't dominated even

after deleting the dominated strategies.

But they are dominated once we deleted not just the dominated

strategies, but also the strategies that were dominated

once we deleted the dominated strategies.

I'm not going to try and write that, but you should try and

write it in your notes. So without writing that

argument down in detail, notice that we can rule out the

strategies 30 through 45, not by just examining our own

payoffs; not just by putting ourselves

in other people's shoes and realizing they're not going to

choose a dominated strategy; but by putting ourselves in

other people's shoes while they're putting themselves in

someone else's shoes and figuring out what they're going

to do. So this is an 'in shoes',

be careful where we are here, this is an 'in shoes in shoes'

argument, at which point you might want to invent the sock.
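The successive slices in this argument can be generated mechanically: each round of shoes-within-shoes reasoning multiplies the ceiling on sensible guesses by two-thirds. A rough sketch of the iteration (the rounding convention is our own, chosen to match the lecture's slices of 67, 45, 30, and 20):

```python
import math

# Each round of reasoning shrinks the ceiling on surviving guesses:
# if nobody guesses above `bound`, the target 2/3 * average cannot
# exceed 2/3 * bound, so guesses above that are eliminated at the
# next round.
bound = 100.0
ceilings = []
while math.ceil(bound) > 1:
    bound *= 2 / 3
    ceilings.append(math.ceil(bound))

print(ceilings[:4])  # [67, 45, 30, 20] -- the slices from the lecture
print(ceilings[-1])  # 1 -- the iteration ends at a guess of 1
```

Running the loop to the end is exactly the "all the way down to 1" argument: each extra round of the loop corresponds to one more layer of knowledge of rationality.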

Now, where's this going? We were told where it's going.

We're able to rule out 68 and above.

Then we were able to rule out 46 and above.

Now we're able to rule out 31 and above.

By the next slice down we'll be able to eliminate -- what is it

-- about 20 and above, so 30 down to above 20,

and this will be an 'in shoes, in shoes, in shoes'.

These strategies aren't dominated, nor are they

dominated once you delete the dominated strategies,

nor once we delete the strategies dominated once we've

deleted the dominated strategies,

but they are dominated once we delete the strategies that have

been dominated in the--you get what I'm doing here.

So where is this argument going to go?

Where's this argument going to go?

It's going to go all the way down to 1: all the way down to

1. We could repeat this argument

all the way down to 1. Notice that once we've deleted

the dominated strategies, you know I had said before

about four people chose this strategy,

and in here, about four people chose this

strategy, but in this range 30 through 45,

I had lots of people. How many of you chose a number

between 30 and 45? Well more than that.

I can guarantee you more than that chose a number between 30

and 45. In fact, the people we started off

with, who chose 33, chose in that range.

A lot more of you chose numbers between 20 and 30,

so we're really getting into the meat of the distribution.

But we're seeing that these are choices, that perhaps,

are ruled out by this kind of reasoning.

Now, I'm still not going to quite reveal yet who won.

I want to take this just one step more abstract.

So I want to just discuss this a little bit more.

I want to discuss the consequence of rationality in

playing games, slightly philosophical for a

few minutes. So I claim that if you are a

rational player, by which I mean somebody who is

trying to maximize their payoffs by their play of the game,

that simply being rational, just being a rational player,

rules out playing these dominated strategies.

So the four of you who chose numbers bigger than 67,

whose names I'm not going to read out, maybe they were making

a mistake. However, the next slice down

requires more than just rationality.

What else does it require? Yes, can I get this guy again,

sorry? Shout out your name again,

I've forgotten it. Student: Nick.

Professor Ben Polak: Shout it out.

Student: Nick. Professor Ben Polak: Yep.

Student: The assumption that your opponents are being

rational as well. Professor Ben Polak:

Good. To rule out the second slice,

I need to be rational myself, and I need to know that others

are rational. That's illegible,

but what it says is rational and knowledge that other people

are rational. Now how about the next slice

after that? Well now I need to be rational,

I need to know that other people are rational,

and I need to know that other people know that other people

are rational. So to get this slice,

this next slice here, I need rationality;

as some of you know that's widely criticized in the social

sciences these days. Are we right to assume that

people are rational? To get this slice I need

rationality, I need knowledge of rationality, let's call that KR

and I need knowledge of knowledge of rationality.

As I go down further, I'm going to need rationality,

I need to know people are rational;

I need to know that people know that people are rational,

and I need to know that people know that people know that

people are rational. Now let's just make this more

concrete for you. These people,

the four people who chose this, they made a mistake.

What about the four people who chose numbers between 45 and 67?

What can we conclude about those people?

The people who chose between 45 and 67?

Should I read out their names? No, I won't,

perhaps I better not. What can we conclude about

these people? Yeah.

We're never going to get the mike to this -- try and get the

mike in there. Come forward as far as you can

and then really shout, yep.

Student: They think their classmates are pretty

dumb. Professor Ben Polak:

Right, right. It's not necessarily that the

four people who chose between 46 and 67 are themselves "thick,"

it's that they think the rest of you are "thick."

Down here, this doesn't require people to be thick,

or to think the rest of you are thick,

they're just people who think that you think,

sorry, they're just people who think that you think that

they're thick and so on. But again, all the way to 1

we're going to need very, very many rounds of knowledge,

of knowledge, of knowledge …

of rationality. Does anyone know what we call

it if we assume an infinite sequence of "I know that you

know that I know that you know that I know that you know that I

know that you know" something? What's the expression for that?

Believe it or not, there's a technical expression.

The technical expression of that in philosophy is common

knowledge, which I can never spell, so I'm going to wing it.

Common knowledge is: "I know something,

you know it, you know that I know it,

I know that you know it, I know that you know that I

know it, etc., etc.

etc.: an infinite sequence. But if we had common knowledge

of rationality in this class, then the optimal choice would

have been 1. How many of you chose 1?

Look around the room. Let's just pan the room.

Keep your hands up a second. How many of you chose 1?

So actually a lot of you chose 1.

1 was the modal answer in this class.

A lot of you chose 1. So those people did pretty well.

They must have done--they must be thinking they're about to

win… but they didn't win. So it turns out that the

average in this class, the average choice was about 13

1/3, which means two-thirds of the average was 9.
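The winning rule itself is easy to write down: two-thirds of 13 1/3 is about 8.9, so 9 was the closest whole-number guess. A sketch with made-up entries (the function and the data are ours, not the actual class submissions), arranged so the average also comes out to 13 1/3:

```python
def winning_guesses(choices):
    """Return the target (2/3 of the average) and the closest choices."""
    target = (2 / 3) * sum(choices) / len(choices)
    best = min(abs(c - target) for c in choices)
    return target, sorted({c for c in choices if abs(c - target) == best})

# Illustrative entries only; their average is 80 / 6 = 13 1/3.
target, winners = winning_guesses([1, 1, 9, 16, 20, 33])
print(round(target, 2), winners)  # 8.89 [9]
```

With these numbers the target lands just below 9, so the entries of 9 win, matching what happened in the class.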

Two-thirds of the average was 9 and some of you chose 9,

so if you are here, stand up.

The following people chose 9, that's not right,

where are the people who chose 9?

I've got them here somewhere? I'm sorry there's so many pages

of people. Here we go.

The following people chose 9. So stand up if you're here and

if you're that person's roommate if they're not here.

So Leesing Chang: is Leesing Chang here?

Stand up if you're here. A G.

Christopher Berrera: you can stand up,

if you're here. And William Fischel:

are you here? I don't know if he is here.

Jed Glickstein: are you here? Jed Glickstein:

stand up if you're here. And Jeffrey Green:

stand up if you're here. And Allison Hoyt:

stand up if you're here. No Allison Hoyt, okay.

There's John Robinson. All right so these people,

stay up a second so the camera can see you.

There you go, all the way around.

Wave. Wave to mom at home.

Can we get a round of applause for our winners?

So Jude has trustworthily brought back the five dollars.

I've got to focus for a second just to get it.

Here is the five dollars, we're going to tear this into

nine pieces, except I'd get arrested and deported if I did

that, so we're going to find a way to

break this into change later. Come and claim it afterwards,

but you're all entitled to whatever a ninth,

whatever that fraction of five dollars is.

Okay, so why was it after all that work -- why was it that 1

wasn't the winning answer? Why wasn't 1 the winning answer?

Let's have someone we haven't had before.

Can we get the mike in way in the back there?

Can we get the mike in there on the row you're on?

See if you can point. Actually good.

Stand up. Shout.

Shout away. Student: 1 would have

been the winning answer [inaudible]

Professor Ben Polak: Louder, louder,

louder. Student: 1 would have

been the winning answer had everyone assumed that the

average would have been constantly compounded down to 1,

but since a couple of people chose the, I mean not incorrect

answers, but the higher averages,

then it was pushed up to 13. Professor Ben Polak:

Right, so to get all the way, -- good -- so to get all the

way -- thank you -- So to get all the way to 1,

we need a lot. We need not just that you're

all rational players, not just that you know each

other is rational, but that you know that everyone

knows everyone else is rational. I mean I know you all know each

other because you've met at Yale, but you also know each

other well enough to know that not everyone in the room is

rational, and you're pretty sure that not

everyone knows that you're rational and so on and so forth.

It's asking a lot to get to 1 here, and in fact,

we didn't get to 1. In previous years we were even

higher, so this was low this year.

In 2003, the average was eighteen and a half.

And in 2004, it was twenty-one and a half.

And in 2005, we had a class that didn't

trust each other at all I guess, because the average was

twenty-three. And this year,

it was thirteen and a third. We're getting better there I

think. One nice thing,

by the way -- this is just chance I think -- the median

answer in the class was nine, which is spot on,

so the median hit this bang on. Now what I wanted you to do,

is I want you all to play again.

We haven't got time to do this properly, even though I've given

you the sheets. So write down -- don't tell

this to your neighbors -- write down a number.

Don't talk among yourselves that's cheating.

Write down a number. If you haven't got a sheet in

front of you, just write it on your notepad.

Write down a number. Has everyone written down a

number? I'm going to do a show of hands

now. How many -- we'll get the

camera on you -- how many of you chose a number higher than 67?

Oh there's some spoil makers in the class.

How many of you chose a number higher than 20?

How many of you chose a number higher than 10?

How many chose a number between 5 and 10?

How many chose a number between 0 -- I'm sorry -- between 1 and

5? How many of you,

excluding the people who chose 1 last time, how many of you

chose a number that was lower than the number you chose last

time? Now keep your hands up a second.

So almost all of you came down. Why?

Why are seeing this massive contraction?

I'm guessing the average number in the class now is probably

about 3 or 4, maybe even lower.

Why are we seeing this massive contraction in the numbers being

chosen? The woman in green,

I've forgotten your name, I'm sorry?

Student: Because we've just sat in lecture and you've

told us we're not being rational if we pick a high number.

Professor Ben Polak: So part of it is,

you yourselves have figured out, some of you,

that you shouldn't choose a high number.

What else though? What else is going on here?

Let's get somebody. There's a guy waving an arm out

there. Do you want to stand up behind

the hat? You.

Student: Because we've repeated the game.

Professor Ben Polak: It's true we've repeated it.

It's true we repeated it but what is it about repeating it?

What is it about talking about this game that makes a

difference? Let me hazard a guess here.

I think what makes a difference is not only do you,

yourselves, know better how to play this game now,

but you also know that everybody around you knows

better how to play the game. Discussing this game raised not

just each person's sophistication,

but it raised what you know about other people's

sophistication, and you know that other people

now know that you understand how to play the game.

So the main lesson I want you to get from this is that not

only did it matter that you need to put yourself in other

people's shoes and think about what their payoffs are.

You also need to put yourself into other people's shoes and

think about how sophisticated are they at playing games.

And you need to think about how sophisticated do they think you

are at playing games. And you need to think about how

sophisticated do they think that you think that they are at

playing games and so on. This level of knowledge,

these layers of knowledge, lead to very different play in

the game. And to make this more concrete,

if a firm is competing against a competitor it can be pretty

sure, that competitor is a pretty

sophisticated game player and knows that the firm itself is too.

If a firm is competing against a customer -- let's say for a

non-prime loan -- perhaps that assumption is not quite so safe.

It matters in how we take games through to the real world,

and we're going to see more of this as the term progresses.

Now I've got five minutes, do I have five minutes left?

So I've got five minutes to take a little small aside here.

We've been talking about knowledge and about common

knowledge. I just want to do a very quick

experiment, so everyone stay in their seat.

I'm going to get two T.A.'s up here, why don't I get Ale and

Kaj up here. And I wanted to show that

common knowledge is not so obvious a concept,

as I've made it seem on the board.

Come up on the stage a second. You can leave the mike, it's okay.

Here we have two of our T.A.'s, actually these are the two head

T.A.'s, and I want you to face forward so you don't see what

I'm doing. I'm about to put on their heads

a hat. Here's a hat on Ale's head,

and here's a hat on Kaj's head. Let's move them this way so

they're in focus. Now you can all see these hats,

and if they turn around to each other, they can see each other's

hat. Now I want to ask you the

question here. Here is a fact,

so is it common knowledge that -- is it common knowledge that

at least one of these people has a pink hat on their head?

Is it common knowledge? So I claim it's not common

knowledge. What is known here?

Well I'll reveal the facts now: that in fact Ale knows that Kaj

has a pink hat on his head. So it's true that Ale knows

that at least one person in the room has a pink hat on their

head. And it's true that Kaj knows

that Ale has a pink hat on his head.

They both look absurd, but never mind.

But notice that Ale doesn't know the color of the hat on his

own head. So even though both people

know, even though it is mutual knowledge that there's at least

one pink hat in the room, Ale doesn't know what Kaj is

seeing. So Ale does not know that Kaj

knows that there's a pink hat in the room.

In fact, from Ale's point of view, this could be a blue hat.

So again, they both know that someone in the room has a pink

hat on their head: it is mutual knowledge that

there's a pink hat in the room. But Ale does not know that Kaj

knows that he is wearing a blue, a pink hat, and Kaj does not

know that Ale knows that Kaj is wearing a pink hat.

Each of their hats -- each of their own hats -- might be blue.
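The hat experiment can be modeled directly: each T.A. sees only the other's hat, so each considers two worlds possible, one for each color their own hat might be. A toy sketch (the names and the two-color simplification are ours, not part of the course):

```python
# Toy model of the hat experiment: both hats are in fact pink,
# but each person sees only the other's hat.
hats = {"Ale": "pink", "Kaj": "pink"}
COLORS = ("pink", "blue")

def worlds_for(person):
    """Worlds `person` considers possible: own hat could be either color."""
    other = "Kaj" if person == "Ale" else "Ale"
    return [{person: c, other: hats[other]} for c in COLORS]

# Mutual knowledge: each sees a pink hat, so in every world each
# considers possible, at least one pink hat exists.
ale_knows = all("pink" in w.values() for w in worlds_for("Ale"))
kaj_knows = all("pink" in w.values() for w in worlds_for("Kaj"))
print(ale_knows, kaj_knows)  # True True

# But Ale does NOT know that Kaj knows: in the world where Ale's own
# hat is blue, Kaj would see only a blue hat and could not be sure.
ale_knows_kaj_knows = all(
    w["Ale"] == "pink"  # what Kaj sees in world w is Ale's hat
    for w in worlds_for("Ale")
)
print(ale_knows_kaj_knows)  # False
```

So "there is a pink hat" is mutual knowledge here but not common knowledge, exactly as the demonstration shows.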

So notice that common knowledge -- thanks guys -- common

knowledge is a rather subtle thing, thank you.

Common knowledge is a subtle thing.

Mutual knowledge doesn't imply common knowledge.

Common knowledge is a statement about not just what I know.

It's about what do I know the other person knows that I know

that the other person …and so on and so forth.

Even in this simple example, while you might think it's

obviously common knowledge, it wasn't common knowledge that

there was a pink hat in the room.

Does anybody have smaller siblings or children of their

own? They can have a pink hat at the

end of the class. We'll see you on Wednesday.
