Who's in the Video
Daniel Koretz is the Henry Lee Shattuck Professor of Education at Harvard Graduate School of Education.  He focuses his research primarily on educational assessment, particularly as a tool of education[…]

Daniel Koretz believes fostering accountability in the system will, in turn, foster critical twenty-first century skills.

Daniel Koretz:


The phrase "21st century skills" means different things to different people.  The only common denominator is that most people agree the mix includes higher-order cognitive skills of various kinds, solving complex, ill-structured problems, for example.  The kinds of skills you need in a knowledge economy rather than a manufacturing economy.  But when you get down to the details of what that actually means, you find, I think, a lot less agreement.  I do think it's critically important.  It was, frankly, critically important in the 20th century as well, although the particular skills were somewhat different.

So, for example, in many high school science classes, a large portion of what kids do is memorization.  In biology classes, there's a tremendous amount of memorization of vocabulary.  Some of that is necessary; you can't work in the field without knowing the terminology.  Some of it is not.  I mean, we have books, we have Google, and you can look things up.  What's very hard for teachers to do, and this is one reason why I think encouraging more talented people to come into teaching is so important, what is extremely hard to do from the point of view of a teacher is to focus kids on complex applications of knowledge, solving problems that are really hard to solve.  We've never been good at that in American schools.

Back in the 1960s, the National Science Foundation funded a lot of scientists in different fields to come up with new curricula.  One of the most famous was a physicist at Berkeley who said high school science classes don't include experiments.  What they call experiments are demonstrations, rigged to give you the answer you want.  You're just basically manipulating stuff to show something you could have read.  That's not what scientists do.  Scientists do things that produce dirty data, puzzling findings, anomalies, and they have to sort through information to try to find a pattern, rather than saying, oh, I got the result the teacher wanted me to get.  We've never been good, in my view, in most American schools, at teaching kids those kinds of skills.  In mathematics, for example, we teach all kids now how to solve simple linear equations.  Almost all kids take algebra; it's one of the first things you learn.  If you ask adults to actually apply that to anything in their real lives, most of them have no idea how to do it.  So what we need is to back off from an emphasis on the lower-level skills to make room for the higher-level ones.

The problem here, and this is, I think, an extremely thorny problem, is that the more interesting, more valuable higher-order thinking skills are extremely hard to test.  Some people will tell you that's not true, that we know how to test them.  We do know how to test them on a small scale.  We have lots of people who've generated interesting tasks, for instance, that classroom teachers can use to measure these skills.  We find it, in my view, extremely hard to test them on a large scale.  And just to give you one reason: to test whether kids are solving complex problems well, you have to give them novel problems.  And if you take a test and drop it in from the state capital, you don't know which kids will find the test novel and which kids will say, oh, I recognize that, we did something like that three weeks ago.  The classroom teacher would know, but the test author does not.  So we're caught in a real paradox.  On the one hand, we want higher overall performance.
We see externally imposed tests as part of the path to getting that.  And I agree with that assumption; I just think we're doing it badly now.  But at the same time, we also want these more complex, more interesting, and potentially more valuable higher-order thinking skills, and we don't really know how to set up the accountability system to encourage them.

I met recently with a group of a couple hundred teachers, and this exact question came up.  And I said, well, in one of my classes I give two tests, the midterm and the final.  Those are solely to judge students' mastery of content knowledge.  And, frankly, they're mostly a method to force the reluctant students to review the material.  The interesting stuff, whether kids, and I shouldn't say kids, they're not kids, whether students can apply those skills to solve really vexing real problems, I learn from papers.  I give them realistic problems, sometimes real, sometimes made up, that are very, as [IB] scientists would say, ill-structured.  There's not a clear path to one solution.  And I give them a couple of weeks and say, you know, figure out a solution.  Justify your solution.  Critique your solution.  That kind of thing is extraordinarily hard to do with large-scale testing.

So, in my view, one of the pieces of unfinished business is to figure out how to incorporate that kind of thing into an accountability system.  If you really want to use accountability to generate 21st century skills, that is probably one of the two or three most important unsolved problems in education.