ASSESSING TRUE ACADEMIC SUCCESS:

THE NEXT FRONTIER OF REFORM

by Dan Kennedy, Baylor School, Chattanooga, TN

Suppose that your doctor's idea of a successful medical practice is to teach his patients how to pass every diagnostic test that he gives to them. How will you ever know if you are healthy? Suppose that the diagnostic tests keep showing that you are sick, but that you feel fine. Should you keep seeing that doctor? Suppose that the diagnostic tests are flawed, identifying many sick people as healthy and many healthy people as sick. Will your doctor ever know? And, given his idea of a successful medical practice, should he care?

These are facetious questions for doctors, but serious questions for teachers. I must admit that as a teacher, I never really thought too much about the impact that my tests were having on the course that I was teaching. I did realize that tests were important, because a student had to do mathematics in order to really learn mathematics. I also realized that it was important for students to "stand and deliver," to borrow a term that Hollywood has forever linked to my profession, for no matter how much my students seemed to appreciate my classroom exposition, it was only in testing them that I would discover if they were truly getting it. Tests, in other words, were the validation of the contract between teacher and student, the proof that learning had taken place. That was fine with me, and even though my students did not always appreciate my tests, the general premise seemed to be fine with them. And so it went, for roughly twenty years of my teaching career.

It was only after NCTM began to challenge the ideas, or lack of ideas, behind traditional assessment practices that I really began to look at what my tests were doing. I was not receptive to the message at first. For one thing, I felt that the emphasis on the word "assessment" rather than the time-honored word "testing" reeked of educational jargonism; for another, I found very little in the NCTM document on assessment that resonated with my experiences the way that the document on curriculum and evaluation did. Moreover, I said to myself, what does "assessment" do that "evaluation" does not?

Then I got caught up in the calculus reform movement, and since I was chair of the AP Calculus Test Development Committee at the time, I found myself being frequently confronted about assessment. One of the most persistent complaints about the AP examinations in the early days of calculus reform was that they were too predictable. It was part of the lore among AP teachers that you could always count on a particle motion problem, an area/volume problem, a graphical analysis problem, a series problem if you taught BC, and a "theory" problem at the end to challenge the best students. There was enough variation from year to year to enable our committee to deny that we worked from such a precise template, but the fact remained that if we deviated too far from what teachers thought that template was, they would complain. You could leave off a particle problem one year with few repercussions, but if you left one off for two consecutive years, AP workshops all over the country would have to explain whether or not this meant a change of direction for the program. Teachers felt that they had a right to know what would be tested, and why not? It was their job, after all, to prepare their students for the test. Moreover, since the test would be the only measure of their students' success that would matter in the end, the extent to which calculus was both taught and learned in their classrooms would be, like it or not, defined by that one experience on that one morning in May. Faced with that stark reality, teachers were reduced to uttering in dead seriousness the six words that they hated most when emanating from the lips of their students: "Will this be on the test?"

At about the same time that I was being challenged to question the predictability of the AP examinations, I began to worry about the extent to which I was doing all the interesting mathematics in my own classroom.
This was the other, more visible, side of mathematics education reform, and it was one that made a lot of sense to me. I had been raging for years against the way students seemed to lose the ability to think as they progressed further and further through high school, but I had never suspected the extent to which we had been drumming it out of them by making them play our educational game.

The rules of that game are simple: we, the teachers, show them what to do and how to do it; we let them practice at it for a while, and then we give them a test to see how closely they can match what we did. What we contribute to this game is called "teaching," what they contribute is called "learning," and the game is won or lost for both of us on test day. Ironically, thinking is not only absent from this process, but in a curious way actually counterproductive to the goals of the game.

Think about it. Thinking takes time. Thinking comes into play precisely when you cannot do something "without thinking." You can do something without thinking if you really know how to do it well. If your students can do something really well, then they have been very well prepared. Therefore, if both you and your students have done your jobs perfectly, they will proceed through your test without thinking. If you want your students to think on your test, then you will have to give them a question for which they have not been fully prepared. If they succeed, fine; in the more likely event that they do not, then they will rightfully complain about not being fully prepared. You and the student will both have failed to uphold your respective ends of the contract that your test was designed to validate, because thinking will have gotten in the way of the game. Considering how we mathematicians value thinking, it is a wonder that we got ourselves into this mess at all.

But the more I thought about it, the more I realized that we were in it, right up to our pocket protectors. Now we had to find a way out. By the way, I threw in that reference to "pocket protectors" to highlight another aspect of my epiphany of the early nineties. It occurred to me that there was a huge gap between the popular image of mathematicians and the popular image of mathematics teachers. While mathematicians were always being portrayed as brilliant, eccentric, and creative to the point of being unworldly, mathematics teachers were being portrayed as rigid, systematic, and uncreative to the point of inflexibility. In the teen movies it was always the English teachers who had the cool insights, the ability to connect with the minds of their students, and the creativity to bring out the creativity in others. These heroic virtues were usually accentuated by inviting the audience to contrast them with those of a teacher of, yes, mathematics, who would often be pictured discussing the Pythagorean theorem with the blackboard, totally oblivious to the behavior of his or her students in the background. So ingrained was this formula in the Hollywood mind that even when Jaime Escalante came along in Stand and Deliver, the adversary against whom he was pitted was the cold-blooded, inflexible math chairman, one of the greatest cinema villains since the Wicked Witch of the West.

But I digress. We were talking about thinking. Thinking is a creative act. Thinking and learning ought to go hand in hand, but many of the things that we learn to do are not dependent on creativity, so thinking is not really involved. We learn how to walk, how to tie our shoes, and how to ride a bike. On a slightly higher intellectual plane, we learn how to read, how to write, and how to do 'rithmetic (a non-creative act that I will not count as mathematics, even if millions of people do). These are things that we learn how to do. There are other things that we simply learn, such as history, or appreciation of Shakespeare, or quantum mechanics. The extent to which these things are learned seems to be proportional to the extent to which we think about them, but notice that they are not things that we do. Indeed, once you get past the primitive skills of reading and writing, there seems to be only one classical intellectual pursuit in our academic lexicon that we have relegated to the status of learning how to walk, ride a bike, or tie our shoes. We learn about everything else; we learn how to do mathematics.

When educated adults meet us at parties and chuckle that they "never could do mathematics," they are not talking about arithmetic, but nor are they talking about thinking and learning. Educated adults do not forgive themselves readily for not thinking or for not learning. They are talking about a non-creative act that some people do, like juggling, and that they, by fate or by preference, simply do not. Ironically, these same people might have no problem balancing checking accounts, comparing stock portfolios, cutting recipes in half, estimating their gas mileage, computing restaurant tips, or figuring out how long to cook a twelve-pound turkey. We might call that doing mathematics, but they don't, especially because they can do it. What they couldn't do was defined for them long ago, in those tests at the end of the game, and that was what our educational game had defined as doing mathematics. Some teacher had shown them repeatedly how to factor, and when the time had come for them to show it back, they had failed.
And that forever was that.

You might think that science would be in the same fix as mathematics as far as these perceptions are concerned, but physics, chemistry, and biology are still subjects primarily to be learned about, at least in high school. When you do an experiment in chemistry, you discover something about how the world works; you are playing a creative role in your own learning process. Think of how backwards the entire experimental procedure would be if it began with the teacher explaining the theory, progressed to the teacher performing the experiment several times to show how it is done, and then finally ended with the student trying to do the experiment as well as the teacher did. The point is not to do a titration; the point is to discover that water is composed of oxygen and hydrogen. Students understand this. But in mathematics, if there is a point to factoring that is beyond the act of factoring, students do not understand what it is. Indeed, since the arrival of equation-solving technology, neither do many of their teachers.

Besides, science, even in high school, is a living subject. Nobody talked about big bangs or plate tectonics or recombinant DNA research or AIDS or black holes or quarks when I was in school, because my teachers and my textbooks, through no fault of their own, did not even know what they were. Like most people in my generation, I have learned about these since high school. My science teachers, I suppose, would be proud to know this, except that they would point out that I am hardly a special case. On the other hand, anyone who admits to having learned something about mathematics since high school is a special case indeed, at least outside this room. In fact, not many people outside this room would admit to having learned anything about mathematics in high school either; they will only say that they were good or bad at doing it.

So, to get back to my own thinking, I began to realize a few years ago that the mathematics in my own classroom was not a living subject, that it was not a creative, thinking act for my students, and that they did not view it as something to be learned, but rather as something to be done. No matter what else I did, the bottom line was that I should show them how to do it so that they could do it back for me when the time came. In other words, we were both playing our roles in the game to perfection.

That was when I decided to try not playing the game. The first thing I did was to let them use their graphing calculators all the time. With very little guidance from me, they soon knew more about the machines than I did, and it was obvious that they were learning from each other. Then I tried to see if they could pick up mathematics that same way if I gave them the opportunity, and by golly they could. The game, at that point, was over for me. I began starting each class with a problem rather than with a monologue at the blackboard. Then I would walk around to see how they solved it in collaboration with each other. Sometimes I had to do a little coaching, but they eventually discovered the mathematics themselves. Then we talked about it. If I needed to, I would give it a name or a historical context. If there were some subtlety that would affect the solution, I would give them another problem and watch them deal with the subtlety. Then they would explain it. Quite often, they came up with alternate solutions, so we talked about those.
They were still doing the mathematics, but now they were learning about it and thinking about it at the same time. Best of all, the students could finally appreciate the need for creativity in really doing mathematics. Homework became an extension of the classroom experience: a continuation, rather than the beginning, of their own performance. Now when they came upon a problem that was unfamiliar to them, their first instinct was to learn how to solve it, rather than to blame me for not playing my part in the game successfully. I found that they even read the book without being told to! (Has a parent ever told you that "last year's teacher was so good that my child never had to open the book"?)

It was only after I had thoroughly quit the game that I began looking critically at my old tests and wondering what good I could ever have imagined coming from them. They had obviously been designed to push the computational envelope, to see what the students could do with the skills that I had taught them, but they were consequently lacking in creativity and no longer exemplary of what I was trying to get them to do in my post-game classroom. They obviously would have to change, but how far could I go in changing the rules of the game? How could I demand creativity on a test for which I had fully prepared my students, and how could I not fully prepare them when our mutual success depended on their performance?

I realized that the answers to these questions lay outside my personal experience, so I began studying and thinking a little more deeply about assessment. I discovered several things by reading articles by other people. Many of them had far too much to say, but there were two important points that were stressed by every author I read: (1) we must assess what we value and value what we assess, and (2) we assess students most fairly and effectively when we use a variety of different assessments. These were two claims that made eminent sense to me, even though I was not doing either of them very well. I had always felt that my tests were a valuable piece of the learning process; however, it was clear that I was not testing everything I valued, and I certainly did not value everything I was assessing. I had also always used a variety of assessments: tests, quizzes, and homework. I realized that these three types of assessment allowed different strengths and weaknesses to be measured, but I was still a long way from group assessments, oral presentations, portfolios, and the like.

There was also the matter of grades, which I found most of the assessment pundits curiously unwilling to confront. Since the goals of assessment were ideally independent of grades, I supposed that in an ideal world grades would be irrelevant. However, I did not teach in an ideal world; I taught in a competitive prep school from which students expected to graduate and attend their colleges of choice. Students, parents, administrators, and faculty colleagues not only saw grades as relevant, but believed in their hearts that they were the most significant output of the so-called assessment process. In the dark and hidden depths of my own reform-minded bleeding heart, so did I. But my experience with AP Calculus had given me a different perspective on grades, and I was able to use that to get past this stumbling block that has stymied so many of my colleagues.
I am willing to share my secret with you, but let me warn you in advance that I am about to make so much sense that it will render many teachers in this audience very nervous, if not downright appalled.

Grades at any school lie along some numerical or alphabetical continuum that is, by itself, fairly meaningless. It is therefore necessary to attach meanings to certain cutpoints along the continuum so that the absolute worth of a grade can be interpreted by an interested observer, such as a parent or a college admissions committee. The AP Calculus program grades its examinations on a scale from 0 to 108, with four cutpoints to determine five intervals along that scale. Students in the top interval receive a "grade" of 5, which is interpreted as "very well qualified"; students in the next interval receive a 4 for "well qualified," followed by 3 for "qualified," 2 for "possibly qualified," and 1 for "no recommendation." The SATs are graded on a scale from 200 to 800 with no reported cutpoints other than percentiles (which establish only relative worth); however, important cutpoints for absolute worth are established by college admissions departments. Many states now have minimum competency tests, for which there is a single cutpoint to separate success from failure. The starkness of that particular cutpoint is dramatic, but it is interesting to note that virtually every grading continuum in use in education anywhere has that particular cutpoint in common with all the others. It is difficult for me or anyone else in this room to imagine what it is like to spend year after year of education in fear of that cutpoint, but many do. More on that later.

The grading continuum in use at my school happens to be a percentage scale, from 0 to 100. The failure cutpoint is at 65, effectively eliminating about half of that scale for any practical purposes. Does this mean that a student should be successful on 65% of my test in order to pass? Let's think about this for a minute. No major league baseball player has ever come close to batting .650 for a season. The best basketball players in the country shoot less than 65% from the field. A salesman who makes a sale two out of three times is a miracle worker. A BC student in AP Calculus who solves 65% of the problems correctly will most likely earn a 5, the highest possible score. These people are experts in their fields. How can we justify demanding 65% from mere learners to demonstrate minimal competency? And if a beginner who is minimally competent can handle 65% of my test, what kind of competence could that test possibly be measuring?

My theory is that we ought to present students with challenging, relevant, useful, and varied assessments all of the time, and then scale the grades to conform to our expectations. We can do this; we are mathematicians. I will share with you my own method for scaling, although this might not be the best method for everyone. The only thing I am advocating for everyone is that we all be freed from the tyranny of numbers insofar as they limit our freedom when it comes to assessing true learning. I tell each of my classes on the first day of school what their class average is: 82 for a regular section of a required course, 85 for an elective, and 90 for an advanced section. (These numbers are based on school-wide empirical data. Whether I like them or not is as irrelevant as my opinion about the price of a first-class stamp, and carries about as much weight.)
I tell them that, from that point on, they can raise the class average by exceeding my expectations and can lower the class average by disappointing me, but it is that class average that will determine the scaling of my tests and quizzes. The better they are, the more they can expect me to challenge them and the better will be their chances of showing me how high their class average should be. If I overreach on some test, then they are protected by the fact that the class average moves sluggishly: say, from 86 down to 84. I can understand how an 86 class might become an 84 class in the few weeks between tests, but how could they suddenly plunge to 78, unless at least one of those tests was a faulty indicator of how good that class was?

So, let us say that I give a challenging test to an AP class whose average stands at 91. They handle the stuff I expect them to handle, and several of them surprise me on the hard ones. They make the usual careless mistakes, but everyone is doing the right kind of mathematics. Grading on an AP scale, I find that the test average is 75. I look back on the homework effort for the past few weeks, the class participation, and so on, and I decide to raise the class average to 92. This gives me an ordered pair (75, 92) for scaling raw grades to real grades. Now suppose that my top student has managed a raw score of 93, a lovely paper, which I decide to scale to 99. That gives me a second ordered pair (93, 99). Those two points determine a linear equation that enables me to scale anyone's grade in a fair and objective manner. Mathematically, the effect of this scaling is to adjust the mean (a primary goal) and to reduce the standard deviation (a secondary effect that helps me accomplish the primary goal of teaching mathematics to my entire class). For example, let us suppose that this test really catches one student dismally unprepared, for any number of academic or other reasons. Say the student gets a raw score of 20. My scale brings that up to a 71, where it is still an outlier in terms of a much smaller standard deviation, but where the student can still believe that a comeback is possible. Notice that the class average is very significant here; if we change that class average to 82 rather than 92 and leave everything else the same, the raw score of 20 scales to a real score of only 30. (TI-83 demonstration if there is time.)

By the way, I do not scale homework. I do drop a certain number of lowest scores, my concession to the reality of students making choices due to the demands of other courses, but homework is a reflection of diligence, so a lazy student must pay the price. I have had very sharp students with homework grades of 40 and very dull students with homework grades of 90. Either way, the students get the message that diligence is valued in my assessment system. Most students are actually quite good about doing the homework, since it is the grade over which they have the most control.

But back to the effects of scaling tests and quizzes. Freed from the shackles of unreasonable numbers, I can now challenge my students to do just about anything, then see how far they can go. They, in turn, have been freed from the burden of getting a certain percentage right, so that they can concentrate on doing as much as they can as well as they can. Moreover, they realize that the better they do as a class, the better the benefits of the scaling.
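
For those who would rather see the arithmetic than a TI-83 screen, here is a minimal sketch of that two-point scaling in Python. It is my own illustrative translation of the method described above, not code from the talk; the function name make_scaler is hypothetical, and the anchor pairs simply repeat the worked example.

    # Two-point linear grade scaling, as described above.
    # Each anchor is a (raw score, scaled grade) pair: (75, 92) maps the
    # test average to the class average; (93, 99) maps the top paper.

    def make_scaler(anchor1, anchor2):
        """Return a function mapping raw scores to scaled grades along
        the line through the two (raw, scaled) anchor points."""
        (x1, y1), (x2, y2) = anchor1, anchor2
        slope = (y2 - y1) / (x2 - x1)
        return lambda raw: y1 + slope * (raw - x1)

    scale = make_scaler((75, 92), (93, 99))   # class average judged to be 92
    print(round(scale(75)))   # 92: the test average becomes the class average
    print(round(scale(93)))   # 99: the top paper
    print(round(scale(20)))   # 71: the dismal outlier stays in the game

    # The same raw score for a class whose average is 82 rather than 92:
    scale82 = make_scaler((75, 82), (93, 99))
    print(round(scale82(20)))  # 30: the same raw 20 now scales far lower

    # Because the slope here (7/18) is less than 1, the map compresses the
    # spread of scores: that is the reduction in standard deviation noted
    # above, while the intercept shifts the mean to the chosen average.

Any two distinct anchor points will do; the class average controls the intercept, and choosing the second anchor above the line of slope 1 is what keeps the weakest papers within striking distance of a comeback.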
In fact, both my students and I now see grades as measuring two things, both of them defensible on a loftier scale, namely: the quality of the class as a group, and their relative standing within that group. If grades are to play a role in defining success at all, that seems to me to be a pretty good start.

That still leaves the matter of what sort of things we ought to be grading, or in a more general sense, assessing. If you are a typical mathematics teacher, I would submit that you are currently assessing five things: knowledge, cleverness, diligence, context, and luck. I probably do not need to explain the first three virtues, except to note that knowledge, cleverness, and diligence are indeed quite different. To be truly successful in most academic pursuits a student should possess a generous helping of each. We ought to be assessing these qualities, because, as a society, we value them. The word "context" refers to miscellaneous other judgments that we make about a student's work, based on qualities we value such as neatness, punctuality, clarity of expression, elegance, and creativity.

"Luck" might seem like a strange addition to this short list, but teachers assess luck all the time whether they intend to or not. You intend to assess luck if you threaten your students with pop quizzes or spot checks on homework. You indirectly assess luck if you give everyone in your class the same test on the same day, something that we all do. There is luck involved in being "ready" for a test, whether it involves emotions, health, the pages you study, or what you have for breakfast. We might not like to admit that we value luck, but it is hard to avoid testing it. The least we can do is to try to minimize the effect of bad luck on our efforts to teach our students.

If we do value knowledge, cleverness, diligence, and various aspects of context, then we need to tell our students that, and we need to find as many ways as possible to let them show us what they've got. We should not sacrifice the assessment of any of these positive qualities in order to encourage or reward others, nor should we be afraid to assess one of them for fear that we might discourage or deny others. That is why a variety of assessments is so essential if we really want to assess what we value while being fair to our students. There should be a place in our assessments to test memorization (as that is part of knowledge), just as there should be a place for the problem that is deliberately intended to trick the knowledgeable student (as that is how to assess cleverness). There should be a place for the diligent student to shine, even if it might bore the clever, and there should be a place for the clever to shine, even if it might dismay the diligent. We need never apologize for holding students to such standards as correct spelling, or typed essays, or complete sentences, or oral communication, or collaboration, because if we do not value context then neither will our students.

My own assessments for many years were heavily skewed toward testing knowledge. That was the game that I referred to at the beginning of this talk: the validation of the learning process, the definition of success. (Show old test.) My major tests still focus on knowledge, but now I am careful to vary the context, with algebraic problems, visual problems, problems that require writing, and problems that call for creative solutions. (Show new test.) I assess cleverness primarily on weekly quizzes, which I tell them up front will not necessarily be "fair."
I require collaboration in class and encourage it outside of class, because I value it; it is how people learn all their lives. I give occasional collaborative quizzes and feel fine about assigning them grades. I can challenge the students even more on a collaborative quiz, and they do better work! Who loses in that deal?

I also have begun looking at portfolios. One of the most disturbing aspects of traditional testing is that students cannot demonstrate what they have learned unless they are given the appropriate stimulus by the teacher in a testing situation. Not only does this introduce the element of luck, it reduces the person who knows the most about what the student has learned, namely the student, to a passive participant in the assessment of that learning. I make my students keep a portfolio of 8 items a year, and I specify only that each item should tell me something about the student's learning that I do not already know. A perfect quiz makes a bad portfolio item, since I have already given it a perfect grade. A bad quiz might make a great portfolio item, if it alerts the student to some nugget of unlearned or mislearned knowledge, which can then be learned correctly and reflected upon in an introspective essay. I have learned fascinating things about the thought processes of my students by reading their portfolios. For example, this is what one girl, a boarding senior in my BC Calculus class, wrote about helping her roommate prepare for a precalculus test:

"This year was my first year to be a peer tutor, and I enjoyed helping the girls in the dorm a lot. Last night, though, I finally saw the importance of my peer tutoring. My roommate came in at 10:00 extremely upset over her Precalculus test that was the next day. I calmed her down and told her that I would help her if I could. Carrie, who had been in the play, had gotten behind in her work, so she didn't understand what they were doing. She showed me the problem. I knew the answer, but I wasn't sure how to explain it to her in a way that was not confusing. I thought about it for a while, and I ended up trying several approaches (with Clara's help) that I had learned in Calculus, until I finally got through to her. Then I made her work a few problems for me, and she did them perfectly. She understood! I was so happy to be able to help her that I had forgotten I was supposed to be studying for my own Calculus test. She was so happy she understood that she began to cry. She really began to cry. It's great to be able to use the things you have learned to help other people learn too."

Now, I have had students cry in my classroom before, but never from the sheer joy of learning! We have no idea what kinds of learning occur outside of our classrooms, nor can we fully appreciate the quality of that learning, nor the impact that it has on our students. At least with a portfolio we have a chance to tap into that experience and include that evidence of learning in our overall assessment package. A footnote to that story is that Carrie scored 93 on that test the next day, a personal best, and a full 9 points above the class average. I feel that my Calculus student enjoyed a richer learning experience helping Carrie with her precalculus problems than she would have had in studying for her own calculus test. By my feedback on her portfolio I am able to affirm that learning experience, and I have no qualms about rewarding it with a good grade.

With all of these different kinds of assessments, is it possible to fail the course? Sure it is.
But notice that a failing student now must be bad at many things (bad with versatility, so to speak) and then must resist improvement in a variety of ways. As rare as it is to find a student blessed with abundances of knowledge, cleverness, diligence, and contextual excellence, I believe that it is even rarer to find a student who is lacking in them all and who can remain lacking for a year in spite of our best efforts. A student who fights against failure is already a success by one measure; so why can't we find ways for that student to succeed on our tests, or more importantly, to learn mathematics? A student who surrenders to failure is failing far more than a course in mathematics; so how can we be accomplices to that kind of suicide? Nobody learns less mathematics than the student who stops taking it. We who are entrusted with teaching mathematics must consequently find ways to keep our students learning, and I am not convinced that failure is an effective strategy for any student in the long run. Let's face it: students who confront failure in classroom after classroom are the ones for whom failure becomes quite literally our long-run strategy; the problem is, such students never run with us for very long.

I must admit that I become a little impatient with folks who say that we need to keep standards high by weeding out the students who can't do mathematics, and that to deny that premise amounts to sacrificing the mathematics in favor of student self-esteem. This is not about self-esteem; this is about teaching mathematics!

So, yes, I do try to keep my students around from semester to semester. They will never hear me tell them that they are not cut out to do mathematics. I will not call bad mathematics good, but I will admit that good students can do bad mathematics. Moreover, I am willing to correct them as many times as I must while we move ahead. I am not going to blame some Algebra I teacher when my Precalculus student squares x + 3 and gets x² + 9, nor will I assume that the student believes it forever to be true. I will correct the student and move on. Tomorrow I may have to correct the student again. Heck, I might even damage some self-esteem. But we will move on. And if that student learns some mathematics, shows some diligence, leaves my class with a 73 average, and goes on to make that same mistake in a college calculus class, then I would hope that the professor will not conclude that my student "cannot do algebra" and is therefore unworthy of taking a Calculus course from him. He should correct the student again and move on. If the student should fail, which can happen for any number of reasons, then that student will have failed Calculus under that professor, not Algebra I post facto. And if that student had hoped to learn some calculus from that professor, then it will be a darned shame.

In summary, then, here are some problems with traditional tests:

- They assess only a fraction of what we value.
- They depend too much on luck (e.g., pop quizzes).
- There is often no feedback (e.g., final exams).
- They are dependent on teacher stimulus.
- They are usually taken alone. (Is that what we value?)
- They are usually timed (unlike most work in which quality matters).
- They are frequently taken under artificial, stressful conditions.
- They are often devoid of creativity. (Students are prepared for the test.)
- They favor one narrow kind of student performance.
- Success is short-term and non-transferable.
- The emphasis in the end is on what a student cannot do or does not know.
- They can inhibit further learning.

To improve matters in our own classrooms, here are some possible strategies:

- Test what you value, and value what you test!
- Assess often, with different kinds of assessment.
- Give meaningful and prompt feedback.
- Give partial credit for partially correct work.
- Give feedback on everything, including presentation, spelling, etc.
- Explain all assessment goals and strategies to the students.
- Test diligence, cleverness, and knowledge in focused ways.
- Encourage creativity through your assessments.
- Avoid testing luck whenever possible.
- De-mystify grades and control them through scaling.
- Only fail students who are failures.
- Find a way to grade homework frequently.
- Encourage collaboration on homework.
- Use group assessments to encourage collaborative learning.
- Do less mathematics in class and have the students do more.
- Try portfolios.
- Encourage improvement (the carrot effect).
- Discourage bad habits (the stick effect).
- Keep every student in the game.
- This is not about student self-esteem; this is about students learning mathematics!

I hope that some of the things I have said today will make sense for you in your classroom, but if some of them do not, that's fine. Remember: The goal is for your students to learn mathematics, and assessment is only a means to that end. If, however, you suspect that your assessment is getting in the way of your true goal -- to teach mathematics to all of your students -- then I urge you to tame whatever beast your assessment has become. The success of your students is in your hands. Remember: You are the one who defines that success.