Final evaluations: square peg in a round hole?

One of the best aspects of the TLLP is that our team members have release time to discuss our classroom experiences, to visit other schools and classrooms, and to plan what we will do in our own. During today's meeting, we all shared a level of frustration around final evaluations. Every secondary school in the board has been helping teachers revise and update their final evaluations so that they more closely meet our vision of empowering modern learners through "informative and purposeful assessment." This has been a rewarding but also difficult process in which every aspect of our final evaluations has to be examined, compared to a set of success criteria, and then revised to be more equitable, engaging and purposeful.

This revision process is not where my frustration lies, however. It is the breakdown of marks between term work (70%) and final evaluations (30%). It's hard to articulate why this doesn't fit a feedback-focused class. It's like a square peg in a round hole: you can force it to fit, but it's not authentic. All of a sudden we are switching strategies. Traditionally, there is no feedback for a final evaluation, just the mark. So all semester, no grades are exchanged for the students' work, and now, for the 30%, you have to just slap a number on it.

Still alive...
Source: Thibaud Saintin

Also, in a feedback-focused assessment classroom, these divisions between term and final are problematic. We work closely with the students to help them progress towards the overarching learning goals. At least for me, all activity in class is "assessment for learning" and "assessment as learning." At the end of the semester, in their portfolio, I ask students to (1) choose and reflect on artifacts that show their growth and (2) choose and reflect on artifacts that show their highest achievement. Of course, we also keep in mind how consistent they have been with this achievement.

Then I turn around and say, “Okay, show me what you can do one more time. And this time it’s going to count A LOT. Plus, you don’t get to reflect on how well you did. It’s only my professional judgement that will determine how well you did.” I know this will sound strange to a teacher who is not “grade-less” because that is our job, really. We teach, then we judge. But in a feedback-focused class, the students judge themselves. We guide them and help them see what they might miss, but the ownership of the “grade” is the student’s.

I have an idea on how to make this final evaluation process more authentic for next semester. It’s a pretty radical idea, so I will need to think it over and chat with my mentors before I share, but I hope it will make the end-of-semester evaluation period match what happened all semester.

If you are in a feedback-focused classroom, how do you handle final evaluations?

When giving feedback, relationships matter, but so does what you say and how you say it

“However, the thing that really matters in feedback is the relationship between the student and the teacher. Every teacher knows that the same feedback given to two similar students can make one try harder and the second give up. When teachers know their students well, they know when to push and when to back off. Moreover, if students don’t believe their teachers know what they’re talking about or don’t have the students’ best interests at heart, they won’t invest the time to process and put to work the feedback teachers give them. Ultimately, when you know your students and your students trust you, you can ignore all the “rules” of feedback. Without that relationship, all the research in the world won’t matter.”
~from “Is the Feedback You’re Giving Students Helping or Hindering?” by Dylan Wiliam

This quote from Dylan Wiliam is resonating strongly with me today. As a team, we spent most of today looking at student self-assessment. We visited Jonathan So's grade 6 classroom in the morning, then worked together in the afternoon summarizing the data from our student assessment literacy surveys and creating self-assessment task requirements and success criteria. Throughout the day, though, I kept thinking about the recent feedback I had given students and how it was received.

In Jonathan So's feedback-focused classroom, his students were using a tracking form for math expectations. They needed to indicate whether or not they had met each learning goal, then plan next steps for the ones they had not yet met. This was in preparation for a student-written update on their progress for their parents. I asked one student how she knew if she had met the learning goals. She said that she looked at Mr. So's feedback and determined whether she was able to do what was expected without further instructions. Not only were the students able to self-reflect, they were able to articulate the process succinctly. I know that Jonathan has purposely cultivated a climate of trust in his classroom, one that celebrates each student and lets them know that they matter. I could tell that they trust him and know that he has their best interests at heart. The result was a calm, reflective attitude in which the students received his feedback with a growth mindset.

Jonathan So’s You Matter board 

In contrast, the feedback I recently gave students on a large project was received with many different attitudes. Most students were academically thoughtful and satisfied with it, but several were visibly affected, and this eventually resulted in tears on both sides, mine and theirs. This huge emotional response to descriptive feedback (there were other factors, but the most emotion was around the written comments) has prompted a lot of self-reflection, since I know that I have the students' best interests at heart and that I worked very hard to give good feedback that would help the students move forward in the next inquiry.

The learning goals and task requirements were in typed text on the feedback document, and I highlighted everything that was met. Anything not highlighted was accompanied by a handwritten comment. I'm starting to think that this is the problem. All those highlighted, typed sentences did not seem important to the students and parents who were upset; they focused instead on the small details that were handwritten. Somehow, I did not convey the positive aspects. My tone, my word choice, or maybe even the size of my handwriting (compared to the typed text) created a message that I had not intended.

Is this a reflection of my failure to cultivate a classroom climate where the students feel they matter and that I care about their success? I don’t think so, since this is something very important to me, but it is something to think more about.

Is it the personal nature of written comments versus the "coldness" of typed and highlighted text? Today, when we looked at the data from our assessment literacy surveys for all of our classes, "written descriptive feedback" was considered very effective by 63% of the students, compared with only 41% for "typed descriptive feedback." Surely we are not giving better feedback when we handwrite than when we type, yet the students think handwritten feedback is more effective.

When giving feedback, relationships matter, but so does what you say and how you say it. So, I have taken out my books. I will refresh my skills on descriptive feedback by seeing what the experts say. Even in just a quick perusal of the headings in Dylan Wiliam and Siobhan Leahy’s “Embedding Formative Assessment,” I can see interesting areas to explore. For example:

  • Feedback should focus on what’s next, not what’s past
  • Don’t give feedback unless you allocate class time for students to respond (something we discussed with Jonathan So today)
  • Provide an appropriate balance of critical and supportive feedback
  • Make feedback into detective work (hmm... I wonder what that's about!)

I am also working on providing feedback through screencasting. I hope that this will be an effective tool for giving feedback while also conveying the positive "growth mindset" spirit that I feel about each student, and that it will help me rebuild trust with these students.

Edited to add:

The example of feedback gone awry described above was actually from a class where I give grades for summative assignments. I was thinking this morning about the difference between that course and the one where grades are determined by a portfolio and negotiated with the student. In the latter case, student learning starts with a blank slate and the artifacts in the portfolio build up evidence of learning and skills. In the former case, students essentially start with 100% and I feel I have to justify why they do not get 100. What are they missing? What was done well, but also what was misunderstood or incorrectly completed? Of course, this results in critical comments as I justify the grade. Another thing to consider as I move forward.

Can you make a 3D map of Canada? Constructionist vs. Instructionist Strategies

This post is also posted here.

I've been teaching grade 9 Geography for over 15 years now, and when I say 15 years, multiply that by two semesters and by at least two sections each semester. So I have taught it many, many times. I've never been happy with how my "Landform Regions of Canada" lessons have turned out. I don't know why, but it's very difficult for the students to connect their theoretical learning with pictures. I have tried graphic organizers with notes from the textbook, slide shows with many pictures, picture books and art from each landscape, videos, webquests, starting from the geological history, starting from issues based in each region, starting from national parks in each region, and students presenting different regions/ecozones to the class. I wish I could take all the students on a cross-country drive so they could see it for themselves, so I've been looking for a good VR experience (if you know of one, PLEASE let me know!).

Then there are the philosophy shifts. They need to know every region → They only need to know that you can make regions based on different physical and human characteristics → They can learn about one region in depth to understand interrelationships of land and people → By doing an inquiry situated in one region, they will learn all the human and physical characteristics of the region and develop geographic thinking skills of spatial significance and interrelationships (depending on the issue) → They need to know every region.

But isn’t that the beauty of teaching? We design learning experiences for our students, try them out, gather data through conversations, observations and products, reflect on how effective the learning experience was and redesign for the next course. In addition, what works for one group of students might not work with the next.

This big buildup is to introduce my newest iteration of Landform Regions of Canada. This time, I decided to take a constructionist approach.

  • We started with the learning goals and how they connect with the course overarching learning goals.
  • I discussed the learning theory of constructionism with the students.
  • Then students were put into small groups, and each student was assigned two landform regions to research. This was a strategy to foster positive interdependence among the group members: each student's research was required for the group to be successful.
  • The groups were given a tiled map of Canada to assemble like a puzzle (from Canadian Geographic). I printed the document with four pages per letter-size sheet; when completed, the map was about the size of chart paper.
  • The students were then required to create a 3D map of the landforms, vegetation and population distribution of Canada. This was facilitated by all the "low tech" makerspace supplies I have gathered, including plasticine, popsicle sticks, leftover game pieces, Styrofoam balls, fabric, tissue paper, Lego, Meccano, blocks and all sorts of other craft supplies.
  • What followed over the week was a lot of discussion and negotiation in the groups. Trial and error. My student teacher and I conferenced with the groups every day, asking probing questions about how they were going to represent each feature. Whole-class discussions occurred about the importance of a legend, about any feature that many groups had missed, and about what happened when they started putting too many people in the Arctic, for example.

  • Once the maps were done, we created a peer assessment Google Form with about 20 "look fors" for the map plus "two stars and a wish." Each student filled this out individually, not as a group, for three different maps, meaning they read through the look fors once as a class while we made the form, then at least three more times during peer assessment.

  • The Google Form was used with DocAppender, which pushed each submitted response to the specific student's assessment document (a rough sketch of this kind of grouping appears after this list). Students read all the feedback they were given by their peers, then completed a self-assessment Google Form, also pushed to their personal document.

  • This was followed by a Kahoot quiz on the Landform Regions. Unfortunately we ran out of time to do the full quiz, so we will do another one on Monday or I might do a formal quiz. We will develop the success criteria for the learning goals together.
  • Eventually, the students will post their assessment document, a picture of the map and a reflection on their achievement of the learning goals in Sesame.
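
For anyone curious about the mechanics behind the DocAppender step, here is a minimal, hypothetical Python sketch of the same idea: grouping exported form responses so that each student can read all of the peer feedback on their map in one place. It assumes the form responses have been downloaded as a CSV file, and the column names ("Map assessed," "Two stars," "A wish") are invented for the illustration; it is not what DocAppender actually does under the hood.

    # Hypothetical sketch: group peer-assessment responses (exported from the
    # Google Form as CSV) by the map being assessed. Column names are made up
    # for illustration and would need to match the real form questions.
    import csv
    from collections import defaultdict

    def group_feedback(csv_path):
        """Return a dict mapping each assessed map to its list of responses."""
        feedback = defaultdict(list)
        with open(csv_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                feedback[row["Map assessed"]].append(row)
        return feedback

    if __name__ == "__main__":
        for map_name, rows in group_feedback("peer_assessment_responses.csv").items():
            print(f"{map_name}: {len(rows)} peer responses")
            for row in rows:
                print("  stars:", row.get("Two stars", ""), "| wish:", row.get("A wish", ""))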

I don't know yet if this constructionist strategy was successful. My gut feeling is that it has to be better, since the students have touched the regions with their hands. They discussed, often passionately, how to represent each feature. Students co-constructed and used a list of look fors covering the main features of Canada's landscape and population distribution. They literally constructed a map with four layers (provinces and territories, landform regions, vegetation regions and population density), so I am sure that they will understand GIS mapping concepts more easily when I introduce digital mapping next week.

As a bonus, it was fun!

Reflections on my first year in a "gradeless" or feedback-focused classroom

This is cross-posted on our group blog: THROWING OUT GRADES TO ENHANCE LEARNING: FEEDBACK-FOCUSED EVALUATION

We have now had two weeks of school and the rhythm is returning. Clubs and teams are up and running and classes are even going on their first field trips. It’s amazing how quickly everyone gets into the swing of things. However, I have been taking it pretty slowly in my classes. This is partially because all the “official documents” that I need to give the students are still not complete and partially because I don’t want to overwhelm students with the whole gradeless, feedback-focused, place-based and inquiry-based program all at once.

I ran my grade 9 Issues in Canadian Geography classes as gradeless last year. Essentially, the whole course was inquiry-based, and we used five overarching learning goals that followed the inquiry cycle and were organized into a learning map. Students completed guided and then open inquiries based on the curriculum. I consulted with students as they moved through the inquiry cycle and gave verbal and some written feedback (usually through Google Forms and DocAppender). Each overarching learning goal was described by success criteria. Near mid-term reports, students created a digital portfolio with artifacts showing growth and highest achievement of the success criteria, and then used the learning map rubric to determine a grade. We had an individual portfolio conference where we discussed their achievement and negotiated their grade. The same thing occurred near the end of the course, which determined their 70% term work grade. An individual inquiry project, also evaluated using the success criteria of the overarching learning goals, was completed for the 30% final evaluation.
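
For readers outside Ontario, the 70/30 split mentioned above is simply a weighted average of the term grade and the final evaluation. A minimal sketch, with grades invented purely for illustration:

    # Minimal sketch of the 70% term work / 30% final evaluation weighting
    # described above. The sample grades below are made up for illustration.
    def report_card_grade(term_grade, final_eval_grade):
        """Combine a term grade and a final evaluation grade, both out of 100."""
        return 0.70 * term_grade + 0.30 * final_eval_grade

    print(report_card_grade(84, 78))  # 0.70 * 84 + 0.30 * 78 = 82.2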

Some positive outcomes of my first year of gradeless classroom:

  • Students learned about Canadian issues through inquiry. 
  • Many authentic action projects to make Canada a more sustainable place to live were designed and some were enacted in the local community, with social media campaigns or with submissions to local and federal governments. 
  • After the first few weeks, once they started to experience feedback-focused assessment, students did not ask what activities were "worth" or whether something was being marked.
  • This led to risk-taking and “thinking big” because students were not afraid to fail. 
  • Students were self-motivated and nearly all students completed all their work. 
  • We honoured the process of learning, not just the end result. All parts of the inquiry cycle were assessed, not just the final product. 
  • I became more and more convinced that feedback-focused assessment was good for my classroom. I was quite shy about telling anyone what I was doing at the beginning (although I had very supportive admin and none of the parents complained). By the end of the year, I started sharing more.

Some not-so-great outcomes of my first year of gradeless classroom:

  • I tried many different documentation tools during the last two semesters. Nothing really worked to my satisfaction. I had many assessments for each student, but since they were not entered in a "markbook," were not quantified (they were in text or verbal), and were scattered all over the place (Google Docs, Google Classroom, emails, etc.), it was difficult to get an overall snapshot of how a student was doing until after the first portfolio interview. That is too late.
  • Although students reflected more during the second semester on how they demonstrated the success criteria, I still need to help them learn more about self-assessment and goal setting, and about what the success criteria mean and what they look like in practice.
  • There wasn’t enough individual accountability during group inquiries.
  • The creation of a digital portfolio (the way we did it, anyway) was too onerous for the students. All inquiries had to stop for at least a week to accomplish this. A few students did not complete their portfolio in time for us to have a conference before the mid-term report. Frankly, assessing all the portfolios at once, especially at the end of the year, was quite onerous for me as well.
  • Not all students did inquiries that led to learning all the overall expectations of the course. The open nature of how I allowed students to pick issues and explore them in depth was not conducive to breadth.

There were many more successes and setbacks, but these are the main ones. So how to move forward? There are two main changes I am making for this semester. The first is to add an overarching learning goal that is about “content” of the curriculum. Last year I had the content of the curriculum mixed in with the inquiry cycle learning goals. I still think this is valid because we do not learn “knowledge” in isolation when doing inquiry-based learning. However, it was difficult to assess or track. The “content overarching learning goal” will hopefully increase my ability to balance depth and breadth of curriculum learning.

The second change is that I am using Sesame as a documentation and feedback tool instead of digital portfolios. I am hoping that this will allow me to get a snapshot, at any time, of what each student has completed (where they are in the inquiry cycle, how well they have met the success criteria, and which overall expectations they have learned). Also, all the feedback and assessment will be in one place and students will be in charge of documenting their learning and reflecting on this learning. Parents will also be able to access the program.

Some of the other issues I encountered in a gradeless classroom are part of our TLLP project learning goals, such as how to teach students to reflect on the success criteria, how to give effective feedback that moves learning forward, and how to make the whole process more time and energy efficient for the teacher and the students. What a great opportunity we have to be supported through this learning journey, with release time to collaboratively build knowledge and skills. I would like to end my first TLLP blog post with this tweet, which really sums up why I am a Teacher Throwing Out Grades (TTOG). Hence our hashtag: #TTOGTLLP