I have a really hard time interpreting and using ACTFL rubrics in a real way with students in the classroom. I’ve been doing interpersonal speaking assessments in all levels and all classes this week, so hours and hours of assessing students in levels 1-3. I have rewritten my rubrics numerous times, both to make sense of them myself and so that students can accurately assess themselves, and I’m still really unsure. Here are some of my main concerns.
Let’s just take novice for example. Here’s ACTFL’s rubric for novice interpersonal from the appendix of the Implementing IPAs publication. If you can’t read it, you can find all the ACTFL rubrics here:
Upon a lot of reflection, I think I understand the rubric’s performance descriptors, but I still have a few niggling ambiguities:
Language Control: What is the tolerance behind “accuracy is limited to memorized words”? For example, if a student says “Yo me gusta juego fútbol americano,” does that demonstrate a sufficient lack of accuracy to say that the student is only accurate with memorized words? What about “Yo me gusta jugar fútbol americano”? Or “Yo va a Virginia playa”? I just need some annotated samples from ACTFL to tease out the subtleties in the performance descriptors.
Language Function: I also have trouble with how the language function row is set up. It’s all about how complex the communication can be within comfortable, consistent, spontaneous, non-hesitant production. I have many students who are able to communicate complex sentences with linking words, sometimes strings of sentences, but with some level of discomfort or hesitation (meaning a few seconds at the beginning to process and/or a few seconds of hesitation strewn throughout the sentence or string-of-sentences response). How am I supposed to level this? Really, I don’t know what level of communication the student can do without hesitation, because they went straight to multiple sentences with hesitation. So am I just supposed to infer that they are able to do simple sentences without hesitation if they do complex sentences or strings of sentences with hesitation?
In response to this question, I have moved hesitation/comfort away from the domain description and into the qualifications section so that each column features a level of hesitation as an additional qualifier. That way students can perform at the exceeds expectations level for text type (strings of sentences) and basic meets expectations for comfort/hesitation (hesitates with many pauses throughout response). Then I can take an average of all the different performance descriptors and describe which column overall best describes the student’s performance.
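The averaging approach above can be sketched in a few lines of code. This is just a hypothetical illustration, not anything ACTFL publishes: the row names, column labels, and the choice to round the mean to the nearest column are all my own assumptions about how one might formalize it.

```python
# Hypothetical sketch: rate each rubric row by column, convert columns to
# numbers, average, and map the mean back to the column that best
# describes the overall performance. Labels and rows are illustrative.

COLUMN_SCORES = {
    "below expectations": 1,
    "meets expectations": 2,
    "exceeds expectations": 3,
}
SCORE_COLUMNS = {v: k for k, v in COLUMN_SCORES.items()}

def overall_column(ratings):
    """ratings: dict mapping each rubric row to its column label."""
    scores = [COLUMN_SCORES[label] for label in ratings.values()]
    mean = sum(scores) / len(scores)
    # Round the mean to the nearest column; ties would need a judgment call.
    return SCORE_COLUMNS[round(mean)]

student = {
    "text type": "exceeds expectations",        # strings of sentences
    "comfort/hesitation": "meets expectations", # many pauses throughout
    "language control": "meets expectations",
}
print(overall_column(student))  # mean of 3, 2, 2 -> "meets expectations"
```

In practice I still eyeball the rows rather than compute anything, but writing it out this way makes the tie-breaking problem obvious: a student split evenly between two columns doesn’t have a clean “overall” column.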
Despite those difficulties I have, the real challenge is in translating this rubric to language the students can understand, can use to self-assess, and can use to shape future learning goals. It’s got to be much shorter language that a 6th grade Spanish 1a student can clearly understand.
The descriptors also have to be much more specific for students to self-assess (and it helps me out too). I originally wrote qualifiers like “somewhat hesitant” and “quite comfortable,” but kids don’t know what that means and they never feel comfortable during a speaking evaluation. So, I adjusted to say how many seconds of hesitation they can have in their response to each question, despite that being too simplistic for my liking in evaluating a student’s comfort in responding.
So, after probably 10 hours this week during planning periods and trial runs the next class with more students, here’s my sixth or seventh draft. I have even made adjustments during class as I sat in front of a computer with a student and tried to phrase things in ways that made sense to them:
Concerns with this version:
- I haven’t figured out how to describe accuracy because I don’t understand what ACTFL’s tolerance for accuracy is. As long as I can understand what the student is trying to communicate, my rubric assesses the content of the communication, not the form. If it were so bad that I couldn’t tell, then the below-expectations descriptor of “responses are confusing” comes into play.
Other thoughts/questions:
- For the novice interpersonal rubric above, does “exceeds expectations” mean intermediate low, “meets expectations high” mean novice high, and so on? Or does exceeds expectations mean novice low? Or does it not correlate at all? ACTFL, I need clarification with these rubrics!
- I would love for ACTFL to publish student-facing versions of their rubrics. That would be lovely. I’m sure they are immensely more qualified than I am at simplifying the rubrics and making them accessible to younger developmental levels while still capturing their essence. This would help a lot of teachers out.
- I would also love it if ACTFL would publish a whole bunch of student samples for each sub-level in all four language skills. Right now, they only have one, I repeat, one student example for writing and two or three speaking samples for each OVERALL proficiency level (novice, intermediate, etc.). It would be great to have five examples of novice mid presentational writing, another five of novice high presentational writing, another five of novice mid presentational speaking, and so on for all of the sub-levels. I don’t imagine this would take that much time or effort given how many samples ACTFL has worked with over the years. It’s necessary to have sub-level examples because we’re talking about multiple years of study to move from novice to intermediate and another 2-3 years to move from intermediate to advanced. It’s crazy to have just one sample for all of novice, intermediate, etc. Even better, I’d love it if they reproduced one of their rubrics with each sample and circled the performance descriptors that describe the sample so that we can all interpret the rubrics better.
- Why does ACTFL allow written interpersonal communication on assessments? It would be one thing if there were a time limit to respond and students could only view the next portion of the conversation after the window for their current response closed. But that’s not what happens in the real world. In practice, students are presented with all of the conversation partner’s side of the conversation and can take their time to respond. It’s 80% a presentational task and 20% interpersonal, in my opinion. Plus, even in the scenario described above, with a timed response and one piece of the conversation at a time, the task would be way easier than speaking in a live conversation. Writing takes more time and allows for more reflection and less pressure. Of course student performance is going to be harder on a speaking interpersonal assessment. And the same goes for a speaking versus a writing presentational assessment. I mean, my level 1 students could write paragraphs of connected text by November, but they could have barely spoken a few phrases in a presentational setting. Yet there’s only one rubric for novice interpersonal and one for novice presentational. I understand that the communication mode is the same for writing and speaking, but the difficulty level is vastly different and justifies different assessments, in my opinion.
A great webinar on these rubrics can be found at Vista Higher Learning, although the presenter only has time to talk about two of the categories (text type and communication strategies). It might help. Here’s the link: https://vistahigherlearning.wistia.com/medias/z17ylebiws
Thanks so much Paulina! I also got a chance to attend a MOPI (modified oral proficiency interview) training for 2.5 days this fall which was really informative. I’ll definitely check that link out!
SAAAMMMEEEEE HERE. I may as well have written this post. 100% … thanks for sharing your end result.