A theme here at Proof Points is that many things that go on at schools are at odds with the conclusions of rigorous education research. Teaching kids abstract critical thinking skills is unlikely to help them think critically. The length of lectures often exceeds children’s attention spans. Most anti-bullying programs don’t work.
When they start their profession, teachers ought to be learning more about what the last 50 years of rigorous, well-designed research has uncovered, confirmed or refuted. However, experts say schools of education — the institutions that train teachers — often don’t teach that. A group of six teacher preparation institutions, including the University of North Carolina-Charlotte and the University of Missouri-St. Louis, is trying to change that by adding more research-driven insights to professional instruction. A nonprofit organization, Deans for Impact, is working with them toward this goal, and as part of this effort, it administered a test to more than 1,000 teaching students at the six schools in the fall of 2019 to see how much learning science the teacher candidates understood.
The results were “sobering,” according to a March 2020 report, “Learning by Scientific Design: Early insights from a network informing teacher preparation.” By my math, teacher candidates scored an average of 57 percent, or 31 correct answers on a 54-question test — an F. But the report did not present an overall score because, as Jim Heal of Deans for Impact said, “Any overall score report using an instrument of this nature does little to address exactly what is happening.” Deans for Impact instead reported the results in three separate categories: 49 percent correct on 14 questions about basic cognitive science principles, 58 percent correct on 32 questions about applying those concepts in the classroom, and 67 percent correct on eight questions about beliefs about how kids learn.
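To show where that “by my math” figure comes from, here is a minimal sketch of the back-of-the-envelope arithmetic, assuming the three categories Deans for Impact reported (14, 32 and 8 questions) make up the whole 54-item test and that the category percentages can simply be weighted by question count; this is my own reconstruction, not the organization’s methodology.

```python
# Back-of-the-envelope check of the "57 percent, or 31 of 54 questions" figure,
# using the category scores reported by Deans for Impact. Assumes the three
# categories (14 + 32 + 8 = 54 questions) make up the whole test and that the
# reported percentages can be weighted by question count.

categories = [
    (14, 0.49),  # basic cognitive science principles
    (32, 0.58),  # applying those concepts in the classroom
    (8,  0.67),  # beliefs about how kids learn
]

total_questions = sum(n for n, _ in categories)    # 54
correct = sum(n * pct for n, pct in categories)    # roughly 30.8 questions
print(f"{correct:.1f} of {total_questions} correct, "
      f"about {correct / total_questions:.0%}")    # ~31 of 54, ~57 percent
```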
“Based on the results of our assessment, teacher candidates possess a shallow understanding of basic principles of learning science – and, perhaps as a result, they struggle to make instructional decisions that are consistent with our best scientific understanding of how students learn,” the report said.
One common misunderstanding, according to the report, is mistaking student engagement for learning. In one question, student teachers were asked to pick between two classroom activities to teach students the difference between types of newspaper articles. The first activity asked students to read the same three articles and answer three questions in small discussion groups. One example: “Make a list of differences between the news article and the opinion pieces. Which of these can be attributed to the authors’ differing purposes?” The second activity had students go on a newspaper scavenger hunt and sort articles into three categories: persuade, inform and entertain.
The question specifically asked teacher candidates to pick the activity that would help students learn the ways that an author’s purpose influences their writing. And for the education researchers who helped create the assessment, it wasn’t a close call. “None of these are gotcha questions,” said Heal, a consultant with Deans for Impact.
But only 22 percent of future teachers picked the first activity, which was the correct answer because it requires students to make their thinking visible and identify key features of each text. That helps students build a mental model they can apply again in the future. The second activity doesn’t require much analysis, but teacher candidates gravitated toward it anyway. Why? “The first activity is very boring, I didn’t even want to read the questions,” wrote one test taker. “The second activity is more inviting, seems more hands on and is more inquiry learning.”
The test also revealed that many teacher candidates embrace the myth of learning styles, believing that individual students are either visual, auditory or kinesthetic learners. The research consensus is that differentiating instruction this way doesn’t boost learning.
Related: Teachers often ask youngsters to learn in ways that exceed even adult-sized attention spans
To be sure, the science of learning isn’t straightforward. Renowned scholars argue over what the research says to do in classrooms. Often, there aren’t clear answers, and studies point in opposite directions. What worked in one classroom may not work in another. Deans for Impact’s Heal explained that the test questions for teacher candidates were based on a consensus view of cognitive science that the organization commissioned from 50 scholars in 2015. (Deans for Impact disclosed that the Chan Zuckerberg Initiative provided funding for its learning-science assessment and network of six institutions. The Chan Zuckerberg Initiative is also among the many funders of the Hechinger Report.)
Deans for Impact is still evaluating its new assessment to see how reliably it measures whether teacher candidates understand learning science. It plans to compare these baseline scores with future student responses in 2020 and 2021 to see if efforts to teach learning science are successful. The six schools include American University in Washington, D.C., Endicott College in Massachusetts, Louisiana Resource Center for Educators, University of Missouri-St. Louis and University of North Carolina-Charlotte. (Most are training graduate students. The Louisiana school is a nontraditional, alternative teacher certification program.)
Related: Scientific research on how to teach critical thinking contradicts education trends
The 1,036 students who took the test weren’t necessarily representative of teacher preparation programs across the nation. Faculty at each institution who are working with the network either had their students take it in class or sent them a link to take it on their own time. Most test takers were white women, reflecting the actual teaching force, but Hispanic teacher candidates were significantly underrepresented. (Test takers were 80 percent female, 75 percent white, 12 percent black and 3 percent Hispanic. The nation’s 3.2 million public school teachers are 77 percent female, 80 percent white, 7 percent black and 9 percent Hispanic, according to the most recent federal data.)
The failure rate among these teacher candidates might be exaggerated, but Deans for Impact said the results could actually understate the national problem because the participating institutions weren’t randomly chosen; they volunteered because of their interest in learning science. The results give us insight into why so many novice teachers gravitate toward substandard lessons online. A recent expert review of the most popular English lessons on websites such as Teachers Pay Teachers found that most were mediocre and not worth using.
Many of the test takers were at the beginning of their teacher training programs, and one might not expect them to know much learning science yet. But even experienced instructors struggled with many of the questions. Twenty-two teaching instructors at the six schools volunteered to take the test themselves. They also failed the section on basic cognitive science principles, but they passed the section on practical applications in the classroom with an average score of 77 percent correct. Maybe you don’t need to know the details of the science as long as you know how to apply it.
This story about teacher education was written by Jill Barshay and produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.