You've got a new piece of hardware, app, or website you want to try out in your classroom. You've thought through some of your students' potential difficulties, what could go wrong, and what you can do to mitigate those problems, but there is still an element of the unknown until you or a colleague try it out in the context of your own district or school.
"Pilot!" you think, excited to see what may come of your idea. But who pilots for the the early adopters in your building? What about the late adopters who are slow to adopt a tool until they've seen it work in their classroom?
Where Do I Start?
Pilot for yourself! I got the idea of using my own classroom as both experimental and control group from a university study on clickers I found while working on an action research plan in graduate school. The professors wanted to incorporate and investigate the use of a student response system (clickers) for generating feedback for their students, but had trouble getting colleagues to participate in their study.
To get as large a test/control population as they could, the professors each had half of their students act as the control group and half as the experimental group for a portion of the semester. Having students in the same section serve as both control and experimental groups over the course of the study mitigated the variables of individual student achievement, hour of the day, day of the week, and individual teaching style. It boiled the clicker study down to this question:
Is there a significant difference in these students' learning when they use clickers to generate immediate feedback, or not?

What's that mean for my team and my classroom?

I'm sure my data team has not been alone in wondering, when we collect data on common assessments, whether the differences in student scores were the result of any of several factors, including but not limited to teaching style, hour of the day, and the students in the class. All of these factors are important to consider when comparing two sections of the same course.
Establishing (and rotating) experimental and control classes within your day eliminates all of the uncertainty of what those other factors may be contributing and narrows your focus to (as close as possible) ONE single variable.
Can it be replicated?
Once ONE of the teachers in your subject area or data team has had positive results using a new technology or strategy, others can have confidence in trying it out for themselves, in their own context.
"Pilot!" you think, excited to see what may come of your idea. But who pilots for the the early adopters in your building? What about the late adopters who are slow to adopt a tool until they've seen it work in their classroom?
Where Do I Start?
Pilot for yourself! I got the idea of using my own classroom as both experimental and control group from a university study on clickers I found while working on an action research plan in graduate school. The professors wanted to incorporate and investigate the use of a student response system (clickers) for generating feedback for their students, but had trouble getting colleagues to participate in their study.
To get as large of a test/control group population as they could, the professors each had half of their students acting as control and half acting as experiment groups for a portion of the semester. Having students in the same section serve as both control and experimental groups in the course of the study mitigated variables of individual student achievement, hour of the day, day of the week, and individual teaching style. It boiled the clicker study down to this question:
Is there a significant different in these students learning using clickers to generate immediate feedback, or not?What's that mean for my team and my classroom?
I'm sure my data team has not been alone in the past in wondering when we collect data on common assessments if the differences in student scores were the result of any of several factors, including but not limited to teaching style, hour of the day, and students in the class. These are all factors that in comparing two sections of the same course are important to consider.
Establishing (and rotating) experimental and control classes within your day eliminates all of the uncertainty of what those other factors may be contributing and narrows your focus to (as close as possible) ONE single variable.
Can it be replicated?
Once ONE of the teachers in your subject area or data team have had positive results using a new technology or strategy, others can have confidence in trying it out for themselves, in their own context.