Students need positive and beneficial educational experiences now more than ever. Today’s students experienced unprecedented educational disruptions during the COVID pandemic and face a future in a world forever changed. One primary change: technology no longer plays a supporting role; it is now a lead character in each student’s daily learning. Students deserve access to edtech that is highly effective, and research is the only way to ensure this happens.

Research-based tools are critical for the modern classroom. When an edtech tool is research-based, product developers and designers consider prior research on student learning and meaningfully incorporate its findings into the tool. For example, an edtech tool may include personal reflection activities because learning sciences research shows that having students elaborate on newly learned information helps them retain it. When edtech companies use prior research in product development, they take a critical first step toward ensuring that their products positively impact student outcomes. This first step is aligned with an ESSA Level IV designation.

The next step edtech companies must take to use research effectively to improve their products’ impact on student learning is to conduct rigorous studies. All edtech companies should use two study types: implementation studies and effectiveness studies.

Implementation studies provide information about how educators and students actually use a tool. Edtech companies can use these study results in several ways:

  • To identify gaps between intended and actual product use. For example, an edtech company can use implementation results to identify unexpected barriers for educators and students to access their product (e.g., having a required pretest that takes more than one class period to complete). This insight can inform product improvements to increase implementation (and, in turn, drive renewals).
  • To uncover ways that a tool may be unintentionally exacerbating educational inequities. For example, an implementation study may reveal that students who are designated as English language learners are using the edtech tool at substantially lower rates than their peers, suggesting a need to provide translations that were not included in a product’s initial design (a minimal sketch of this kind of subgroup comparison follows this list).
  • To understand how product use differs depending on unique contextual factors. The day-to-day use of edtech tools can look different from what product developers originally envision. For example, students may shift between multiple tools used together to learn a new skill, which makes implementation difficult to measure with traditional metrics (e.g., time on platform).
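For companies that log product usage, the subgroup comparison described above can be run with a few lines of analysis code. The sketch below is a minimal, hypothetical example in Python using pandas; the column names, subgroup labels, and values are assumptions for illustration, not from any real product.

```python
import pandas as pd

# Hypothetical usage log: one row per student, with a subgroup flag and a
# simple usage metric (lessons completed). All names and values are invented.
usage = pd.DataFrame({
    "student_id": [1, 2, 3, 4, 5, 6],
    "ell_status": ["ELL", "non-ELL", "ELL", "non-ELL", "non-ELL", "ELL"],
    "lessons_completed": [2, 14, 1, 11, 9, 3],
})

# Average use by subgroup; a large gap can flag an access barrier
# (e.g., missing translations) worth investigating further.
print(usage.groupby("ell_status")["lessons_completed"].agg(["mean", "count"]))
```

A real implementation study would pair a descriptive comparison like this with classroom observations or interviews to explain why any gap exists.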

Implementation studies are often an important initial step for edtech companies to take before examining whether a tool positively impacts students’ learning outcomes.

Effectiveness studies are where the rubber of research-based product design meets the road. Just because a product is designed around practices that prior research found effective does not mean those findings will replicate when the practices are packaged into a new product and used in new contexts. Edtech companies must regularly evaluate the efficacy of their products to ensure they are making a positive impact on student outcomes. There are several approaches to evaluating a product’s efficacy, and each can provide meaningful insights for edtech companies, educators, and students.

  • Correlational studies for ESSA Level III Validation: A good place to start in evaluating the efficacy of an edtech tool is with a correlational study examining whether more use of a product (e.g., lessons completed) is associated with better student outcomes (e.g., test scores); see the sketch after this list. A correlational study that uses rigorous methods controlling for potential confounding variables and students’ baseline achievement is aligned with an ESSA Level III designation. Since Title I funds used for school improvement typically require at least an ESSA Level III designation, we highly encourage edtech companies to pursue this type of study.
  • Quasi-experimental studies for ESSA Level II Validation: Another option for an efficacy study is to use a quasi-experimental design to examine whether there are significant differences in learning outcomes between students who use the edtech tool and students who do not. Because there is a comparison group, this design supports stronger claims that a product is effective and is aligned with an ESSA Level II designation.
  • Randomized controlled trials for ESSA Level I Validation: A final approach to evaluating efficacy is a randomized controlled trial, which uses a comparative approach similar to quasi-experimental designs, but rather than comparing groups of users and non-users that emerged organically, students are randomly assigned to use the edtech tool or not. Studies that use a randomized controlled trial design are aligned with ESSA Level I evidence. While this design is highly rigorous, it’s important to consider the burden this logistically challenging approach places on school partners, and the potential harms of withholding an efficacious intervention from students in the control group.
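To make the correlational (ESSA Level III) design concrete, here is a minimal sketch in Python using statsmodels: it regresses an outcome on a usage metric while controlling for baseline achievement. The data are simulated, and every variable name is an assumption for illustration only, not a prescribed analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a hypothetical study dataset: per-student baseline scores,
# product usage (lessons completed), and end-of-term outcomes.
rng = np.random.default_rng(42)
n = 200
baseline = rng.normal(70, 10, n)   # prior achievement (control variable)
lessons = rng.poisson(12, n)       # product usage metric
outcome = 0.8 * baseline + 0.5 * lessons + rng.normal(0, 5, n)
df = pd.DataFrame({"outcome": outcome, "lessons": lessons, "baseline": baseline})

# Regress outcomes on usage while controlling for baseline achievement;
# the coefficient on `lessons` estimates the usage-outcome association.
model = smf.ols("outcome ~ lessons + baseline", data=df).fit()
print(model.summary().tables[1])
```

A quasi-experimental or randomized design would extend this sketch with a treatment-group indicator in place of the usage metric; in an RCT, that indicator would be assigned at random rather than observed.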

As a teacher, I made tough decisions about which edtech tools to use to provide quality instruction that helped all of my students learn. These decisions were particularly challenging because there was little available evidence that a given tool would be effective in my classroom. Teachers and students deserve tools that are proven to positively impact learning. Regardless of which approach is used to evaluate the efficacy of edtech tools, companies must prioritize evidence. The stakes for edtech could not be higher. Research is the best way to ensure we can set students up for the brightest possible future. Invest in evidence for the students.

Alexandra Lee, Ph.D.
Senior Researcher, Instructure

Alexandra has over seven years of experience in the design, coordination, and analysis of educational research and evaluation. She is also an experienced educator, with more than eight years of teaching in diverse K-12 and higher education settings. Alexandra received her Ph.D. in Educational Psychology and Educational Technology from Michigan State University. Her subject matter expertise spans social-emotional learning, student and teacher motivation, teacher professional development, educational equity, career and technical education, and STEM instruction. Alexandra uses advanced quantitative methods in her research, including structural equation modeling, mixture modeling, multilevel modeling, multivariate analysis, and social network analysis. Additionally, she employs qualitative methods to contextualize quantitative findings and to inform the development of new measures.