Healthy Indicators Analysis

This document helps teams understand what each of the indicators tells us about our MTSS practices and the effectiveness of our system. As you review and discuss your reports, we’ve provided a few important ways to explore the data within the platform, as well as questions to consider as you make sense of the data and use it to take action to improve processes and outcomes for students.

See also: Healthy Indicators Overview

Indicator

Healthy Indicator #1

This report shows the percentage of students assessed on the selected default assessment for the building and grade during the screening window.

A healthy system has a robust screening program that includes all students possible. Screening essentially all students (95%) ensures the data are representative of the whole system and that we catch all students who need support while gaps are smaller and relatively easier to address.

What is the big idea?

Does our screening process have gaps? Are we reaching essentially all of the students, or are some slipping through the cracks? Does the rest of the data meaningfully represent ALL of our students?

Why is it important? 

Universal screening is an important practice because it is the foundation of all other work and analysis! Students who are not screened are not included in any of the other indicators. If many students are missed, the time and effort spent on interpreting the data may be less valuable and valid.

What should we ask ourselves as we review this data?

What is the percentage of students screened in your school, district, or AEA? 
  • Use the “view by” and filter features to look for trends across groups. Are there some grades, schools or subsets of students that seem to be missing from the screening data?
  • What may have caused this? Is it systematic? Intentional? Accidental?
  • If there are large gaps in screening, how will this affect the rest of the data? Will this cause misinterpretation? Does anything need to be done for the students missed?
  • What needs to happen to close the gap in testing for the future?

Report Calculation

The indicator is expressed as a percentage. The denominator of the calculation is the number of enrolled K–6 students. The numerator is the number of those students who received a score on the selected default assessment during a screening window.
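
For example (illustrative numbers only): if 400 K–6 students were enrolled and 380 of them received a score on the selected default assessment during the window, the indicator would be 380 ÷ 400 = 95%.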

Other

There is no expectation that 100% of students are screened, and no consequence tied to this indicator other than the natural consequences for students who are not screened (potential for missing risk, a less holistic picture of your system). Some students take an alternate assessment and are not yet included in these data; others may have been missed due to extended absence or unusual circumstances.

See Also

Using Healthy Indicator #1: K-6 Screened

Indicator

Healthy Indicator #3

This report shows the percentage of students who are at/above benchmark on the school’s selected default assessment.

A healthy school has universal instruction that enables the majority of students to meet the benchmark for “low risk” without extensive additional support and resources.

What is the big idea?

Is our universal instruction meeting the needs of most of our students? Are there patterns of lower/higher data by grade, building, or demographic that point out areas of concern?

Why is it important? 

This is important because an effective universal instructional program should be able to meet the needs of most students (at least 80%). Lower numbers indicate that something about what is taught and how it is taught is not working for the students. Rather than continuing to intervene with many struggling students, it is far more efficient to fix the instruction problem so that more students are successful with universal instruction alone. Making changes in universal instruction is critical to reduce stress on the system and use resources most efficiently.

What should we ask ourselves as we review this data?

What is the percentage of students meeting benchmark?
  • Use the “view by” and filter features to look for trends across groups. Are there some grades, schools or subsets of students that seem to have better or worse percentages? Student data can be viewed by school, district, or AEA.
  • If 80% is the target, how far off are you? Is there a pattern of grades or locations? 
  • If there are many students not meeting the benchmark, are there classwide interventions that can be used to efficiently get help to more kids at once? 
  • How will we review and fix universal instruction? Is there enough uninterrupted time for instruction? Are the right things being taught?

Report Calculation

The indicator is expressed as a percentage. The denominator of the calculation is the number of K–6 students who received a score on the selected default assessment during a screening window. The numerator is the number of those students whose score was at/above benchmark. This means students who were not assessed are not included in this indicator.
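
For example (illustrative numbers only): if 380 students received a score during the window and 285 of them scored at/above benchmark, the indicator would be 285 ÷ 380 = 75%, below the 80% target.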

Other

One of the unfortunate side effects of ELI is an emphasis on going directly to individual interventions. Intervening with large numbers of students is not sustainable, and doesn’t fix the underlying cause. Examining universal instruction may cause anxiety for staff, who are already stressed and concerned they don’t have the ability or skills to help students who struggle. However, meeting the majority of student needs in universal instruction will reduce the number of students needing interventions. Approach the issue in a way that is supportive and respectful of the professionalism of the teaching staff.

See Also

Using Healthy Indicator #3: K-6 at Benchmark

Indicator

Healthy Indicator #2

This indicator displays the percentage and number of students who scored below benchmark and have progress monitoring data for at least 80% of the weeks between screening windows.

A healthy school monitors student progress regularly in order to quickly adjust if an intervention is not working as well as needed to close the gap. In a healthy school, teachers value having frequent data and use it often to improve outcomes for their students.

What is the big idea?

Of the students who scored below benchmark, what percentage had frequent, consistent progress monitoring data? If there appear to be issues, what does the weekly monitoring report show (see below)? Are there patterns of lower/higher data by grade, building, or demographic that point out areas of concern?

Why is it important? 

This is important because incomplete, inconsistent PM data reduces the ability to evaluate whether interventions are working to close the gap. “Progress monitoring” is monitoring not only student progress, but also the effectiveness of each instructional intervention.

What should we ask ourselves as we review this data?

Note: This report is most actionable for analyzing system strengths and weaknesses once there are at least 10 weeks of data, just before the next screening window. Early in the screening period, the percentage varies so much from week to week (because it depends on the weeks remaining) that it is not helpful for tracking monitoring as a system; use the weekly monitoring report instead (see below). The main purpose of this report is as a macro-level indicator of problems, not detailed or ongoing analysis.

What is the percentage of students being regularly monitored in your school, district, or AEA?
  • Use the “view by” and filter features to look for trends across groups. Are there some grades, schools or subsets of students that seem to have better or worse percentages?
  • If 80% is the target, how far off are you? Is there a pattern of grades or locations? Is the reporting period close to or beyond 10 weeks?
  • If there are many students not being monitored regularly, are there systemic barriers to keeping up with monitoring? Is this a logistical problem (e.g., it is hard to get PM going after screening) or an issue of staff not understanding or valuing the PM data? Are teachers not using PM data to evaluate intervention effectiveness?
  • What is the best way to address the identified problems (e.g., planning PM to occur early in the week so there’s more chance to catch up after an absence)?

Report Calculation

The indicator is expressed as a percentage. The denominator of the calculation is the number of K–6 students who scored below benchmark on the most recent screening using the selected default assessment. The numerator is the number of those students who were progress monitored in at least 80% of the weeks following that screening window until the beginning of the next screening window at their school (e.g., 8 out of 10 weeks).
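
For example (illustrative numbers only): if 90 students scored below benchmark on the most recent screening and there were 10 weeks before the next screening window, a student monitored in at least 8 of those 10 weeks counts in the numerator; if 54 of the 90 students met that bar, the indicator would be 54 ÷ 90 = 60%.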

Other

One of the unfortunate side effects of ELI is an emphasis on progress monitoring as a compliance activity, rather than valuable data that can help teachers identify and replace ineffective interventions. We need to value consistent PM as a tool for improved outcomes.

See Also

Using Healthy Indicator #2: K-6 Progress Monitored

Indicator

Weekly Monitoring

This report shows the percentage of students who scored below benchmark on the selected default assessment during the most recent screening window and were monitored each week within a 12-week period. The display helps identify patterns in weekly monitoring activity that may point to areas for improvement.

A healthy school values the availability of frequent progress monitoring data for students receiving interventions, not for compliance, but because infrequent data makes it difficult to evaluate whether interventions are working as intended and when to intensify or change instruction.

What is the big idea?

Are there issues with monitoring completion for students in some grades or buildings? Are there patterns where support can be provided to increase monitoring?

Why is it important? 

While there are legitimate reasons for progress monitoring to be skipped, there may be systematic patterns that represent areas that can be improved. Looking at trends and breaking down data can help identify where to focus efforts.

What should we ask ourselves as we review this data?

Where and when are PM completion rates lower (out of line with expectations)?
  • Use the “view by” and filter features to look for trends across groups. Are there some grades, schools or subsets of students that seem to have better or worse percentages? Is PM completion lower before/after holidays or screening windows or near the beginning or end of the school year? 
  • What are the potential causes? Is this a technical problem or one of understanding or motivation? Are there legitimate reasons for low completion such as absences, snow days, or holidays? Do those responsible for collecting PM understand why the data are valuable? Do they use the PM data regularly to check student progress?
  • Are specific students more likely to be missed, and if so, how will this affect decision making about the effectiveness of interventions?
  • What needs to happen to fix the issue for good?

Report Calculation

The graph and table show data points expressed as percentages. The denominator of the calculation is the number of K–6 students who scored below benchmark on the selected default assessment they took during the most recently closed screening window. The numerator of the calculation is the number of those students for whom a valid progress monitoring score has been logged for a given week. This percentage starts at 0% at the beginning of each week (Sunday) and increases as the week goes on and students receive monitoring. The graph shows the most recent 12 weeks.
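
For example (illustrative numbers only): if 120 students scored below benchmark in the most recently closed screening window and, by Friday of a given week, 96 of them have a valid progress monitoring score logged for that week, the point for that week would show 96 ÷ 120 = 80%.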

Other

This report always displays the most recent 12 weeks. The goal of 90% monitored is intended to leave some leeway for illness, technical problems, and other circumstances that prevent some students from being monitored. The current week reflects data collected and imported so far; be aware of lags in data imports when considering the current week.

See Also

Using the Weekly Progress Monitoring Report
Weekly Progress Monitoring Percentage Appears Low

Indicator

Healthy Indicator #5

This report shows the percentage of students who scored below benchmark in the last two screening windows on the selected default assessment and have an intervention in place in Student Success.

A healthy school provides robust, evidence-based interventions to improve outcomes for students who need them. Interventions are documented both to keep a record for the student and to allow for systematic evaluation of their effectiveness.

What is the big idea?

Are we getting students the help they need? Are we systematically documenting the interventions being provided? If interventions are not documented and monitored, there is no way to learn which interventions are most and least effective, both for students overall and for individual students.

Why is it important? 

Students who are at risk are unlikely to improve on their own. These students will require intervention to get back on track. Inconsistent or absent tracking of interventions is an indication that students are not receiving the support they need, or that the school is not tracking those supports for effectiveness.

What should we ask ourselves as we review this data?

What is the percentage of students receiving an intervention in your school, district, or AEA?
  • Use the “view by” and filter features to look for trends across groups. Are there some grades, schools or subsets of students that seem to have better or worse percentages?
  • What are the potential causes? Do people understand how and where to document interventions? 
  • Are students receiving interventions that are not documented, or are some simply not receiving interventions at all? Is this an issue with knowledge and skills or with beliefs and attitudes?
  • What needs to happen to fix the issue for good?

Report Calculation

The indicator is expressed as a percentage. The denominator of the calculation is the number of K–6 students who scored below benchmark on their selected default assessment during the two most recent consecutive completed screening windows. The numerator is the number of those students who had an active intervention plan between the two screening windows for which the report is being viewed.
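
For example (illustrative numbers only): if 75 students scored below benchmark in both of the two most recent completed screening windows and 60 of them had an active intervention plan between those windows, the indicator would be 60 ÷ 75 = 80%.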

Other

This report is only part of the picture of the delivery of interventions. It does not look at the appropriateness of the intervention, fidelity of implementation, or whether the intervention was delivered for a long enough period or at the correct level of intensity; it just shows whether ANY intervention has been entered for the student. Those familiar with the school and the interventions will need to look more closely by examining other information, including individual interventions, as well as the other intervention analysis tools.

See Also

Using Healthy Indicator #5: K-6 Interventions

Indicator

Healthy Indicator #4

This report shows the percentage of students at/above benchmark who remain at/above benchmark across screening windows on the selected default assessment.

A healthy school is able to sustain learning for students who are not at risk, with few students falling into the risk category over time. The combination of universal instruction and interventions maintains success for students.

What is the big idea?

Is student growth between screening windows robust enough for students to continue to meet “low risk” benchmarks, thereby reducing the likelihood of struggling in important academic areas? Does universal instruction continue to identify unfinished learning and accelerate learning for students at/above benchmark across the school year? This indicator is another view into the effectiveness of universal instruction.

Why is it important? 

This is important because we’ve established that the universal tier is the most effective and efficient way to meet student needs. If percentages are low, it means that over time more students are losing ground. This may indicate the cumulative effects of a universal instruction program that is not as powerful as it needs to be. It may also reflect a “revolving door” problem, where students are intervened to success, intervention is removed, and they later slide back into risk. It could be that evidence-based practices need to be added to the universal tier, not just provided during intervention.

What should we ask ourselves as we review this data?

What is the percentage of students remaining at/above benchmark across screening windows in your school, district, or AEA?
  • Use the “view by” and filter features to look for trends across groups. Are there some grades, schools or subsets of students that seem to have better or worse percentages?
  • What are the potential causes? Is this a steady decline across grades/years? Are the students now falling just below benchmark ones who were below benchmark in the past?
  • Are the students scoring below benchmark in need of intervention? Could this be a universal instruction issue? Are some of them students who received intervention in the past and may have still needed some support?
  • What needs to happen to fix the issue for good? Are there universal instruction improvements to be made? Are teachers using the most effective, research-based interventions, implemented with fidelity?

Report Calculation

The indicator is expressed as a percentage. The denominator of the calculation is the number of K–6 students who were screened in both screening windows and scored at/above benchmark in the starting window on the selected default assessment. The numerator is the number of those students who also scored at/above benchmark in the ending window.
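
For example (illustrative numbers only): if 300 students were screened in both windows and scored at/above benchmark in the starting window, and 270 of them also scored at/above benchmark in the ending window, the indicator would be 270 ÷ 300 = 90%.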

Other

This report only reflects students remaining at/above benchmark. It will not capture information about students who remain below benchmark, or about students who stay at/above benchmark but improve or decline in performance within that range.

See Also

Using Healthy Indicator #4: K-6 Maintained

Indicator

Healthy Indicator #6

This report shows the percentage of students moving from below benchmark in one screening window to at/above benchmark in a subsequent screening window on the selected default assessment.

A healthy school is able to provide students at risk with strong universal instruction as well as evidence-based, effective interventions, improving their performance and moving them out of risk status.

What is the big idea?

Are our interventions effectively closing gaps? Are students receiving intervention moving from at risk to low risk? If interventions are not closing the gap, a next step would be to learn which interventions are most and least effective.

Why is it important? 

Lower percentages indicate that, of the students who are below benchmark, relatively few are improving enough to move out of the risk category. This is an indicator of the effectiveness of the intervention programs in use. Interventions should be able to help students become successful over time. While the gap may not close within a single screening period, some students should be improving to the point that they reach or exceed benchmark.

What should we ask ourselves as we review this data?

What is the percentage of students moving to at/above benchmark across screening windows in your school, district, or AEA?
  • Use the “view by” and filter features to look for trends across groups. Are there some grades, schools or subsets of students that seem to have better or worse percentages?
  • Are the students who scored below benchmark receiving interventions? Are the interventions high quality, evidence based interventions? Are they being delivered as intended? Are teachers reviewing the progress monitoring data regularly and intensifying, altering or replacing interventions that are not closing the gap as indicated by the PM data?
  • What system steps can be taken to improve outcomes? Are teachers entering intervention plans in Student Success to enable data analysis? Are they using the intervention evaluation functions to review and compare intervention efficacy?

Report Calculation

The indicator is expressed as a percentage. The denominator of the calculation is the number of K–6 students who were screened in both screening windows using the selected default assessment and scored below benchmark in the starting screening window. The numerator of the calculation is the number of those students who scored at/above benchmark in the ending screening window. Students who are missing a score in either screening window are not included in the report.
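
For example (illustrative numbers only): if 80 students were screened in both windows and scored below benchmark in the starting window, and 44 of them scored at/above benchmark in the ending window, the indicator would be 44 ÷ 80 = 55%.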

Other

Do not be discouraged by the 65% goal, which was set relatively arbitrarily. The key is to consider whether fewer or more students are moving to at/above benchmark over time. This is a global indicator of the effectiveness of the entire intervention process.

See Also

Using Healthy Indicator #6: K-6 Growth Report

Additional Healthy Indicator Resources:

Still need help? Contact Us