My (Current) Data Nightmare

So, I may be playing a familiar tune here, but I’m tired of being asked to work in new ways without being given the knowledge, skills, time and tools to complete new tasks.  It’s the whole reciprocal accountability soap-box that I jump on a dozen times a year. 

In this case, I’m struggling with a common nemesis:  Using data to drive my decision-making.  Now, in theory, I completely embrace the thought of using concrete, tangible evidence of individual student learning at the skill level to:

    1. Identify students in need of enrichment and remediation.
    2. Identify and then amplify effective instructional practices across an entire hallway. 

Makes perfect sense to me.  Fits the “work smarter, not harder” ethos that defines the best practitioners in every field.  Ensures that every child has a learning experience tailored to their individual needs.  Heck, I’ve even written chapters in two different Solution Tree anthologies on assessment!

But I’m also frustrated with the lack of efficient tools to collect, manipulate and disaggregate student learning data at the skill level. 

You see, like many teachers, the vast majority of my efforts to collect and analyze student learning data continue to start and end with the sticky notes that I use as exit cards several days a week: a decidedly low-tech, cost-effective system when you look at the materials required for implementation, but a miserable slog when it comes to trying to make sense of learning trends and patterns across an entire team of students over a nine-week period.

Even the digital solution provided by my district to collect data and analyze learning results struggles to make my work more efficient. 

Called Blue Diamond, the system has a ton of potential.  Common formative reading assessments are available for every middle grades student in every class in our 130,000 student district.  Delivered every three weeks, these assessments do a reasonable job focusing my instruction on the kinds of specific skills that students need to learn in order to be effective readers. 

They’re short—so they don’t consume a ton of my class time—they’re automatically scored by a Scantron machine—giving me instant access to results—and they’re generally well written, so I believe in the learning trends that I spot.

Heck, I’ll even readily admit that until Blue Diamond was introduced in my district, my reading instruction was just plain appalling!  While I’m sure that my students loved to read, there was very little that was systematic about my teaching.  Blue Diamond changed all that by giving me a clear picture of exactly what my students were supposed to be learning.

The problem is that spotting learning trends that I can act on is darn near impossible because Blue Diamond reports student performance at the objective—instead of skill—level. 

Need an example of why reporting student performance at the objective level (which sounds pretty logical) is inefficient at best?

Check out the language of the three state objectives covered on the most recent Blue Diamond assessment that I gave to my students:

Reading, Grade 06 Objective 5.01
Respond to various literary genres using interpretive and evaluative processes
Increase fluency, comprehension, and insight (reading strategies; figurative language, dialogue, and flashback; plot, theme, point of view, characterization, mood, and style; distortion and stereotypes; underlying messages)

Reading, Grade 06 Objective 5.02
Respond to various literary genres using interpretive and evaluative processes
Study the characteristics of literary genres: fiction, nonfiction, drama, and poetry (novels, autobiographies, myths, essays, magazines, plays, pattern poems, blank verse; interpreting impact of genre-specific characteristics; exploring author’s choices; exploring impact of literary elements: setting, problem, resolution)

Reading, Grade 06 Objective 6.01
Grammar and language usage
Types of sentences, punctuation, fragments, run-ons; subject-verb agreement, verb tense; parts of speech, pronouns, prepositional phrases, appositives, dependent and independent clauses; vocabulary development (context clues, a dictionary, a glossary, thesaurus, structural analysis: roots, prefixes, suffixes); dialects, standard English

On the bright side—and I really do believe in the potential of this program as a tool for informing teachers—the reporting features of Blue Diamond allow me to instantly generate a list of students who struggled with and/or aced each of these three objectives.  I can break those reports down by student subgroup, I can create new break points allowing for more careful sorting, and I can watch progress in each objective area over time.

I can compare performance between the different classes that I teach.  I can see how other classes are performing in our school and I can see how students across the district are performing on the same objective.  I can even find out automatically how many questions individual students answered correctly under each objective.

Pretty impressive, huh? 

But because each objective covers about a dozen discrete skills, knowing mastery at the objective level is meaningless to me as a teacher!

If a child falls into the “needs improvement” section for Objective 5.01, I have no idea if it is because they’re struggling with point of view or characterization—two wildly different skills that require different kinds of remediation experiences—unless I go back to each identified child’s exams, manually look over their individual responses, and figure out the skills covered by the questions that they’ve missed. 

(What are the chances that already overworked teachers are going to actually tackle that task?)
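
To make the cost of that manual drill-down concrete, here’s a minimal sketch (in Python, with invented students, questions, and skill tags; none of this comes from Blue Diamond itself) of how an objective-level rollup collapses two very different learners into the same bucket:

```python
# Hypothetical data: two students' answers on Objective 5.01 questions,
# each answer tagged with the discrete skill the question targets.
answers = {
    "Student A": [("point of view", False), ("point of view", False),
                  ("characterization", True), ("mood", True)],
    "Student B": [("point of view", True), ("point of view", True),
                  ("characterization", False), ("mood", False)],
}

for student, results in answers.items():
    correct = sum(ok for _, ok in results)
    print(f"{student}: {correct}/{len(results)} correct on Objective 5.01")

# Both lines read "2/4 correct" -- the objective-level report treats these
# students identically, even though one needs point-of-view work and the
# other needs help with characterization and mood.
```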

The good news is that this is a seemingly easy digital problem to fix.  In a world where tagging has become the default way to sort any kind of content on sites that contain heaping cheeseloads of information, every single question in our district’s formative assessment system could be quickly tagged with the discrete skill that it is designed to assess.

Then, when student reports are generated, teachers could see the questions that students were missing and the skills that each question was designed to assess.  In my wildest dreams, a tag cloud could be generated where the skills missed most frequently were bigger than those students had no trouble with, providing visual cues that were quick and easy to pick up on.
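
For what it’s worth, the bookkeeping behind that kind of report seems straightforward. Here’s a rough sketch, again in Python with hypothetical question IDs and skill tags rather than anything from the actual Blue Diamond system, of how tagged questions could roll up into class-wide missed-skill counts and tag cloud font sizes:

```python
from collections import Counter

# Hypothetical tagging: each question ID maps to the one discrete
# skill it was written to assess.
question_skills = {
    "q1": "point of view", "q2": "point of view",
    "q3": "characterization", "q4": "mood",
    "q5": "figurative language",
}

# Hypothetical results: the question IDs each student missed.
missed = {
    "Student A": ["q1", "q2", "q4"],
    "Student B": ["q3", "q4"],
    "Student C": ["q1", "q2", "q5"],
}

# Roll missed questions up to missed skills across the whole class.
skill_misses = Counter(
    question_skills[q] for qs in missed.values() for q in qs
)

# Scale font sizes toward a 36pt maximum so the most-missed skill
# renders largest -- the quick visual cue a tag cloud provides.
biggest = max(skill_misses.values())
for skill, count in skill_misses.most_common():
    size = 12 + 24 * count / biggest
    print(f"{skill}: missed {count}x -> {size:.0f}pt")
```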

Most importantly, though, data—which is now too vague to be meaningful—would become instantly useful because teachers could quickly develop remediation experiences that target the kinds of skills students haven’t mastered yet. 

The bad news is that these kinds of changes are unlikely to happen in time to help me with the students in my classroom today. 

Don’t get me wrong:  Our district is constantly improving Blue Diamond, so I believe that someday we’ll be able to access information at the skill level—but until then, I’m stuck aggregating sticky notes if I want to find actionable trends in student learning outcomes.

And I honestly worry about the consequences of asking teachers to be “data driven” while failing to provide them with practical tools that make data action possible.  Are we turning teachers off to data by promoting an idea that is professionally responsible but nearly unmanageable? 

I don’t know the answers to my questions.  I don’t work with enough teachers to have a good perspective about the state of data-driven decision-making in our country. 

I just know that each time I sit down to crunch numbers with antiquated or inefficient tools, my heart gets a bit harder towards the whole process—and that is a frightening outcome with real consequences for the kids in my classroom.

7 thoughts on “My (Current) Data Nightmare”

  1. K. Borden

    Mr. Ferriter:
    Ah, thank you for the clarification. I know of which you speak :). Early in planning for this adventure, I broke the NC curriculum down for grades 4-9 onto 3×5 index cards. Then, I compared it to the curriculum guidelines of other states and those stated by various national education organizations. It was a grueling process, and those bullet points were immediately tagged with numerical identifiers (5.1 (1)1 for example).
    It is rather interesting that they did not take that extra step in developing the assessment to further tag the data generated. As the guidelines undergo revisions, hopefully your suggestion will be taken to heart in the production of associated updated assessments to match. A teacher with so many students to consider would be much better served by what you suggest.
    I wonder, though, if even this would satisfy the need you have. Those “skills,” while illustrative of competencies related to the objective in question, still don’t summarize for you whether a particular student (or subset of students) is struggling or excelling at a particular skill set or ability in general terms. It doesn’t provide you a snapshot that would help you recognize, without further analysis, whether a particular student appears to have more ease inductively or deductively (as one example).

  2. Tad Sherman

    Bill-
    Great information. I always love reading your blog. One of the things that stood out to me was the number of times you said “I can” as you discussed all of the things that can be done with data.
    As a person recently coming out of the classroom and moving into the role of assistant principal, I suppose the thing that I think of is how we can move from “I (the teacher) can” to “my administrators do.” Does that make sense?
    What I’m getting at is the idea that as school administrators we need to be crunching the numbers and giving you the data in a way that is easily read and understood.
    This means that you spend more time adapting your instruction based on data. You know…the idea of “Data Driven Decision Making”!
    Any other administrators out there? Is it realistic for us to be the number crunchers so our teachers can focus on instruction? I know I hope to be an administrator that can do that!

  3. Bill Ferriter

    K. Borden asked:
    Do you think a reason the Blue Diamond does not currently tag skills may be because they are so difficult to define, quantify, categorize and recognize?
    Good questions, K.—and proof that a shared vocabulary would be a great thing for moving conversations in education forward!
    In our state, “objectives” are generally really broad concepts—things like “uses interpretive and evaluative processes”—that require a broad set of “skills” to master and to assess.
    “Skills” are actually pretty easy to teach and to test because they are do-able. They include things like “can identify bias” and “can explain the impact of an author’s choices on readers.”
    In our curriculum guides, “skills” are listed as bullet points under “objectives,” which is awesome because it means that the heavy lifting (identifying the skills necessary to master objectives) has already been done.
    But our assessment tools only provide feedback at the objective level. While that may get teachers in the ballpark when trying to determine where to provide targeted remediation and instruction, the “ballpark” is huge, often covering a whole range of discrete skills.
    All that I need is for our existing products to be redesigned to provide us with feedback on the bullet points in the curriculum guide instead of on the objectives only, and it seems like that would require nothing more than tagging questions and finding a way to report out on tags.
    Does this make any sense?
    Bill

  4. Simon Oldaker

    Oh, dear. You’ve mentioned the elephant in the room. We are most of us under increasing pressure to tailor instruction. Also, this is to happen in an environment of specific, predefined objectives, but as you say, all this remains a pipe dream without good tools to manage “student learning data at the skill level”. This might be where the real revolution in schooling has to come.

  5. Dave

    I think you’re feeling the pain of trying to transition the entire world of education to be data-driven. Obviously, you’re ready to dive in, but it’s going to take years to get the entire boat moving in a new direction. For now, I guess we go data-driven when we can, and push for improvement when the tools and people aren’t yet ready for it.

  6. K. Borden

    Mr. Ferriter:
    It does seem rather intuitive. Your suggestion to “tag” each question with the skill (objective competency) it seeks to assess extends the usefulness of the assessments to address individual student strengths and weaknesses. I am assuming you would also like the ability to sort and filter the data at the individual student level using various criteria.
    Your distinction between a skill as compared with an objective also deserves highlighting.
    Let me try my untrained hand at seeing if I understand this distinction. An objective is a curriculum goal (ex: recognition of figurative language). An objective is the strand of information the state establishes as a goal of the year’s instruction. A skill is the ability of the student to employ the “tools” they possess (experience, prior knowledge, reasoning, intuition, …) to approach and meet the curriculum goal (example: the student was more likely to miss than to correctly answer each question on the assessment that called upon them to make comparisons/contrasts). Am I understanding the general distinction?
    Do you think a reason the Blue Diamond does not currently tag skills may be because they are so difficult to define, quantify, categorize and recognize?
    I may be entirely misunderstanding what it is that you are suggesting: that the assessment could do a far better job of providing you data at the individual student level.
