Validation and Authority in a Web 2.0 World

By far, the most interesting discussion of the week for me started with a simple tweet:

Did I ever share this data literacy survey that I created with you? http://bit.ly/4Clagn It's from my book on building PLCs. about 17 hours ago from TweetDeck

My goal was pretty direct: I wanted to point people to a tool that helps learning teams measure their comfort with using data to drive instruction.

I created the survey with my learning team in mind because we’ve always struggled to figure out just what to do with data, and I’ll bet that other teams are in the same boat.  The way I see it, if my experiences and resources can help other teams, life is grand. 

Jon Becker—an assistant professor of Educational Leadership at VCU, a provocative, well-respected scholar, and one of the people following my posts on Twitter—replied with a question that got me thinking:

@plugusin sorry if I sound "professorly," but have you validated the instrument at all? about 15 hours ago from TweetDeck in reply to plugusin

The conversation that followed was pretty fascinating.  First, I tried to explain to Jon that teachers aren’t hung up on validated tools.  Instead, we’re looking for quick and easy resources that we can use immediately to start conversations with each other. 

Here’s a piece of that conversation.  For those who aren’t tweeting yet, my comments to Jon are the ones addressed to @jonbecker.  His replies to me are addressed to @plugusin:

@jonbecker : Professorially validated? No. Validated based on years of experience actually doing the work? Yes. about 13 hours ago from TwitterBerry

@plugusin no, psychometrically validated. How do you know you're measuring what you're claiming to measure? about 13 hours ago from TweetDeck in reply to plugusin

@jonbecker The tool isn't for research. Instead, it's for gathering information at the team/school level for setting direction. about 13 hours ago from TwitterBerry

@plugusin information comes from data, so the information you end up with is only as good as the data collection process. about 13 hours ago from TweetDeck in reply to plugusin

@jonbecker Waiting for validated tools developed by "experts" has never helped our team move forward. about 13 hours ago from TwitterBerry

@plugusin there are existing instruments that have been validated for the same purposes, including ones developed by @mcleod about 13 hours ago from TweetDeck in reply to plugusin

@jonbecker : I think you're over estimating exactly what teams do with these tools. They aren't about research. They're about conversations. about 13 hours ago from TwitterBerry

Over time, I started to wonder out loud whether the concept of validation is changing in the Web 2.0 world.  To traditionally trained scholars, school leaders, and experts, validation means putting your work through a semi-scientific review process that—as you can imagine—is time-consuming and intimidating, covering questions like:

  • Does this test accurately measure the surveyed domain?
  • Does this test measure the range of behaviors possible within the surveyed domain?
  • How closely does this test correlate with other measures that it should theoretically correlate with?
  • Can the creator of this survey demonstrate that it is valid by relating it to another measurement known to be valid?
  • Has the survey creator controlled for any extraneous variables that may interfere with validity?
  • To what degree can causal inference be generalized from the sample studied to the target population of the survey?
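To make the jargon concrete, here is a minimal sketch of one statistic that commonly comes up in this kind of psychometric work: Cronbach's alpha, an estimate of a survey's internal consistency (whether the items hang together well enough to be measuring one construct). The data below is entirely hypothetical and is only an illustration of the process Jon is describing, not anything drawn from my survey:

```python
# Minimal sketch (hypothetical data): Cronbach's alpha, a common
# internal-consistency statistic used in psychometric validation.
from statistics import pvariance

def cronbach_alpha(responses):
    """responses: list of rows, one per respondent, each a list of k item scores."""
    k = len(responses[0])
    items = list(zip(*responses))                      # one column per survey item
    item_vars = sum(pvariance(col) for col in items)   # sum of per-item variances
    total_var = pvariance([sum(row) for row in responses])  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses from four teachers on three items.
sample = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
]
print(round(cronbach_alpha(sample), 3))  # → 0.975
```

A conventional rule of thumb treats alpha above roughly 0.7 as acceptable for this kind of instrument, though that threshold is itself a matter of debate among psychometricians.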

As a result, teachers and other “lay people” have traditionally been locked out of the community of creators. 

Because we didn’t have the sophisticated understanding of the steps involved in making our work “valid” as defined by authorities, we were left to sit on the sidelines taking direction from those “in the know.”  And because our society has always been driven by hierarchies, we had no chance of becoming recognized experts with worthwhile ideas to share. 

Times are changing, though:

@jonbecker :  I also wonder if our definitions of validation are changing. Someone's got to like the stuff I create. Otherwise they wouldn't follow me. about 13 hours ago from TwitterBerry

@jonbecker :  That's the beauty of digital tools. My ideas/materials stand on their own merits. I don't need anyone else's permission to publish them. about 13 hours ago from TwitterBerry

@jonbecker :  If you like what I have to offer, great! If you don't, great. But at least I can be heard too—and at least you have the choice to listen. about 12 hours ago from TwitterBerry

@jonbecker :  For classroom teachers, that's pretty empowering, actually. I don't need a doctorate or a principal's license to be considered credible. about 12 hours ago from TwitterBerry

@jonbecker :  I live in a check-your-title-at-the-door world, Jon. Credibility comes from producing ideas that make my work easier… about 11 hours ago from TwitterBerry


Interesting stuff, huh?  This is a conversation that still has me thinking.  Here’s what I’m wondering:

  • Are there any real risks associated with using surveys and tools that haven’t been psychometrically validated?  I’m pretty confident that teams and teachers are going to use any tool that they find to be valuable, whether it has been tested or not.  Does that carry consequences that I haven’t considered?
  • Has “empowering the masses” actually harmed education?  I can’t even begin to explain how much I learn every day from people who have no formal credentials.  Social media—blogs, Twitter, Skype, Facebook—give me instant access to minds that I wouldn’t have ever found before.  But in a sense, social media cheapens the formal knowledge earned through systematic study of classical practices and texts.  Will that carry any consequences that I haven’t considered?
  • Do teachers stand a chance of ever being seen as the intellectual equals of those who have “advanced” within the existing system?  Even as digital tools make it possible for teachers to raise their voices, our P-20 school systems still function as hierarchies—and in those hierarchies, teachers remain at the bottom with little organizational power.  Does that mean our contributions will always be questioned by those who have “moved up”?  If so, what consequences does that carry for our profession?

Looking forward to your responses….

16 thoughts on “Validation and Authority in a Web 2.0 World”

  1. TeachMoore

    I agree with Cristine (and thanks for sharing this discussion, Bill). One of the much-touted potentials of Web 2.0 technology is that it could bridge the gap between researchers and practitioners in the field. Imagine the possibilities when our best educational research minds are examining and working with the tools and data developed in real classrooms (as opposed to for them–where teachers and their students become just test kitchens for somebody else’s recipes).
    Also, let’s not forget there is a tradition of teacher researchers–classroom teachers who also do academic research on their own practice (and vice versa). This breach between theory and practice has been a false and harmful dichotomy in education for too long.

  2. Cristine Clarke

    Imagine a student wrote a paper with a thesis that he or she really, really believed was true, but all of the evidence came from the experiences of a few friends. How would you grade it? Here’s the real opportunity, putting aside the ivory tower and the local politics: can the web give practitioners quick access to the validation process?

  3. Mike H

    Bill,
    We have two methods in my school system for sharing work with students: an LMS and a Virtual Share. Each has its benefits and drawbacks. The district is planning on eliminating one of the methods, which would be unpopular. So I simply sent out a poll asking what teachers used. It was an objective question (“Which do you use?”), and they could choose tool A, tool B, or both. Anyway, long story short, I was then told that I cannot just send out polls like that, that they need to go through our data department first. Makes no sense.

  4. Cary Kirby

    Bill,
    You definitely raise some really interesting questions at the end of your post. To me, the conversation about validation leads to even bigger “fish” out there: is validation another tool to keep the hierarchy of power in place? That doesn’t mean I disagree with the posters who noted the benefits of peer review with regard to things like the anti-vaccine movement, but I do wonder if we are really on the cusp of realigning the power structure of schools.
    Have you read “Disrupting Class” yet? Many of your thoughts and questions (both here and in other posts) make me think that you would find it intriguing.

  5. Bill Ferriter

    Wow, guys….this has turned out to be a really interesting comment strand. Your contributions are all incredibly valuable to me—they’re helping me push my thinking on this topic.
    I think you’re echoing what I feel: That there is a real tension between using what works—best evidenced in Kevin’s comments about his Swiss Army knife—and what has been scientifically proven to be worthwhile—best evidenced in Matt’s assertion that our observations are meaningless.
    I guess my real frustration is that my observations are considered meaningless! There’s got to be some value in what I know—even if that knowledge is anecdotal. After all, my guess is that anyone studying data literacy in a formally valid way would come to the same conclusions that I have.
    Then, they’d get credit and be celebrated as experts even though they’ve brought nothing new to the table.
    Does any of this make sense? I guess what I’m fighting for is professional respect. I’m tired of being told that my thoughts and ideas aren’t valid.
    Bill

  6. concretekax

    Wow, first of all I was lurking in Twitter during this conversation. Although I really did not care about your survey, I found the discussion about validation fascinating. I tend to side with your take about the democracy of ideas now afforded us by the web.
    After you dropped out of the discussion, @smartinez and I asked Jon Becker what makes a survey “valid.” He explained that it is a process carried out through peer-reviewed journals. He also said, “peer-review is flawed,” “peer-review is typically carried out through journals. That’s a highly flawed system,” and “these days, ideas should be published in the wide open and subject to peer-review by anyone. Just my opinion.”
    So I do not believe that Jon Becker is necessarily against your position. He has agreed to do a session explaining the process of “validating a survey.”
    I blogged about this conversation the other day including the opportunity to introduce another teacher from my building to Twitter at http://concretekax.blogspot.com/2009/12/twitter-addict-confession.html
    The funny thing to me is that I have been following you on Twitter for a while and reading your blog, but I never knew that Bill Ferriter was @plugusin. Thanks for starting a great discussion.

  7. twitter.com/mrscienceteach

    As a teacher who started my life in the world of life sciences, my only fear of the “Here Comes Everybody” shift in education is that some ideas need to be tested before they should be accepted. In science, we use the tenets of peer review to build trust in the conclusions that come out of our experiments.
    Pseudoscience followers (the anti-vaccine movement and intelligent design proponents to name a few) use the democracy of the internet to promote untested hypotheses without merit. In some situations, the processes put in place to validate data serve a necessary purpose.
    Like Parry, however, I see that simple data collection devices like yours don’t seem to need the lengthy validation process. Let’s just not throw out the scientific process in the name of convenience.
    Paul

  8. Kevin Karplus

    The key question is whether a tool does what you want, even if it was not intended for the purpose (I use the corkscrew on my Swiss Army knife to clear wax out of candleholders, for example).
    The problem in education research is that there have been a lot of charismatic snake-oil salesmen getting money and fame from things that demonstrably don’t work. The children who have been mis-educated as a result of following their lead have suffered.
    Teachers can and should try a variety of tools, keeping the ones that seem to work and quickly discarding the ones that seem to be harmful. People making recommendations about tools should be more careful, as they can affect many more children.

  9. Matt

    I think we see here the difference between educators with STEM backgrounds and those with elementary education or humanities backgrounds. Over and over again in science education we learn that personal experience, observation, and anecdotes are flat-out wrong. No matter what we see, believe, and observe each day, that does not make it right, true, or even beneficial. (The sun does revolve around the earth; how could it not? Just look at it!) Sometimes we need to admit to ourselves that what we see going on in front of us is not actually what is going on.

  10. Parry

    I see both sides of this.
    On the one hand, practitioners have limited time. It is difficult to find a tool that answers the questions you want answered, is inexpensive (or free), and is available right now, much less a tool that has also gone through a psychometric validation process.
    On the other hand, K-12 education is full of tools and resources that are unlikely to accomplish what they are intended to accomplish. As one example, think of all the time and money put into “learning styles” workshops and survey instruments, when there is little to no scientific basis supporting the “learning styles” approach.
    But is education any different from any other field? Hands-on work is messy, with no easy answers and no clear paths toward success. Yes, high-quality research is invaluable in suggesting approaches and methodologies that may be more likely (or less likely) to lead to positive outcomes, and K-12 education would be well served to consider high-quality research in decision-making processes. But in the meantime, practitioners have to work with what they have, and it simply makes sense to me for practitioners to share those tools that they have found to be valuable.
    Parry

  11. Chris Wondra

    Wow. Fascinating discussion. I read this piece through the lens of implications for “action research.” We love action research–which results in valuable feedback and discussions for teachers, but, as you say, may never be recognized as “valid” by experts.

  12. Larry Ferlazzo

    Bill,
    I really like this post. I think research and data are important and useful, and I do my own in the classroom (though it certainly wouldn’t stand up to “academic” standards).
    But for some in the world of academia, it seems like real-life experience counts for very little, and there also appears to be minimal acknowledgment that research and data can be easily manipulated.
    My book on teaching English Language Learners is coming out in April. There’s tons of research cited in it, but more than one publisher had it reviewed by academics who said it needed even more. I wanted it to be accessible and actually used by teachers, so I pulled it and went to a publisher who “got it.”
    I’m not suggesting it has to be an either/or situation — both experience and research have their place. I know there are academics who agree. I just wish there were more of them.
    Larry

Comments are closed.