
Please remember the leaders



The December 2006 issue of NASSP Bulletin has an article by Drs. Chien Yu and Vance A. Durrington, assistant professors at Mississippi State University, on practicing and preservice administrators’ perceived levels of proficiency on ISTE’s National Educational Technology Standards for Administrators (NETS-A). The researchers surveyed 57 aspiring administrators and 16 practicing administrators who were serving as their mentors. The professors asked the respondents to rate themselves on a Likert scale from 1 to 5 regarding their perceived ability to perform the NETS-A standards and performance indicators. So what did the researchers find out?

In the researchers’ own words, “there were no significant differences between mentors’ and mentees’ perceived ability to the criteria for each standard.” That’s another way of saying that the folks mentoring the administrator newbies said that they knew no more than the folks they were mentoring when it came to technology-related leadership issues. Ouch.

The other interesting finding was that both the mentees and the mentors rated themselves above average on every standard. For example, the mentees’ average responses ranged from 3.32 to 3.75 across the six standards, while the mentors’ averages ranged from 3.21 to 3.78. The overall average for the mentees (3.61) was slightly higher than that of the mentors (3.58).
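As a side note for readers who want a concrete sense of what “no significant differences” means here, below is a minimal sketch of the kind of comparison involved, written in Python with scipy. The ratings are simulated, the group sizes are the only detail borrowed from the article, and the choice of Welch’s t-test is my own assumption; the article does not say which test the authors actually ran.

```python
# Illustrative only: hypothetical Likert ratings, not the study's actual data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 1-5 Likert self-ratings for 57 mentees and 16 mentors,
# centered near the overall means reported in the article (3.61 and 3.58).
mentees = np.clip(rng.normal(3.61, 0.6, size=57).round(), 1, 5)
mentors = np.clip(rng.normal(3.58, 0.6, size=16).round(), 1, 5)

# Welch's t-test, which does not assume equal variances across the two groups.
t_stat, p_value = stats.ttest_ind(mentees, mentors, equal_var=False)
print(f"mentee mean = {mentees.mean():.2f}, mentor mean = {mentors.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # a p above .05 is what "no significant difference" means here
```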

Now, the number of participants in this study is small, particularly on the mentor side, but the results are nonetheless both depressing and inspiring. Depressing because there’s not going to be much good mentoring occurring on the technology leadership front when the mentees know at least as much as the mentors. And, of course, inspiring because it’s good to know that there’s above-average technology leadership occurring in these participants’ schools. You know it is because they said it was.

This study raises a few thoughts in my mind:

  • First, never rely on participants’ self-ratings as evidence of anything, at least not without corroborative evidence from elsewhere. For example, in our study of data-driven decision-making readiness here in Minnesota, we asked similar questions of teachers, principals, and superintendents. If teachers said they were using formative assessments for student progress monitoring, we wanted to check their responses against what the administrators in their district said. If administrators said they were modeling data-driven practices for staff, we wanted to see what the teaching staff thought. You get the idea. In this study the authors didn’t comment on the validity of the averages; they simply reported them. So at least they didn’t make the same mistake that was made in another NASSP article, in which the researcher asked principals to rate themselves and then used those self-reports to conclude that “principals with technology training are stronger leaders in this realm.” Again, just because participants say it doesn’t make it true.
  • Second, I’m guessing that the finding of no significant differences between practicing and preservice administrators probably is pretty accurate, despite the methodological concerns I have raised. I personally think that honest, accurate self-reporting would have resulted in much lower averages, but my own experiences working with schools show that teachers (the pool from which most aspiring administrators are drawn) know at least as much about technology leadership as principals. Whatever leadership knowledge advantage principals have is typically outweighed by the technology advantage that teachers, who are generally younger, often possess. The bottom line, however, is that neither group knows much, because neither group has received much training.
  • As I near the end of this post, I’ll give a nod to our own Principals Technology Leadership Assessment (PTLA). As far as I know, the PTLA is still the nation’s only psychometrically validated assessment of principals’ technology leadership inclinations and activities. The PTLA was created to help assess principals’ actual behavior (as opposed to simple self-ratings of proficiency). Although the PTLA also is based on self-perceptions, the questions ask principals about their frequency of action, not just how good they think they are. This was the closest we and the American Institutes for Research could get to a performance assessment without direct observation, portfolio collection, and the like. If you’re interested in using the PTLA, contact me. The PTLA is free to K-12 school organizations and educational leadership preparation programs; we’ll even host the online survey for you.

Happy New Year, everyone. As you work with schools on various technology-related issues, please remember the leaders. As I said in my very first post at Dangerously Irrelevant, sustainable success in schools never occurs without effective leadership. There are innovative, technology-using educators in almost every school and district, but their potential impact runs smack into the brick wall of their administrators’ lack of knowledge and/or training. We need more effective technology leaders in formal administrative roles like principal or superintendent. We need them now.

This post is also available at the TechLearning blog.
