In this two-part series, education consultant and former headteacher Daniel Taylor discusses the changing data landscape in UK schools.
Since the advent of league tables over 20 years ago, data has been foremost in school leaders’ minds and integral to their decision-making. The need to identify how well a school is performing in terms of attainment and progress, alongside any exclusion and attendance data, has provided the core evidence for its accountability measures. Data has so often been the cornerstone of decisions around personnel, funding and budgets and, as such, has played a fundamental role in schools – whilst causing many a sleepless night for school leaders. But has the elevated role of data now started to diminish?
For many years, I was the ‘go-to’ person for staff unsure about any aspect of performance- or progress-related data. As a former headteacher now working as an education consultant, I frequently speak to school leadership teams. Since the inception of the new inspection framework, a significant majority of headteachers appear to agree on two key issues: firstly, their apparent isolation from the process beyond the initial briefing conversation; and secondly, as I will briefly focus on here, an unwillingness from inspection teams even to contemplate the inclusion of internal data in conversations about attainment and progress.
The move to progress-based key performance indicators was viewed by many in education as a big step in the right direction – a fairer measure that allowed schools with less able cohorts, in particular, to demonstrate their effectiveness in delivering a high-quality education to their youngsters, even if achievement outcomes weren’t exceeding national thresholds. As a consequence, schools consistently strove to demonstrate first ‘expected’ levels of progress and then positive Progress 8 scores. Of course, the difficulty of predicting this accurately (or the impossibility, in the case of Progress 8) meant that the validity of such data was often called into question.

Little wonder, though. The muddied waters of ‘life after levels’ gave schools the ‘freedom’ to devise their own methods of gauging progress. Although welcomed by some, this freedom provided little scope for comparison, was time-consuming for staff and required a re-education of staff, governors and students alike. Although the cynics amongst us would have little sympathy for inspection teams, it doesn’t take a genius to appreciate the difficulties they encountered in moving from one school to the next and trying to interpret what was put before them. Furthermore, what incentive was there for any school, on the cusp of inspection, to present a poor set of figures? For those who spent many hours, weeks and even months sweating over new, comprehensible tracking systems, this move away from internal data might come with considerable frustration (or perhaps, in some cases, relief).
But why the move? Put simply, one key reason, I believe, was that the validity of the data was too often called into question. One wonders how often school inspectors have waited as nervously as headteachers for August results day, hoping that outcomes would support their judgements. (I remember one of my previous headteachers who, upon receipt of the school’s very positive summer GCSE results, immediately emailed the lead inspector from the school’s spring Section 5 Ofsted inspection!) For thousands of teachers, the drive towards performance-related (that is, data-related) pay had the potential to cause considerable hardship; perhaps the shift in focus to curriculum and teaching will provide for fairer reward?