Cross-Subject Change: General vs Subject-Specific

When a school's Reading scores rise, do Writing and Math rise too? This page decomposes year-over-year school-level changes into a general cohort factor and subject-specific residuals.

Sample: English-language schools with complete, non-suppressed data in all three years (2022–23, 2023–24, 2024–25) and where Math registered count does not exceed Reading registered count by more than 15%. The 2021–22 school year is excluded entirely. YoY pairs cover 2022–23 → 2023–24 and 2023–24 → 2024–25.
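As a sketch, the sample filter above could be expressed as follows. The record fields (`language`, `suppressed`, `math_registered`, `reading_registered`) are hypothetical names for illustration, not the dataset's actual columns.

```python
# Sketch of the sample filter (field names are assumed, not the real schema).
YEARS = ("2022-23", "2023-24", "2024-25")  # 2021-22 is excluded entirely

def keep_school(records_by_year):
    """records_by_year maps a school year to that year's record, e.g.
    {"2022-23": {"language": "English", "suppressed": False,
                 "math_registered": 100, "reading_registered": 100}, ...}"""
    # Complete, non-suppressed data required in all three retained years
    if not all(y in records_by_year and not records_by_year[y]["suppressed"]
               for y in YEARS):
        return False
    for y in YEARS:
        rec = records_by_year[y]
        # English-language schools only
        if rec["language"] != "English":
            return False
        # Math registered count must not exceed Reading registered by > 15%
        if rec["math_registered"] > 1.15 * rec["reading_registered"]:
            return False
    return True
```

A school passing all checks in all three years is retained; failing any check in any year drops it from the sample.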

Key findings

~75% (Grade 3) and ~67% (Grade 6) of change variance is explained by a single general cohort factor shared across Reading, Writing, and Math.

Reading–Writing: the Pearson correlation of year-over-year changes between the two subjects is strongly positive and consistent across both year pairs.

Literacy–Math: coupling is weaker in Grade 6 (r ≈ 0.43) than in Grade 3 (r ≈ 0.60).

Participation (r > 0.94): participation rate shifts are nearly identical across subjects — a key driver of the general factor.


Section 1: Scatter plots of cross-subject changes

Each dot is one school's year-over-year change (pp) in one subject plotted against another. The strong positive correlation confirms that most of a school's movement is in lockstep across subjects.


Section 2: Variance decomposition — General vs subject-specific

Principal component analysis on the joint distribution of (ΔReading, ΔWriting, ΔMath) extracts the first principal component, the direction capturing the most variance — which, because the subjects are strongly correlated, is dominated by their shared movement. Call this the general cohort factor (PC1). The remaining two components capture subject-specific divergence.

The general factor accounts for ~75% of G3 and ~67% of G6 cross-subject change variance. The two subject-specific components share the remaining ~25–33% roughly equally, meaning no single subject systematically diverges from the others — subject-specific variation is scattered.
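The decomposition amounts to an eigendecomposition of the covariance of the change triples. The sketch below runs it on simulated data (a shared factor plus independent subject noise), so the resulting PC1 share is illustrative, not the reported ~75%/~67%.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated YoY changes (pp): a shared cohort factor g plus independent
# subject-specific noise. Real inputs are the observed per-school triples.
n = 500
g = rng.normal(0, 3, n)                       # general cohort factor
deltas = np.column_stack([
    g + rng.normal(0, 1.5, n),                # ΔReading
    g + rng.normal(0, 1.5, n),                # ΔWriting
    g + rng.normal(0, 1.5, n),                # ΔMath
])

cov = np.cov(deltas, rowvar=False)            # 3x3 covariance of changes
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
pc1_share = eigvals[0] / eigvals.sum()        # variance explained by PC1
```

With these assumed variances (shared 9, noise 2.25 per subject), PC1's theoretical share is 29.25/33.75 ≈ 0.87; the two trailing eigenvalues split the remainder roughly equally, mirroring the scattered subject-specific variation described above.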

Why is G6 slightly lower? G6 Math–literacy coupling (r ≈ 0.43) is weaker than in G3 (r ≈ 0.60), likely reflecting greater specialisation in Grade 6 instruction and cohort-level Math trajectory differences.


Section 3: Year-to-year consistency

The general-factor structure is consistent across both year pairs (2022–23 → 2023–24 and 2023–24 → 2024–25): both show very similar cross-subject correlations, with no meaningful shift between the two transitions.


Section 4: Does school size change the structure?

If the general factor were merely a statistical artifact of correlated noise, small schools (high noise per subject) should show lower cross-subject correlations than large schools (low noise). They don't.

The SD of changes halves from small (n < 20) to large (n ≥ 50) schools — confirming that small schools are noisy. Yet the cross-subject correlations barely budge. This means the noise is correlated across subjects, not independent — see Section 5.
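The artifact argument can be checked by simulation: if per-subject noise were independent, tripling the noise (as the small-school SDs imply) would sharply attenuate the cross-subject correlation. All parameters below are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def cross_subject_r(noise_sd, n=20_000):
    """Correlation of two subjects' changes when each is a shared factor
    plus INDEPENDENT per-subject noise of the given SD."""
    g = rng.normal(0, 3, n)                   # shared cohort factor
    d1 = g + rng.normal(0, noise_sd, n)       # subject 1 change
    d2 = g + rng.normal(0, noise_sd, n)       # subject 2 change
    return np.corrcoef(d1, d2)[0, 1]

r_large = cross_subject_r(noise_sd=1.5)       # large schools: little noise
r_small = cross_subject_r(noise_sd=4.5)       # small schools: 3x the noise
```

Under independent noise, r drops from about 9/(9+2.25) ≈ 0.80 to about 9/(9+20.25) ≈ 0.31. Since the observed correlations barely move with school size, the small-school noise must itself be correlated across subjects.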


Section 5: Distribution of general factor vs subject-specific residuals

The general factor score (PC1) summarises the school's overall direction that year. The subject-specific residuals show how much each subject deviated from that shared trend.

The subject-specific residuals are centred on zero and symmetrically spread — consistent with genuine subject-specific effects plus independent noise, not systematic bias in any subject.
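A sketch of how the factor score and residuals are obtained, run here on simulated stand-ins for the observed change triples:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated (ΔReading, ΔWriting, ΔMath) per school: shared factor + noise.
n = 400
g = rng.normal(0, 3, n)
deltas = np.column_stack([g + rng.normal(0, 1.5, n) for _ in range(3)])

centered = deltas - deltas.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
pc1 = eigvecs[:, -1]                          # PC1 loading vector (unit norm)

scores = centered @ pc1                       # general-factor score per school
residuals = centered - np.outer(scores, pc1)  # subject-specific residuals
```

By construction the residuals are centred on zero and orthogonal to PC1; their per-subject spread is what the distributions in this section display.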


Section 6: Correlation matrices — full summary


Interpretation and implications

The general cohort factor is real and large. ~75% (G3) and ~67% (G6) of school-level year-to-year change variance is shared across all three subjects. A school that improved in Reading almost certainly improved in Writing and Math too. These figures come from a clean sample: schools with complete, non-suppressed data across three consecutive years and without anomalous Math-vs-Reading registration gaps.

Two mechanisms contribute:

Real cohort/school effects — changes in staffing, leadership, demographics, or the cohort's prior preparation affect all subjects. The general factor cannot be fully explained away by participation alone.

Shared participation shifts — participation rate changes are nearly identical across subjects (r > 0.94), moving all three scores in the same direction at once.

Practical implications for interpreting school scores: