|What||A concordance table that translates scores from national or regional assessments into scores on international assessments.||An AMPL module calibrated to the MPL is inserted either as an additional booklet or by running parallel assessments.||Matches definitions against the MPL descriptor using expert judgment and, under certain conditions, allows those assessments to be aligned across countries.|
|Items/test||Different assessments||Same AMPL module across different assessment programmes||Different assessments|
|Calibration||Calibration requires several steps||Sufficiently accurate to report on the MPL||Depends on the assessment programme|
|Alignment with Global MPL||Yes, but needs standard setting to define accurate alignment.||Yes||Depends on alignment and sufficiency of items|
|Sufficient # of items||Yes||Yes||Depends on each assessment tool|
|Measures skills continuums||Yes||Not yet, but possible with current developments||Depends on each assessment tool|
|Track progress over time||Yes||Yes||Not clear; depends on quality of tools|
|Frequency||Cycle depends on each assessment||On demand||Once per assessment||n/a|
|Output||Concordance table||Calibrated to the MPL||Identifies the MPL cut-off points||Identifies the MPL cut-off points|
|How||Relies on countries participating in two assessments: students take both assessments so that the results can be linked.||Insert the booklet either as a standalone test running in parallel or as a rotating booklet.||A group of experts provides judgment about each item on the test and sets initial cut scores based on their understanding of the proficiency levels and the student population.|
|Country ownership||Very low||High||High||Medium|
|Needs||Tests with enough items to support linking.||A tool built with items that are aligned and sufficient to measure the MPL.||A good-quality cognitive tool and procedures, plus strong alignment of assessment tools to the GPF.|
|Pros||Technically rigorous||Technically rigorous||Cost-effective|
|Cons||Costly; more efficient when done between a regional and a global assessment.||Does not allow deep investigation of the construct.||Relatively subjective (less so for pairwise comparison). Depends on the quality of the assessment tool and the implementation of the linking process.|
|Achieved so far||Rosetta Stone: ERCE (LAC) and PASEC (SSA) participated with the IEA in the Rosetta Stone exercise.||AMPL-b administered; AMPL-c under development (PISA); AMPL-a under development.||First phase of pilots completed in around 16 countries.||Standard-setting exercise for MILO (ACER, 2022).|
|Next/remaining steps||Potential expansion to other regions and national assessments.||Scale-up depends on countries' interest and development partners' support.||Revision of the toolkit.||Methodology guidance and analysis.|
|National cost||Between US$250,000 and US$400,000.||Printing cost of a booklet; extra administration costs depend on modality.||Between US$30,000 and US$50,000 for a national workshop.||None|
|International cost||US$1 million per region for an international assessment; US$500,000 for a regional one.||Averages US$100,000 for technical assistance.||Between US$50,000 and US$75,000 per country.||US$40,000|
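The expert-judgment approach in the table (experts rate each item and set initial cut scores) can be illustrated with a simplified Angoff-style calculation. This is only a sketch of the general technique, not the procedure used by any programme in the table; the function name, the ratings, and all numbers are hypothetical.

```python
# Simplified Angoff-style standard setting (illustrative only).
# Each expert judges, for every test item, the probability that a student
# exactly at the Minimum Proficiency Level (MPL) would answer correctly;
# the cut score is the average across experts of their expected raw scores.

def angoff_cut_score(ratings: list[list[float]]) -> float:
    """ratings[e][i] = expert e's probability that a minimally
    proficient student answers item i correctly."""
    expert_totals = [sum(expert) for expert in ratings]  # expected raw score per expert
    return sum(expert_totals) / len(expert_totals)       # mean over experts

# Three hypothetical experts rating a five-item test:
ratings = [
    [0.9, 0.7, 0.6, 0.4, 0.3],
    [0.8, 0.8, 0.5, 0.5, 0.2],
    [1.0, 0.6, 0.6, 0.3, 0.3],
]
print(round(angoff_cut_score(ratings), 2))  # → 2.83 (raw-score cut point)
```

In practice the initial cut scores would then be discussed and revised over several rounds, as the workshop-based process in the table implies.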
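As an illustration of the concordance-table approach (translating national or regional scores into scores on an international assessment), the sketch below performs a minimal equipercentile linking: a national score is mapped to the international score at the same percentile rank. The score distributions and function names are hypothetical assumptions, not the actual Rosetta Stone methodology.

```python
# Minimal equipercentile linking sketch (illustrative only).
import bisect

def percentile_rank(scores, x):
    """Fraction of test-takers scoring at or below x."""
    s = sorted(scores)
    return bisect.bisect_right(s, x) / len(s)

def concordance(national_scores, international_scores, x):
    """Map national score x to the international score occupying
    the same percentile rank in its own distribution."""
    p = percentile_rank(national_scores, x)
    s = sorted(international_scores)
    idx = max(int(round(p * len(s))) - 1, 0)  # position at that rank
    return s[idx]

# Hypothetical distributions: national scores 0-99, international 0-198.
national = list(range(100))
international = [2 * i for i in range(100)]
print(concordance(national, international, 49))  # → 98
```

A full concordance table would tabulate this mapping for every attainable national score, which is the "output" the first column of the table refers to.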
Resources:
- Minimum Proficiency Levels used to report on indicator 4.1.1
- Global Content Framework
- GPF for Reading
- GPF for Mathematics
- SDG 4 Global Tables 2021
- Zambia Conference Documents
- Rosetta Stone Study
- MILO: Monitoring Impacts on Learning Outcomes