NSW Selective Test Mock Scores — What They Really Mean (And Don’t Mean)
After your child finishes a practice paper, the first question is always the same: “Is that score good enough?” The honest answer is: it depends — and most of what you’ve been told about mock scores is misleading. Here’s what practice paper results actually tell you, what they don’t, and how to use them effectively.
This matters because families make real decisions based on mock scores — which schools to preference, whether to enrol in extra tutoring, and how much pressure to put on a ten-year-old. If the numbers you’re relying on are meaningless, those decisions are built on sand.
Why mock scores confuse parents
Every tutoring centre, practice book, and online platform uses different questions at different difficulty levels. A score of 75% on one set of papers means something completely different from 75% on another. There is no common scale, no shared question bank, and no agreed-upon difficulty standard.
This creates a problem. Parents naturally compare scores — across platforms, across children, and across years — but all of these comparisons are unreliable. Your child’s 72% on an Alpha One mock has no mathematical relationship to their cousin’s 68% on a Five Senses practice paper or their classmate’s 80% on a tutoring centre test. They are measuring different things.
There is no standardised benchmark outside the real test. The NSW Department of Education does not publish cutoff scores, calibrated practice materials, or any mechanism for parents to compare third-party results against the actual exam. Cambridge University Press & Assessment, which develops the test, has released one official practice test — and it comes without a scoring guide that maps results to placement outcomes.
Tutoring centres understand this gap and, in many cases, exploit it. Some calibrate their mock tests to make students look good — high scores keep parents happy and enrolled. Others calibrate them to make students look bad — low scores create anxiety and justify upselling more hours. Both approaches serve the centre’s business model, not your child’s preparation.
The Facebook groups are full of parents asking “Is 78% on the Alpha One mock enough for North Sydney Boys?” The truthful answer is that nobody knows — not the parents, not the tutoring centres, and not the platforms. The question itself is unanswerable because there is no bridge between any mock score and the real test outcome.
What mock scores CAN tell you
None of this means practice papers are useless. Far from it. Mock scores contain genuinely useful information — you just need to know what to look for and what to ignore.
Relative strengths and weaknesses
Your child’s score breakdown across subjects reveals which areas need more work. If they score 80% in Mathematical Reasoning but 55% in Thinking Skills, the Thinking Skills section should get more practice time. This is true regardless of the absolute difficulty of the paper.
The subject-level pattern is the most valuable information in any mock result — not the overall number, but the relative gaps between sections. A child who scores evenly across all four subjects is in a fundamentally different position from one who aces three sections and struggles with the fourth, even if their total percentages are identical.
Go deeper than subject totals where possible. Within Mathematical Reasoning, is your child losing marks on word problems, geometry, or data interpretation? Within Thinking Skills, are spatial reasoning questions the issue, or is it logical deduction? The more granular the breakdown, the more targeted the study plan. For a detailed guide on interpreting results by topic, see our guide to using practice exam results.
Progress over time
A child who goes from 55% to 72% over six weeks on the same platform is genuinely improving. The key phrase is “same platform.” When you track scores within a single source — same question style, same difficulty level, same marking approach — the trend line is meaningful even if the absolute number is not.
Compare scores within a single source, never across different ones. A drop from 80% on Platform A to 65% on Platform B does not mean your child has regressed. It means Platform B’s questions are harder, or different, or both.
A simple tracking approach: after each paper, record the date, paper name, and percentage per subject. Review the data weekly. You are looking for upward trends in weak areas and consistency in strong areas. If a subject stalls for two or three weeks, that is a signal to change the approach — more targeted practice on specific question types rather than more of the same.
Time management readiness
Finishing a section with five minutes to spare versus running out of time tells you something important about pacing. Consistently not finishing a section is almost always a pacing problem, not a knowledge problem. The student knows how to solve the questions — they just can’t do it fast enough under exam conditions.
If your child scores well on the questions they attempt but doesn’t finish the paper, the fix is straightforward: teach them to skip harder questions and come back to them. Spending four minutes on a single difficult question while leaving three easier questions unanswered at the end is a poor trade. This is a skill, and it improves with deliberate practice.
The NSW selective test is tightly timed — 35 questions in 40 minutes for Mathematical Reasoning, 40 questions in 40 minutes for Thinking Skills, 30 questions in 40 minutes for Reading. Your child needs to be comfortable with the pace before test day, and timed practice papers are the only way to build that comfort.
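If you want to see what that pace means per question, the arithmetic is simple. Here is a minimal sketch in Python, purely illustrative, using the question counts and 40-minute limits quoted above:

```python
# Per-question time budget for each section, using the question counts
# and 40-minute limits quoted above. Illustrative only.
SECTION_QUESTIONS = {
    "Mathematical Reasoning": 35,
    "Thinking Skills": 40,
    "Reading": 30,
}
TIME_LIMIT_SECONDS = 40 * 60  # every section runs for 40 minutes

for section, questions in SECTION_QUESTIONS.items():
    print(f"{section}: about {TIME_LIMIT_SECONDS / questions:.0f} seconds per question")
```

That works out to roughly 69 seconds per question for Mathematical Reasoning, 60 for Thinking Skills and 80 for Reading, which is why the skip-and-return rule discussed later in this article uses 90 seconds as its threshold: one long question quickly swallows the budget of the two that follow it.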
What mock scores CANNOT tell you
This is where the conversation gets uncomfortable, because these are the questions parents most want answered — and the ones that no mock score can address.
Whether your child will “get in”
No mock score can predict placement. The real test’s difficulty varies from year to year, the student cohort changes, and the standardisation method is not public. Around 15,000 students sit the test each year for approximately 4,000 places across 47 selective and partially selective schools. Your child is competing against a moving target — a different group of students, sitting a different test, scored by an algorithm that nobody outside the Department of Education has access to.
Even if your child scores 90% on every mock paper they attempt, that does not guarantee a place at any particular school. Conversely, a child scoring 60% on practice papers might perform significantly better on the actual test under real conditions. Mock scores and real outcomes are not the same measurement, and treating them as interchangeable leads to bad decisions.
A reliable cutoff score
Tutoring centres that claim “your child needs 85% to get into James Ruse” or “70% for Baulkham Hills” are guessing. They do not have access to official cutoff data because it does not exist publicly. The NSW Department of Education does not publish cutoff scores for any selective school. It never has.
Real placement outcomes depend on a complex mix of factors: the overall cohort’s performance that year, the number of applicants per school, the standardisation algorithm NSW Education uses to combine test scores with school-based assessment marks, and the specific preference list each student submits. No tutoring centre has visibility into any of these variables.
When someone gives you a specific cutoff number, ask them where the data comes from. If the answer is “our experience” or “based on past students,” that is anecdotal evidence with a heavy dose of survivorship bias — not data. For an honest comparison of the two most-asked-about schools, see our James Ruse vs Baulkham Hills comparison.
How your child compares to all other applicants
Mock results only compare your child to themselves over time, or to others on the same platform. They do not compare your child to the 15,000+ students sitting the real test, the vast majority of whom are using different materials.
A ranking of “top 5% on our platform” does not mean top 5% of all selective test applicants. It means top 5% of the self-selected group of students who happen to use that platform — a group that may skew stronger or weaker than the broader applicant pool. Selection bias makes platform rankings unreliable as a measure of anything beyond internal standing.
The only ranking that matters is the one produced by the NSW Department of Education after the real test. Everything before that is a proxy, and treating a proxy as the real thing leads to overconfidence or unnecessary panic.
Red flags: when someone “predicts” your child’s result
Be cautious when a tutoring centre or coaching provider claims to predict specific school placement outcomes. This is a strong claim, and it requires strong evidence — evidence that no centre can provide because the underlying data is not available.
The NSW Department of Education does not publish cutoff scores, scoring algorithms, or standardisation methods. The test itself changes every year. The cohort changes every year. Any prediction based on mock scores is, at best, an educated guess dressed up as expertise.
Some centres use predictions strategically. Watch for these patterns:
- Creating urgency: “Your child is borderline for Sydney Girls — you need two more sessions per week.” This turns a guess into a sales pitch. If the centre cannot prove the cutoff is real, the “borderline” classification is meaningless.
- Generating word-of-mouth: “We predicted James Ruse and she got in!” This is survivorship bias. You never hear about the predictions that were wrong. A centre that makes enough predictions will inevitably get some right — that is probability, not skill.
- Justifying premium pricing: Centres that charge significantly more often justify the cost with “personalised placement predictions” or “school-specific targeting.” If the prediction methodology is “proprietary,” ask yourself why it cannot withstand scrutiny.
Ask yourself: if a centre could reliably predict placement, why would it give that capability away as a marketing line? Genuinely predictive models are enormously valuable. The answer is that they cannot reliably predict placement — but the claim is effective marketing regardless.
Track Your Child’s Progress with Clear, Honest Data
Instant score breakdowns, question-by-question analysis, and detailed explanations. No fake predictions.
Try Free Papers
How to actually use practice paper results
Forget chasing a target score. There is no number that guarantees a place, and optimising for one creates the wrong incentives — children start guessing to inflate percentages rather than learning to reason through problems. Instead, focus on three actionable metrics that genuinely improve performance.
1. Percentage by subject
Which section needs more time? Allocate practice proportionally to weakness, not strength. If your child scores 80% in Reading but 50% in Thinking Skills, spending an extra hour per week on Thinking Skills will yield more total improvement than polishing an already-strong Reading score. This feels counterintuitive — children prefer practising what they are good at — but diminishing returns are real. The biggest gains come from moving a weak subject from poor to adequate, not from moving a strong subject from good to excellent.
2. Types of questions missed
Are the errors in spatial reasoning? Word problems? Inference questions? Data interpretation? The category of questions missed tells you what to practise, not just how much. A child who consistently loses marks on number pattern questions needs targeted practice on sequences and pattern recognition, not more general Maths papers.
After each paper, spend five minutes categorising the wrong answers. You do not need a fancy system — just note whether the errors were in computation, interpretation, vocabulary, spatial reasoning, or time pressure. After three or four papers, patterns emerge. Those patterns are your study plan.
3. Completion and time management
Did your child finish the section? How many questions were rushed at the end? How many were left blank? Pacing is a skill that improves with practice, and it is one of the highest-value improvements available in the final weeks before the test.
A child who answers 30 out of 35 Mathematical Reasoning questions and gets 25 right (83% accuracy on attempted questions) is in a very different position from a child who answers all 35 and gets 25 right (71% accuracy but 100% completion). The first child’s problem is speed; the second child’s problem is accuracy. The interventions are completely different.
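If you record results in a spreadsheet or a small script, it is worth keeping those two numbers separate instead of collapsing them into one percentage. A minimal sketch, using the two hypothetical children above:

```python
def paper_metrics(correct: int, attempted: int, total: int) -> dict:
    """Split a paper result into accuracy on attempted questions (an accuracy
    problem) and completion (a pacing problem), plus the overall score."""
    return {
        "accuracy_on_attempted": round(correct / attempted * 100),
        "completion": round(attempted / total * 100),
        "overall": round(correct / total * 100),
    }

# The two children from the example above: same overall score, different problems.
needs_speed_work = paper_metrics(correct=25, attempted=30, total=35)
needs_accuracy_work = paper_metrics(correct=25, attempted=35, total=35)
```

Both children end up on 71% overall, but the first shows 83% accuracy with 86% completion and the second shows 71% accuracy with full completion. The headline number is identical; the study plans are not.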
Building a simple tracking system
After each paper, record the following: the date, the paper name or source, the percentage per subject, the number of unanswered questions per subject, and two or three types of questions that were missed. A spreadsheet works. A notebook works. The format does not matter — consistency does.
Review the data weekly. Look for trends: is Thinking Skills improving? Is the number of unanswered questions decreasing? Are the same question types appearing in the “missed” column week after week? If a category of error persists for more than two weeks, that is your signal to change tactics — more of the same practice is not working and you need a different approach for that specific topic.
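If a notebook or spreadsheet is your preference, stop there. But the same weekly review can also be automated in a few lines. A minimal sketch, where the column names, sample numbers and the three-paper stall rule are illustrative assumptions rather than a prescribed format:

```python
from datetime import date

# One row per paper per subject: the same fields suggested above.
results = [
    {"date": date(2025, 5, 3),  "paper": "Set A #1", "subject": "Thinking Skills", "score": 55, "unanswered": 6},
    {"date": date(2025, 5, 10), "paper": "Set A #2", "subject": "Thinking Skills", "score": 57, "unanswered": 5},
    {"date": date(2025, 5, 17), "paper": "Set A #3", "subject": "Thinking Skills", "score": 56, "unanswered": 5},
]

def has_stalled(rows, subject, min_gain=3):
    """True if the last three papers in a subject moved by less than min_gain points."""
    scores = [r["score"] for r in rows if r["subject"] == subject]
    recent = scores[-3:]
    return len(recent) == 3 and max(recent) - min(recent) < min_gain

if has_stalled(results, "Thinking Skills"):
    print("Thinking Skills has stalled: switch to targeted practice on the missed question types.")
```

The exact threshold does not matter; what matters is that a flat line for two or three weeks triggers a change of approach rather than more of the same papers.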
A realistic framework for the final four weeks
Instead of chasing a “target score,” focus on four concrete goals that actually move the needle in the weeks before the test.
- Improve your child’s weakest subject by 10–15 percentage points. This is where the largest gains are available. Concentrated practice on the weakest area — even just 20 minutes per day — compounds quickly. A child who moves from 45% to 60% in Thinking Skills gains more total marks than one who moves from 80% to 85% in Reading: in a 40-question Thinking Skills section, 15 percentage points is six extra correct answers, while five points in a 30-question Reading section is fewer than two.
- Reduce careless errors. These are the easiest marks to recover. Common culprits: misreading the question, selecting the wrong option after working out the right answer, not checking whether the question asks for the “least” or “most,” and simple arithmetic slips. A review routine — spending the last two minutes of each section checking flagged answers — catches many of these.
- Complete all questions within time. It is better to attempt every question than to leave five blank at the end. There is no penalty for incorrect answers on the NSW selective test, so a guessed answer is always better than a blank one. Teach your child to skip questions that take longer than 90 seconds and return to them after completing the rest.
- Build confidence through familiarity. Confidence on test day comes from repeated exposure to the format and question types, not from achieving a single “good” score. A child who has completed 15 timed papers will feel more comfortable and less anxious than one who has completed 3 papers but scored higher on each. Volume of exposure matters.
For a complete preparation framework, see our NSW selective practice papers guide, which includes a week-by-week plan for the final month.
EduSpark’s approach to results
We built EduSpark with a specific philosophy about practice test results: show all the data, make no predictions. After every paper, your child gets an instant score, a question-by-question breakdown, and a detailed explanation for every answer — both the correct option and why each incorrect option is wrong.
We show you the percentage per section and which specific questions were missed. We highlight which topics the errors cluster in so you can target your child’s practice accordingly. Every explanation is written to teach, not just to reveal the answer — because understanding why a wrong answer is wrong is where the real learning happens.
What we do not do: we do not predict school placement. We do not claim cutoff scores. We do not rank your child against an unknown cohort or tell you they are “on track for James Ruse.” Anyone making those claims is guessing, and we would rather give you honest data than comfortable fiction.
What we do offer: 90 NSW selective test papers across three subjects — Mathematical Reasoning, Thinking Skills, and Reading — all timed, auto-corrected, and with full explanations. That is 3,150 practice questions matched to the Cambridge format. Browse the full NSW paper library, or try our free papers to see the results format for yourself.
The score that actually matters
The score that matters is the one on test day — not the one on any practice paper. Mock scores are a training tool, not a crystal ball. Use them to identify weak spots, track improvement over time, and build the pacing skills your child needs to finish every section.
Ignore anyone who tells you they can predict the outcome. The result depends on how the whole cohort performs and on a standardisation process that is not public, which is exactly why outside predictions fall apart. What you can control is how your child prepares: consistent, timed practice with thorough review of every incorrect answer. That is what moves the needle. Not a number on a mock paper, and certainly not a tutoring centre’s “placement prediction.”
Focus on the process. The results will follow.
Frequently Asked Questions
What score do you need to get into a NSW selective school?
There is no published cutoff score. Placement depends on the overall performance of all applicants each year, school preferences, and school-based assessment data. The NSW Department of Education does not release cutoff scores, so any specific numbers quoted by tutoring centres are estimates, not official data.
How do I know if my child’s mock score is good enough?
Focus on progress over time rather than any single score. A student who improves from 55% to 72% over 6 weeks is on a strong trajectory. Compare scores only within the same platform — scores across different providers are not comparable due to different difficulty levels.
Do different practice papers have different difficulty levels?
Yes. A score of 75% on one set of practice papers may be equivalent to 55% on another, harder set. This is why comparing scores across platforms, books, or tutoring centres is misleading. Track your child’s progress within a single source.
Should I trust a tutoring centre’s score prediction for the NSW selective test?
Be cautious. The NSW Department of Education does not publish cutoff scores, scoring algorithms, or standardisation methods. Any prediction about specific school placement outcomes is, at best, an educated guess. Some centres use predictions to create urgency or justify additional sessions.
What is more important — the mock score or the score improvement?
Score improvement is a stronger indicator of readiness. A student who goes from 55% to 75% over 8 weeks has demonstrated growth that will likely continue. Consistent upward trends indicate that preparation is working and the student is building genuine skills.
See how your child performs
Try free practice papers — timed, auto-corrected, with instant results and detailed explanations for every question.
Try Free Practice Papers