Stay informed with critical information about one of the most important changes in the new digital SAT.
The College Board's shift to a fully digital format introduces adaptive testing, a methodology designed to tailor the difficulty of test questions to the individual test taker's ability.
While this change primarily aims to make the test more accessible to all students and improve the overall test experience, it also has profound implications for test scoring, test difficulty, and test preparation strategy.
Adaptive testing dynamically adjusts the difficulty of test questions based on the test taker's performance. It was not invented by College Board and has been used in many other tests before the SAT.
In the old pen and paper SAT, all questions were predetermined before students took the exam. Each exam still had its own scoring curve, but all students received the same reading, writing, and math questions.
The digital SAT, on the other hand, employs a 2-module adaptive design for each section (reading/writing or math), where answers to the first module (which contains a mix of easy and difficult questions) determine the difficulty level of the second module, tailoring the test to match the student's skill level.
The 'adaptive' part is actually pretty simple: there are only two possible second modules for a particular section - an easy and a hard module. The easy second module contains questions that are on average easier, and the hard module contains more difficult questions.
Students who do well on the first module will receive the hard second module, and their potential score is higher if they also do well on it (see the Scoring section below).
Finally, it's worth pointing out that the questions themselves are randomized - different students will receive different questions, but the difficulty of the questions will match the module presented.
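The two-stage routing described above can be sketched in code. Everything here is a hypothetical illustration: the real routing criteria, module sizes, and thresholds are not public, so the `threshold` value and question pools below are assumptions for demonstration only.

```python
# Hypothetical sketch of the digital SAT's two-stage adaptive routing.
# The actual routing criteria are not public; the threshold and module
# size below are illustrative assumptions, not College Board's values.

import random

def build_module(difficulty: str, size: int = 27) -> list[str]:
    """Draw a randomized question set at the given difficulty tier,
    mirroring how different students see different questions."""
    pool = [f"{difficulty}-q{i}" for i in range(100)]
    return random.sample(pool, size)

def route_second_module(module1_correct: int, threshold: int = 18) -> str:
    """Route to the hard second module when module 1 performance clears
    an (assumed) threshold; otherwise serve the easy module."""
    return "hard" if module1_correct >= threshold else "easy"

tier = route_second_module(module1_correct=22)
module2 = build_module(tier)  # different students, different questions
print(tier, len(module2))     # → hard 27
```

In practice the routing decision is not a simple count of correct answers (see the IRT discussion below the pros and cons), but the branching structure is the same: one first module, two possible second modules.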
To determine whether a student gets the easy or hard second module, College Board uses Item Response Theory (IRT), which takes into account not just the number of correct answers, but also the difficulty of each question and even the probability that the student was guessing.
This means that while guessing is not directly penalized and students should still guess rather than leave questions blank, patterns that suggest guessing could affect whether they receive the hard second module.
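A standard way to express the IRT idea mentioned above is the three-parameter logistic (3PL) model, which gives the probability that a student of a given ability answers an item correctly, including a floor for guessing. This is the textbook 3PL formula, not College Board's actual (unpublished) implementation, and the parameter values below are made up for illustration.

```python
# Textbook 3PL IRT model - illustrative only; College Board's actual
# scoring implementation is not public.
import math

def p_correct_3pl(theta: float, a: float, b: float, c: float) -> float:
    """Probability that a student with ability theta answers correctly,
    given item discrimination a, difficulty b, and guessing floor c."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# A hard question (b=1.5) with a 20% guessing floor: an average
# student (theta=0) has a low but nonzero chance of a correct answer.
print(round(p_correct_3pl(theta=0.0, a=1.0, b=1.5, c=0.2), 3))  # → 0.346
```

Because the model expects even low-ability students to land above the guessing floor `c` on average, response patterns that look like pure guessing carry little weight, which is why guessing can influence routing without being directly penalized.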
Adaptive testing on the digital SAT certainly presents several benefits to students, but it is by no means a perfect strategy. Here are some pros and cons to consider:
The test quickly and accurately measures skills by adjusting to the test taker's ability level.
The overall test time can be shortened while still effectively testing the same skills (digital SAT is ~1 hour shorter than pen and paper SAT).
Students face less stress, receiving questions that match their skill level without being too challenging or too simple.
Students may worry if they perceive the questions to be getting easier, assuming they are not performing well (they do not know whether they received the easy or hard second module until the score report is released).
Students must adapt their study habits to a new format, which requires effort and may challenge traditional preparation methods.
The potential for a high score is capped if a student receives the easy second module, even if he or she aces it.
A question that always comes up is: does adaptive testing make the SAT easier? After all, with a shorter test time, no essay section, and plenty of anecdotal posts on the Internet, it's easy to believe that the SAT is easier now.
According to College Board, the digital SAT is designed to be exactly the same difficulty as the previous pen and paper SAT, and the score distributions have not changed. Through Item Response Theory (IRT), which underpins the adaptive design, College Board claims it can measure student abilities just as accurately with fewer questions and in less time than the traditional paper format.
However, it's also very much in the interest of College Board and their partners to say this because admitting otherwise would cause massive problems for the integrity and reliability of the exam.
In short, there is no definitive answer to this question. The only recommendation we can give is to trust only the opinions of reputable people around you.
With adaptive testing, each question is weighted differently such that the final result accurately reflects the student's performance.
How the questions are weighted and the actual scoring algorithm are not public knowledge, but enough students take the exam every year to approximate the data.
For example, while College Board doesn't publish the official criteria to get the hard second module for each exam, students collectively estimate that the max score for the easy second module (assuming just missing the threshold in module 1 and acing module 2) is 560-600.
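The interaction between weighted questions and the easy-module cap can be illustrated with a deliberately simplified sketch. The actual algorithm is IRT-based and not public; the weights, the scaling, and the 580 cap below are made-up numbers chosen only to sit inside the community-estimated 560-600 range.

```python
# Hypothetical, simplified illustration of difficulty-weighted section
# scoring with an easy-module cap. The real College Board algorithm is
# IRT-based and unpublished; every number here is an assumption.

def section_score(correct_weights: list[float], module2: str) -> int:
    """Sum per-question weights, scale to the 200-800 section range,
    and cap the easy-module ceiling near the estimated 560-600 band."""
    raw = sum(correct_weights)                # weighted correct answers
    scaled = 200 + round(raw * 600 / 54)      # assume 54 questions, max weight 1.0
    cap = 580 if module2 == "easy" else 800   # assumed easy-module ceiling
    return min(scaled, cap)

# Perfect performance on the easy path still hits the cap:
print(section_score([1.0] * 54, module2="easy"))  # → 580
print(section_score([1.0] * 54, module2="hard"))  # → 800
```

The takeaway matches the cons listed earlier: once a student is routed to the easy second module, no amount of accuracy in module 2 can recover the top of the scale.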
Adaptive testing and the change in scoring are also changing the way students need to study. Here are 3 things to consider when studying for the digital SAT.
No, we're not saying all students should only take adaptive practice tests - that makes no sense if students are aiming for 1400+ because they have to get the hard module in both sections to reach that score. In this case, these students would be better off only practicing difficult questions so they can ace the hard second modules.
Students should take the right practice tests with the right question difficulty depending on what score they want to achieve.
Understanding the digital SAT's interface and navigation is crucial, as is becoming comfortable with the timing and pacing of the exam.
This includes practicing with the official College Board digital testing platform or any other platforms that closely mimic the testing environment. It's important to simulate the actual test conditions as closely as possible, including the adaptive nature of the test, so there are no surprises on test day.
To get a full rundown of the format of the digital SAT, check out our Digital SAT Format guide.
For many students, math may be easier than reading/writing or vice versa, so they spend the majority of their effort on the weaker subject even though there may still be room for improvement in the stronger one.
Even though it may seem effective, this method of studying can quickly lead to diminishing returns depending on the student's background abilities in each subject. For example, if a student has weak English reading comprehension and average math abilities, it doesn't make sense to dedicate 90% of studying time to English just because it's the weaker section - there may be low-hanging fruit on the math side as well. After all, both sections are weighted equally when it comes to scoring.
Having said all this, we also don't recommend having extremely large gaps (e.g. 500 in Reading/Writing and 800 in Math) in scoring as some schools may look upon these results unfavorably despite the decent composite score.