Every ACT Reading passage is built on relationships — authors introduce multiple people, weave competing ideas together, and place events side by side. Comparison questions account for roughly 20 to 25 percent of all ACT Reading questions, appearing eight to ten times per test. That means mastering this single question type could be worth two to three points on your composite score.
The challenge: the information you need is almost never in one place. Details about Element A might appear in paragraph two, while details about Element B are buried in paragraph seven. The ACT tests whether you can hold scattered information together in a coherent framework.
Type 1 — Direct Contrast (~40% of comparison questions): Identify explicit differences between two elements. Signaled by however, but, unlike, whereas, in contrast. Common mistake: selecting an answer accurate about one element but inaccurate about the other. You must verify both claims.
Type 2 — Similarity Identification (~25%): Find what two seemingly different elements share. Signaled by both, similarly, likewise, in common. Common mistake: missing similarities because you're primed to notice differences. Wrong answers cite a trait belonging to only one element.
Type 3 — Evolution Over Time (~20%): Track how something has developed, transformed, or shifted. Signaled by initially, began as, evolved into, once/now, traditionally/currently. Common mistake: the Temporal Mix-Up — confusing what something was with what it became.
Type 4 — Multiple Perspectives (~15%): Distinguish between different viewpoints on the same topic. Signaled by argues, believes, contends, according to, critics point out. Common mistake: misattributing a view — assigning Person A's opinion to Person B.
Contrast: however, nevertheless, on the other hand, conversely, in contrast, unlike, whereas, but, yet, despite, although, rather than.
Similarity: similarly, likewise, both, also, equally, just as... so too, not only... but also.
Evolution: initially/later, originally/eventually, began as/evolved into, once/now, traditionally/currently, used to/has become.
Perspective: argues, believes, contends, according to, from X's perspective, critics argue, supporters maintain, opponents counter.
As you read, mentally mark every signal word. By the time you finish, you'll have a map of where the passage makes its key comparisons.
The act of eating is never merely biological. Across cultures and throughout history, food practices — what people eat, how they prepare it, and especially how they share it — function as powerful social mechanisms that create community identity, reinforce or challenge hierarchies, and transmit cultural knowledge across generations. Research in anthropology, sociology, and public health reveals that the social dimensions of eating may be as consequential for human well-being as the nutritional ones.
The anthropologist Claude Lévi-Strauss argued that cooking is the act that most clearly separates humans from other animals — not because animals lack the ability to eat, but because only humans transform raw materials into culturally meaningful meals and share them according to elaborate social rules. Modern research supports this insight. Dr. Robin Dunbar, whose work at Oxford has explored the neurological foundations of social bonding, found that people who eat communally report greater life satisfaction, wider social networks, and stronger feelings of community belonging than those who eat alone. Notably, the effect holds regardless of what is eaten; the social act of sharing a meal, not its nutritional content, drives the benefit.
The most extensively studied form of communal eating is the family meal. A comprehensive meta-analysis published in the journal Pediatrics examined over 180,000 participants across seventeen countries and found that children who ate family dinners at least three times per week showed 24 percent lower rates of obesity, 12 percent lower rates of disordered eating, and significantly higher consumption of fruits and vegetables compared to those who rarely ate with their families. Dr. Anne Fishel of Harvard Medical School has called the family dinner "the most powerful intervention available to improve children's academic performance, reduce substance abuse risk, and strengthen mental health — and it costs nothing beyond the food itself."
How cultures structure their meals reveals deep assumptions about social relationships. In many Middle Eastern and South Asian traditions, food is served from communal platters, with diners eating from the same dishes using their hands — a practice that emphasizes equality, generosity, and shared experience. In contrast, formal Western European dining traditions assign each diner an individual plate, separate utensils, and a designated place at a rigidly ordered table — reflecting values of personal autonomy, portion control, and social hierarchy. Japanese kaiseki cuisine takes yet another approach, presenting food as a series of small, precisely arranged courses that balance flavor, texture, color, and seasonal reference — prioritizing aesthetic harmony and the relationship between eater and natural world. None of these systems is superior; each reveals what a culture considers most important about eating together.
Food has also served historically as a marker of social distinction. The sociologist Pierre Bourdieu documented how food preferences function as what he called "cultural capital" — signals of social class acquired through upbringing rather than wealth alone. Bourdieu found that working-class French families tended to prioritize quantity, heartiness, and affordability, while upper-class families emphasized presentation, rarity, and culinary refinement. Contemporary food culture has complicated Bourdieu's framework considerably: today, expensive restaurants serve deliberately rustic dishes, while fast-food companies market premium ingredients to aspirational consumers. The traditional alignment between cost and status has fractured.
Despite the documented benefits of communal eating, the practice is declining. The USDA reports that Americans now eat more than half of their meals alone, a proportion that has increased by 30 percent since 1990. Contributing factors include longer work hours, staggered family schedules, the proliferation of convenience foods designed for individual consumption, and the replacement of dining tables with screens. Dr. Sarah Julier's research at Chatham University suggests that this trend is not merely a logistical shift but a cultural one, reflecting a broader movement toward individualism that prioritizes personal convenience over collective experience.
A growing counter-movement seeks to reverse this trend. Community dinner programs such as The Dinner Party — which brings together young adults grieving the loss of a loved one — and Casserole Club — which connects volunteer cooks with isolated elderly neighbors — use shared meals as deliberate social interventions. Food halls and communal dining concepts have replaced individual restaurant tables in many cities, and school lunch programs restructured to encourage conversation have reported measurable improvements in student behavior. These initiatives share a common premise: that the social infrastructure of eating, once embedded naturally in daily life, must now be intentionally rebuilt.
The evidence converges on a conclusion that challenges the modern tendency to treat food as fuel and eating as a solitary, time-efficient activity. Food is a social technology as ancient and essential as language itself. The meal shared around a table is not a luxury but an infrastructure — a regular, low-cost practice that builds the trust, empathy, and mutual obligation on which healthy communities depend. As societies grapple with epidemics of loneliness, declining civic engagement, and weakening social bonds, the humble act of sitting down to eat together may prove to be one of the most powerful tools available for repair.
Fact versus opinion questions test your ability to distinguish between objective statements that can be verified and subjective claims that express beliefs or judgments. You will see 2 to 3 of these asked directly per test, making up about 6 to 8 percent of the Reading section. But the underlying skill — separating what a passage demonstrates from what the author believes — powers at least 8 to 10 additional questions involving claims, evidence, and argument evaluation.
A fact is a statement that can be verified through evidence, observation, or research. It remains true regardless of who states it. A fact does not need to be something you already know — it simply needs to be checkable.
An opinion is a statement expressing a belief, preference, or judgment. It cannot be proven definitively true or false because it depends on perspective. An opinion does not have to be uninformed. A renowned scientist interpreting data is still expressing an opinion if reasonable experts could interpret the same data differently.
The verification test: Can this statement be proven true or false through observation, measurement, or documentation? If yes → fact. If it depends on who you ask → opinion.
The disagreement test: Could a reasonable, informed person disagree? If yes → opinion, no matter how widely shared.
Zone 1 — Pure fact. "Rats in the enriched environment solved the maze 40% faster."
Zone 2 — Fact with framing. "Rats in the enriched environment solved the maze 40% faster, a statistically significant result." — "Statistically significant" is itself verifiable.
Zone 3 — Mixed statement (the ACT's favorite). "Rats solved the maze 40% faster, a remarkable finding that could reshape developmental psychology." — 40% is fact; "remarkable" and "could reshape" are opinion.
Zone 4 — Informed opinion. "This finding arguably represents the strongest evidence yet for play's cognitive benefits." — "Strongest evidence" and "arguably" signal judgment.
Zone 5 — Pure opinion. "Play is the most important activity in a child's life."
When you encounter a mixed statement, mentally draw a line between the verifiable element and the evaluative element.
In most cultures, play is dismissed as the opposite of productive activity — something children do before they are old enough for real work, and something adults indulge in only when their obligations are complete. This characterization, according to a growing body of research in developmental psychology and neuroscience, is profoundly wrong. Play is not a break from learning; it may be the most powerful form of learning our species has evolved. From the rough-and-tumble wrestling of preschoolers to the strategic board games of retirees, play serves essential cognitive, emotional, and social functions across the entire human lifespan.
The neurological evidence is striking. Dr. Jaak Panksepp's research at Washington State University demonstrated that young rats allowed thirty minutes of daily rough-and-tumble play developed measurably thicker prefrontal cortices — the brain region governing decision-making, impulse control, and social judgment — than rats raised in identical conditions without play. The play-deprived rats also showed heightened anxiety responses and reduced ability to navigate novel environments. When the findings were replicated across multiple laboratories, the pattern held consistently: play was not merely correlated with brain development but appeared to be a necessary condition for it.
The parallel in human children is well documented. A landmark longitudinal study conducted by Dr. Adele Diamond at the University of British Columbia tracked 2,400 children over eight years and found that those with the most unstructured free play time scored 37 percent higher on measures of executive function — the cognitive skills that enable planning, flexible thinking, and self-regulation — compared to children whose time was entirely filled with structured activities. Diamond's team controlled for socioeconomic status, parental education, and school quality. The effect was most pronounced among children from disadvantaged backgrounds, for whom free play appeared to partially compensate for other developmental gaps.
Play's benefits are not limited to children. Researchers at the National Institute on Aging found that adults who regularly engaged in cognitively stimulating play — card games, puzzles, strategy games — showed 47 percent lower rates of cognitive decline over a ten-year period compared to those who did not. Dr. Denise Park of the University of Texas has described these activities as "mental gymnastics that maintain the brain's plasticity well into old age," noting that the social component of group games may be as important as the cognitive challenge itself. The combination of strategic thinking, social interaction, and emotional engagement that games provide appears to activate neural networks that solitary intellectual activities do not.
Cross-species evidence strengthens the case for play's evolutionary importance. Virtually all mammals play, and the amount of play a species exhibits correlates with the relative size of its prefrontal cortex. Dolphins, elephants, and great apes — species known for complex social cognition — are among the most playful animals on Earth. The evolutionary biologist Marc Bekoff has argued that play evolved specifically as a mechanism for developing social skills: through play, young animals learn to read social signals, negotiate boundaries, manage aggression, and repair relationships after conflicts. "Play is the cradle of cooperation," Bekoff writes. "Without it, social species cannot develop the skills they need to live together."
Despite this evidence, opportunities for unstructured play have declined dramatically. A University of Michigan study found that American children's free play time decreased by 25 percent between 1981 and 2019, replaced by organized sports, academic tutoring, and screen-based entertainment. The shift reflects a cultural anxiety about childhood productivity — a belief that every hour should be invested in measurable skill-building. Dr. Peter Gray of Boston College, one of the most vocal advocates for play, calls this trend "the most underrecognized crisis in child development today," arguing that the decline in free play is directly responsible for rising rates of childhood anxiety, depression, and social difficulty.
Not all researchers share Gray's alarm. Dr. Sandra Hofferth of the University of Maryland counters that structured activities — team sports, music lessons, organized clubs — provide many of the same developmental benefits as free play while adding skill-specific instruction. Hofferth's research found that children in well-designed structured programs showed gains in discipline, teamwork, and goal-setting that unstructured play did not consistently produce. The debate, she argues, should not be framed as structured versus unstructured but as determining the optimal balance for each child's developmental needs.
The evidence suggests that both sides capture part of the truth, but that the pendulum has swung too far toward structure. Children need organized activities for skill development and free play for creativity, resilience, and self-directed problem-solving. Adults need cognitive stimulation and social engagement that games uniquely provide. Across the lifespan, play remains not a luxury but a biological necessity — as essential to healthy development as nutrition, sleep, and physical exercise. Societies that treat play as frivolous do so at a measurable cost to the cognitive and social well-being of their members.
Every ACT Reading passage is built around claims — statements the author puts forward as true and defends with evidence. Roughly 5 to 7 questions per test ask you to identify those claims, evaluate the evidence supporting them, recognize counterclaims, and judge whether the reasoning holds together. These questions reward students who can see through the surface of a passage to the argument underneath.
A claim is any statement an author argues is true and then defends with reasoning or evidence. It is more than an opinion ("I like octopuses") and more than a fact ("Octopuses have three hearts"). A claim is arguable — someone could disagree — and it is supported. When an author writes "octopus cognition challenges our fundamental assumptions about how intelligence evolves," that is a claim: it takes a position, and the rest of the passage exists to prove it.
Fact: "Octopuses have approximately 500 million neurons." Verifiable, not arguable. Claim: "Octopus intelligence proves that complex cognition does not require a centralized brain." Arguable, defended with evidence. Opinion: "Octopuses are the most fascinating creatures in the ocean." Subjective, not defended with evidence.
The ACT tests whether you can distinguish these. A common trap puts a fact in place of the author's claim, or offers an unsupported opinion as if it were the thesis.
Claims in a passage form a pyramid. The central thesis sits at the top. Below it are the primary supporting claims. Below those are the specific pieces of evidence. Understanding this hierarchy tells you which claims matter most and how the argument is constructed.
Level 1 — Central Thesis: The one overarching argument the entire passage defends. This is what the author most wants you to believe after reading.
Level 2 — Primary Claims: The 3-5 major supporting arguments that hold up the thesis. Each one typically occupies its own paragraph or section.
Level 3 — Evidence: Specific data, studies, examples, and expert references that back up each primary claim. These are the facts the author marshals in support.
Level 4 — Counterclaims: Opposing arguments the author acknowledges and rebuts. These appear because addressing objections strengthens credibility.
The ACT may ask about any level. "What is the author's main claim?" targets Level 1. "What evidence supports the claim that..." targets Level 3. "How does the author respond to critics?" targets Level 4. Knowing which level the question targets tells you exactly where to look.
Some ACT Reading questions do not simply ask what the passage says. They ask you to judge whether what the passage says is well-supported. These are evidence evaluation questions, and they require a fundamentally different skill from recall or inference. Instead of finding information, you must weigh it. Is the evidence strong or weak? Does the logic hold together? Are there hidden assumptions the author never acknowledges?
Evidence evaluation falls under the ACT's Integration of Knowledge and Ideas reporting category, accounting for roughly 13 to 23 percent of the Reading section. You can expect these questions across all passage types, though they appear most frequently in social science and natural science passages.
Not all evidence is created equal. The ACT tests whether you can distinguish powerful support from weak support.
Tier 1 — Empirical Research with Measurable Outcomes. The gold standard. Controlled experiments with specific, quantifiable results. Hallmarks: clear methodology, adequate sample size, control groups, measurable outcomes, replication by independent researchers.
Tier 2 — Statistical Data. Numerical support for claims. Strong when it includes clear definitions, appropriate baselines, and context. Weak when it lacks context, cherry-picks numbers, or presents correlations as causation.
Tier 3 — Expert Opinion. Carries weight when the expert has relevant credentials and speaks within their field. But expert opinion is inherently weaker than empirical data because it relies on interpretation. Credentials do not convert opinions into facts.
Tier 4 — Case Studies and Specific Examples. Concrete, real-world illustrations. Compelling but limited — one example cannot prove a general pattern.
Tier 5 — Analogies and Logical Reasoning. Comparisons to similar situations. Only as strong as the similarity between the two situations.
Tier 6 — Anecdotal Evidence. The weakest form. Personal stories are emotionally compelling but prove nothing about broader patterns.
When the ACT asks which evidence best supports a claim, rank options using this hierarchy. When it asks which is weakest, look for the option lowest on the hierarchy or the one with the biggest gap between what it proves and what it claims to prove.
These questions announce themselves through distinctive language.
Strength-of-Support: "The author's claim is best supported by..." "Which detail provides the strongest support?" "Which would most strengthen the argument?" Strategy: Identify the claim, locate all evidence, rank using the hierarchy.
Weakness-Identification: "The argument is weakened by..." "Which evidence is LEAST convincing?" "The reasoning is flawed because..." Strategy: Find the gap between what evidence proves and what the author claims.
Logical Validity: "The conclusion follows only if..." "Which assumption is necessary?" "The reasoning requires the reader to accept that..." Strategy: Trace the logical chain and identify missing links.
Source Credibility: "The expert is most qualified to comment on..." "Which citation is most authoritative?" Strategy: Match expert credentials to the specific claim being made.
Bias and Assumption: "The argument assumes that..." "Which underlying assumption is revealed?" Strategy: Ask what must be true for the argument to work — what the author takes for granted.
About 25 percent of ACT Reading questions test your ability to dissect how authors construct their arguments — how they arrange evidence, connect ideas with logical reasoning, address opposition, and build toward conclusions. Think of yourself as an architectural inspector: your job is not to decide whether you agree with the argument but to understand how it was built. Every argument, no matter how complex, is built from a small set of predictable components arranged in recognizable patterns.
Every argument on the ACT is built from six fundamental components.
1. The Claim — what the author wants you to believe. The central thesis, usually appearing in the first one or two paragraphs and restated in the conclusion. Look for definitive statements about what is true, what should happen, or what something means.
2. The Evidence — facts, data, and examples supporting the claim. Studies, statistics, expert opinions, anecdotes. Evidence answers: How do you know?
3. The Warrant — the logical bridge connecting evidence to claim. Sometimes stated explicitly ("this suggests," "this demonstrates"), but often unstated — which is what makes assumption questions so challenging. The warrant answers: Why does this evidence matter?
4. The Backing — additional support for the warrant itself. Explains methodology or provides context for why the evidence is reliable.
5. The Qualifier — limits on the claim. Words such as often, generally, suggests, and may signal measured claims. Wrong answers frequently remove qualifiers to create false absolutes.
6. The Rebuttal — counterarguments the author addresses. How an author handles opposition (dismissing, acknowledging, partially accepting, refuting) reveals the argument's strength and nuance.
ACT passages rely on four major reasoning patterns. Recognizing which one an author uses helps you predict structure and answer questions about logical flow.
Pattern 1 — Deductive (general to specific). The author starts with a broad principle and applies it to a specific case. If the principle is true, the conclusion must follow. On the ACT: look for passages that establish a principle early and apply it to examples.
Pattern 2 — Inductive (specific to general). The author gathers specific examples and draws a general conclusion from the pattern. Most common in social science and natural science passages. On the ACT: questions ask what general conclusion the evidence supports, or whether the evidence is sufficient.
Pattern 3 — Causal (cause and effect). The author traces chains of connection: A causes B, which causes C. On the ACT: questions test whether you can trace the full chain. Watch for the critical distinction between correlation and causation.
Pattern 4 — Analogical (comparison). The author argues that because two situations share key features, what is true of one is likely true of the other. On the ACT: questions ask whether the comparison is valid or what the analogy demonstrates.
Most passages use multiple patterns. The thesis often uses deductive framing while the body uses inductive evidence and causal chains.
Every ACT Reading passage is built on the same logical move: the author presents specific examples, case studies, data points, or anecdotes and uses them to support a broader claim about the world. Generalization questions test whether you can see that move happening — whether you can climb from the concrete details on the page to the abstract principle they collectively support. You will encounter 1 to 2 generalization questions per passage, making this a consistent source of points across the entire test.
Generalization is a form of inductive reasoning. Unlike deduction (which moves from general rules to specific conclusions), induction moves in the opposite direction: from particular observations to broader patterns. The challenge is landing in the sweet spot — a generalization broad enough to capture the pattern but precise enough to stay within what the evidence actually supports.
Students often confuse generalization questions with main idea or inference questions. All three require thinking beyond what is stated, but they ask fundamentally different things.
Main Idea: What is this passage primarily about? You identify the single argument connecting every paragraph. Scope: the entire passage.
Inference: What does this specific detail imply? You draw a logical conclusion from one or two sentences. Scope: narrow — usually one paragraph.
Generalization: What broader principle do these specific examples support? You synthesize multiple pieces of evidence and identify the pattern they collectively point toward. Then you may need to extend that pattern to new situations.
The key difference: Main idea asks about the passage's topic. Inference asks about one detail's implications. Generalization asks you to extract a principle from multiple details and potentially apply it beyond the passage.
Type 1 — Pattern-Based. Multiple specific examples collectively reveal a recurring pattern. Three different archaeological sites show dogs buried with humans → general pattern: ancient cultures valued dogs as more than utilitarian animals. Question stem: "The examples in the passage collectively suggest that..."
Type 2 — Principle-Based. Specific evidence supports an abstract principle about how something works. Dogs evolved to read human emotions, cooperate in hunts, and provide therapeutic benefits → general principle: the dog-human bond shaped both species' development. Question stem: "Based on the evidence presented, the author would most likely agree that..."
Type 3 — Scope Extension. The passage discusses a specific instance, and the question asks whether the pattern extends to a broader category. If dog domestication transformed human social structures, would the domestication of other animals produce similar effects? Question stem: "The passage's argument implies that the described effects would most likely also apply to..."
Type 4 — Category Formation. Multiple examples are grouped into a new conceptual category. Archaeological evidence, genetic studies, and behavioral research are grouped as "converging lines of evidence" for co-evolution. Question stem: "The types of evidence described in the passage can best be categorized as..."
Under the Enhanced ACT format (2025 onward), the Reading section includes visual and quantitative elements — charts, graphs, tables, or infographics — alongside at least one of the four passages. These are not decorations. They carry testable information, and 2 to 3 questions per test will ask you to interpret the visual on its own, connect it to claims in the text, or use data from both sources to draw conclusions.
Visual questions tend to be more straightforward than inference or perspective questions because the data is concrete and verifiable — but they require a different analytical approach. Students who learn to read visuals systematically find these are among the easiest points on the entire test.
Tables: Organized data for side-by-side comparison. Read column and row headers first — they tell you what is being compared and measured. Don't memorize the whole table; scan for the specific data point the question asks about.
Graphs (line, bar, pie): Line graphs show trends over time. Bar graphs compare categories. Pie charts show proportions of a whole. Always check the axes first — they tell you what is being measured and what units are used.
Diagrams and Flowcharts: Show how something works or how a process unfolds. Follow the arrows — they show direction, sequence, and causation.
Timelines: Show chronological sequences. Note the start and end points and the overall trajectory (accelerating? decelerating? steady?).
Do not try to memorize every data point. Instead, spend 30 seconds orienting yourself.
Orient (10 sec): Read the title/caption. Check axis labels and units. Identify what is being measured and how. Note the scale — does it start at zero? Are the intervals even?
Pattern (10 sec): Scan for the big picture. Is the trend going up, down, or flat? Where are the peaks and valleys? Which category is largest or smallest? Are there any anomalies that break the pattern?
Purpose (10 sec): Ask "Why did the author include this visual?" It almost always supports, illustrates, or extends a claim in the text. Identifying the purpose before you see the questions saves enormous time.
One of the four passages on the ACT Reading section is a paired passage set — two shorter passages on a related topic, followed by 10 questions. The first few questions ask about Passage A alone, the next few about Passage B alone, and the final 3 to 4 questions ask you to synthesize information from both passages. These synthesis questions are widely considered the hardest questions on the ACT Reading test because they require you to hold two perspectives in your mind simultaneously and reason about the relationship between them.
Synthesis means combining information from multiple sources to reach a conclusion that neither source provides on its own. Do the authors agree? Where do they disagree? What would one say about the other's argument? What conclusion emerges only when you consider both together?
The paired passage always appears as one of the four ACT Reading passages. The two texts are always related by topic but differ in approach, perspective, or emphasis. Before tackling any synthesis question, state the relationship in one sentence: "Passage A analyzes the history while Passage B creates new work." This sentence becomes your compass.
Typical structure: Questions 1-3 about Passage A only. Questions 4-6 about Passage B only. Questions 7-10 about both (synthesis). Time budget: 9-10 minutes total.
Common relationships: Same topic, different viewpoints (most common). Same topic, different time periods. Complementary perspectives. Scholar/academic vs. practitioner/personal experience.
Read Passage A → answer Passage A questions → read Passage B (actively comparing to A) → answer Passage B questions → tackle synthesis questions last.
This strategy prevents the most common error: confusing which details came from which passage. When students read both passages first, facts from Passage A bleed into their memory of Passage B. The read-one-answer-one approach creates a clean cognitive separation. It also ensures you answer the easier single-passage questions while the relevant passage is fresh.
Many ACT Reading questions do not simply ask what the passage says — they ask you to find the specific evidence that supports a claim, connects two ideas, or justifies an interpretation. These textual evidence questions appear 4 to 6 times per test and require you to point to exact sentences or details in the passage rather than relying on general impressions. The skill they test is precision: not just understanding the passage's argument, but identifying which specific words and sentences make that argument work.
This matters because the ACT designs wrong answers that sound reasonable but are not supported by the actual text. Students who answer from memory or general impression fall for these traps. Students who anchor every answer to a specific, locatable sentence in the passage almost never do.
These questions announce themselves with distinctive stems.
Direct evidence: "Which of the following statements from the passage best supports the claim that..." "The passage provides the strongest evidence for [X] in lines..." "Which detail from the passage most directly supports..."
Best evidence: "Which choice provides the best evidence for the answer to the previous question?" "The claim is most strongly supported by which of the following?"
Counter-evidence: "Which detail from the passage most directly contradicts the claim that..." "The passage provides evidence that undermines which assumption?"
Cross-paragraph evidence: "Which combination of details from the passage most strongly supports..." "Evidence from paragraphs 3 and 6 together suggests that..."
The key signal: these questions always point you back to specific text rather than asking for general interpretation.
PIN gives you a three-step system for any textual evidence question.
P — Pinpoint the claim. Read the question stem and identify the exact claim you need evidence for. Restate it in your own words. If the question asks "Which detail best supports the claim that sleep deprivation impairs cognition?", your target is: evidence that not sleeping hurts thinking ability.
I — Investigate the passage. Scan for sentences directly related to your target claim. Use keyword matching — look for the same concepts the question mentions. Read each candidate sentence carefully. Does it directly support the claim, or does it merely discuss the same topic without making the connection?
N — Narrow to the strongest. Compare your candidate sentences. The strongest evidence is the one that most directly and specifically supports the claim without requiring additional inference. If one sentence states the connection outright and another only implies it, the direct statement is stronger.
Time budget: 45-60 seconds. 10 seconds to pinpoint, 25-30 seconds to investigate, 10-20 seconds to narrow and select.
Of all the skills the ACT Reading section tests, none demands quite as much mental agility as working with multiple perspectives. These questions ask you to hold two or more viewpoints in your mind simultaneously, compare them with precision, and identify exactly where they align and where they diverge. You will face 6 to 8 of these questions on every test, accounting for roughly 15 to 20 percent of your Reading score. They appear most frequently on paired passage sets but also within single passages that contain multiple voices, quoted experts, or contrasting positions.
Think of yourself as a diplomat between two ambassadors. Your job is not to decide who is right. Your job is to understand each position completely, identify the specific points of agreement and disagreement, and articulate the relationship between the viewpoints with precision.
Type 1 — Direct Comparison: How do two viewpoints differ on a specific point? Stems: "Unlike Author A, Author B argues..." "Which best describes the difference between..." The correct answer names the exact dimension of disagreement.
Type 2 — Agreement Identification: What do the perspectives share despite their differences? Stems: "Both would likely agree that..." "The perspectives share the assumption that..." Often the trickiest — requires looking past surface differences to find underlying shared beliefs.
Type 3 — Tone Contrast: How do the emotional and stylistic approaches differ? Stems: "Compared to Perspective A, Perspective B is more..." "The contrast in tone can best be described as..." Tests whether you can separate what someone says from how they say it.
Type 4 — Method Differences: How does each perspective build its case? Stems: "Author A relies on... while Author B uses..." "The perspectives differ most in their use of..." Focus on tools: data vs. anecdote, historical examples vs. experiments, theoretical analysis vs. case studies.
Type 5 — Evidence Variation: What different types of support does each perspective use? Stems: "Passage A supports claims primarily through... while Passage B..." Closely related to method but focused specifically on the evidence types.
TRACK gives you a five-step system for mapping perspectives as you read.
T — Tag each voice. As you encounter different perspectives, mentally label them. In a single passage with multiple experts, note: "Dr. A says X," "Dr. B says Y," "the author concludes Z." Track attribution — who is making each claim?
R — Register the position. For each perspective, identify its core claim in one sentence. What does this person believe? What do they want the reader to accept?
A — Assess the overlap. Before looking at any questions, identify where perspectives agree. Shared assumptions are often unstated — you must infer them from what both sides take for granted.
C — Chart the differences. Identify the specific dimension of disagreement. Do they disagree about facts, interpretations, methods, significance, or policy implications? Naming the dimension makes comparison questions much easier.
K — Know the author's stance. In passages with multiple expert voices, the author often has their own position. Track whether the author presents perspectives neutrally or signals agreement with one side through word choice, evidence selection, or structural placement.