Central idea questions appear on every single passage — typically 2 to 3 per passage, totaling about 8 to 10 across the test. That makes them roughly 22 to 28 percent of your Reading score, one of the most common question types on the test. They ask you to identify the one argument, insight, or claim that connects every paragraph in the passage. The answer is never hidden in a single sentence — it is the thread running through the entire text.
Think of it this way: if someone asked you "What was that passage about?" your answer should be the central idea. Not the topic ("bilingualism") but the specific claim the author makes about the topic ("bilingualism fundamentally restructures cognitive processes in ways that extend far beyond communication").
Every ACT Reading section follows the same structure — four passages, always in the same order: literary narrative, social science, humanities, and natural science. Understanding what each type does helps you adjust your central-idea strategy before you even start reading.
The ACT tests three related but distinct concepts. Confusing them is the most common mistake, and the test designs wrong answers specifically to catch students who mix them up.
Topic: What the passage is about in one or two words. Example: bilingualism.
Central idea: The specific claim the author makes about the topic — a complete sentence. Example: Bilingualism fundamentally restructures cognitive processes, enhancing executive function, social understanding, and neurological resilience.
Author's purpose: Why the author wrote the passage — what they want the reader to think, feel, or do. Example: To argue that bilingualism should be understood as cognitive architecture rather than merely a communication skill.
The topic is the subject. The central idea is what the author says about the subject. The purpose is why the author says it. A question asking "The passage is primarily about..." wants the central idea. A question asking "The author's main purpose is to..." wants the purpose. They often share the same correct answer, but not always.
Quick test: If you can swap your answer into a completely different passage and it still works, your answer is too broad — it describes the topic, not the central idea. The central idea should be specific enough that it only fits this one passage.
For much of the twentieth century, bilingualism was considered a cognitive burden — an added complexity that divided a child's mental resources between two competing language systems. That view has been thoroughly overturned. Research conducted over the past three decades has revealed that managing two languages does not split the mind but strengthens it, reshaping fundamental cognitive processes in ways that extend far beyond communication. Bilingualism, it turns out, is not merely a linguistic skill but a form of cognitive architecture that enhances executive function, deepens social understanding, and builds resilience against neurological decline.
The most extensively documented benefit of bilingualism involves executive function — the set of mental processes that govern attention, task-switching, and impulse control. Because bilingual individuals must constantly monitor which language is appropriate for a given context and suppress the other, their brains develop unusually strong inhibitory control systems. Dr. Ellen Bialystok's landmark studies at York University demonstrated that bilingual children consistently outperform monolingual peers on tasks requiring them to ignore misleading information and focus on relevant cues. Crucially, these advantages are not limited to language tasks; they transfer to nonverbal challenges such as sorting shapes by changing rules, suggesting that bilingualism trains domain-general cognitive skills.
The cognitive benefits of bilingualism extend into social understanding as well. Researchers at the University of Chicago found that bilingual children as young as four demonstrate superior ability to interpret other people's intentions and perspectives — a capacity psychologists call "theory of mind." Because bilingual speakers routinely adjust their language to match their audience, selecting not just words but entire linguistic systems based on who is listening, they develop heightened sensitivity to the communicative needs of others. This practice in perspective-taking appears to cultivate broader social skills: bilingual adolescents score higher on standardized measures of empathy and conflict resolution than their monolingual counterparts.
Perhaps the most striking evidence for bilingualism's cognitive impact comes from research on aging. Multiple large-scale studies have found that bilingual individuals develop symptoms of Alzheimer's disease and other forms of dementia an average of four to five years later than comparable monolinguals — not because bilingualism prevents the underlying brain pathology, but because it builds what neurologists term "cognitive reserve." The bilingual brain, accustomed to managing competing demands, develops compensatory neural pathways that continue functioning even as disease damages primary ones. Dr. Fergus Craik of the Rotman Research Institute described this effect as "the most powerful modifiable factor yet identified" in delaying dementia onset.
These findings carry significant implications for education policy. Bilingual education programs, once marginalized in many countries in favor of rapid assimilation into a single dominant language, are receiving renewed attention from policymakers who recognize that maintaining a child's home language while teaching a second does not impede academic progress — it accelerates it. Students in well-designed dual-language programs consistently match or exceed their peers in monolingual classrooms on standardized tests of reading and mathematics, while gaining the additional cognitive and social advantages that bilingualism confers.
Not all researchers accept these conclusions without qualification. Critics point out that many studies comparing bilingual and monolingual populations fail to control adequately for socioeconomic factors, immigration status, and cultural differences that could independently influence cognitive performance. This methodological concern is legitimate, and the field has responded by designing increasingly rigorous studies that account for these variables. The core findings — particularly the executive function advantages and the delayed onset of dementia symptoms — have survived this heightened scrutiny, replicated across diverse populations in Canada, India, Spain, and Singapore.
Beyond the individual cognitive benefits, bilingualism carries economic weight. A report from the New American Economy found that demand for bilingual workers in the United States more than doubled between 2010 and 2020, with bilingual employees earning an average salary premium of five to twenty percent depending on the industry. Multinational corporations increasingly identify bilingual competence not merely as a communication convenience but as a marker of the cognitive flexibility and cultural adaptability they value in leadership candidates.
The accumulating evidence points to a conclusion that would have astonished researchers a generation ago: bilingualism is less a matter of speaking two languages than of building a more adaptable mind. The bilingual brain does not simply store two sets of vocabulary and grammar rules; it develops enhanced systems for attention, empathy, and neural resilience that serve its owner across every domain of life. As global migration and digital communication make multilingual contact increasingly common, understanding bilingualism as cognitive architecture — not merely as a communication tool — could reshape how societies educate children, support aging populations, and cultivate the flexible thinking that complex challenges demand.
Detail questions are the workhorse of the ACT Reading section. Out of 36 questions across four passages, roughly 10 to 13 are pure detail questions — about 28 to 36 percent of the entire test. These are the most straightforward question type because the answer is always stated somewhere in the passage. You do not need to infer, analyze, or interpret. You need to find the right sentence, understand what it says, and match it to the correct answer choice.
The challenge is not comprehension — it is speed. You have 40 minutes for 36 questions, roughly 67 seconds each. If you waste time rereading entire paragraphs to find one fact, you will run out of time. The strategies in this chapter teach you to locate details in seconds, not minutes.
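The time budget above is simple division; as a quick check, here it is worked out (Python used only as a calculator):

```python
# 40 minutes for 36 questions, as stated above.
total_seconds = 40 * 60          # 2400 seconds
questions = 36
per_question = total_seconds / questions
print(round(per_question))       # about 67 seconds per question
```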
Not all detail questions are identical. The ACT uses three variations, and recognizing which type you face tells you exactly how to find the answer.
Type 1 — Direct lookup. The question asks for a specific fact stated in the passage. Stems: "According to the passage, what did [person] discover?" "The passage states that..." Strategy: Scan for the keyword, read the sentence plus 1-2 surrounding sentences, match to the answer. The correct answer paraphrases the passage — it almost never uses identical words.
Type 2 — Specific reference. The question points to a paragraph number, line reference, or quoted phrase. Stems: "In the third paragraph, the author describes..." "In lines 42-47, the narrator indicates..." Strategy: Go directly to the location. Read the full sentence and one sentence before and after. The answer is almost always within this window.
Type 3 — Characterization detail. The question asks how a person, place, or concept is described. Stems: "The passage characterizes [person] as..." "The narrator portrays [place] as..." Strategy: Scan for every mention of the subject — these details are sometimes spread across multiple paragraphs. The correct answer synthesizes how the passage overall portrays the subject, not just one isolated mention.
The Keyword Scan is a five-step technique that lets you answer most detail questions in 30 to 40 seconds. Instead of rereading paragraphs hoping to stumble on the right sentence, you go directly to it.
Step 1 — Identify the key term (3 sec): Read the question and find the most specific, scannable word. Names, numbers, technical terms, and unusual words work best. Avoid common words like "the" or "important."
Step 2 — Scan, do not read (5-10 sec): Move your eyes quickly down the passage looking only for your key term. Do not read sentences — just scan for the word shape.
Step 3 — Read the window (10 sec): Once you find the term, read the sentence containing it plus the sentence before and after. This three-sentence window almost always contains the answer.
Step 4 — Predict before looking (5 sec): Form a quick mental answer before looking at the choices. This prevents you from being tricked by tempting wrong answers.
Step 5 — Match and verify (5 sec): Compare your prediction to the four choices. The correct answer will paraphrase what you read. If two choices seem close, reread the exact sentence to see which one matches.
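Steps 2 and 3 can be modeled mechanically. The sketch below is purely illustrative — the passage text and keyword are invented, the sentence splitting is deliberately naive, and on test day the "scan" happens with your eyes, not code — but it shows the scan-then-window logic:

```python
import re

def keyword_window(passage: str, keyword: str) -> str:
    """Return the sentence containing the keyword plus one sentence on
    either side -- the three-sentence window from Step 3.
    Sentence splitting here is deliberately naive."""
    sentences = re.split(r"(?<=[.!?])\s+", passage)
    for i, sentence in enumerate(sentences):
        if keyword.lower() in sentence.lower():
            return " ".join(sentences[max(0, i - 1):i + 2])
    return ""  # keyword not found: pick a different term and rescan

passage = ("The lab opened in 1998. Ellen Chen led the first survey. "
           "Funding doubled the next year.")
print(keyword_window(passage, "Chen"))
# prints all three sentences, since the match sits in the middle
```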
Three afternoons a week, Maya Orozco climbed the marble steps of the Westfield Museum of Natural History to spend four hours doing what most sixteen-year-olds would consider tedious beyond endurance: cataloging insect specimens. She had volunteered at the museum since she was fourteen, two years after her father's death from pancreatic cancer. David Orozco had been a postal carrier by profession but a naturalist by passion, spending every vacation hiking California's coastal hills with a butterfly net, a hand lens, and a battered field notebook. The museum had been his favorite place in the city, and working there made Maya feel she was continuing a conversation he had started.
On a Tuesday in late October, Maya was sorting through uncataloged donations in the entomology department's back storage room when she found a small cedar box wedged behind a row of pinning trays. Her breath caught when she saw the initials carved into the lid: D.A.O. — David Antonio Orozco. Inside, resting on a bed of faded cotton, lay a single butterfly specimen. Its wings were a luminous pale blue, edged with white fringe, with a row of distinctive silver spots along the underwing. A yellowed label in her father's precise handwriting read: "Glaucopsyche xerces. Golden Gate Park, San Francisco. June 14, 1985."
Maya stared at the label, certain she was misreading it. Glaucopsyche xerces — the Xerces blue — had been declared extinct in 1941, the first butterfly species in North America known to have been driven to extinction by human activity. San Francisco's rapid urban expansion in the early twentieth century had destroyed the coastal sand dune habitat where the species depended on native lupine plants for survival. The last confirmed sighting had been recorded in the Sunset District in 1941. If her father had found one forty-four years later, either he had made a spectacular misidentification or he had witnessed something most entomologists believed impossible.
She carried the box to Dr. Anita Patel, the museum's chief entomologist, trying to keep her hands from trembling. Dr. Patel adjusted her microscope, positioned the specimen under the lens, and went very still. After nearly three minutes of silence, she looked up. "The wing venation is consistent. The silver spotting pattern on the ventral hindwing matches the type specimens at the Smithsonian." She removed her glasses and rubbed the bridge of her nose. "Maya, if this is authentic, it means the Xerces blue survived at least four decades longer than anyone believed. This specimen could rewrite the extinction timeline for an entire species."
Beneath the cotton bedding, Maya discovered three of her father's field notebooks. The entries, written in his small, careful script, described finding the butterfly in a remnant patch of sand dune habitat at Golden Gate Park's western edge, where a strip of native dune lupine still grew between a maintenance road and a chain-link fence. He had written: "Watched it for twenty minutes before netting. Wings unmistakable — the blue is unlike anything else on the coast. I know what this is. No one will believe me. Taking it home to keep it safe."
The notebooks showed that David had returned to the same lupine patch for three consecutive summers. In 1986, he recorded searching for six hours without a sighting. In 1987, he noted that the lupine had thinned considerably and the adjacent ground had been graded for construction. His final entry on the subject, dated August 1988, read: "The lupine is gone. Parking lot. If this was the last one, at least someone will know it tried." After that, the notebooks shifted to other species, other hikes, as though he had closed a door he could not bear to open again.
Dr. Patel arranged for DNA analysis at the university's genetics laboratory and contacted a colleague at the Smithsonian's National Museum of Natural History. While they waited for results, the museum prepared a temporary exhibit. On opening day, Maya stood before the display case reading the placard she had helped write: "Glaucopsyche xerces, collected June 14, 1985, Golden Gate Park, San Francisco. Donated from the personal collection of David Antonio Orozco (1958–2010)." She pressed her fingertip against the glass above the pale blue wings, thinking that her father had spent twenty-five years carrying this secret — not because he doubted what he had found, but because he could never find a scientist willing to take an amateur collector seriously enough to look.
Inference questions are the second most common question type on the ACT Reading section, making up roughly 20 to 28 percent of the test — about 7 to 10 questions out of 36. Unlike detail questions, which ask you to find information stated directly, inference questions ask you to figure out what the passage suggests, implies, or means without saying it outright. The answer is never written word-for-word in the text. Instead, you combine evidence from the passage with logical reasoning to reach a conclusion the author supports but does not state explicitly.
Here is the critical distinction: inference questions are not asking for your opinion. They are asking what the passage logically implies. Every correct answer can be traced back to specific evidence in the text. If you cannot point to a sentence that supports your answer, you have gone too far. Think of inferences as standing on a bridge — the passage is one side, the answer is the other, and your reasoning is the bridge between them. The bridge must be short and sturdy, not a wild leap across a canyon.
Understanding the line between these two question types is essential. Treating an inference question like a detail question — searching for exact wording — wastes your time because the answer is not in the text word-for-word. Treating a detail question like an inference question — reasoning beyond the text — leads you to overthink a straightforward answer.
Inference questions announce themselves with specific signal words: suggests, implies, inferred, most likely, probably, would. When you see these, shift from finding exact quotes to building logical connections.
Strong inference signals: "The passage suggests that..." "It can reasonably be inferred that..." "The author most likely believes..." "The passage implies that..." "Based on the passage, [person] would probably..."
Milder signals (could also be detail questions): "Based on the passage..." "According to the author..." "The passage indicates..."
The key words to watch for are suggests, implies, inferred, most likely, and probably. These tell you the answer requires reasoning beyond what is stated directly.
Summarization questions test whether you can compress a passage — or a section of a passage — into its essential meaning without distorting it. These questions appear 3 to 5 times per ACT Reading test in several forms: "Which best summarizes the main argument?" "The third paragraph primarily serves to..." "Which statement most accurately captures the author's conclusion?" What unites all of them is that you must separate what is essential from what is merely supporting detail.
Summarization is closely related to central idea questions, but there is a key difference. Central idea questions ask for the one overarching point of the entire passage. Summarization questions can ask about any level — the whole passage, a single paragraph, a group of paragraphs, or even the function of one sentence within the argument. You need to zoom in and out depending on what the question specifies.
Summarization questions use predictable language. Recognizing them quickly tells you to focus on the big picture rather than hunting for specific details.
Whole-passage stems: "Which best summarizes the main argument?" "The passage as a whole is primarily concerned with..." "The author's central claim is that..."
Paragraph-level stems: "The third paragraph primarily serves to..." "In lines 23-45, the author mainly..." "The function of the second paragraph is to..."
Conclusion stems: "Based on the final paragraph, the author concludes that..." "The passage's conclusion can best be summarized as..."
Comparative stems (dual passages): "Which best describes the relationship between the two passages?" "Compared to Passage A, Passage B primarily focuses on..."
Every passage contains three levels of information. A good summary includes Level 1 and sometimes Level 2, but almost never Level 3. Understanding this hierarchy is the single most important skill for summarization questions.
Level 1 — Core claims (always include in a summary): The author's thesis or main argument. The primary purpose. The central conclusion. If you leave out a Level 1 element, your summary is incomplete.
Level 2 — Supporting points (sometimes include): Major evidence or examples that directly support the thesis. Key contrasts or comparisons. Important cause-effect relationships. These add substance but are not the summary themselves.
Level 3 — Specific details (almost never include): Names, dates, numbers (unless central to the argument). Minor examples. Tangential information. Descriptive flourishes. These make the passage vivid but are not essential to its meaning.
The Hierarchy Test: When stuck between two answer choices, mentally remove the piece of information from the passage entirely. Does the argument still make sense? If yes, it is Level 2 or 3 — probably not the best summary. If removing it collapses the argument, it is Level 1 and belongs in the summary. Apply this test whenever you are choosing between a summary that mentions specific evidence and one that captures the overall argument. The overall argument almost always wins.
Roughly one out of every four Reading questions asks you to determine what a word or phrase means based on how it is used in a passage. That translates to approximately ten questions per test, each one solvable without a dictionary or flashcards. Every answer lives inside the passage itself — authors embed clues because clear communication demands it. Your job is learning to spot those clues quickly and reliably. These questions also have a narrow search zone: when the question says "As used in line 34," the answer is almost always within one or two sentences of the target word. That makes vocabulary questions some of the fastest points on the exam once you know the strategies.
Authors follow predictable patterns to help readers understand difficult words. Learning these five patterns is like learning to read a map — once you know the legend, navigation becomes automatic.
Type 1 — Definition and restatement. The author hands you the meaning directly, often right after the term. Signals: "which is," "that is," "in other words," "meaning," "defined as," "also known as," and punctuation like commas, dashes, parentheses, and colons that set off a definition. "The relationship between trees and mycorrhizal fungi is mutualistic, meaning both partners benefit from the exchange." — everything after "meaning" is the definition.
Type 2 — Example and illustration. Concrete examples surround the difficult term. Signals: "such as," "for example," "for instance," "including," "like." "The fungi deliver phosphorus, nitrogen, and micronutrients to the tree; in return, the tree supplies carbon." — These examples illustrate what a mutualistic exchange looks like. If you understand the examples, you understand the term.
Type 3 — Contrast and antonym. The meaning is revealed by what the word is not. Signals: "but," "however," "unlike," "whereas," "on the other hand," "although," "yet." "Not all interactions within mycorrhizal networks are cooperative. Some orchid species are mycoheterotrophic — they have abandoned photosynthesis entirely and parasitize the fungal network." — "Cooperative" reveals that mycoheterotrophic means the opposite: taking without contributing.
Type 4 — Cause and effect. The meaning emerges from a logical chain. Signals: "because," "since," "therefore," "as a result," "leads to," "due to." "Clear-cutting practices that remove all trees destroy the mycorrhizal network, leaving replanted seedlings without the underground infrastructure they need." — the causal relationship reveals that the network is essential support infrastructure.
Type 5 — General context and inference. No single signal word. The overall tone, topic, and surrounding details collectively suggest meaning. This is the hardest type and requires synthesizing information from multiple sentences. "The tree supplies the fungi with up to thirty percent of its photosynthetic carbon — the energy currency that fuels fungal growth." — No definition of "currency" is given, but the exchange context (deliver minerals, supply carbon) reveals it means a medium of exchange between organisms.
Understanding the ACT's traps is just as important as knowing the strategies.
Trap 1 — The common definition trap. The most familiar meaning of a word is offered, but the passage uses it differently. "Currency" usually means money — but in a biology passage, "energy currency" means a medium of exchange between organisms. Always verify against the passage.
Trap 2 — The opposite meaning trap. One choice means the exact opposite. If the passage contrasts "cooperative" with "parasitize," a wrong answer will define the parasitic relationship as cooperative. Students who skim too fast and miss the contrast word fall for this.
Trap 3 — The too-literal trap. When the passage uses figurative language ("the wood wide web"), one wrong answer interprets it literally ("a fiber-optic internet system installed underground").
Trap 4 — The too-figurative trap. The reverse — when a word is used technically ("hyphae"), a wrong answer offers a metaphorical meaning ("ideas connecting different concepts").
Trap 5 — The right word, wrong context trap. A definition that is valid for the word but does not fit this passage. "Anchor" can mean a heavy boat device, a news presenter, or a stabilizing force. Only one fits the passage's context.
Your defense against all five: the substitution test. Replace the target word with your chosen answer in the original sentence. Does it preserve the meaning? Sound natural? Fit the tone? If yes to all three, you have the right answer.
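The swap itself is mechanical enough to sketch. This toy helper (the function name is my own; the sentence reuses the "energy currency" example from earlier in the chapter) drops each answer choice into the original sentence so you can judge meaning, naturalness, and tone:

```python
def substitute(sentence: str, target: str, candidate: str) -> str:
    """Swap the target word for an answer choice; the reader then judges
    whether the result preserves meaning, sounds natural, and fits the tone."""
    return sentence.replace(target, candidate)

line = "The tree supplies the energy currency that fuels fungal growth."
for choice in ["money", "medium of exchange"]:
    print(substitute(line, "currency", choice))
# "energy money" reads oddly; "energy medium of exchange" preserves the
# passage's meaning -- so the common-definition trap fails the test.
```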
Sequence questions ask you to track the order of events, identify steps in a process, or follow cause-and-effect chains. While the ACT does not label them as a standalone category, sequence-tracking skills underpin a large portion of the Key Ideas and Details questions — roughly 55 to 60 percent of the section. You can expect 1 to 3 questions per test to ask directly about chronological order, but many more require you to understand when events happened relative to each other to answer correctly.
What makes sequence questions uniquely important is that they test whether you actually understood the passage's structure, not just individual facts. The ACT exploits a natural limitation of how your brain processes time in text: when you encounter a phrase like "three years later," your working memory refreshes, and details from the previous time segment become harder to access. Test makers place critical events on opposite sides of these timeline shifts, then ask you to connect them. The students who score highest are not the ones with the best memories — they are the ones who write things down as they read.
Every sequence question falls into one of five categories. Identifying the type tells you exactly what to look for.
Type 1 — Chronological order. When did events happen relative to each other in actual time? The challenge: the order events appear in the text may differ from the order they happened (flashbacks, flash-forwards). Strategy: Build a chronological timeline as you read. Watch verb tense — past perfect ("had gone") signals events before the main narrative.
Type 2 — Process steps. What are the sequential stages in a procedure? Common in science passages. Strategy: Look for first, then, next, finally. Number each step in the margin. Verify every step in your answer actually appears in the passage.
Type 3 — Cause-effect chains. What caused what? What resulted from a particular event? Strategy: Find the effect (usually stated in the question), trace backward for causation signals: because, since, as a result, consequently, led to. Match the level of causation the question asks for.
Type 4 — Narrative progression. How does a character, argument, or situation develop over the passage? Strategy: Track major turning points. Describe the arc in one sentence: "The narrator goes from [start] to [end] through [key events]."
Type 5 — Cross-timeline ordering. When multiple time periods are mentioned, what is the correct order across all of them? Strategy: Note each distinct time period as you read. Place events on the correct timeline. After reading, list major events in true chronological order regardless of where they appear in the text.
Signal words to watch for: first, then, next, finally (sequence); while, during, meanwhile (simultaneous); before, earlier, had been, years before (flashback); years later, would eventually (flash-forward); because, as a result, led to (cause-effect); suddenly, without warning (interruption).
PLACE is a five-step system for solving any sequence question in roughly 30 seconds.
P — Pin the question type (2 sec): Chronological? Process? Cause-effect? Progression? Cross-timeline?
L — Locate time markers (10 sec): Scan for signal words. Check your margin timeline if you built one.
A — Arrange events (5 sec): Put them in the correct order — time order, step order, or causal chain.
C — Check against options (8 sec): Usually 2-3 choices will obviously violate your sequence. Eliminate them.
E — Ensure accuracy (5 sec): Find the specific sentence that confirms your choice. Never submit without textual proof.
The margin timeline is the single most effective tool for sequence questions. As you read, note each major event in 2-3 words in the margin. Number events in chronological order (not narrative order). If an event in paragraph 1 is a flashback, number it by when it happened, not when the author mentions it. Draw arrows between cause-effect pairs. Mark timeline shifts with a bracket and label ("past," "present," "future"). This takes 15 seconds during your first read and saves 30-60 seconds of re-reading per sequence question.
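The key idea behind the margin timeline — narrative order and chronological order are two different sorts of the same notes — can be sketched as data. The events, paragraphs, and years below are invented for illustration:

```python
# A margin timeline as data. Each note records where the event appears
# (narrative order) and when it happened (chronological order).
events = [
    {"note": "narrator finds the letters", "paragraph": 1, "year": 2019},
    {"note": "grandmother emigrates",      "paragraph": 2, "year": 1952},
    {"note": "letters are written",        "paragraph": 3, "year": 1960},
]

# The order the author mentions events ...
narrative = [e["note"] for e in sorted(events, key=lambda e: e["paragraph"])]
# ... versus the order they actually happened -- what the question asks about.
chronological = [e["note"] for e in sorted(events, key=lambda e: e["year"])]

print(chronological)
# ['grandmother emigrates', 'letters are written', 'narrator finds the letters']
```

A flashback is exactly this situation: paragraph 1 mentions the most recent event first, so numbering by year rather than by paragraph gives you the true sequence.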
Cause-and-effect questions appear on every ACT Reading test, typically 3 to 5 per test across all four passages. They ask you to identify why something happened, what resulted from an event, or how one factor influenced another. These questions range from straightforward (the answer is stated directly) to subtle (you must infer the causal link from context). Because causal reasoning is the backbone of how authors build arguments and explain phenomena, mastering this skill improves your performance on inference, detail, and argument questions as well.
What makes these questions tricky is how the ACT designs wrong answers. You will see choices that reverse the cause and effect, that confuse correlation with causation, or that present a real detail from the passage that simply does not answer the causal question being asked. To beat these traps, you need a systematic approach — not just instinct.
These questions fall into two categories: those asking for the cause (what triggered it) and those asking for the effect (what resulted). Knowing which one you need prevents a surprisingly common error — finding the right relationship but selecting the wrong end of it.
Asking for the cause: "According to the passage, [event] occurred because..." "The passage suggests that [outcome] was primarily the result of..." "Based on the passage, which best explains why..."
Asking for the effect: "According to the passage, [action] resulted in..." "The author indicates that [X] led to..." "What was the effect of..."
Asking about prevention: "The passage suggests that [X] prevents..." "Without [X], the passage implies that..."
Signal words in passages: Cause indicators — because, since, due to, as a result of, stems from, given that. Effect indicators — therefore, consequently, thus, hence, as a result, led to, resulting in, triggered. Facilitating — contributes to, plays a role in, enables, promotes. Preventing — prevents, inhibits, blocks, without, in the absence of.
When there are no signal words, causation is often embedded in sentence structure: participial phrases ("Faced with declining enrollment, the board voted to consolidate"), relative clauses ("The policy, which displaced thousands, was enacted in 1965"), or sequential sentences with implied connection ("Temperatures rose three degrees. Coral bleaching spread across the reef").
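The signal words listed above can be thought of as a lookup table. This sketch (signal lists abridged from the ones above; naive substring matching) flags only explicit signals — as the previous paragraph notes, causation with no signal word must be read from sentence structure instead:

```python
CAUSE_SIGNALS = ["because", "since", "due to", "as a result of", "stems from"]
EFFECT_SIGNALS = ["therefore", "consequently", "thus", "led to", "resulting in"]

def find_signals(sentence: str) -> dict:
    """Flag explicit cause/effect signal words in one sentence."""
    s = sentence.lower()
    return {
        "cause": [w for w in CAUSE_SIGNALS if w in s],
        "effect": [w for w in EFFECT_SIGNALS if w in s],
    }

print(find_signals("Deforestation accelerated erosion, resulting in barren soil."))
# {'cause': [], 'effect': ['resulting in']}
```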
Type 1 — Direct causation. A directly causes B with no intermediary steps. "Sunlight triggers photosynthesis." Trap: The ACT offers answers that add intermediate steps not mentioned in the passage.
Type 2 — Chain reactions. A causes B, which causes C, which causes D. "Deforestation accelerated erosion, which stripped nutrients from topsoil, ultimately leaving the land unable to support agriculture." Trap: Wrong answers skip links, connecting the first and last steps as if directly related.
Type 3 — Multiple causes. Several factors combine to produce one effect. "The decline resulted from habitat loss, pesticide exposure, and climate-driven changes." Trap: Wrong answers list only one factor when the passage specifies multiple are needed.
Type 4 — Branching effects. One cause produces multiple different effects. "The printing press transformed education, disrupted the Church's monopoly, and accelerated scientific collaboration." Trap: The question asks about one specific effect, but a wrong answer describes a different (real) effect of the same cause.
Type 5 — Preventive causation. Something prevents or blocks an expected effect. "Antioxidants protect cells from oxidative damage." "Without pollinators, the plants failed to reproduce." Trap: Double negatives confuse students. "Prevents the inhibition of growth" means growth continues. Convert to positive to check your answer.