Breitbart Business Digest: Fact Checking Harvard’s Flawed Tariff Study


How Harvard Researchers Accidentally Doubled the Tariff Effect

When Federal Reserve Chair Jerome Powell cited rising import prices as evidence that tariffs are driving inflation, he was almost certainly relying on a new Harvard Business School study. The paper’s headline finding: imported goods prices rose “roughly twice as much” as domestic ones—a 1.8x ratio that suggests tariffs are having a dramatic, differential impact on traded goods.


There’s just one problem: that finding depends entirely on two questionable methodological choices that happen to maximize the measured effect. Make equally defensible alternative choices, and the gap shrinks by up to 40 percent.

The study, “Tracking the Short-Run Price Impact of U.S. Tariffs,” comes from a team led by Harvard Business School’s Alberto Cavallo, along with Paola Llamas of Northwestern and Franco Vazquez of Universidad de San Andrés. It’s been making the rounds in Washington, and for good reason—it’s based on novel real-time price data and tackles an urgent policy question. But the paper is flawed in ways that undermine confidence in its findings.

The Art of Baseline Selection

To measure tariff effects, the researchers need a “pre-tariff trend”—a baseline showing where prices were heading before tariffs. Then they measure the deviation from that trend after tariffs hit. Simple enough.
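
To make the mechanics concrete, here is a minimal sketch of a deviation-from-trend calculation in Python. It illustrates the general technique rather than the paper’s actual code; the function name, the linear-trend assumption, and the month indexing are our own simplifications.

```python
# Illustrative sketch of the deviation-from-trend approach (not the paper's code).
import numpy as np

def deviation_from_trend(prices, months, baseline_start, tariff_start):
    """Fit a linear trend over the pre-tariff window, project it forward,
    and measure how far post-tariff prices sit above that projection (in percent)."""
    months = np.asarray(months, dtype=float)   # e.g., months elapsed since Jan 2024
    prices = np.asarray(prices, dtype=float)   # price index for one category

    pre = (months >= baseline_start) & (months < tariff_start)
    post = months >= tariff_start

    # Ordinary least-squares line fitted through the baseline window only.
    slope, intercept = np.polyfit(months[pre], prices[pre], 1)
    projected = intercept + slope * months[post]

    # Percent deviation of actual post-tariff prices from the projected counterfactual.
    return 100 * (prices[post] - projected) / projected
```

Everything downstream (the gap, the ratio, the headline number) depends on which observations count as “pre” and which as “post,” which is exactly the sensitivity at issue.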


But when does the baseline period start? And when does the “tariff period” begin? The Harvard team made two key choices: start the baseline in October 2024, and treat March 4, 2025 as when tariffs began. Why October? They don’t say. They have data back to January 2024—they show it in their appendix—but they never use it for the main analysis, never explain these choices, and never test whether they matter.

So, we ran the tests they skipped, using their publicly available data. We recalculated the pre-tariff trends with different baseline periods and different treatment dates to see how sensitive their findings are to these choices.

The paper reports that imported goods rose 5.4 percent relative to pre-tariff trends, while domestic goods rose 3.0 percent: a differential of 2.4 percentage points and a ratio of 1.8 times. That’s the source of the paper’s “roughly twice as much” claim.
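
For readers who want to see the arithmetic, here it is restated in a few lines of Python, using only the figures reported in the paper:

```python
# The paper's headline figures, restated as arithmetic.
imported, domestic = 5.4, 3.0        # percent rise relative to pre-tariff trend
gap = imported - domestic            # 2.4 percentage points
ratio = imported / domestic          # 1.8x, the "roughly twice as much" claim
print(f"gap = {gap:.1f} pp, ratio = {ratio:.1f}x")
```

Re-estimate either deviation on a different window and both the gap and the ratio move with it.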


But that finding depends entirely on their specific choices. If we stretch the baseline so that it begins in January 2024 and still runs up to the March treatment date, the gap between imported and domestic price changes relative to trend falls to 2.1 percentage points, a ratio of 1.6 times. That’s a 14 percent decline in the gap from simply adjusting when the baseline starts.

Why does this matter? The researchers had data going back to January but chose to use only October forward. A longer baseline captures more of the underlying price trend: if prices were rising faster over the full year than over the October-to-March window alone, the projected trend is steeper, and the same post-tariff prices register as a smaller deviation.

But why treat March 4 as the start of tariffs? The researchers have a defensible argument: that is when major tariffs on China, Mexico, and Canada were announced. A more natural choice might be November 5, 2024, when Donald Trump won re-election after campaigning on raising tariffs and defeating an opponent who repeatedly claimed his tariffs would amount to a “national sales tax.”


If retailers are forward-looking—and the Harvard paper itself insists they are, emphasizing how prices responded to “tariff news” and “expected” costs—then expectations should have started affecting prices the moment Trump won. Under this specification, the ratio drops from 1.8 times to 1.4 times—not “roughly twice as much” but “about 40 percent more.”

Maybe that’s too early, though. What if we move the tariff date from March to Liberation Day, April 2? That’s when Trump announced a 10 percent baseline tariff on nearly all countries—the big, universal intervention that applied broadly across imports. Use this as your start date, and you’ve reclassified March’s price movements from “tariff effect” to “pre-tariff baseline.” Keeping the earlier January baseline, the absolute difference drops to 1.89 percentage points, a ratio of 1.59 times.
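
To make the pattern easier to see, here is the same gap-and-ratio arithmetic for each specification discussed above, using the figures computed from the study’s publicly available data; the labels are ours, and the election-date specification is omitted because only its ratio (1.4 times) is quoted in the text.

```python
# Gap (percentage points) and ratio under each specification discussed above.
specs = {
    "Oct 2024 baseline, Mar 4 treatment (the paper's choice)": (2.4, 1.8),
    "Jan 2024 baseline, Mar 4 treatment": (2.1, 1.6),
    "Jan 2024 baseline, Apr 2 treatment": (1.89, 1.59),
}
for label, (gap, ratio) in specs.items():
    print(f"{label}: gap = {gap} pp, ratio = {ratio}x")
```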

Notice the pattern? At every decision point, the researchers made the choice that produces the larger absolute effect and the larger ratio. They chose a late-starting baseline (October instead of January). They chose the treatment date, March 4, that yields a bigger measured effect than either the earlier alternative (November 5) or the later one (April 2). Each choice is individually defensible. Together, they stack up to maximize the measured impact.


The Garden of Forking Paths

This is what statistician Andrew Gelman calls the “garden of forking paths.” Even without fabricating data or engaging in what social scientists call “p-hacking,” a researcher who must decide which time window to use, which variables to include, and which controls to apply is effectively choosing among different statistical worlds. Each choice leads to a different result.

The problem isn’t that the researchers made unreasonable choices. It’s that they made every choice in the direction that maximizes their headline finding, without ever testing whether that finding holds up under alternative specifications. They show appendix materials with data going back to January 2024—but frame these as “robustness checks” proving their trends are stable, not as evidence that their main results might be sensitive to baseline choice.

Is choosing October over January wrong? Not necessarily. Is March 4 an unreasonable treatment date? You could argue for it. But making every choice in the same direction, without testing sensitivity, while trumpeting a “roughly twice” finding that appears in only one of several plausible specifications? That’s something else.



The Black Box

The paper also claims tariffs added 0.7 percentage points to overall inflation. Would this finding hold up to the same scrutiny? Unfortunately, we can’t know. The researchers base this estimate on category-level price indices and CPI weights they don’t share. They tell us they weight by “official CPI expenditure shares at the 3-digit COICOP level” and that their sample covers 29.7 percent of the CPI basket. But they don’t report the weights, the category-level contributions, or how these aggregate to 0.7 percentage points.

It’s essentially a black box. We know they covered about 30 percent of CPI and that Furnishings (with the highest measured price increase) makes up 54 percent of their products. But we don’t know if Furnishings represents 5 percent or 15 percent of actual CPI weights. Without that information, the 0.7 percentage point claim cannot be verified or replicated.
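
For readers unfamiliar with how such a contribution figure is normally assembled, here is a hedged sketch of the weighted-aggregation arithmetic. The categories, tariff-attributed price increases, and weights below are hypothetical placeholders; the paper does not disclose its actual COICOP weights or category-level contributions.

```python
# Hypothetical illustration of a CPI-weighted contribution calculation.
# None of these numbers come from the paper; they only show the mechanics.
category_price_increase = {      # percent increase attributed to tariffs (placeholder)
    "Furnishings": 4.0,
    "Apparel": 2.0,
    "Recreation goods": 1.5,
}
cpi_weight = {                   # share of the overall CPI basket (placeholder)
    "Furnishings": 0.05,         # at 0.15 instead, this category's contribution triples
    "Apparel": 0.025,
    "Recreation goods": 0.02,
}
contribution = sum(category_price_increase[c] * cpi_weight[c] for c in cpi_weight)
print(f"Implied contribution to headline inflation: {contribution:.2f} percentage points")
```

Without the actual weights and category-level contributions, there is no way to rerun this last step and check whether it really sums to 0.7 percentage points.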

That transparency problem is all the more troubling given how sensitive the paper’s other findings turn out to be to unexamined measurement choices.

This paper will likely be cited in policy briefings as if it were settled science. But its headline finding—that imports rose “roughly twice as much” as domestic goods—appears in exactly one of several equally plausible specifications.

Change the baseline period to use all available data, and “roughly twice” becomes “60 percent more.” Treat Trump’s election as when tariff expectations began affecting prices, and it becomes “40 percent more.” Use Liberation Day as the tariff start date, and the absolute gap shrinks by a fifth.

This isn’t about the researchers being dishonest. They’re not. This is a top-tier team doing serious empirical work. But it’s a textbook case of how a researcher’s degrees of freedom—the many small choices that go into any analysis—can produce quotable, policy-relevant findings that don’t hold up under scrutiny.


The next time you hear that tariffs made imported goods rise “roughly twice as much” as domestic goods, ask: which baseline? Which treatment date? Because depending on the answer, the effect might be half as large as advertised: imports rising about 40 percent more than domestic goods rather than 80 percent more.
