Bill Storage


All But the Clergy Believe

As the accused man approached the glowing iron, his heart pounded with faith. God, he trusted, would shield the innocent and leave the guilty to be maimed. The crowd, clutching rosaries and squinting through the smoke, murmured prayers. Most sought a miracle, some merely a verdict. They accepted the trial’s sanctity, exchanging bets on the defendant’s guilt.

Only the priest knew the fire wasn’t as hot as it looked. Sometimes it wasn’t hot at all. The iron was cooled or quietly switched. The timing of the ritual, the placement of fires and cauldrons, the priest’s step to the left rather than right. He held just enough control to steer the outcome toward justice, or what he took for it. The tricks had been passed down from the ancients. Hidden siphons, pivoting mirrors, vessels-within-vessels. Hero of Alexandria had described such things. Lucian of Samosata mocked them in his tales of string-pulled serpents and mechanical gods. Hippolytus of Rome listed them like a stage magician blowing the whistle on his rivals. Fake blood, hollow idols, the miracle of wine poured from nowhere.

By the thirteenth century, the ordeal was a dance: fire, chant, confession, absolution. The guilty, trembling at the priest’s solemn gaze, confessed before the iron’s touch. The faithful innocent, mindful of divine mercy, walked unscathed, unaware of the mirrors, the second cauldron, the cooled metal that had spared them.

There’s no record of public doubt about the mechanism, and church records support the above appraisal. Peter Leeson’s Ordeals drew data from a sample of 208 ordeals in early‑13th‑c. Várad. “Nearly two thirds of the accused were unscathed,” he wrote.  F.W. Maitland, writing in 1909, found only one hot-iron ordeal in two decades that did not result in acquittal, a nearly 100% exoneration rate among the documented defendants who faced ordeals.

The audience saw a miracle and went home satisfied about heaven and earth. The priest saw the same thing and left, perhaps a faint weariness in his step, knowing no miracle had occurred. “Do not put the Lord your God to the test,” he muttered, absolving himself. No commandment had been broken, only the illusion of one. He knew he had saved the believers – from the chaos of doubt, from turning on each other, from being turned upon. It was about souls, yes. But it was more about keeping the village whole.

Everyone believed except the man who made them believe.

In the 1960s and 70s, the Soviet Union still spoke the language of revolution. Newspapers featured daily quotes from Lenin. Speeches invoked the inevitable collapse of capitalism and the coming utopia of classless harmony. School kids memorized Marx.

But by then – and even long before then, we later learned – no one believed it anymore. Not the factory workers, toiling under fabricated quotas. Not the schoolteachers, tasked with revising Marxist texts each summer. And the Politburo? The Brezhnevs and Andropovs mouthed slogans by day, then retreated to Black Sea dachas, Nikon cameras in hand, watching Finnish broadcasts on smuggled American TVs, Tennessee bourbon sweating on the table.

They enforced the rituals nonetheless. Party membership was still required for advancement. Professors went on teaching dialectical materialism. Writers still contrived odes to tractor production and revolutionary youth. All of it repeated with the same flat cadence. No belief, just habit and a vague sense that without it, the whole thing might collapse. No one risked reaching into the fire.

It was a system where no one believed – not the clergy, not the choir, not the congregation. But all pretended. The KGB, the Politburo, the party intellectuals, and everyone else knew Marx had failed. The workers didn’t revolt, and capitalism refused to collapse.

A few tried telling the truth. Solzhenitsyn criticized Stalin’s strategy in a private letter. He got eight years in the Gulag and internal exile. Bukovsky denounced the Communist Youth League at nineteen. He was arrested, declared insane in absentia, and confined. After release, he helped organize the Glasnost Meeting and was sent back to the asylum. On release again, he wrote against the abuse of psychiatry. Everyone knew he was right. They also knew he posed no real threat. They jailed him again.

That was the system. Sinyavsky published fiction abroad. He was imprisoned for the views of his characters. The trial was theater. There was no official transcript. He hadn’t threatened the regime. But he reminded it that its god was dead.

The irony is hard to miss. A regime that prided itself on killing God went on to clone His clergy – badly. The sermons were lifeless, the rituals joyless, the congregation compulsory. Its clergy stopped pretending belief. These were high priests of disbelief, performing the motions of a faith they’d spent decades ridiculing, terrified of what might happen if the spell ever broke.

The medieval priest tricked the crowd. The Soviet official tricked himself. The priest shaped belief to spare the innocent. The commissar demanded belief to protect the system.

The priest believed in justice, if not in miracles. The state official believed in neither.

One lied to uphold the truth. The other told the truth only when the fiction collapsed under its own weight.

And now?


Six Days to Failure? A Case Study in Cave Bolt Fatigue

The terms fatigue failure and stress-corrosion cracking get tossed around in climbing and caving circles, often in ways that would make an engineer or metallurgist cringe. This is an investigation of a cave bolt failure that really was fatigue.

In October, we built a sort of gate to keep large stream debris from jamming the entrance to West Virginia’s Chestnut Ridge Cave. After placing 35 bolts – 3/8 by 3.5-inch, 304 stainless – we ran out. We then placed ten Confast zinc-plated mild steel wedge anchors of the same size. All nuts were torqued to between 20 and 30 foot-pounds.

The gate itself consisted of vertical chains from floor to ceiling, with several horizontal strands. Three layers of 4×4-inch goat panel were mounted upstream of the chains and secured using a mix of 304 stainless quick links and 316 stainless carabiners.

No one visited the entrance from November to July. When I returned in early July and peeled back layers of matted leaves, it was clear the gate had failed. One of the non-stainless bolts had fractured. Another had pulled out about half an inch and was bent nearly 20 degrees. Two other nuts had loosened and were missing. At least four quick links had opened enough to release chain or goat panel rods. Even pairs of carabiners with opposed gates had both opened, freeing whatever they’d been holding.

Cave gate hardware damage

I recovered the hanger-end of the broken bolt and was surprised to see a fracture surface nearly perpendicular to the bolt’s axis, clearly not a shear break. The plane was flat and relatively smooth, with no sign of necking or the cup-and-cone profile typical of ductile tensile failure. Under magnification, the surface showed slight bumpiness, indicating the smoothness didn’t come from rubbing against the embedded remnant of the bolt. These features rule out a classic shear failure from preload loss (e.g., a nut loosening from vibration) and also rule out simple tensile overload and ductile fracture.

fracture surface of mild steel bolt

That leaves two possibilities: brittle tensile fracture or fatigue failure under higher-than-expected cyclic tensile load. Brittle fracture seems highly unlikely. Two potential causes of brittle fracture exist. One is hydrogen embrittlement, but that’s rare in the low-strength carbon steel used in these bolts. The zinc-plating process likely involved acid cleaning and electroplating, which can introduce hydrogen. But this type of mild steel (probably Grade 2) is far too soft to trap it. Only if the bolt had been severely cold-worked or improperly baked post-plating would embrittlement be plausible.

The second possibility is a gross manufacturing defect or overhardening. That also seems improbable. Confast is a reputable supplier producing these bolts in massive quantities. The manufacturing process is simple, and I found no recall notices or defect reports. Hardness tests on the broken bolt (HRB ~90) confirm proper manufacturing and further suggest embrittlement wasn’t a factor.

While the available hydraulic energy at the cave entrance would seem to be low, and the 8-month time to failure is short, tensile fatigue originating at a corrosion pit emerges as the only remaining option. Its viability is supported by the partially pulled-out and bent bolt, which was placed about a foot away.

The broken bolt remained flush with the hanger, and the fracture lies roughly one hanger thickness from the nut. While the nut hadn’t backed off significantly, it had loosened enough to lose all preload. This left the bolt vulnerable to cyclic tensile loading from the attached chain vibrating in flowing water and from impacts by logs or boulders.

A fatigue crack could have initiated at a corrosion pit. Classic stress-corrosion cracking is rare in low-strength steel, but zinc-plated bolts under tension in corrosive environments sometimes behave unpredictably. The stream entering the cave has a summer pH of 4.6 to 5.0, but during winter, acidic conditions likely intensified, driven by leaf litter decay and the oxidation of pyrites in upstream Mauch Chunk shales after last year’s drought. The bolt’s initial preload would have imposed tensile stresses at 60–80% of yield strength. In that environment, stress-corrosion cracking is at least plausible.

More likely, though, preload was lost early due to vibration, and corrosion initiated a pit where the zinc plating had failed. The crack appears to have originated at the thread root (bottom right in above photo) and propagated across about two-thirds of the cross-section before sudden fracture occurred at the remaining ligament (top left).

The tensile stress area for a 3/8-16 bolt is 0.0775 square inches. If 65% was removed by fatigue, the remaining area would be 0.0271 sq. in. Assuming the final overload occurred at a tensile stress of around 60 ksi (SAE J429 Grade 2 bolts), the final rupture would have required a tensile load of about 1,600 pounds, a plausible value for a single jolt from a moving log or sudden boulder impact, especially given the force multiplier effect of the gate geometry, discussed below.
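To make that arithmetic explicit, here is a minimal Python sketch of the final-rupture estimate, using the values assumed above: the published 3/8-16 tensile stress area, a 65% fatigue-cracked fraction, and roughly 60 ksi ultimate strength for Grade 2 steel. The numbers are the post’s assumptions, not measurements.

```python
# Back-of-the-envelope check of the final-rupture load, using the values
# assumed in the text (3/8-16 tensile stress area, 65% of the cross-section
# consumed by the fatigue crack, ~60 ksi ultimate strength for Grade 2 steel).

tensile_stress_area = 0.0775   # in^2, 3/8-16 UNC nominal tensile stress area
fatigue_fraction = 0.65        # assumed fraction of cross-section lost to the crack
ultimate_strength = 60_000     # psi, approximate for SAE J429 Grade 2 mild steel

remaining_area = tensile_stress_area * (1 - fatigue_fraction)  # ~0.027 in^2
rupture_load = remaining_area * ultimate_strength              # ~1,600 lb

print(f"Remaining ligament area: {remaining_area:.4f} in^2")
print(f"Estimated final rupture load: {rupture_load:.0f} lb")
```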

In mild steel, fatigue cracks can propagate under stress ranges as low as 10 to 30 percent of ultimate tensile strength, given a high enough number of cycles. Based on published S–N curves for similar material, we can sketch a basic relationship between stress amplitude and cycles to failure in an idealized steel rod (see columns 1 and 2 below).

Real-world conditions, of course, require adjustments. Threaded regions act as stress risers. Standard references assign a stress concentration factor (K) of about 3 to 4 for threads, which effectively lowers the endurance limit by roughly 40 percent. That brings the endurance limit down to around 7.5 ksi.

Surface defects from zinc plating and additional concentration at corrosion pits likely reduce it by another 10 percent. Adjusted stress levels for each cycle range are shown in column 3.

Does this match what we saw at the cave gate? If we assume the chain and fencing vibrated at around 2 Hz during periods of strong flow – a reasonable estimate based on turbulence – we get about 172,000 cycles per day. Just six days of sustained high flow would yield over a million cycles, corresponding to a stress amplitude of roughly 7 ksi based on adjusted fatigue data.

Given the bolt’s original cross-sectional area of 0.0775 in², a 7 ksi stress would require a cyclic tensile load of about 540 pounds.

Cycles to Failure    Stress Amplitude (ksi)    Adjusted Stress (ksi)
~10³                 40                        30
~10⁴                 30                        20
~10⁵                 20                        12
~10⁶                 15                        7
Endurance limit      12                        5
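As a quick sanity check on the cycle-count and load arithmetic above, here is a short sketch assuming the 2 Hz vibration estimate and the adjusted 7 ksi stress amplitude from the table; both are this post’s estimates, not measured values.

```python
# Rough cycle-count and cyclic-load estimate under the assumptions in the text:
# ~2 Hz chain vibration during strong flow, 7 ksi adjusted stress amplitude,
# and the bolt's full 0.0775 in^2 tensile stress area.

vibration_hz = 2.0                              # assumed vibration frequency
cycles_per_day = vibration_hz * 86_400          # ~172,800 cycles/day
days_to_million = 1_000_000 / cycles_per_day    # ~5.8 days of sustained flow

stress_amplitude = 7_000   # psi, adjusted fatigue strength near 10^6 cycles
area = 0.0775              # in^2, full tensile stress area
cyclic_load = stress_amplitude * area           # ~540 lb

print(f"Cycles per day: {cycles_per_day:,.0f}")
print(f"Days to one million cycles: {days_to_million:.1f}")
print(f"Cyclic tensile load at 7 ksi: {cyclic_load:.0f} lb")
```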

Could our gate setup impose 540-pound axial loads on ceiling bolts? Easily – and the geometry shows how. In load-bearing systems like the so-called “death triangle,” force multiplication depends on the angle between anchor points. This isn’t magic. It’s just static equilibrium: if an object is at rest, the vector sum of forces acting on it in every direction must be zero (as derived from Newton’s first two laws of mechanics).

In our case, if the chain between two vertically aligned bolts sags at a 20-degree angle, the axial force on each bolt is multiplied by about a factor of eight. That means a horizontal force of just 70 pounds – say, from a bouncing log – can produce an axial load (vertical load on the bolt) of 540 pounds.

Under the conditions described above, six days of such cycling would be enough to trigger fatigue failure at one million cycles. If a 100-pound force were applied instead, the number of cycles to failure would drop to around 100,000.

The result was genuinely surprising. I knew the principles, but I hadn’t expected fatigue at such low stress levels and with so few cycles. Yet the evidence is clear. The nearby bolt that pulled partly out likely saw axial loads of over 1,100 pounds, enough to cause failure in just tens of thousands of cycles had the broken bolt been in its place. The final fracture area on the failed bolt suggests a sudden tensile load of around 1,600 pounds. These numbers confirm that the gate was experiencing higher axial forces on bolts than we’d anticipated.

The root cause was likely a corrosion pit, inevitable in this setting, and something stainless bolts (304 or 316) would have prevented, though stainless wouldn’t have stopped pullout. Loctite might help quick links resist opening under impact and vibration, though that’s unproven in this context. Chains, while easy to rig, amplified axial loads due to their geometry and flexibility. Stainless cable might vibrate less in water. Unfortunately, surface conditions at the entrance make a rigid or welded gate impractical. Stronger bolts – ½ or even ⅝ inch – torqued to 55 to 85 foot-pounds may be the only realistic improvement, though installation will be a challenge in that setting.

More broadly, this case illustrates how quickly nature punishes the use of non-stainless anchors underground.


From Aqueducts to Algorithms: The Cost of Consensus

The Scientific Revolution, we’re taught, began in the 17th century – a European eruption of testable theories, mathematical modeling, and empirical inquiry from Copernicus to Newton. Newton was the first scientist, or rather, the last magician, many historians say. That period undeniably transformed our understanding of nature.

Historians increasingly question whether a discrete “scientific revolution” ever happened. Floris Cohen called the label a straightjacket. It’s too simplistic to explain why modern science, defined as the pursuit of predictive, testable knowledge by way of theory and observation, emerged when and where it did. The search for “why then?” leads to Protestantism, capitalism, printing, discovered Greek texts, scholasticism, even weather. That’s mostly just post hoc theorizing.

Still, science clearly gained unprecedented momentum in early modern Europe. Why there? Why then? Good questions, but what I wonder is why not earlier – even much earlier.

Europe had intellectual fireworks throughout the medieval period. In 1320, Jean Buridan nearly articulated inertia. His anticipation of Newton is uncanny, three centuries earlier:

“When a mover sets a body in motion he implants into it a certain impetus, that is, a certain force enabling a body to move in the direction in which the mover starts it, be it upwards, downwards, sidewards, or in a circle. The implanted impetus increases in the same ratio as the velocity. It is because of this impetus that a stone moves on after the thrower has ceased moving it. But because of the resistance of the air (and also because of the gravity of the stone) … the impetus will weaken all the time. Therefore the motion of the stone will be gradually slower, and finally the impetus is so diminished or destroyed that the gravity of the stone prevails and moves the stone towards its natural place.”

Robert Grosseteste, in 1220, proposed the experiment-theory iteration loop. In his commentary on Aristotle’s Posterior Analytics, he describes what he calls “resolution and composition”, a method of reasoning that moves from particulars to universals, then from universals back to particulars to make predictions. Crucially, he emphasizes that both phases require experimental verification.

In 1360, Nicole Oresme gave explicit medieval support for a rotating Earth:

“One cannot by any experience whatsoever demonstrate that the heavens … are moved with a diurnal motion… One can not see that truly it is the sky that is moving, since all movement is relative.”

He went on to say that the air moves with the Earth, so no wind results. He challenged astrologers:

“The heavens do not act on the intellect or will… which are superior to corporeal things and not subject to them.”

Even if one granted some influence of the stars on matter, Oresme wrote, their effects would be drowned out by terrestrial causes.

These were dead ends, it seems. Some blame the Black Death, but the plague left surprisingly few marks in the intellectual record. Despite mass mortality, history shows politics, war, and religion marching on. What waned was interest in reviving ancient learning. The cultural machinery required to keep the momentum going stalled. Critical, collaborative, self-correcting inquiry didn’t catch on.

A similar “almost” occurred in the Islamic world between the 10th and 16th centuries. Ali al-Qushji and al-Birjandi developed sophisticated models of planetary motion and even toyed with Earth’s rotation. A layperson would struggle to distinguish some of al-Birjandi’s thought experiments from Galileo’s. But despite a wealth of brilliant scholars, there were few institutions equipped or allowed to convert knowledge into power. The idea that observation could disprove theory or override inherited wisdom was socially and theologically unacceptable. That brings us to a less obvious candidate – ancient Rome.

Rome is famous for infrastructure – aqueducts, cranes, roads, concrete, and central heating – but not scientific theory. The usual story is that Roman thought was too practical, too hierarchical, uninterested in pure understanding.

One text complicates that story: De Architectura, a ten-volume treatise by Marcus Vitruvius Pollio, written during the reign of Augustus. Often described as a manual for builders, De Architectura is far more than a how-to. It is a theoretical framework for knowledge, part engineering handbook, part philosophy of science.

Vitruvius was no scientist, but his ideas come astonishingly close to the scientific method. He describes devices like the Archimedean screw or the aeolipile, a primitive steam engine. He discusses acoustics in theater design and cosmological models passed down from the Greeks. He seems to describe vanishing-point perspective, something seen in some Roman art of his day. Most importantly, he insists on a synthesis of theory, mathematics, and practice as the foundation of engineering. He describes something remarkably similar to what we now call science:

“The engineer should be equipped with knowledge of many branches of study and varied kinds of learning… This knowledge is the child of practice and theory. Practice is the continuous and regular exercise of employment… according to the design of a drawing. Theory, on the other hand, is the ability to demonstrate and explain the productions of dexterity on the principles of proportion…”

“Engineers who have aimed at acquiring manual skill without scholarship have never been able to reach a position of authority… while those who relied only upon theories and scholarship were obviously hunting the shadow, not the substance. But those who have a thorough knowledge of both… have the sooner attained their object and carried authority with them.”

This is more than just a plea for well-rounded education. He gives a blueprint for a systematic, testable, collaborative knowledge-making enterprise. If Vitruvius and his peers glimpsed the scientific method, why didn’t Rome take the next step?

The intellectual capacity was clearly there. And Roman engineers, like their later European successors, had real technological success. The problem, it seems, was societal receptiveness.

Science, as Thomas Kuhn famously brought to our attention, is a social institution. It requires the belief that man-made knowledge can displace received wisdom. It depends on openness to revision, structured dissent, and collaborative verification. These were values that Roman elite culture distrusted.

When Vitruvius was writing, Rome had just emerged from a century of brutal civil war. The Senate and Augustus were engaged in consolidating power, not questioning assumptions. Innovation, especially social innovation, was feared. In a political culture that prized stability, hierarchy, and tradition, the idea that empirical discovery could drive change likely felt dangerous.

We see this in Cicero’s conservative rhetoric, in Seneca’s moralism, and in the correspondence between Pliny and Trajan, where even mild experimentation could be viewed as subversive. The Romans could build aqueducts, but they wouldn’t fund a lab.

Like the Islamic world centuries later, Rome had scholars but not systems. Knowledge existed, but the scaffolding to turn it into science – collective inquiry, reproducibility, peer review, invitations for skeptics to refute – never emerged.

Vitruvius’s De Architectura deserves more attention, not just as a technical manual but as a proto-scientific document. It suggests that the core ideas behind science were not exclusive to early modern Europe. They’ve flickered into existence before, in Alexandria, Baghdad, Paris, and Rome, only to be extinguished by lack of institutional fit.

That science finally took root in the 17th century had less to do with discovery than with a shift in what society was willing to do with discovery. The real revolution wasn’t in Newton’s laboratory, it was in the culture.

Rome’s Modern Echo?

It’s worth asking whether we’re becoming more Roman ourselves. Today, we have massively resourced research institutions, global scientific networks, and generations of accumulated knowledge. Yet, in some domains, science feels oddly stagnant or brittle. Dissenting views are not always engaged but dismissed, not for lack of evidence, but for failing to fit a prevailing narrative.

We face a serious, maybe existential question. Does increasing ideological conformity, especially in academia, foster or hamper science?

Obviously, some level of consensus is essential. Without shared standards, peer review collapses. Climate models, particle accelerators, and epidemiological studies rely on a staggering degree of cooperation and shared assumptions. Consensus can be a hard-won product of good science. And it can run perilously close to dogma. In the past twenty years we’ve seen consensus increasingly enforced by legal action, funding monopolies, and institutional ostracism.

String theory may (or may not) be physics’ great white whale. It’s mathematically exquisite but empirically elusive. For decades, critics like Lee Smolin and Peter Woit have argued that string theory has enjoyed a monopoly on prestige and funding while producing little testable output. Dissenters are often marginalized.

Climate science is solidly evidence-based, but responsible scientists point to constant revision of old evidence. Critics like Judith Curry or Roger Pielke Jr. have raised methodological or interpretive concerns, only to find themselves publicly attacked or professionally sidelined. Their critique is labeled denial. Scientific American called Curry a heretic. Lawsuits, like Michael Mann’s long battle with critics, further signal a shift from scientific to pre-scientific modes of settling disagreement.

Jonathan Haidt, Lee Jussim, and others have documented the sharp political skew of academia, particularly in the humanities and social sciences, but increasingly in hard sciences too. When certain political assumptions are so embedded, they become invisible. Dissent is called heresy in an academic monoculture. This constrains the range of questions scientists are willing to ask, a problem that affects both research and teaching. If the only people allowed to judge your work must first agree with your premises, then peer review becomes a mechanism of consensus enforcement, not knowledge validation.

When Paul Feyerabend argued that “the separation of science and state” might be as important as the separation of church and state, he was pushing back against conservative technocratic arrogance. Ironically, his call for epistemic anarchism now resonates more with critics on the right than the left. Feyerabend warned that uniformity in science, enforced by centralized control, stifles creativity and detaches science from democratic oversight.

Today, science and the state, including state-adjacent institutions like universities, are deeply entangled. Funding decisions, hiring, and even allowable questions are influenced by ideology. This alignment with prevailing norms creates a kind of soft theocracy of expert opinion.

The process by which scientific knowledge is validated must be protected from both politicization and monopolization, whether that comes from the state, the market, or a cultural elite.

Science is only self-correcting if its institutions are structured to allow correction. That means tolerating dissent, funding competing views, and resisting the urge to litigate rather than debate. If Vitruvius teaches us anything, it’s that knowing how science works is not enough. Rome had theory, math, and experimentation. What it lacked was a social system that could tolerate what those tools would eventually uncover. We do not yet lack that system, but we are testing the limits.


Rational Atrocity?

Bayesian Risk and the Internment of Japanese Americans

We can use Bayes (see previous post) to model the US government’s decision to incarcerate Japanese Americans, 80,000 of whom were US citizens, to reduce a perceived security risk. We can then use a quantified risk model to evaluate the internment decision.

We define two primary hypotheses regarding the loyalty of Japanese Americans:

  • H1: The population of Japanese Americans are generally loyal to the United States and collectively pose no significant security threat.

  • H2: The population of Japanese Americans poses a significant security threat (e.g., potential for espionage or sabotage).

The decision to incarcerate Japanese Americans reflects policymakers’ belief in H2 over H1, updated based on evidence like the Niihau Incident.

Prior Probabilities

Before the Niihau Incident, policymakers’ priors were influenced by several factors:

  • Historical Context: Anti-Asian sentiment on the West Coast, including the 1907 Gentlemen’s Agreement and 1924 Immigration Act, fostered distrust of Japanese Americans.

  • Pearl Harbor: The surprise attack on December 7, 1941, heightened fears of internal threats. No prior evidence of disloyalty existed.

  • Lack of Data: No acts of sabotage or espionage by Japanese Americans had been documented before Niihau. Espionage detection and surveillance were limited. Several espionage rings tied to Japanese nationals were active (Itaru Tachibana, Takeo Yoshikawa).

Given this, we can estimate subjective priors:

  • P(H1) = 0.99: Policymakers might have initially been 99 percent confident that Japanese Americans were loyal, as they were U.S. citizens or long-term residents with no prior evidence of disloyalty. The pre-Pearl Harbor Munson Report (“There is no Japanese ‘problem’ on the Coast”) supported this belief.

  • P(H2) = 0.01: A minority probability of threat due to racial prejudices, fear of “fifth column” activities, and Japan’s aggression.

These priors are subjective and reflect the mix of rational assessment and bias prevalent at the time. Bayesian reasoning (Subjective Bayes) requires such subjective starting points, which are sometimes critical to the outcome.

Evidence and Likelihoods

The key evidence influencing the internment decision was the Niihau Incident (E1) modeled in my previous post. We focus on this, as it was explicitly cited in justifying internment, though other factors (e.g., other Pearl Harbor details, intelligence reports) played a role.

E1: The Niihau Incident

Yoshio and Irene Harada, Nisei residents, aided Nishikaichi in attempting to destroy his plane, burn papers, and take hostages. This was interpreted by some (e.g., Lt. C.B. Baldwin in a Navy report) as evidence that Japanese Americans might side with Japan in a crisis.

Likelihoods:

P(E1|H1) = 0.01: If Japanese Americans are generally loyal, the likelihood of two individuals aiding an enemy pilot is low. The Haradas’ actions could be seen as an outlier, driven by personal or situational factors (e.g., coercion, cultural affinity). Note that this 1% probability is not the same as the 1% prior probability of H2, the prior belief that Japanese Americans weren’t loyal. Instead, P(E1|H1) is the likelihood assigned to whether E1, the Harada event, would have occurred given that Japanese Americans were loyal to the US.

P(E1|H2) = 0.6: High likelihood of observing the Harada evidence if the population of Japanese Americans posed a threat.

Posterior Calculation Using Bayes Theorem:

P(H1∣E1) = P(E1∣H1)⋅P(H1) / [P(E1∣H1)⋅P(H1)+P(E1∣H2)⋅P(H2)]

P(H1∣E1) = 0.01⋅0.99 / [(0.01⋅0.99) + (0.6⋅0.01)] = 0.623

P(H2|E1) = 1 – P(H1|E1) = 0.377

The Niihau Incident significantly increases the probability of H2 (its prior was 0.01), suggesting a high perceived threat. This aligns with the heightened alarm in military and government circles post-Niihau. 62.3% confidence in loyalty is unacceptable by any standard. We should experiment with different priors.
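For readers who want to reproduce the update, here is a minimal sketch using the subjective values assumed above (prior P(H1) = 0.99, P(E1|H1) = 0.01, P(E1|H2) = 0.6); the function is generic to any two exhaustive hypotheses.

```python
# Single Bayesian update for the internment model, using the subjective
# values assumed in the text. H1 = loyal population, H2 = significant threat,
# E1 = the Niihau Incident.

def posterior_h1(prior_h1, likelihood_e_given_h1, likelihood_e_given_h2):
    """Return P(H1|E) for two mutually exclusive, exhaustive hypotheses."""
    prior_h2 = 1.0 - prior_h1
    numerator = likelihood_e_given_h1 * prior_h1
    denominator = numerator + likelihood_e_given_h2 * prior_h2
    return numerator / denominator

p_h1 = posterior_h1(prior_h1=0.99,
                    likelihood_e_given_h1=0.01,
                    likelihood_e_given_h2=0.6)
print(f"P(H1|E1) = {p_h1:.3f}")      # ~0.623
print(f"P(H2|E1) = {1 - p_h1:.3f}")  # ~0.377
```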

Uncertainty Quantification

  • Aleatoric Uncertainty: The Niihau Incident involved only two people.

  • Epistemic Uncertainty: Prejudices and wartime fear would amplify P(H2).

Sensitivity to P(H1)

The posterior probability of H2 is highly sensitive to changes in P(H2) – and to P(H1) because they are linearly related: P(H2) = 1.0 – P(H1).

The posterior probability of H2 is somewhat sensitive to the likelihood assigned to P(E1|H1), but in a way that may be counterintuitive – because it is the likelihood assigned to whether E1, the Harada event, would have occurred given that Japanese Americans were loyal. We now know them to have been loyal, but that knowledge can’t be used in this analysis. Increasing this value lowers the posterior probability.

The posterior probability of H2 is relatively insensitive to changes in P(E1|H2), the likelihood of observing the evidence if Japanese Americans posed a threat (which, again, we now know they did not).

A plot of posterior probability of H2 against the prior probabilities assigned to H2 – that is, P(H2|E1) vs P(H2) – for a range of values of P(H2) using three different values of P(E1|H1) shows the sensitivities. The below plot (scales are log-log) also shows the effect of varying P(E1|H2); compare the thin blue line to the thick blue line.

Prior hypotheses with probabilities greater than 99% represent confidence levels that are rarely justified. Nevertheless, we plot posteriors for priors of H2 down to 0.00001 (1E-5), i.e., priors of H1 up to 99.999%. Using P(E1|H1) = 0.05 and P(E1|H2) = 0.6, a prior P(H2) of 1E-5 yields a posterior P(H2|E1) ≈ 0.0001 – or P(H1|E1) ≈ 99.99% – which might initially be judged as not supporting incarceration of US citizens in what were effectively concentration camps.
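The sensitivity sweep behind that plot can be sketched in a few lines; the priors and likelihoods below are the illustrative values used in this post, not estimates of historical fact.

```python
# Sweep the posterior P(H2|E1) over a range of priors P(H2), for two assumed
# values of P(E1|H1), to reproduce the sensitivities described above.

def posterior_h2(prior_h2, like_e_given_h1, like_e_given_h2):
    """Posterior P(H2|E) for two exhaustive hypotheses."""
    prior_h1 = 1.0 - prior_h2
    num = like_e_given_h2 * prior_h2
    return num / (num + like_e_given_h1 * prior_h1)

priors_h2 = [1e-5, 1e-4, 1e-3, 1e-2, 1e-1]
for like_h1 in (0.01, 0.05):
    for p in priors_h2:
        post = posterior_h2(p, like_e_given_h1=like_h1, like_e_given_h2=0.6)
        print(f"P(E1|H1)={like_h1:.2f}  prior P(H2)={p:.0e}  "
              f"posterior P(H2|E1)={post:.5f}")
```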

Risk

While there is no evidence of either explicit Bayesian reasoning or risk quantification by Franklin D. Roosevelt or military analysts, we can examine their decisions using reasonable ranges of numerical values that would have been used if numerical analysis had been employed.

We can model risk, as is common in military analysis, by defining it as the product of severity and probability – the probability here being the posterior probability that a threat existed within the population of 120,000 who were interned.

Having established a range of probabilities for threat events above, we can now estimate severity – the cost of a loss – based on lost lives and lost defense capability resulting from a threat brought to life.

The Pearl Harbor attack itself tells us what a potential hazard might look like. Eight U.S. Navy battleships were at Pearl Harbor: Arizona, Oklahoma, West Virginia, California, Nevada, Tennessee, Maryland, and Pennsylvania. Typical peacetime crew sizes ranged from 1,200 to 1,500 per battleship, though wartime complements could exceed that. About 8,000–10,000 sailors were assigned to the battleships. More sailors would have been on board had the attack not happened on a Sunday morning.

About 37,000 Navy and 14,000 Army personnel were stationed at Pearl Harbor. 2,403 were killed in the attack, most of them aboard battleships. Four battleships were sunk. The Arizona suffered a catastrophic magazine explosion from a direct bomb hit. Over 1,170 crew members were killed. 400 were killed on the Oklahoma when it sank. None of the three aircraft carriers of the Pacific Fleet were in Pearl Harbor on Dec. 7. The USS Enterprise was due to be in port on Dec. 6 but was delayed by weather. Its crew was about 2,300 men.

Had circumstances differed slightly, the attack would not have been a surprise, and casualties would have been fewer. But in other conceivable turns of events, they could have been far greater. A modern impact analysis of an attack on Pearl Harbor or other bases would consider an invasion’s “cost” to be 10,000 to 20,000 lives plus the loss of defense capability due to destroyed ships and aircraft. Better weather could have meant the destruction of one third of US aircraft carriers in the Pacific.

Using a linear risk model, an analyst – had such analysis been done back then – might have used the above calculated P(H2|E1) as the probability of loss and 10,000 lives as one cost of the espionage. Using a probability P(H1) in the range of 99.99% confidence in loyalty – i.e., P(H2) = 1E-4 – and severity = 10,000 lives yields quantified risk.

As a 1941 risk analyst, you would be considering a one-in-10,000 chance of losing 10,000 lives and loss of maybe 25% of US defense capacity. Another view of the risk would be that each of 120,000 Japanese Americans poses a one-in-10,000 chance of causing 10,000 deaths, an expected cost of roughly 120,000 lives (roughly, because the math isn’t quite as direct as it looks in this example).
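Here is the expected-value arithmetic as a sketch, using the illustrative probability and severity above; the per-person aggregation is deliberately naive, as the text notes.

```python
# Linear (risk-neutral) expected-loss model using the illustrative numbers
# from the text: P(threat) = 1e-4 and a severity of 10,000 lives.

p_threat = 1e-4          # assumed probability that the threat materializes
severity_lives = 10_000  # assumed lives lost if it does
population = 120_000     # interned population

expected_loss_single = p_threat * severity_lives              # 1 life
# Naive aggregation: treats each person as an independent 1-in-10,000 threat,
# ignoring overlap and correlation, as the text's "roughly" caveat warns.
naive_population_loss = population * p_threat * severity_lives  # ~120,000 lives

print(f"Expected loss from a single threat event: {expected_loss_single:.0f} life")
print(f"Naive per-person aggregation: {naive_population_loss:,.0f} lives")
```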

While I’ve modeled the decision using a linear expected value approach, it’s important to note that real-world policy, especially in safety-critical domains, is rarely risk-neutral. For instance, Federal Aviation Regulation AC 25.1309 states that “no single failure, regardless of probability, shall result in a catastrophic condition”, a clear example of a threshold risk model overriding probabilistic reasoning. In national defense or public safety, similar thinking applies. A leader might deem a one-in-10,000 chance of catastrophic loss (say, 10,000 deaths and 25% loss of Pacific Fleet capability) intolerable, even if the expected value (loss) were only one life. This is not strictly about math; it reflects public psychology and political reality. A risk-averse or ambiguity-intolerant government could rationally act under such assumptions.
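To make the contrast concrete, here is a sketch of an expected-value rule next to a severity-threshold rule of the kind AC 25.1309 exemplifies. Every number below is illustrative, not drawn from any regulation or historical analysis.

```python
# Contrast a risk-neutral expected-value rule with a severity-threshold rule.
# All thresholds here are illustrative placeholders.

p_catastrophe = 1e-4      # assumed probability of the catastrophic outcome
severity_lives = 10_000   # assumed severity if it occurs
expected_loss = p_catastrophe * severity_lives   # 1 life

# Expected-value rule: act only if expected loss exceeds a tolerable level.
tolerable_expected_loss = 10          # lives, purely illustrative
act_by_expected_value = expected_loss > tolerable_expected_loss

# Threshold rule: act if a catastrophic severity is possible at all above
# some minimal probability, regardless of expected value.
catastrophic_severity = 1_000         # lives, illustrative cutoff for "catastrophic"
minimal_probability = 1e-9            # "extremely improbable" cutoff, illustrative
act_by_threshold = (severity_lives >= catastrophic_severity
                    and p_catastrophe > minimal_probability)

print(f"Expected loss: {expected_loss:.0f} lives")
print(f"Act under expected-value rule: {act_by_expected_value}")  # False
print(f"Act under threshold rule: {act_by_threshold}")            # True
```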

Would you take that risk, or would you incarcerate? Would your answer change if you used P(H1) = 99.999 percent? Could a prior of that magnitude ever be justified?

From the perspective of quantified risk analysis (as laid out in documents like FAR AC 25.1309), President Roosevelt, acting in early 1942, would have been justified even if P(H1) had been 99.999%.

In a society so loudly committed to consequentialist reasoning, this choice ought to seem defensible. That it doesn’t may reveal more about our moral bookkeeping than about Roosevelt’s logic. Racism existed in California in 1941, but it is unlikely that it increased scrutiny by spy watchers. The fact that prejudice existed does not bear on the decision, because the prejudice did not motivate any action that would have borne – beyond the Munson Report – on the prior probabilities used. That the Japanese Americans were held far too long is irrelevant to Roosevelt’s decision.

Since the rationality of Roosevelt’s decision, as modeled by Bayesian reasoning and quantified risk, ultimately hinges on P(H1), and since H1’s primary input was the Munson Report, we might scrutinize the way the Munson Report informs H1.

The Munson Report is often summarized with its most quoted line: “There is no Japanese ‘problem’ on the Coast.” And that was indeed its primary conclusion. Munson found Japanese American citizens broadly loyal and recommended against mass incarceration. However, if we assume the report to be wholly credible – our only source of empirical grounding at the time – then certain passages remain relevant for establishing a prior. Munson warned of possible sabotage by Japanese nationals and acknowledged the existence of a few “fanatical” individuals willing to act violently on Japan’s behalf. He recommended federal control over Japanese-owned property and proposed using loyal Nisei to monitor potentially disloyal relatives. These were not the report’s focus, but they were part of it.

Critics often accuse John Franklin Carter of distorting Munson’s message when advising Roosevelt. Carter’s motives are beside the point. Whether his selective quotations were the product of prejudice or caution, the statements he cited were in the report.

Even if we accept Munson’s assessment in full – affirming the loyalty of Japanese American citizens and acknowledging only rare threats – the two qualifiers Carter cited are enough to undercut extreme confidence. In modern Bayesian practice, priors above 99.999% are virtually unheard of, even in high-certainty domains like particle physics and medical diagnostics. From a decision-theoretic standpoint, Munson’s own language renders such priors unjustifiable. With confidence lower than that, Roosevelt made the rational decision – clear in its logic, devastating in its consequences.


Bayes Theorem, Pearl Harbor, and the Niihau Incident

The Niihau Incident of December 7–13, 1941 provides a good case study for applying Bayesian reasoning to historical events, particularly in assessing decision-making under uncertainty. Bayesian reasoning involves updating probabilities based on new evidence, using Bayes’ theorem: P(H∣E) = P(E∣H) ⋅ P(H) / P(E), where:

  • P(E∣H) is the likelihood of observing E given H
  • P(H∣E) is the posterior probability of hypothesis H given evidence E
  • P(H) is the prior probability of H
  • P(E) is the marginal probability of E.

Terms like P(E∣H), the probability of evidence given a hypothesis, can be confusing. Alternative phrasings may help:

  • The probability of observing evidence E if hypothesis H were true
  • The likelihood of E given H
  • The conditional probability of E under H

These variations clarify that we’re assessing how likely the evidence is under a specific scenario, not the probability of the hypothesis itself, which is P(H∣E).

In the context of the Niihau Incident, we can use Bayesian reasoning to analyze the decisions made by the island’s residents, particularly the Native Hawaiians and the Harada family, in response to the crash-landing of Japanese pilot Shigenori Nishikaichi. Below, I’ll break down the analysis, focusing on key decisions and quantifying probabilities while acknowledging the limitations of historical data.

Context of the Niihau Incident

On December 7, 1941, after participating in the Pearl Harbor attack, Japanese pilot Shigenori Nishikaichi crash-landed his damaged A6M2 Zero aircraft on Niihau, a privately owned Hawaiian island with a population of 136, mostly Native Hawaiians. The Japanese Navy had mistakenly designated Niihau as an uninhabited island for emergency landings, expecting pilots to await rescue there. The residents, unaware of the Pearl Harbor attack, initially treated Nishikaichi as a guest but confiscated his weapons. Over the next few days, tensions escalated as Nishikaichi, with the help of Yoshio Harada and his wife Irene, attempted to destroy his plane and papers, took hostages, and engaged in violence. The incident culminated in the Kanaheles, a Hawaiian couple, overpowering and killing Nishikaichi; Yoshio Harada committed suicide.

From a Bayesian perspective, we can analyze the residents updating their beliefs as new evidence emerged.

We define two primary hypotheses regarding Nishikaichi’s intentions:

  • H1: Nishikaichi is a neutral (non-threatening) lost pilot needing assistance.

  • H2: Nishikaichi is an enemy combatant with hostile intentions.

The residents’ decisions reflect the updating of beliefs about (credence in) these hypotheses.

Prior Probabilities

At the outset, the residents had no knowledge of the Pearl Harbor attack. Thus, their prior probability for P(H1) (Nishikaichi is non-threatening) would likely be high, as a crash-landed pilot could reasonably be seen as a distressed individual. Conversely, P(H2) (Nishikaichi is a threat) would be low due to the lack of context about the war.

We can assign initial priors based on this context:

  • P(H1) = 0.9: The residents initially assume Nishikaichi is a non-threatening guest, given their cultural emphasis on hospitality and lack of information about the attack.

  • P(H2) = 0.1: The possibility of hostility exists but is less likely without evidence of war.

These priors are subjective, reflecting the residents’ initial state of knowledge, consistent with the Bayesian interpretation of probability as a degree of belief.

We identify key pieces of evidence that influenced the residents’ beliefs:

E1: Nishikaichi’s Crash-Landing and Initial Behavior

Nishikaichi crash-landed in a field near Hawila Kaleohano, who disarmed him and treated him as a guest. His initial behavior (not hostile) supports H1.

Likelihoods:

  • P(E1∣H1) = 0.95: A non-threatening pilot is highly likely to crash-land and appear cooperative.

  • P(E1∣H2) = 0.3: A hostile pilot could be expected to act more aggressively, though deception is possible.

Posterior Calculation:

P(H1∣E1) = [P(E1∣H1)⋅P(H1)] / [P(E1∣H1)⋅P(H1) + P(E1∣H2)⋅P(H2) ]

P(H1|E1) = 0.95⋅0.9 / [(0.95⋅0.9) + (0.3⋅0.1)] = 0.966

After the crash, the residents’ belief in H1 justifies hospitality.

E2: News of the Pearl Harbor Attack

That night, the residents learned of the Pearl Harbor attack via radio, revealing Japan’s aggression. This significantly increases the likelihood that Nishikaichi was a threat.

Likelihoods:

  • P(E2∣H1) = 0.1: A non-threatening pilot is unlikely to be associated with a surprise attack.

  • P(E2∣H2) = 0.9: A hostile pilot is highly likely to be linked to the attack.

Posterior Calculation (using updated priors from E1):

P(H1∣E2) = P(E2∣H1)⋅P(H1∣E1) / [P(E2∣H1)⋅P(H1∣E1) + P(E2∣H2)⋅P(H2∣E1)]

P(H1∣E2) = 0.1⋅0.966 / [(0.1⋅0.966) + (0.9⋅0.034)] = 0.76

P(H2∣E2) = 0.24

The news shifts the probability toward H2, prompting the residents to apprehend Nishikaichi and put him under guard with the Haradas.

E3: Nishikaichi’s Collusion with the Haradas

Nishikaichi convinced Yoshio and Irene Harada to help him escape, destroy his plane, and burn Kaleohano’s house to eliminate his papers.

Likelihoods:

  • P(E3∣H1) = 0.01: A non-threatening pilot is extremely unlikely to do this.

  • P(E3∣H2) = 0.95: A hostile pilot is likely to attempt to destroy evidence and escape.

Posterior Calculation (using updated priors from E2):

P(H1∣E3) = P(E3∣H1)⋅P(H1∣E2) / [P(E3∣H1)⋅P(H1∣E2) + P(E3∣H2)⋅P(H2∣E2)]

P(H1∣E3) = 0.01⋅0.759 / [(0.01⋅0.759) + (0.95⋅0.241)] = 0.032

P(H2∣E3) = 0.968

This evidence dramatically increases the probability of H2, aligning with the residents’ decision to confront Nishikaichi.

E4: Nishikaichi Takes Hostages and Engages in Violence

Nishikaichi and Harada took Ben and Ella Kanahele hostage, and Nishikaichi fired a machine gun. Hostile intent is confirmed.

Likelihoods:

  • P(E4∣H1) = 0.001: A non-threatening pilot is virtually certain not to take hostages or use weapons.

  • P(E4∣H2) = 0.99: A hostile pilot is extremely likely to resort to violence.

Posterior Calculation (using updated priors from E3):

P(H1∣E4) = P(E4∣H1)⋅P(H1∣E3) / [P(E4∣H1)⋅P(H1∣E3) + P(E4∣H2)⋅P(H2∣E3)]

P(H1∣E4) = 0.001⋅0.032 / [(0.001⋅0.032) + (0.99⋅0.968)] = 0.00003

P(H2∣E4) = 1.0 – P(H1∣E4) = 0.99997

At this point, the residents’ belief in H2 is near certainty, justifying the Kanaheles’ decisive action to overpower Nishikaichi.
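The whole updating chain can be reproduced with a short loop; the priors and likelihoods are the illustrative ones assigned above, not historical measurements.

```python
# Sequential Bayesian updating through the four pieces of evidence, using the
# illustrative priors and likelihoods assumed in this post.
# H1 = non-threatening pilot, H2 = hostile pilot.

def update(prior_h1, like_h1, like_h2):
    """One Bayesian update for two exhaustive hypotheses; returns new P(H1)."""
    num = like_h1 * prior_h1
    return num / (num + like_h2 * (1.0 - prior_h1))

p_h1 = 0.9   # initial prior P(H1)
evidence = [
    ("E1: crash-landing, cooperative behavior", 0.95, 0.30),
    ("E2: news of the Pearl Harbor attack",     0.10, 0.90),
    ("E3: collusion with the Haradas",          0.01, 0.95),
    ("E4: hostages taken, shots fired",         0.001, 0.99),
]
for label, like_h1, like_h2 in evidence:
    p_h1 = update(p_h1, like_h1, like_h2)
    print(f"{label}: P(H1) = {p_h1:.5f}, P(H2) = {1 - p_h1:.5f}")
# Prints roughly 0.966, 0.76, 0.032, and 0.00003 for P(H1), matching the text.
```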

Uncertainty Quantification

Bayesian reasoning also involves quantifying uncertainty, particularly aleatoric (inherent randomness) and epistemic (model uncertainty) components.

Aleatoric Uncertainty: The randomness in Nishikaichi’s actions (e.g., whether he would escalate to violence) was initially high due to the residents’ lack of context. As evidence accumulated, this uncertainty decreased, as seen in the near-certain posterior for H2 after E4.

Epistemic Uncertainty: The residents’ model of Nishikaichi’s intentions was initially flawed due to their isolation and lack of knowledge about the war. This uncertainty reduced as they incorporated news of Pearl Harbor and observed Nishikaichi’s actions, refining their model of his behavior.

Analysis of Decision-Making

The residents’ actions align with Bayesian updating:

Initial Hospitality (E1): High prior for H1 led to treating Nishikaichi as a guest, with precautions (disarming him) reflecting slight uncertainty.

Apprehension (E2): News of Pearl Harbor shifted probabilities toward H2, prompting guards and confinement with the Haradas.

Confrontations (E3, E4): Nishikaichi’s hostile actions (collusion, hostage-taking) pushed P(H2) to near 1, leading to the Kanaheles’ lethal response.

The Haradas’ decision to assist Nishikaichi complicates the analysis. Their priors may have been influenced by cultural or personal ties to Japan, increasing their P(H1) or introducing a separate hypothesis of loyalty to Japan. Lack of detailed psychological data makes quantifying their reasoning speculative.

Limitations and Assumptions

Subjective Priors: The assigned priors (e.g., P(H1) = 0.9) are estimates based on historical context, not precise measurements. Bayesian reasoning allows subjective priors, but different assumptions could alter results.

Likelihood Estimates: Likelihoods (e.g., P(E1∣H1) = 0.95) are informed guesses, as historical records lack data on residents’ perceptions.

Simplified Hypotheses: I used two hypotheses for simplicity. In reality, residents may have considered nuanced possibilities, e.g., Nishikaichi being coerced or acting out of desperation.

Historical Bias: Accounts may exaggerate or omit details, affecting our understanding of the evidence.

Conclusion

Bayesian reasoning (Subjective Bayes) provides a structured framework to understand how Niihau’s residents updated their beliefs about Nishikaichi’s intentions. Initially, a high prior for him being non-threatening (P(H1) = 0.9) was reasonable given their isolation. As evidence accumulated (news of Pearl Harbor, Nishikaichi’s collusion with the Haradas, and his violent actions), the posterior probability of hostility, P(H2), approached certainty, justifying their escalating responses. Quantifying this process highlights the rationality of their decisions under uncertainty, despite limited information. This analysis demonstrates Bayesian inference used to model historical decision-making, assuming the deciders were rational agents.

Next

The Niihau Incident influenced U.S. policy decisions regarding the internment of Japanese Americans during World War II. It heightened fears of disloyalty among Japanese Americans. Applying Bayesian reasoning to the decision to intern Japanese Americans after the Niihau Incident might provide insight into how policymakers updated their beliefs about the potential threat posed by this population based on limited evidence and priors. In a future post, I’ll use Bayes’ theorem to model this decision-making process and the quantification of risk.


If the Good Lord’s Willing and the Creek Don’t Rise

Feller said don’t try writin dialect less you have a good ear. Now do I think my ear’s good? Well, I do and I don’t. Problem is, younguns ain’t mindin this store. I’m afeared we don’t get it down on paper we gonna lose it. So I went up the holler to ask Clare his mind on it.

We set a spell. He et his biscuits cold, sittin on the porch, not sayin’ much, piddlin with a pocketknife like he had a mind to whittle but couldn’t commit. Clare looked like sumpin the cat drug in. He was wore slap out from clearing the dreen so he don’t hafta tote firewood from up where the gator can’t git. “Reckon it’ll come up a cloud,” he allowed, squinting yonder at the ridge. “Might could,” I said. He nodded slow. “Don’t fret none,” he said. “That haint don’t stir in the holler less it’s fixin ta storm proper.” Then he leaned back, tuckered, fagged-out, and let the breeze do the talkin.

Now old Clare, he called it alright. Well, I’ll swan! The wind took up directly, then down it come. We watched the brown water push a wall of dead leaves and branches down yon valley. Dry Branch, they call it, and that’s a fact. Ain’t dry now. Feature it. One minute dry as dust, then come a gully-washer, bless yer heart. That was right smart of time ago.

If you got tolerable horse sense for Appalachian colloquialism, you’ll have understood most of that. A haint, by the way, is a spirit, a ghost, a spell, or a hex. Two terms used above make me wonder whether all the technology we direct toward capturing our own shreds of actual American culture still fails to record these treasured regionalisms.

A “dreen,” according to Merriam-Webster, is “a dialectal variation of ‘drain,’ especially in Southern and South Midland American English.” Nah, not in West Virginia. That definition is a perfect example of how dictionaries flatten regional terms into their nearest Standard English cousin and, in doing so, miss the real story. It’s too broad and bland to capture what was, in practice, a topographic and occupational term used by loggers.

A dreen, down home, is a narrow, shallow but steep-sided and steeply sloping valley used to slide logs down. It’s recognized in local place-names and oral descriptions. Clear out the gully – the drain – for logs and you got yourself a dreen. The ravine’s water flow, combined with exposed shards of shale, make it slick. Drop logs off up top, catch them in a basin at the bottom. An economical means for moving logs down rough terrain without a second team of horses, specialized whiffletrees, and a slip-tongue skidder. How is it that there is zero record of what a dreen is on the web?

To “feature” something means to picture it in your mind. Like, “imagine,” but more concrete. “Picture this” + “feature picture” → “feature this.” Maybe? I found a handful of online forums where someone wrote, “I can’t feature it,” but the dictionaries are silent. What do I not pay you people for?

It’s not just words and phrases that our compulsive documentation and data ingestion have failed to capture about Appalachia. Its expressive traditions rarely survive the smooshing that comes with cinematic stereotypes. Poverty, moonshine, fiddles, a nerdy preacher and, more lately, mobile meth labs, are easy signals for “rural and backward.” Meanwhile, the texture of Appalachian life is left out.

Ever hear of shape-note music? How about lined-out singing? The style is raw and slow, not that polished gospel stuff you hear down in Alabama. The leader “lines out” a hymn, and the congregation follows in a full, droning response. It sounds like a mixture of Gaelic and plain chant – and probably is.

Hill witch. Granny women, often midwives, were herbalists and folk doctors. Their knowledge was empirical, intergenerational, and somehow female-owned. They were healers with an oral pharmacopoeia rooted in a mix of Native American and Scottish traditions. Hints of it, beyond the ginseng, still pop up here and there.

Jack tales. They pick up where Jack Frost, Jack and Jill, and Little Jack Horner left off. To my knowledge, those origins are completely unrelated to each other. Jack tales use these starting points to spin yarns about seemingly low-ambition or foolish folk who outfox them what think they’re smart. (Pronounce “smart” with a short “o” and a really long “r” that stretches itself into two distinct syllables.)

Now, I know that in most ways, none of that amounts to a hill of beans, but beyond the dialect, I fear we’re going to lose some novel expressions. Down home,

“You can’t get there from here” means it is metaphorically impossible or will require a lot of explaining.

“Puny” doesn’t mean you’re small; it means you look sick.

“That dog won’t hunt” means an idea, particularly a rebuttal or excuse, that isn’t plausible.

“Tighter than Dick’s hatband” means that someone is stingy or has proposed an unfair trade.

“Come day, go day, God send Sunday” means living day to day, e.g., hoping the drought lets up.

“He’s got the big eye” means he can’t sleep.

“He’s ate up with it” means he’s obsessed – could be jealousy, could be pride.

“Well, I do and I don’t” says more than indecision. You deliver it as a percussive anapest (da-da-DUM!, da-da-DUM!), granting it a kind of rhythmic, folksy authority. It’s a measured fence-sitting phrase that buys time while saying something real. It’s a compact way to acknowledge nuance, to say, “I agree… to a point,” followed with “It’s complicated…” Use it to acknowledge an issue as more personal and moral, less analytical. You can avoid full commitment while showing thoughtfulness. It weighs individual judgment. See also:

“There’s no pancake so thin it ain’t got two sides.”

The stoics got nothin on this baby. I don’t want you think I’m uppity – gettin above my raisin, I mean – but this one’s powerful subtle. There’s a conflict between principle and sympathy. It flattens disagreement by framing it as something natural. Its double negative ain’t no accident. Deploy it if you’re slightly cornered but not ready to concede. You acknowledge fairness, appear to hover above the matter at hand, seemingly without taking sides. Both parties know you have taken a side, of course. And that’s ok. That’s how we do it down here. This is de-escalation of conflict through folk epistemology: nothing is so simple that it doesn’t deserve a second look. Even a blind hog finds an acorn now and then. Just ‘cause the cat’s a-sittin still don’t mean it ain’t plannin.

Appalachia is America’s most misunderstood archive, its stories tucked away in hollers like songs no one’s sung for decades.


Grains of Truth: Science and Dietary Salt

Science doesn’t proceed in straight lines. It meanders, collides, and battles over its big ideas. Thomas Kuhn’s view of science as cycles of settled consensus punctuated by disruptive challenges is a great way to understand this messiness, though later approaches, like Imre Lakatos’s structured research programs, Paul Feyerabend’s radical skepticism, and Bruno Latour’s focus on science’s social networks, have added their worthwhile spins. This piece takes a light look, using Kuhn’s ideas with nudges from Feyerabend, Lakatos, and Latour, at the ongoing debate over dietary salt, a controversy that’s nuanced and long-lived. I’m not looking for “the truth” about salt, just watching science in real time.

Dietary Salt as a Kuhnian Case Study

The debate over salt’s role in blood pressure shows how science progresses, especially when viewed through the lens of Kuhn’s philosophy. It highlights the dynamics of shifting paradigms, consensus overreach, contrarian challenges, and the nonlinear, iterative path toward knowledge. This case reveals much about how science grapples with uncertainty, methodological complexity, and the interplay between evidence, belief, and rhetoric, even when relatively free from concerns about political and institutional influence.

In The Structure of Scientific Revolutions, Kuhn proposed that science advances not steadily but through cycles of “normal science,” where a dominant paradigm shapes inquiry, and periods of crisis that can result in paradigm shifts. The salt–blood pressure debate, though not as dramatic in consequence as Einstein displacing Newton or as ideologically loaded as climate science, exemplifies these principles.

Normal Science and Consensus

Since the 1970s, medical authorities like the World Health Organization and the American Heart Association have endorsed the view that high sodium intake contributes to hypertension and thus increases cardiovascular disease (CVD) risk. This consensus stems from clinical trials such as the 2001 DASH-Sodium study, which demonstrated that reducing salt intake significantly (from 8 grams per day to 4) lowered blood pressure, especially among hypertensive individuals. This, in Kuhn’s view, is the dominant paradigm.

This framework – “less salt means better health” – has guided public health policies, including government dietary guidelines and initiatives like the UK’s salt reduction campaign. In Kuhnian terms, this is “normal science” at work. Researchers operate within an accepted model, refining it with meta-analyses and randomized controlled trials (RCTs), seeking data to reinforce it, and treating contradictory findings as anomalies or errors. Public health campaigns, like the AHA’s recommendation of less than 2.3 g/day of sodium, reflect this consensus. Governments’ involvement embodies institutional support.

Anomalies and Contrarian Challenges

However, anomalies have emerged. For instance, a 2016 study by Mente et al. in The Lancet reported a U-shaped curve; both very low (less than 3 g/day) and very high (more than 5 g/day) sodium intakes appeared to be associated with increased CVD risk. This challenged the linear logic (“less salt, better health”) of the prevailing model. Although the differences in intake were not vast, the implications questioned whether current sodium guidelines were overly restrictive for people with normal blood pressure.

The video Salt & Blood Pressure: How Shady Science Sold America a Lie mirrors Galileo’s rhetorical flair, using provocative language such as “shady science” to challenge the establishment. Like Galileo’s defense of heliocentrism, contrarians in the salt debate (researchers like Mente) amplify anomalies to question dogma, sometimes exaggerating flaws in early studies (e.g., Lewis Dahl’s rat experiments) or alleging conspiracies (e.g., pharmaceutical influence). In Feyerabend’s view more than Kuhn’s, such exaggeration and rhetoric might even be desirable. They are useful: they supply the challenges a paradigm must overcome to remain dominant.

These challenges haven’t led to a paradigm shift yet, as the consensus remains robust, supported by RCTs and global health data. But they highlight the Kuhnian tension between entrenched views and emerging evidence, pushing science to refine its understanding.

Framing the issue as a contrarian challenge might go something like this:

Evidence-based medicine sets treatment guidelines, but evidence-based medicine has not translated into evidence-based policy. Governments advise lowering salt intake, but that advice is supported by little robust evidence for the general population. Randomized controlled trials have not strongly supported the benefit of salt reduction for average people. Indeed, we see evidence that low salt might pose as great a risk.

Sodium Intake vs. Cardiovascular Disease Risk. Based on Mente (2016) and O’Donnell (2014).

Methodological Challenges

The question “Is salt bad for you?” is ill-posed. Evidence and reasoning say this question oversimplifies a complex issue: sodium’s effects vary by individual (e.g., salt sensitivity, genetics), diet (e.g., processed vs. whole foods), and context (e.g., baseline blood pressure, activity level). Science doesn’t deliver binary truths. Modern science gives probabilistic models, refined through iterative testing.

While RCTs have shown that reducing sodium intake can lower blood pressure, especially in sensitive groups, observational studies show that extremely low sodium intake is associated with poor health. That association may reflect reverse causality rather than harm: the data may simply reveal that sicker people eat less, not that low salt makes them sick. This complexity reflects the limitations of study design and the challenge of isolating causal relationships in real-world populations. The above graph is a fairly typical dose-response curve for any nutrient.

The salt debate also underscores the inherent difficulty of studying diet and health. Total caloric intake, physical activity, genetic variation, and compliance all confound the relationship between sodium and health outcomes. Few studies consider salt intake relative to body size or total energy intake. If sodium recommendations were expressed as sodium density (mg/kcal), they might accommodate individual energy needs and eating patterns more effectively; the 2.3 g/day ceiling, for example, works out to about 1.15 mg/kcal on a 2,000 kcal diet but only about 0.77 mg/kcal at 3,000 kcal.

Science as an Iterative Process

Despite flaws in early studies and the polemics of dissenters, the scientific community continues to refine its understanding. For example, Japan’s national sodium reduction efforts since the 1970s have coincided with significant declines in stroke mortality, suggesting real-world benefits to moderation, even if the exact causal mechanisms remain complex.

Through a Kuhnian lens, we see a dominant paradigm shaped by institutional consensus and refined by accumulating evidence. But we also see the system’s limits: anomalies, confounding variables, and methodological disputes that resist easy resolution.

Contrarians, though sometimes rhetorically provocative or methodologically uneven, play a crucial role. Like the “puzzle-solvers” and “revolutionaries” in Kuhn’s model, they pressure the scientific establishment to reexamine assumptions and tighten methods. This isn’t a flaw in science; it’s the process at work.

Salt isn’t simply “good” or “bad.” The better scientific question is more conditional: How does salt affect different individuals, in which contexts, and through what mechanisms? Answering this requires humility, robust methodology, and the acceptance that progress usually comes in increments. Science moves forward not despite uncertainty, disputation and contradiction but because of them.

, , , ,

5 Comments

Content Strategy Beyond the Core Use Case

Introduction

In 2022, we wrote, as consultants to a startup, a proposal for an article exploring how graph computing could be applied to Bulk Metallic Glass (BMG), a class of advanced materials with an unusual atomic structure and high combinatorial complexity. The post tied a scientific domain to the strengths of the client’s graph computing platform – in this case, its ability to model deeply structured, non-obvious relationships that defy conventional flat-data systems.

This analysis is an invitation to reflect on the frameworks we use to shape our messaging – especially when we’re speaking to several audiences at once.

Everyone should be able to browse a post, skim a paragraph or two, and come away thinking, “This company is doing cool things.” A subset of readers should feel more than that.

Our client (“Company”) rejected the post based on an outline we submitted. It was too far afield. But in a saturated blogosphere where “graph for fraud detection” has become white noise, unfamiliarity might be exactly what cuts through. Let’s explore.

Company Background

  • Stage and Funding: With ~$30M in Series A funding, Company was preparing for Series B with two pilot customers, both Fortune 500, necessitating a focus on immediate traction. Company was arriving late – but with a platform more extensible than the incumbents’.
  • Market Landscape: The 2022 graph database market – graph databases specifically, as distinguished from the larger graph-computing landscape – was dominated by firms like Neo4j, TigerGraph, Stardog, and ArangoDB. Those firms had strong branding in fraud detection, cybersecurity, and recommendation systems. Company’s extensible platform needed to stand out.
  • Content Strategy: With 2–3 blog posts weekly, Company aimed to attract investors, journalists, analysts, customers, and jobseekers while expanding SEO. Limited pilot data constrained case studies, risking repetitive content. Company had already accepted our recommendation to include employee profiles highlighting artistic hobbies, both to attract new talent and to show that Company valued creative thinking.
  • BMG Blog Post: The proposed post explored graph computing’s application to BMG’s amorphous structure, aiming to diversify content and position Company as a visionary – not in materials science, but in designing a product that could solve a large class of problems faced by emerging tech.

The Decision

Company rejected the BMG post, prioritizing content aligned with their pilot customers and core markets. This conservative approach avoided alienating key audiences but missed opportunities to expand its audience and to demonstrate the product’s adaptability and extensibility.

Psychology of Content Marketing: Balancing Audiences

Content marketing must navigate a diverse audience with varying needs, from skimming executives to deep-reading engineers. Content must achieve universal acceptability – ensuring every reader, regardless of expertise, leaves with a positive impression (Company is doing interesting things) – while sparking curiosity or excitement in key subsets (e.g., customers, investors). Company’s audiences included:

  • Technical Enthusiasts: Seek novel applications (e.g., BMG) to spark curiosity.
  • Jobseekers: Attracted to innovative projects, enhancing talent pipelines.
  • Analysts: Value enterprise fit, skimming technical details for authority.
  • Investors: Prioritize traction and market size, wary of niche distractions.
  • Customers: Demand ROI-driven solutions, less relevant to BMG.
  • Journalists: Prefer relatable stories, potentially finding BMG too niche.

Strategic Analysis

Background on the graph world in 2022 helps frame Company’s mindset. In 2017–2020, several cloud database firms had alienated developers with marketing content claiming their products would eliminate the need for coders. This strategic blunder stemmed from a failure to manage messaging to a diverse audience, and it was potentially costly, since coders are a critical group at the base of the sales funnel. Company’s rejection avoided this serious misstep but may have underplayed the value of engaging technology enthusiasts and futurists.

The graph database space was crowded. Company needed to differentiate not only its product but its category. Graph computing, graph AI, and graph analytics form a larger domain, but customers and analysts often missed the difference.

The proposed post cadence at the time, 3 to 5 posts per week, accelerated the risk of exhausting standard content categories. Incumbents like Neo4j had high post rates, further frustrating attempts to cover new aspects of the standard use cases.

Possible Rationale for Rejection and Our Responses

  1. Pilot Customer Focus:
    • The small pilot base drove content toward fraud detection and customer 360 to ensure success and investor confidence. BMG’s niche focus risked diluting this narrative, potentially confusing investors about market focus.
    • Response: Our already high frequency of on-point posts (fraud detection, drug discovery, customer 360), combined with the messaging on Company’s site, ensured that an investor or analyst would unambiguously discern the core focus.
  2. Crowded Market Dynamics:
    • Incumbents owned core use cases, forcing Company to compete directly. BMG’s message was premature.
    • Response: That incumbents owned core use cases is a reason to show that Company’s product was designed to handle those cases (accomplished by the majority of Company’s posts) but also had applicability beyond those crowded domains.
  3. Low ROI Potential:
    • BMG targets a niche market with low value.
    • Response: The BMG post, like corporate news posts and employee spotlights, does not compete with the core focus. It communicates something about how Company thinks, not what it sells.
  4. Audience Relevance:
    • BMG might appeal to technical enthusiasts but is less relevant to customers and investors.
    • Response: Journalists, feeling the staleness of graph db’s core messaging, might cover the BMG use case, thereby exposing Company to investors and analysts.

Missed Opportunities

  1. Content Diversification:
    • High blog frequency risked repetitive content. BMG could have filled gaps, targeting long-tail keywords for future SEO growth.
    • In 2025, materials science graph applications have grown, suggesting early thought leadership could have built brand equity.
  2. Thought Leadership:
    • BMG positioned Company as a pioneer in emerging fields, appealing to analysts and investors seeking scalability.
    • Engaging technical enthusiasts could have attracted jobseekers, addressing talent needs.
  3. Niche Market Potential:
    • BMG’s relevance to aerospace and medical device R&D could have sparked pilot inquiries, diversifying customer pipelines.
    • A small allocation of posts to niche but still technical topics could have balanced core focus without significant risk.

Decision Impact

  • Short-Term: The rejection aligned with Company’s need to focus on the pilot and core markets, ensuring investor and customer confidence. The consequences were minimal, as BMG was unlikely to drive immediate high-value leads.
  • Long-Term: A minor missed opportunity to establish thought leadership in a growing field, potentially enhancing SEO and investor appeal.

Lessons for Content Marketing Strategists

  1. Balance Universal Acceptability and Targeted Curiosity:
    • Craft content that all audiences find acceptable (“This is interesting”) while sparking excitement in key groups (e.g., technical enthusiasts and futurists). Alienate no one.
  2. Understand the Value of Thought Leadership:
    • Thought leadership shows that Company can connect knowledge to real-world problems in ways that engage and lead change.
  3. Align Content with Business Stage:
    • Series-A startups prioritize traction, favoring core use cases. Company’s focus on financial services was pragmatic, but it potentially limited exposure.
    • Later-stage companies can afford niche content for thought leadership, balancing short-term ROI with long-term vision.
  4. Navigate Crowded Markets:
    • Late entrants must compete on established turf while differentiating. Company’s conservative approach competed with incumbents but missed a chance to reposition the conversation with visionary messaging.
    • Niche content can carve unique positioning without abandoning core markets.
  5. Manage Content Cadence:
    • High frequency (2–3 posts/week) requires diverse topics to avoid repetition. Allocate 80% to core use cases and 20% to niche topics to sustain engagement and SEO.
  6. Leverage Limited Data:
    • With a small pilot base, anonymized metrics or hypothetical use cases can bolster credibility without revealing sensitive data. E.g., BMG simulations could serve this need.
    • Company’s datasheets lacked evidential support, highlighting the need for creative proof points.
  7. SEO as a Long Game:
    • Core use case keywords (e.g., “fraud detection”) drive immediate traffic, but keyword expansion builds future relevance.
    • Company’s rejection of BMG missed early positioning in a growing field.

Conclusion

Company’s rejection of the BMG blog post was a defensible, low-impact decision driven by the desire to focus on a small pilot base and compete in a crowded 2022 graph database market. It missed a minor opportunity to diversify content, engage technical audiences, and establish thought leadership, both in general and in materials science – a field that had gained traction by 2025. A post like BMG wasn’t trying to generate leads from metallurgists. It was subtly, but unmistakably, saying: “We’re not just a graph database. We’re building the substrate for the next decade’s knowledge infrastructure.” That message is harder to convey when Company ties itself too tightly to existing use cases.

BMG was a concrete illustration that Company’s technology can handle problem spaces well outside the comfort zone of current incumbents. Where most vendors extend into new verticals by layering integrations or heuristics, the BMG post suggested that a graph-native architecture can generalize across domains not yet explored. The post showed breadth and demonstrated one aspect of the transferability of its success, exactly what Series B investors say they’re looking for.

While not a critical mistake, this decision offers lessons for strategists and content marketers. It illustrates the challenge of balancing universal acceptability with targeted curiosity in a crowded market, where a late entrant must differentiate while proving traction. This analysis (mostly in outline form for quick reading) explores the psychology and nuances of the decision, providing a framework for crafting effective content strategies.

For content marketing strategists, the BMG post case study underscores the importance of balancing universal acceptability with targeted curiosity, aligning content with business stage, and leveraging niche topics to differentiate in crowded markets. By allocating a small portion of high-frequency content to exploratory posts, startups can maintain focus while planting seeds for future growth, ensuring all audiences leave with a positive impression and a few leave inspired.

Unlocking the Secrets of Bulk Metallic Glass with Graph Computing

This blog post was written in May 2022 by Amy Skowronski and Bill Storage for {Company}. {Company} did not accept it (or a shorter version) for publication because it was tangential to their current (at that time) focus. I’ll comment on that decision in a future post.

Fraud detection, drug discovery, and network security have all advanced with the help of graph computing – but these are just the early, obvious wins. The deeper promise of {Company}’s graph-native platform lies in uncovering complex relationships in domains most systems aren’t built to touch. To show what that looks like, we turn to an unfamiliar yet revealing application: a class of advanced materials known as Bulk Metallic Glass.

Imagine a metal that’s stronger than steel, bends like plastic, and resists corrosion like glass. Bulk metallic glass (BMG), discovered in the 1960s, was found to have such characteristics. Recent advancements, enabled in part by high performance computing, give this revolutionary material the potential to transform industries from aerospace to medical devices.

Unlike the orderly atomic structures of common metals, BMGs boast a chaotic, amorphous arrangement that defies traditional metallurgy. At {Company}, we’re harnessing the power of graph computing to decode BMG’s atomic secrets, unlocking new possibilities for materials science. In this post, we’ll explore how BMGs differ from common metals and why graph analytics is the key to designing the next generation of advanced materials.

What Makes Bulk Metallic Glass So Special?

To understand BMGs, let’s start with the basics of metal structure. Most metals like steel and aluminum form crystalline lattices. These are highly organized, repeating patterns of atoms. The lattices define how metals behave: their strength, ductility, and even how they corrode. Common arrangements include:

  • Face-Centered Cubic (FCC): Picture a cube with atoms at each corner and in the center of each face. Aluminum and copper typically take this form. FCC metals are relatively ductile, ideal for shaping into wires or sheets.
  • Body-Centered Cubic (BCC): Think of a cube with an atom at each corner and one in the center (e.g., iron at room temperature, which becomes FCC at higher temperatures). BCC metals are strong but less ductile, making them prone to brittle fracture under stress.
  • Hexagonal Close-Packed (HCP): Imagine tightly packed spheres stacked in a hexagonal pattern. Magnesium and titanium typically take this form. HCP metals offer a balance of strength and formability, common in aerospace components.
Microstructure of a crystalline metal: the titanium alloy VT22 (Ti-5Al-5Mo-5V-1.5Cr), courtesy of Edward Pleshakov.

All these structures are orderly, predictable, and rigid. But they have flaws – dislocations and grain boundaries where crystal regions meet. The boundaries act like seams, weaker than the surrounding fabric. Cracks and corrosion exploit the boundaries. Enter bulk metallic glass.

BMGs are amorphous; their atoms are arranged in a random, glass-like state, resembling a frozen liquid. Instead of lattices, BMG atoms are arranged in tightly packed clusters. This chaos gives BMGs unique properties:

  • Incredible Strength: Without grain boundaries, BMGs resist cracking. Their strengths reach 2–3 GPa (roughly 290,000–435,000 psi), exceeding those of many high-strength steels, which typically top out around 1.5–2 GPa.
  • Elasticity: BMGs can flex like polymers, deforming up to 2% before yielding, far exceeding the behavior of normal metals.
  • Corrosion Resistance: Absence of ordered planes makes it harder for chemicals to attack and penetrate. BMGs are ideal for harsh environments like jet engines or implants.
  • Processability: BMGs can be molded like glass when heated into their supercooled liquid region, enabling complex shapes for gears or biomedical stents.

Of course, there’s a catch, one that has hindered BMG development for decades. Designing BMGs is arduous. Their properties depend on precise compositions (e.g., Zr41.2Ti13.8Cu12.5Ni10Be22.5, known as Vitreloy 1) and extreme cooling rates in early systems (10⁵–10⁶ K/sec), though modern BMGs can form at rates as low as 1–100 K/sec. Understanding their atomic structure means solving a 3D puzzle with billions of pieces.

Graph Computing: Decoding BMG’s Atomic Chaos

BMGs’ amorphous structure is a network of atoms connected by bonds, with no repeating pattern. This makes them a perfect fit for graph analytics, where atoms are nodes and bonds are edges. {Company’s} high-performance graph platform can model these atomic networks at massive scale, revealing insights that traditional tools can’t touch. Here’s how it works:

1. Modeling the Amorphous Network

Imagine a BMG sample with billions of atoms, each bonded to 8–13 neighbors in a random cluster. {Company} represents this as a graph. Each node (atom) has properties like element type (e.g., Zr, Cu) and position. Each edge (bond) has attributes like bond strength or distance. Unlike crystalline metals, where lattices repeat predictably, BMG graphs are irregular, with varying degrees (number of bonds per atom) and clustering coefficients (how tightly atoms pack locally).

Using {Company’s} distributed graph engine, researchers can ingest terabytes of molecular dynamics (MD) simulation data – snapshots of atomic positions from supercomputers – and build these graphs in real time. Our platform’s ability to handle sparse, irregular graphs at scale (think 10⁹ nodes and 10¹⁰ edges) makes it ideal for BMGs, outperforming traditional methods by 10–100x.
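
To make the idea concrete, here is a minimal sketch of turning an MD snapshot into such a graph, written in plain Python with networkx and scipy rather than {Company’s} platform. The distance cutoff, element mix, and random coordinates are illustrative assumptions, not real BMG data.

    # Minimal sketch: build an atomic graph from a molecular-dynamics snapshot.
    # Assumptions (not real BMG data): bonds are inferred from a simple distance
    # cutoff, and the element symbols and coordinates are invented for illustration.
    import networkx as nx
    import numpy as np
    from scipy.spatial import cKDTree

    def build_atomic_graph(elements, positions, cutoff=3.5):
        """elements: sequence of symbols; positions: (N, 3) array of coordinates in angstroms."""
        G = nx.Graph()
        for i, (el, pos) in enumerate(zip(elements, positions)):
            G.add_node(i, element=el, position=tuple(pos))   # node = atom
        tree = cKDTree(positions)
        for i, j in tree.query_pairs(r=cutoff):              # atom pairs within the cutoff
            dist = float(np.linalg.norm(positions[i] - positions[j]))
            G.add_edge(i, j, distance=dist)                  # edge = inferred bond
        return G

    rng = np.random.default_rng(0)
    elements = rng.choice(["Zr", "Ti", "Cu", "Ni", "Be"], size=200)
    positions = rng.uniform(0, 15, size=(200, 3))
    G = build_atomic_graph(elements, positions)
    print(G.number_of_nodes(), "atoms,", G.number_of_edges(), "bonds")

In a real pipeline the same structure would simply be built at far larger scale, with node and edge attributes drawn from the MD output rather than random numbers.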

2. Analyzing Local Atomic Clusters

BMGs owe their strength to short-range order – local clusters like icosahedra (12 atoms around a central one) or tetrahedra (4 atoms tightly packed). These clusters don’t repeat globally but dominate locally, influencing properties like ductility. {Company’s} graph algorithms, like community detection (e.g., Louvain clustering), identify these motifs by finding densely connected subgraphs. For example, a high icosahedral cluster count in a Zr-based BMG correlates with better glass-forming ability and higher resistance to shear localization.
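
For a sense of what cluster detection looks like in practice, here is a hedged sketch using the Louvain community algorithm that ships with networkx (version 2.8 or later). The random geometric graph stands in for the atomic graph built above, and counting 13-atom communities is a crude proxy for icosahedral motifs, not a real Voronoi or bond-order analysis.

    # Sketch: find densely bonded local clusters with Louvain community detection.
    # The random geometric graph is a stand-in for a real atomic graph.
    import networkx as nx

    G = nx.random_geometric_graph(400, 0.12, seed=1)
    clusters = nx.community.louvain_communities(G, seed=42)

    # Treat 13-atom communities as rough icosahedron-like candidates
    # (a central atom plus 12 neighbors), a crude proxy rather than a structural analysis.
    icosahedron_like = [c for c in clusters if len(c) == 13]
    print(f"{len(clusters)} clusters found, {len(icosahedron_like)} of size 13")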

We also use graph neural networks (GNNs) to predict cluster stability. GNNs, using high-quality training data, learn from the graph’s topology and node features (e.g., atomic radii, electronegativity), predicting which compositions favor amorphous structures. This accelerates BMG design, reducing trial-and-error in the lab.
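
As a rough illustration of the GNN idea, the toy model below scores a single cluster from invented node features. It is a sketch only: the architecture, the three node features, and the PyTorch Geometric dependency are our assumptions, not {Company’s} actual pipeline, and the model is untrained.

    # Toy sketch: a graph neural network that scores cluster stability.
    # Requires torch and torch_geometric; features, sizes, and the single
    # untrained forward pass are illustrative assumptions only.
    import torch
    from torch import nn
    from torch_geometric.nn import GCNConv, global_mean_pool

    class ClusterStabilityGNN(nn.Module):
        def __init__(self, in_dim=3, hidden=32):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden)
            self.conv2 = GCNConv(hidden, hidden)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x, edge_index, batch):
            h = torch.relu(self.conv1(x, edge_index))
            h = torch.relu(self.conv2(h, edge_index))
            return torch.sigmoid(self.head(global_mean_pool(h, batch)))

    # One 5-atom cluster; node features might be atomic radius, electronegativity, valence.
    x = torch.rand(5, 3)
    edge_index = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 0]], dtype=torch.long)
    batch = torch.zeros(5, dtype=torch.long)    # all five atoms belong to cluster 0
    model = ClusterStabilityGNN()
    print(model(x, edge_index, batch))          # untrained stability score in (0, 1)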

3. Simulating Defects and Dynamics

BMGs aren’t perfect. Shear transformation zones (STZs) – regions where atoms rearrange under stress – control their plasticity. {Company} models STZs as anomalous subgraphs, where nodes have unusual connectivity (e.g., lower or higher degree than average). Our anomaly detection algorithms can pinpoint these defects, helping engineers predict failure points.
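
A toy version of that anomaly flagging might look like the sketch below; the two-sigma threshold and the stand-in random graph are assumptions chosen only to show the mechanics.

    # Sketch: flag candidate shear-transformation-zone atoms as degree anomalies,
    # i.e., atoms bonded to unusually few or many neighbors.
    import networkx as nx
    import numpy as np

    G = nx.random_geometric_graph(500, 0.1, seed=2)   # stand-in for an atomic graph
    degrees = np.array([d for _, d in G.degree()])
    mu, sigma = degrees.mean(), degrees.std()
    anomalies = [n for n, d in G.degree() if abs(d - mu) > 2 * sigma]
    print(f"{len(anomalies)} atoms flagged as connectivity anomalies")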

Dynamic processes, like atomic rearrangements during cooling, are modeled as temporal graphs, where atom positions and inferred bonds change over time due to thermal motion or stress. {Company’s} real-time processing (powered by HPC roots) tracks these changes at scale, revealing how cooling rates affect amorphous stability. This is critical for scaling BMG production, because slow cooling often leads to unwanted crystallization.
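
One simple way to approximate that temporal tracking, sketched below, is to diff the bond sets of consecutive snapshots; a production system would stream MD frames rather than compare two random stand-ins.

    # Sketch: measure bond turnover between two MD snapshots by treating each
    # snapshot as its own graph and diffing edge sets. The random graphs are
    # stand-ins for real consecutive snapshots.
    import networkx as nx

    def bond_turnover(G_prev, G_next):
        e0 = set(map(frozenset, G_prev.edges()))
        e1 = set(map(frozenset, G_next.edges()))
        return len(e0 - e1), len(e1 - e0)        # (bonds broken, bonds formed)

    G_t0 = nx.random_geometric_graph(300, 0.12, seed=3)   # snapshot at time t
    G_t1 = nx.random_geometric_graph(300, 0.12, seed=4)   # snapshot at time t + dt
    broken, formed = bond_turnover(G_t0, G_t1)
    print(f"{broken} bonds broken, {formed} bonds formed between snapshots")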

4. Optimizing Composition

BMG recipes are complex, with 3–5 elements in precise ratios. {Company’s} graph traversal algorithms explore compositional spaces, identifying combinations that maximize icosahedral clusters or minimize crystallization risk. For instance, adding 1% Be to a Zr-Cu alloy can stabilize the amorphous phase. Our platform integrates with machine learning pipelines, enabling researchers to iterate faster than ever.
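
To show the flavor of that exploration, here is a greatly simplified sketch that treats compositions as nodes in an implicit search graph and greedily walks toward a better score. The score() function, step size, and starting alloy are invented placeholders, not a real glass-forming-ability model.

    # Sketch: greedy traversal of a compositional search graph. The score()
    # function is a made-up placeholder, not a real glass-forming-ability model.
    import itertools

    ELEMENTS = ["Zr", "Cu", "Ni", "Be"]

    def score(comp):
        # Placeholder objective: prefer a Zr-rich, moderately mixed composition.
        return comp["Zr"] - abs(comp["Cu"] - 0.15) - abs(comp["Be"] - 0.05)

    def neighbors(comp, step=0.01):
        # Each move shifts `step` of the atomic fraction from one element to another.
        for a, b in itertools.permutations(ELEMENTS, 2):
            if comp[a] >= step:
                new = dict(comp)
                new[a] -= step
                new[b] += step
                yield new

    comp = {"Zr": 0.55, "Cu": 0.25, "Ni": 0.15, "Be": 0.05}
    for _ in range(50):                          # greedy hill climb over the search graph
        best = max(neighbors(comp), key=score)
        if score(best) <= score(comp):
            break
        comp = best
    print({el: round(f, 2) for el, f in comp.items()}, "score:", round(score(comp), 3))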

Why BMG Matters to Industry

BMGs are already making waves:

  • Aerospace: NASA’s BMG gears for Mars rovers (developed and tested, though not, to our knowledge, in production) are 2x stronger than titanium, with no grain boundaries to fail under stress.
  • Medical Devices: BMG implants resist corrosion in the body, lasting longer than the best stainless steels.
  • Electronics: BMG casings for smartphones (e.g., Apple’s trials) combine strength with moldability for sleek designs.

The Future of Materials Science with {Company}

The global advanced materials market is projected to hit $1.1 trillion by 2027, and BMGs are a growing slice. But their potential is untapped due to design complexity. {Company’s} graph platform bridges this gap, enabling researchers to model, analyze, and optimize BMG structures at unprecedented scale. {Company’s} graph computing platform, with its roots in high-performance computing and expertise in Graph AI, is the perfect partner for this journey.

Reflection Lag

Gregor Ehrenwald never excelled at conversation but had a gift for suggestion. He curated his Facebook like a diplomat, or maybe a monastic scribe. His profile boasted clips of treacherous mountain bike trails. He didn’t own a mountain bike. He was waiting for the Epic 8’s price to drop. He shared quotes from Wittgenstein and Heidegger. He’d bought Being and Time and planned to read it soon. His Friends tab showed camaraderie of a luminous, unaccountable texture. These weren’t lies, but aspirations projected from the regard he recalled holding at Canyon Lake High School. Or would still hold, had he not been forced to work nights to secure the college education that took him out of circulation for five full years. Not lies exactly, more a species of autobiographical foreshadowing.

After several years of this mild deception, a shift occurred – not dramatically, but with the soft click of remembering a movie he never saw. Gregor began composing posts with a fluency that startled even him. “Great catching up with granite master Lars, still the sickest dude on the west face of Tahquitz” he typed one night, forgetting that he’d never climbed at Tahquitz and that there was no Lars. He had, admittedly, joined the climbing club his senior year. Yet the words fell into place as though Lars had laughed and belayed and borrowed a pedal wrench he never returned. These memories did not contradict Gregor’s recollection of his real high school years but simply inserted themselves beside them, as though time had gently forked.

Gregor posted late into the night, like he didn’t have a day job. He woke to the muscle memory of reaching for his phone. His fingers danced across the screen before his eyes could adjust to the light. He dabbled in Facebook politics briefly. The algorithm offered outrage and validation. He wanted something warmer, something that remembered him.

After work, Gregor passed the hallway mirror, caught his reflection, and paused – eyes bright, almost feverish, as if he’d just heard good news.

That night, in a storage bin untouched for years, he found his high school book covers – brown grocery bags, folded with care, still taped from Algebra, Latin, Geometry. Their surfaces were scrawled with pen and Sharpie, dense with notes and swirls coiling inward.

He traced a note from Luke Stone: “Hey Library Rat – kidding, man!” A hasty “Cool guy, great P.A.” from Charlotte Brooks, who had usually looked through him.  Then:

“G-money! Chemistry sucked without your jokes. Stay wild!”

“You leave a little sparkle wherever you go. Work hard and stay humble!”

There were a dozen book covers, each packed with tributes. Some comments ran in overlapping curls. Others were squeezed so narrow they had to be read with the flashlight.

He spotted this from Lauren D:

“To bestie GE – Voted Most Original Sense of Humor!!!”

And from Justin:

“Never forget that time we killed the Elsinore talent show. Wild sax!”

The names rang hollow. He didn’t remember a talent show.

He studied the doodles. Simple, repetitive shapes – coiling glyphs, chains, filigree. Clumsy figures too, but insistent: cats with mohawks, clown faces, spirals to nowhere. Probably mid-lecture boredom. Or maybe not.

 On the last cover, he noticed, near the bottom, beneath a tiny saxophone outline, penned in a measured, angular hand:

“Believe in yourself as much as I believe in you! Facebook FTW!”

Facebook?

Gregor froze. He set the cover down carefully. The room leaned in, heavy with heat.

A prank, he thought at first. Someone messing with him, writing on his stuff. But no. He wondered if he’d suffered some obscure brain fever – the kind that haunts old novels, now rebranded as mild dissociative episodes.

The handwriting mimicked styles he admired: elongated Gs and Spencerian script, grand loops with a practiced flair. Some mirrored his own hand. Others from hands he’d never seen. It was as though the entries had written themselves to flatter him in the light he wished to be seen in.

Facebook – that was the breach – the hinges on which the door now swung. No Facebook back then. Nor had there been Lars. And yet: how warm the perceived laughter, how victorious up on Tahquitz, how easy the belonging.

Then he recalled a neuroscience article Lars had shared. Memories could misfire, it said, landing in the wrong slot.

He sat on the edge of the bed and devised his own Theory of Premature Memory Displacement.

Certain memories, he reasoned, do not originate in the past but arrive early, dressed in nostalgia. The mind, trying to orient them temporally, may misfile them. The Facebook entries were always meant for him – but like mail delivered to a former address, had arrived a decade late. A memory lost doesn’t vanish, it ricochets around the mind until it lands on some vacant shelf, to be recovered later.

Satisfied, he opened the last cover – the part that once faced the book’s actual cover. There he found a girl’s message:

“Never stop writing, Gregor. You see things others miss.”

He underlined her name. Trina – and then, as if prompted, recalled her fabulous voice, her rendition of Coldplay’s Viva la Vida. He refolded the covers and put them back in the bin.

He thumbed through Heidegger, hunting for a line to post. There it was – page 374:

“The ‘past’ – or better, the having-been – has its being in the future.”

The likes came in slow but steady.
