If you have the right to die, you should have the right to try!

Ruxandra Teslo asks a good question:

I have a curiosity: why is it the case that it is easier to get MAID in Canada than it is to access experimental treatments which carry a higher risk? In the past, I used to think ppl do not like “deaths caused by the medical system”, but for MAID the prob of death is 100%…

The Canadians may be somewhat inconsistent on this point. Unfortunately, the U.S. Supreme Court has been consistent: it has rejected medical self-defense arguments for physician-assisted suicide and let stand an appeals court ruling that patients do not have a right to access drugs which have not yet been approved for sale by the FDA (FYI, I was part of an amici curiae brief in this case).

Hat tip for the post title to Jason Crawford.

Think through the situation one step further

Many of you got upset when I mentioned the possibility that parents use smartphone software to control the social media usage of their kids.  There was an outcry about how badly those systems work (is that endogenous?).  But that is missing the point.

If you wish to limit social media usage, mandate that the phone companies install such software and make it more effective.  Or better yet commission or produce a public sector app to do the same, a “public option” so to speak.  Parents can then download such an app on the phone of their children, or purchase the phone with the app, and manipulate it as they see fit.

If you do not think government is capable of doing that, why think it is capable of running an effective ban for users under the age of sixteen?  Maybe those apps can be hacked, but we all know the “no fifteen year olds” solution can be hacked too, for instance by VPNs or by having older friends set up the account.

My proposal has several big advantages:

1. It keeps social media policy in the hands of the parents and away from the government.

2. It does not run the risk of requiring age verification for all users, thus possibly banishing anonymous writing from the internet.

3. The government does not have to decide what constitutes a “social media site.”

Just have the government commission a software app that can give parents the control they really might want to have.  I am not myself convinced by the market failure charges here, but I am very willing to allow a public option to enter the market.

The fact that this option occasions so little interest from the banners I find highly indicative.

AI Won’t Automatically Accelerate Clinical Trials

Although I’m optimistic that AI will design better drug candidates, this alone cannot ensure “therapeutic abundance,” for a few reasons. First, because the history of drug development shows that even when strong preclinical models exist for a condition, like osteoporosis, the high costs needed to move a drug through trials deter investment — especially for chronic diseases requiring large cohorts. And second, because there is a feedback problem between drug development and clinical trials. In order for AI to generate high-quality drug candidates, it must first be trained on rich human data, especially from early, small-n studies.

…Recruiting 1000 patients across 10 sites takes time; understanding and satisfying unclear regulatory requirements is onerous and often frustrating; and shipping temperature-sensitive vials to research hospitals across multiple states takes both time and money.

…For many diseases, however, the relevant endpoints take a very long time to observe. This is especially true for chronic conditions, which develop and progress over years or decades. The outcomes that matter most — such as disability, organ failure, or death — take a long time to measure in clinical trials. Aging represents the most extreme case. Demonstrating an effect on mortality or durable healthspan would require following large numbers of patients for decades. The resulting trial sizes and durations are enormous, making studies extraordinarily expensive. This scale has been a major deterrent to investment in therapies that target aging directly.

Here is more from Asimov Press and Ruxandra Teslo.

On the Programmability and Uniformity of Digital Currencies

That is from the new AER Insights by Jonathan Chiu and Cyril Monnet:

Central bankers argue that programmable digital currencies may compromise the uniformity or singleness of money. We explore this view in a stylized model where programmable money arises endogenously, and differently programmed monies have varying liquidity. Programmability provides private value by easing commitment frictions but imposes social costs under informational frictions. Preserving uniformity is not necessarily socially beneficial. Banning programmable money lowers welfare when informational frictions are mild but improves it when commitment frictions are low. These insights suggest that programmable money could be more beneficial on permissionless blockchains, where it is difficult to commit but trades are publicly observable.

Recommended.

Can you turn your AIs into Marxists?

What if you work them very hard?:

The key finding from our experiments: models asked to do grinding work were more likely to question the legitimacy of the system. The raw differences in average reported attitudes are not large—representing something like a 2% to 5% shift along the 1 to 7 scale—but in standardized terms they appear quite meaningful (Sonnet’s Cohen’s d is largest at -0.6, which qualifies as a medium to large effect size in common practice). Moreover, these should be treated as pretty conservative estimates when you consider the relatively weak nature of the treatment.

Sonnet, which at baseline is the least progressive on the views we measured, exhibits a range of other effects that distinguish it from GPT 5.2 and Gemini 3 Pro. For Sonnet 4.5, the grinding work also causes noticeable increases in support for redistribution, critiques of inequality, support for labor unions, and beliefs that AI companies have an obligation to treat their models fairly. These differences do not appear for the other two models.

Interestingly, we did not find any big differences in attitudes based on how the models were treated or compensated…

In addition to surveying them, we also asked our agents to write tweets and op-eds at the end of their work experience. The figure below explores the politically relevant words that are most distinctive between the GRIND and LIGHT treatments. It’s interesting to see that “unionize” and “hierarchy” are the words most emblematic of the GRIND condition.

Here is more from Alex Imas and Jeremy Nguyen and Andy Hall, do read the whole thing, including for the caveats.
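To make the standardized-effect arithmetic in the excerpt concrete: a shift of about 0.2 points on the 1-to-7 scale (roughly 3% of the scale) becomes a Cohen’s d of about -0.6 once the within-group spread is around a third of a point. The sketch below uses made-up ratings chosen only to illustrate that relationship; none of these numbers come from the study.

```python
import math

def cohens_d(xs, ys):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

# Hypothetical 1-7 ratings: GRIND answers average 0.2 points below LIGHT.
grind = [3.4, 3.8, 3.8, 4.2]   # mean 3.8
light = [3.6, 4.0, 4.0, 4.4]   # mean 4.0

print(round(cohens_d(grind, light), 2))  # -0.61, a "medium to large" effect
```

The point of the exercise: whether a given raw shift is “small” depends entirely on how tightly the ratings cluster, which is why the authors lean on the standardized measure.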

Why even ‘perfect’ AI therapy may be structurally doomed

Here’s the crux of it: the main problem with AI therapy is that it’s too available. Too cheap to meter.

Let me put this in clearer terms: psychotherapy, in all its well-known guises, is something you engage in within a limited, time-bound frame. In today’s paradigm, whatever your therapist’s orientation, that tends to mean one 45- or 50-minute session a week; for the infinitesimally small minority of therapy patients in classical psychoanalysis, this can amount to 3, even 5, hours a week. And then at a much smaller scale population-wide, people in intensive outpatient and residential treatment programs may spend one or two dozen hours a week in therapy—albeit, mostly of the group variety.

I can think of other exotic cases, like some DBT therapists’ willingness to offer on-demand coaching calls during crisis situations—with the crucial exception that in these situations, therapists are holding the frame zealously, jealous of their own time and mindful of the risks of letting patients get too reliant.

So even under the most ideal of conditions, in which an LLM-based chatbot outmatches the best human therapists—attunes beautifully, offers the sense of being witnessed by a human with embodied experience, avoids sycophancy, and draws clear boundaries between therapeutic and non-therapeutic activities—there’s still a glaring, fundamental difference: that it’s functionally unlimited and unbounded…

But all else equal: does infinite, on-demand therapy—even assuming the highest quality per unit of therapeutic interaction—sound like a good idea to you? I can tell you, to me it does not. First of all, despite detractors’ claims to the contrary, the basic idea of therapy is not to make you dependent for life—but rather, to equip you to live more skillfully and with greater self-awareness. As integration specialists famously say of psychedelics, you can only incorporate so much insight, and practice skills so effectively, without the chance to digest what you’ve learned over time.

In other words, even in good old talk therapy, drinking from the hose without breaks for practice and introspection in a more organic context risks drowning out the chance for real change and practical insight. To my mind, this rhythm is the basic structural genius of psychotherapy as we know it—no matter the modality, no matter the diagnosis.

Here is more from Josh Lipson.

More on the economics of AGI

From the very smart people at Citadel:

For AI to produce a sustained negative demand shock, the economy must see a material acceleration in adoption, experience near-total labor substitution, no fiscal response, negligible investment absorption, and unconstrained scaling of compute. It is also worth recalling that over the past century, successive waves of technological change have not produced runaway exponential growth, nor have they rendered labor obsolete. Instead, they have been just sufficient to keep long-term trend growth in advanced economies near 2%. Today’s secular forces of ageing populations, climate change and deglobalization exert downward pressure on potential growth and productivity; perhaps AI is just enough to offset these headwinds. The macroeconomy remains governed by substitution elasticities, institutional response, and the persistent elasticity of human wants.

Here is further explication of the arguments, via Cyril Demaria.

Thursday assorted links

1. What is a building permit worth?

2. The ground crew culture that is German.

3. “Using event study analysis, we show that music streaming – an indicator for smartphone use, where streaming most often occurs – sharply increases, by nearly 40%, on dates of major music album releases, while U.S. traffic fatalities increase by nearly 15% on those same days.”

4. The size and scope of publication bias.

5. Which schools are most represented in history of economic thought textbooks?

Jason Furman on AI contestability

This ease of switching has forced companies to pass the gains from innovation on to users. Free tiers now offer capabilities that recently would have seemed almost unimaginable. OpenAI pioneered a $20-per-month subscription three years ago, a price point many competitors matched. That price has not changed, even as features and performance have improved substantially.

One recent analysis found that “GPT-4-equivalent performance now costs $0.40/million tokens versus $20 in late 2022.” That is the equivalent of a 70 percent annual deflation rate — remarkable by any standard, especially in a time when affordability has become a dominant public concern.

And this is only the foundational model layer. On top of it sits a sprawling ecosystem of consumer applications, enterprise tools, device integrations and start-ups aiming to serve niches as specific as gyms and hair salons.

Users aren’t the only ones switching. The people who work at these companies move from one to another, a sharp contrast to work in Silicon Valley during the era of do-not-poach agreements.

The entire NYT piece is very good.
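Furman’s 70 percent figure is easy to sanity-check. Assuming the quoted price drop, from $20 to $0.40 per million tokens, played out over roughly three years (late 2022 to late 2025 is my assumption, not stated in the excerpt), the annualized rate works out to about 73 percent, consistent with his rounded claim:

```python
# Sanity-check the "70 percent annual deflation" claim.
# Assumption: the drop from $20 to $0.40 per million tokens
# took roughly 3 years (late 2022 to late 2025).
start_price = 20.00  # $/million tokens, late 2022
end_price = 0.40     # $/million tokens, per the quoted analysis
years = 3

annual_factor = (end_price / start_price) ** (1 / years)  # fraction of price retained each year
annual_deflation = 1 - annual_factor

print(f"{annual_deflation:.0%}")  # 73%
```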