- Preference utilitarians should care about the preferences of past people
- A model of commitments
- How to do theoretical research, a personal perspective
- Facts from ECB CH5: DNA and Chromosomes
- On London
- Four Perspectives on Small Donors in EA
- Comparing Utilitarianism to Deontology is a Type Error
- When Learning a Skill Has Costs
- Facts from ECB CH4: Protein Structure and Function
- Removing My Bets Page
- Advice I Commonly Give People New To Alignment
- Facts from ECB CH3, Energy, Catalysis, and Biosynthesis
- Facts from ECB CH2, Chemical Components of Cells
- Transitioning Away From Veganism
- Facts from Essential Cell Biology Chapter 1, Cells: The Fundamental Units of Life
- Insights from Better: A Surgeon's Notes on Performance
- Notes On SMILE Eye Surgery
- Things That Should Be Destroyed
- How To Be (Semi)-Fashionable
- Retrospective On My 2020 FHI Research Scholar Application
- Owning Multiple Copies of Objects Can Reduce Attention Costs
- Coase's Theorem Means The Customer Doesn't Have To Always Be Right
- Overly Specific Reviews: Tongue Scrapers, Floss Picks, and Whiteboard Markers
- Your Time Might Be More Valuable Than You Think
- Review: A Really Short History of Nearly Everything
- Quirks with the Solomonoff Prior
- Insights from Complications
- The Simulation Hypothesis Undercuts the SIA/Great Filter Doomsday Argument
- How well can people distinguish tastes?
- TAP Inventory
- Answering Questions Honestly in the Game of Life
- Style Guide: How to Sound Like an Evil Robot
- Signs that Something's Wrong
- If you weren't such an idiot...
- Buying Enough Snacks
- Fractional progress estimates for AI timelines and implied resource requirements
- Intermittent Distillations #4
- Anthropic Effects in Estimating Evolution Difficulty
- A Rough Perspective on Strategy Stealing
- The Wild World of Policy Debate
- Rogue AGI Embodies Valuable Intellectual Property
- An Intuitive Guide to Garrabrant Induction
- Intermittent Distillations #3
- Lumenator Recipe
- Pre-Training + Fine-Tuning Favors Deception
- Less Realistic Tales of Doom
- Making Markets Over Beliefs
- Agents Over Cartesian World Models
- Intermittent Distillations #2
- Meta-EA Needs Models
- RSS Feed
- Transparency Trichotomy
- Intermittent Distillations #1
- Strong Evidence is Common
- Open Problems in Myopia
- Towards a Mechanistic Understanding of Goal-Directedness
- Revenge of the Prediction Market
- Maslow First and the World Second
- How Simulacra Levels Increase
- Seriously, the Map is Not the Territory
- Some People Are Smarter Than You
- Coincidences are Improbable
- Be Reliable
- Definitions and Examples of Simulacra Levels
- Be Specific About Your Career
- Money Can't (Easily) Buy Talent
- Interpolate Claims (Un)charitably
- The First Sample Gives the Most Information
- Defusing AGI Danger
- Chain Breaking
- CFAR Retrospective
- My Routine
- Be Responsible
- A Math Student's Guide to Options
- Be Goal-directed
- Does SGD Produce Deceptive Alignment?
- Expected Money at Augur
- France Bet Postmortem
- Miscellaneous Mediocre Models
- The Solomonoff Prior is Malign
- Death is Bad
- Stories of my Life
- How to Construct Bets
- How to Buy Things
- These Legal Systems Do Not Exist
- AI Safety FAQs
- Be Stupid
- 100 Opinions About Emotions
- Be Perfect
- Review: Against the Grain
- These Are a Few of My Favorite Questions
- Review: Assorted Snacks
- Plans Never Err
- Three Things I've Learned About Bayes' Rule
- Stop Asking People To Maximize
- Resonant Quotations
- Contextualized Norms
- What I Want To Do With My Life
- In My Culture