My Ethics
Writing down my current thinking on philosophical issues
I’ve been thinking on and off about ethics for around a decade, but mostly in an informal way rather than through formal philosophy. I expect that if I spent more time seriously stress-testing my views, I’d find inconsistencies, or conclusions I’m less comfortable with than I currently realize. Still, this seems like a useful exercise. What follows is an attempt to write down my current views as of April 2026.
Meta-ethics
I don’t believe in moral realism overall. In the absence of beings similar to humans, it doesn’t make sense to describe ethics as an objective thing that matters.
I do believe that given the existence of humans, and a few basic axioms, one can arrive at a certain level of constructed moral reasoning. And these derivations can be useful and do a lot of work. The axioms can be very basic. But most of life is empirical, and there is an “is-ought gap” overall.
The axioms I believe in are basically: all suffering is bad, all pleasure is good, death is bad.
This does mean, to some extent, that I cannot, with any amount of reasoning, convince someone who holds a different set of axioms to adopt a moral system similar to mine. I can try to tell them that their axioms are wrong. And once one has some axioms, I can argue on empirics.
But at the end of the day, morality is mostly based on some internal emotional state that keeps track of what one cares about. And this has worked out pretty well regardless. I have yet to be convinced by a solution to nihilism, other than to mostly just not worry about it. And this mostly seems fine.
Axioms
The key axioms that I hold are:
suffering is bad
pleasure is good
death is negative in a way that non-existence is not
The first two are basically tautological and definitional. The last one is mostly based on vibes, and is load-bearing in a way that is perhaps under-justified.
While yes, one can factor in the cost to others of seeing somebody die, I think that even beyond that, there is a difference between death and non-existence. The main thought experiment I use is something like: “Would it be better to have 10 people live for 20 years each in series, or 1 person live for 200 years?” On most variations of this question, I would choose the latter option (conditional on each person consenting to living that long).
I think most other moral truths can be derived from the axioms. For example, I think that if one truly cares about pleasure being good and suffering being bad, then gaining knowledge and getting better at ethics are instrumentally valuable beyond personal enjoyment.
This basically bottoms out on intuition and empirics and vibes.
Person-affecting views
I do care about the lives of people who don’t yet exist. But I think there are differences in how one might care about them. If someone never comes into existence, it’s not equivalent to somebody dying.
Otherwise, I think that conditional on somebody coming into existence in the future, one should take their utility into account. I don’t think exponential discounting is actually correct, but there is so much uncertainty about how actions now affect the future, and we will likely be so much wealthier in the future, that the discounting is de facto fine overall.
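For reference (this is just the standard definition, supplied as background rather than an argument of mine), exponential discounting with a discount factor $\delta$ values a stream of utilities $u_t$ as

$$U = \sum_{t=0}^{\infty} \delta^{t} u_t, \qquad \delta \in (0, 1),$$

so welfare $t$ steps into the future is shrunk by a factor of $\delta^t$. My claim is only that even if this isn’t right in principle, uncertainty plus growing wealth make it a fine approximation in practice.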
I also feel like I have some mixed utilitarian view: wishing everyone could experience extraordinary lives; thinking more good lives is better; wanting equality between people. But overall, I would probably prefer something like 1 trillion people living extraordinary lives over 10 trillion people living quite good lives. I don’t like the repugnant conclusion in the limit. But other ethical frameworks have issues that often seem worse, and it’s probably fine in practice. Something like threshold utilitarianism has some appeal, but I know it also breaks.
I overall feel like this is the least clean section of my views. I have various intuitions about future people, long-term welfare, and the repugnant conclusion. But I suspect they contradict each other in ways I have yet to work out.
Policy-level Consequentialism
I believe that we can take actions that alter how good outcomes are according to the axioms. In the end, what matters is that people live good lives. However, humans are not particularly good at evaluating actions act-by-act, due to biases. Instead, we can decide actions and moral rules at the policy level.
I do believe that human dignity and autonomy are highly desirable, but that this is just another term in the utility function when deciding actions. And it is usually instrumental to people improving their own lives. In unusual cases it can be worth breaking rules, but by default one should adhere to committed policies.
I also think consequentialists often forget to evaluate at multiple scales. A naive first-order consequentialist accounting is wrong because it ignores indirect effects. It misses that defection erodes coordination norms, and it doesn’t take into account the cost of then building infrastructure and defenses against the defectors.
One example: acts like murder, or eating meat, cause harm well beyond the direct harm, and cannot be straightforwardly offset. The full cost is far higher than a first-order analysis suggests.
Lastly, I think that when we do evaluate actions, while there is some asymmetry between action and inaction, inaction is itself a kind of moral action, and one cannot indefinitely avoid imperfect actions behind the veil of inaction.
Action vs Inaction
One common view is that interacting with a problem and making progress on it, while leaving some of the problem unsolved, is bad. A good reductio ad absurdum is in this post by Scott Alexander.
For example, on this view, if one refuses to intervene in a trolley problem, one has not caused anyone to die; but if one does pull the lever, saving 5 people while 1 different person dies, one has now “interacted with the problem” and caused one person to die, which is worse. I think this is just a false framing.
More specifically, the “Copenhagen Interpretation of Ethics” is the idea that once you interact with a problem, its moral burden becomes yours, whereas inaction is treated as morally cleaner even when it leads to worse outcomes overall.
I don’t buy this. Refusing to act is still a moral choice, and it does not become innocent just because it leaves the causal chain less visibly attached to you.
That being said, I think that some asymmetry is still needed to not cause other contradictions.
Practical Considerations
Animal Welfare and Veganism
I think animals are able to experience pleasure and suffering, and thus that they morally matter. Their current treatment by society is one of the worst things in the history of humanity. For the vast majority of animal products, the trade-off between [how convenient + tasty is this food] and [how many hours of suffering does this amount of food cause] is so insanely bad that, in the current food landscape, the only logical outcome is to abstain from eating animal products entirely. Theoretically one could offset by paying some multiple of the amount of harm caused, but I find this morally unsatisfying personally.
However, offsetting harm from meat usually misses the second-order effects of going vegan, such as normalizing the practice, building the needed infrastructure, and making accommodation easier for the next person.
Logically I can believe that “there is some amount of donation you could make to offset the bad you have done”, but emotionally I still believe “no, you could just have done both!”. It’s a tension that I mostly have not yet resolved.
The same holds for murder. I think that in all realistic cases the badness done is finite, and also that goodness done scales roughly linearly with money/effort. On the other hand, if someone commits murder but saves 10 people from dying, it still feels like something bad was done. I just generally prefer strategies that generalize to “everyone should be able to do this”.
Giving and Personal Sacrifice
I think giving 10% of your income to charity, via the “🔶 10% Pledge” by Giving What We Can, is a good thing to do for basically everyone in western countries. Being in the top 1% of incomes globally is common there, and for most such people the personal cost of losing 10% of their income is significantly less than the welfare gain to a well-chosen recipient.
Additionally, donating 10% acts as a Schelling point. While it is true that in most cases one could donate even more than 10%, and that this is quite admirable, I think having a “minimum bar” of 10% is a reasonable tradeoff, even if the norm is somewhat arbitrary and could just as well be 9% or 11%.
It probably took me too long to commit to giving 10% of my income, but I did it eventually.
Decision Theory
One should choose policies according to something like Evidential Decision Theory or ideally something like Functional Decision Theory. This means that in Newcomb’s paradox, I would one-box. In trolley problems, I would pull the lever, but would not push someone onto the track (due to norm-corroding effects).
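As a toy illustration of the one-boxing logic (the $\$1{,}000$ and $\$1{,}000{,}000$ payoffs are the standard setup; the 99% predictor accuracy is purely an assumption for concreteness), conditioning on my own choice as evidence about the prediction gives:

$$\begin{aligned}
\mathbb{E}[\text{one-box}] &= 0.99 \times \$1{,}000{,}000 + 0.01 \times \$0 = \$990{,}000,\\
\mathbb{E}[\text{two-box}] &= 0.99 \times \$1{,}000 + 0.01 \times \$1{,}001{,}000 = \$11{,}000,
\end{aligned}$$

so the one-boxing policy comes out far ahead.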
And I think that a significant reason for both of the previous commitments (veganism and personal giving), despite their not benefiting you directly, is that you should follow policies that generalize across all beings in all situations.
One can imagine that if there are smarter and more capable beings, such as aliens or advanced AI, one would want them to extend moral consideration to less capable or less fortunate beings, and those beings in turn would want stronger forces to look after their interests too. I think a world of pure “might is right” is a worse world to live in.
I think the framework above is incomplete but it’s a current attempt to write down what I think in a relatively compact way.
There are some clear tensions between “harms can in principle be offset” and “some acts still feel morally wrong even if offset”. Thus I think that in practice I am quite driven by self-image, and so there are some elements of Virtue Ethics that do appeal to me.
My framework ends up being mostly reliant on vibes and empirics, bottoming out on intuition. The death asymmetry is one such intuition for me. Because of this, I have real uncertainty in my framework, and as a consequence I think autonomy for others can be pretty important, though perhaps not in ways that can never be overridden.
I haven’t changed my thoughts here that much, but there are likely subtle things wrong, points missing, things I don’t argue for or against cleanly, and extrapolated consequences I may not always endorse. Regardless, I think this is a useful exercise.
I may talk about more fine-grained things I think are consequences of this in a different post.


