Might the internal complexity of agents scuttle the doing/allowing distinction, e.g. if one internal module tackles and stops another module that would've done the right thing? Related threads: local and global epistemic virtues; whether agents are always optimal; self-forks and games.
Dai Zhen on relations between the virtues is the best source
Sub-agents and the individuation of virtues: suppose the relevant sub-agents are something like chunks of your utility function/intrinsic values that can/should be rationally revised (possibly together with the more general dispositions to act/feel/etc. that having those virtues instrumentally requires). This would mean the relevant sub-agents are the proto-benevolence sub-agent, the proto-courage sub-agent, etc. That would seem to cut across the notion of sub-agents qua social roles, since moral virtues, by their generality, apply across social roles/contexts.
A criterion of rightness for virtue ethics in terms of optimal game-theoretic policies avoids the Euthyphro dilemma for VE: the policies' optimality is what grounds the rightness of virtue.
- How do the policies relate to each other, though? What are the parts of the good? How many combinations of policies are jointly optimal? (See the toy sketch after this list.)
- LW on simulators, self-forks, and the predictive processing model. This all points to social choice theory being more relevant than decision theory.
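A minimal sketch of the "jointly optimal policies" question, assuming a toy repeated prisoner's dilemma and treating candidate virtues as simple policies (unconditional cooperation, tit-for-tat, etc.); all the payoffs and policy names are illustrative, not anyone's actual proposal:

```python
from itertools import product

# Toy stage game: prisoner's dilemma payoffs as (row player, column player).
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

# Candidate "virtue-like" policies: map the opponent's past moves to a move.
def always_cooperate(history):
    return "C"

def always_defect(history):
    return "D"

def tit_for_tat(history):
    return history[-1] if history else "C"

POLICIES = {
    "always_cooperate": always_cooperate,
    "always_defect": always_defect,
    "tit_for_tat": tit_for_tat,
}

def play(policy_a, policy_b, rounds=50):
    """Average payoff per round for each player in the repeated game."""
    hist_a, hist_b = [], []  # each player's record of the *opponent's* moves
    total_a = total_b = 0
    for _ in range(rounds):
        move_a, move_b = policy_a(hist_a), policy_b(hist_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        total_a += pay_a
        total_b += pay_b
        hist_a.append(move_b)
        hist_b.append(move_a)
    return total_a / rounds, total_b / rounds

# Which policy pairs are Pareto-optimal (no other pair is at least as good for
# both players and strictly better for one)?
results = {
    (na, nb): play(pa, pb)
    for (na, pa), (nb, pb) in product(POLICIES.items(), repeat=2)
}
pareto = [
    pair for pair, (ua, ub) in results.items()
    if not any(
        (va >= ua and vb >= ub) and (va > ua or vb > ub)
        for other, (va, vb) in results.items() if other != pair
    )
]
print("Pareto-optimal policy pairs:", pareto)
```

Even in this toy setting several distinct policy pairs come out Pareto-optimal, which is the "how many combinations are jointly optimal" worry in miniature.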
Humans are more like states, whose preferences are illegible to them, than like Bayesian agents. Do humans have an incentive to make their preferences more legible?
This connects to conditional desires: some desires are only pseudo-conditional, where you can identify the world-history-to-utility pairs; others are truly conditional, where you can't identify these pairs, either because of uncertainty about which world states are picked out by the antecedent and consequent of the conditional, or because its logical form is too complex for a bounded agent to infer.
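A toy way to cash out the pseudo-conditional case, assuming the distinction is roughly "can the conditional be compiled into an unconditional utility function over world states/histories?"; the predicates and utilities below are made up for illustration:

```python
# Pseudo-conditional desire: "if it rains, I want to have an umbrella."
# Because the agent can identify the relevant world features, the conditional
# compiles into an ordinary (unconditional) utility function over world states.

def pseudo_conditional_utility(world):
    """world is a dict of features the agent can actually identify."""
    if world["rain"] and world["umbrella"]:
        return 1.0   # antecedent true, desire satisfied
    if world["rain"] and not world["umbrella"]:
        return -1.0  # antecedent true, desire frustrated
    return 0.0       # antecedent false: the conditional is silent

# The compiled table of world-state -> utility pairs exists and is small:
worlds = [
    {"rain": r, "umbrella": u} for r in (True, False) for u in (True, False)
]
for w in worlds:
    print(w, "->", pseudo_conditional_utility(w))

# A truly conditional desire, on this picture, is one where no such table can
# be written down by the agent: either it is uncertain which world states the
# antecedent/consequent pick out, or the compiled table would be too complex
# for a bounded agent to infer. That failure is a fact about the agent's
# representational limits, so there is nothing analogous to compute here.
```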
So how do democratic states act when they have these illegible conditional preferences, grounded in the future votes of a changing population? HLS's book might be relevant for C-S relations, as might that SSC book review on geopolitics from the non-rational-actor POV. See the Seeing Like a State notes.
How would a bargaining theory of the hierarchy/structure of virtues work?
How would it differ from standard bargaining theory/moral uncertainty approaches? See the paper summarising Kraut's Aristotle on the Human Good, maybe. A toy bargaining sketch is below.
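A minimal sketch of what a bargaining theory of the virtues' structure could compute, reusing the sub-agents from the earlier note (proto-benevolence, proto-courage) and the standard Nash bargaining solution; the candidate policies, utilities, and disagreement point are all made up for illustration:

```python
# Candidate overall policies (how much weight the whole agent gives each
# virtue), with made-up utilities: (proto-benevolence, proto-courage).
policies = {
    "benevolence_heavy": (0.9, 0.2),
    "courage_heavy":     (0.2, 0.9),
    "balanced":          (0.7, 0.7),
    "neither":           (0.1, 0.1),
}

# Disagreement point: what each sub-agent gets if no shared policy is adopted.
disagreement = (0.1, 0.1)

def nash_product(utilities, d=disagreement):
    """Nash bargaining objective: product of gains over the disagreement point."""
    gains = [u - di for u, di in zip(utilities, d)]
    if any(g < 0 for g in gains):
        return float("-inf")  # worse than disagreement for someone: infeasible
    prod = 1.0
    for g in gains:
        prod *= g
    return prod

best = max(policies, key=lambda name: nash_product(policies[name]))
print("Nash bargaining solution among candidates:", best)  # -> "balanced"
```

One way this might differ from moral-uncertainty-style aggregation: the bargainers are sub-agents within one person rather than rival ethical theories, so the output is a hierarchy/weighting of virtues rather than a compromise between whole theories.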
Look at game theory about games where players don't know their own goals infallibly? Something weaker than standard partial/incomplete information, where you're uncertain about your own payoffs? A toy sketch is below.
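A minimal sketch of the "uncertain about your own goals" case, assuming it can be modelled as a prior over candidate utility functions for yourself and choosing the action that maximises expected utility under that prior; the goals, actions, and probabilities are made up:

```python
# The agent doesn't know which of these is really its own utility function.
candidate_goals = {
    "values_honesty":  {"tell_truth": 1.0, "flatter": 0.2},
    "values_approval": {"tell_truth": 0.3, "flatter": 0.9},
}

# Self-opacity modelled as a prior over the agent's own candidate goals.
prior = {"values_honesty": 0.6, "values_approval": 0.4}

def expected_utility(action):
    return sum(prior[g] * candidate_goals[g][action] for g in candidate_goals)

actions = ["tell_truth", "flatter"]
best = max(actions, key=expected_utility)
for a in actions:
    print(a, "->", round(expected_utility(a), 3))
print("Best action under self-uncertainty:", best)
```

If every player faces this kind of self-opacity, the setup is weaker than the usual Harsanyi incomplete-information picture, where each player at least knows their own type.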
Seems like benevolence is an agent-neutral obligation, and honesty is agent-relative. This is a bit odd given that I think virtue is necessary for welfare(?). How could honesty be only agent-relative; don't I need to promote it for others as a necessary part of promoting their welfare? One answer might be that agent-relative virtues are only beneficial if they're autonomously developed in some sense. So it's good for others if you allow/encourage them to develop virtues, but not if you make them. This suggests virtue pills might not be good for people, supporting the intuitive position.
This is all really good because it sets up a bunch of two-way rigid links.
It must only be good to exercise agent-relative virtues if you autonomously develop them. This suggests they might be connected to authenticity/sincerity/self-expression/autonomy. This makes some sense for honesty: it doesn't seem good to be honest about your manipulated beliefs.
On side constraints, see Oakeshott's idea of laws as adverbial conditions on citizens' conduct.
(clarify which states are virtues)
(Are your explanations compatible with the fact that, currently, members and leaders of the EA community are overwhelmingly most sympathetic to utilitarian ethical theories, modulo some hedging around moral uncertainty?)
(Note that the three schools aren't exhaustive, and give examples of each, like Stoicism, Buddhism, and Christian ethics.)