Epistemic Chicken

Consider a fixed goal-seeking agent A, which is told its own code and that its objective function is U = { T if A(<A>,<U>) halts after T steps, 0 otherwise }. Alternatively, consider a pair of agents A, B, running similar AIs, each told its own code as well as its utility function U = { -1 if you don't halt, 0 if you halt but your opponent halts after at least as many steps, +1 otherwise }. What would you do as A, in either situation? (That is, what happens if A is an appropriate wrapper around an emulation of your brain, giving it access to arbitrarily powerful computational aids?)
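To make the two-agent payoff rule concrete, here is a minimal sketch of the utility function U for agent A; the function name and argument names are illustrative, not part of the original setup. Note that "+1 otherwise" covers both the case where the opponent halts strictly sooner and the case where the opponent never halts, so never halting is dominated, yet each agent is pushed to halt as late as possible:

```python
def utility_A(a_halts, a_steps, b_halts, b_steps):
    """Payoff to agent A in the two-agent game.

    a_halts/b_halts: whether each agent's computation halts.
    a_steps/b_steps: step counts at halting (ignored if the agent
    does not halt, so None is fine in that case).
    """
    # -1 if you don't halt.
    if not a_halts:
        return -1
    # 0 if you halt but your opponent halts after at least as many steps.
    if b_halts and b_steps >= a_steps:
        return 0
    # +1 otherwise: the opponent halted strictly sooner, or never halts.
    return 1
```

For example, `utility_A(True, 10, True, 5)` yields +1 (A outlasted B), `utility_A(True, 10, True, 10)` yields 0 (a tie counts against you), and `utility_A(False, None, True, 5)` yields -1.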