This document discusses solving Markov decision processes (MDPs) whose reward function is the sum of two separate reward functions. It asks whether the sub-problems defined by the individual reward functions can be solved independently and their solutions then combined to solve the overall problem: specifically, whether the action-value functions, optimal action-value functions, and optimal policies of the sub-problems combine in a simple way (for example, by addition) to yield those of the overall problem.
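A minimal numerical sketch of the question, using a hypothetical toy MDP of my own construction (a single state with a deterministic self-loop and two actions, where reward function `r1` pays for action 0 and `r2` pays for action 1). For a fixed policy, action values are linear in the reward and therefore add across the two sub-problems; for optimal action values this need not hold, since each sub-problem's optimum may follow a different policy:

```python
import numpy as np

# Hypothetical one-state, two-action MDP with a deterministic self-loop
# (an illustrative assumption, not taken from the document).
gamma = 0.9
r1 = np.array([1.0, 0.0])  # first reward function: pays for action 0
r2 = np.array([0.0, 1.0])  # second reward function: pays for action 1

def q_star(rewards, iters=2000):
    """Optimal action values via value iteration on the one-state MDP."""
    q = np.zeros_like(rewards)
    for _ in range(iters):
        q = rewards + gamma * q.max()   # Bellman optimality backup
    return q

def q_pi(rewards, action=0, iters=2000):
    """Action values of the fixed policy that always takes `action`."""
    q = np.zeros_like(rewards)
    for _ in range(iters):
        q = rewards + gamma * q[action]  # Bellman expectation backup
    return q

# Fixed policy: action values are linear in the reward, so they add.
print(q_pi(r1) + q_pi(r2))       # [10. 10.]
print(q_pi(r1 + r2))             # [10. 10.]

# Optimal action values: each sub-problem's optimum follows a different
# policy, so their sum overestimates the combined optimum here.
print(q_star(r1) + q_star(r2))   # [19. 19.]
print(q_star(r1 + r2))           # [10. 10.]
```

In this toy example, adding the sub-problems' optimal action values gives 19 per action, while the true optimum of the summed reward is 10, which illustrates why the "simple combination" question is non-trivial for optimal quantities even though it holds for a fixed policy.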