This paper begins by distinguishing between an "active-learning" framework and a "passive-learning" approach when used to evaluate R&D strategies. A simple decision-tree formulation is employed to gain insight into the essential differences between the two approaches, and the numerical differences between the active- and passive-learning frameworks are derived. It is shown that in a sequence of decisions under uncertainty, where the probabilities of occurrence of the uncertain events are themselves unknown, much can be gained by taking into account the effect of learning over the planning horizon on decisions to be made in the future.
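As a rough illustration of the distinction (not the paper's own formulation), consider a two-stage R&D funding decision with an unknown success probability governed by a Beta prior. The sketch below contrasts a passive policy, which commits to both stages using only the prior mean, with an active policy, which observes the first outcome, updates the posterior, and only then decides on the second stage. The cost, payoff, and Beta(1, 1) prior are illustrative assumptions.

```python
# Hypothetical sketch (assumed parameters, not the paper's model): a two-stage
# R&D decision with an unknown success probability p and a Beta(a, b) prior.
# "Passive" learning: commit to both stages now using only the prior mean.
# "Active" learning: observe the stage-1 outcome, update the posterior,
# and then decide whether to fund stage 2.

COST = 1.0       # assumed cost of funding one stage
PAYOFF = 2.5     # assumed payoff if a funded stage succeeds
A, B = 1.0, 1.0  # assumed uniform Beta(1, 1) prior over p


def stage_value(p_mean):
    """Expected value of funding one stage given a point estimate of p."""
    return p_mean * PAYOFF - COST


def passive_value():
    """Decide both stages up front with the prior mean; no updating."""
    p0 = A / (A + B)
    per_stage = stage_value(p0)
    # Fund a stage only if its expected value is non-negative.
    return 2 * max(per_stage, 0.0)


def active_value():
    """Fund stage 1 (if worthwhile), observe, update, then choose stage 2."""
    p0 = A / (A + B)
    if stage_value(p0) < 0:
        return 0.0
    # Posterior means after a stage-1 success or failure.
    p_success = (A + 1) / (A + B + 1)
    p_failure = A / (A + B + 1)
    # The stage-2 decision is deferred until the stage-1 outcome is known.
    continuation = (p0 * max(stage_value(p_success), 0.0)
                    + (1 - p0) * max(stage_value(p_failure), 0.0))
    return stage_value(p0) + continuation


print(f"passive-learning value: {passive_value():.3f}")
print(f"active-learning value:  {active_value():.3f}")
```

With these assumed numbers the active policy is worth more than the passive one, because it retains the option to stop after an unfavorable first outcome; this option value is the gain from accounting for learning over the planning horizon.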