The distribution theory for discrete-time renewal–reward processes with dependent rewards is developed through the derivation of double transforms. By dependent, we mean the more realistic setting in which the reward earned over an interarrival period depends on the length of that period. The double transforms are the generating functions in time of the time-dependent reward probability-generating functions. Residue and saddlepoint approximations are used to invert these double transforms, yielding accurate approximations to the reward distribution at an arbitrary time n. Double transforms are also derived for the distribution of the first-passage time at which the cumulative reward exceeds a fixed threshold, and these distributions are accurately approximated by the same residue and saddlepoint inversion methods. The residue methods additionally provide asymptotic expansions for moments and support proofs of central limit theorems for the first-passage times and reward amounts.
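As a concrete illustration (not taken from the paper), the following minimal Monte Carlo sketch simulates a discrete-time renewal–reward process with duration-dependent rewards and estimates the first-passage time at which the cumulative reward exceeds a threshold. The geometric interarrival distribution, the parameter `p`, and the quadratic reward rule are all illustrative assumptions standing in for whatever dependence structure a given model specifies.

```python
import random

def simulate_first_passage(threshold, p=0.3, rng=None, max_steps=10_000):
    """Simulate one path of a discrete-time renewal-reward process in which
    the reward for each interarrival period is the squared period length
    (an assumed, illustrative form of duration dependence).  Interarrival
    times are geometric(p) on {1, 2, ...}.  Returns the first renewal epoch
    n at which the cumulative reward exceeds `threshold`, or None if the
    step cap is hit first."""
    rng = rng or random.Random()
    t, reward = 0, 0.0
    while t < max_steps:
        # Draw a geometric interarrival time on {1, 2, ...}.
        x = 1
        while rng.random() >= p:
            x += 1
        t += x
        reward += x ** 2  # reward depends on the interarrival duration
        if reward > threshold:
            return t
    return None

# Estimate the mean first-passage time by simple Monte Carlo averaging.
rng = random.Random(42)
times = [simulate_first_passage(200, p=0.3, rng=rng) for _ in range(2000)]
mean_fp = sum(times) / len(times)
```

In the paper's setting, this mean and the full first-passage distribution are instead obtained analytically, by inverting the double transform with residue and saddlepoint methods rather than by simulation.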