An object is hidden in one of two boxes and occasionally moves between the boxes in accordance with a specified continuous-time Markov process. The objective is to find the object at minimal expected cost. In this paper it is assumed that search efforts are unlimited. In addition to the search costs, the ‘real time’ until the object is found is also taken into account in the cost structure. Our main results are that the optimal policy may consist of five regions and that the controls applied should be of the extreme 0 or ∞ type. The resulting expected cost compares favorably with the expected cost under the bounded controls studied previously in the search literature.
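
As an illustration of the model described above, the sketch below simulates the hidden object moving between the two boxes as a two-state continuous-time Markov chain while each box is searched at a constant intensity. The transition rates, detection rates, and cost weights are illustrative assumptions rather than values from the paper, and constant (bounded) search intensities are used only because the extreme 0-or-∞ controls of the optimal policy cannot be simulated directly.

```python
import random

# Minimal simulation sketch of the two-box search model.
# All numerical parameters below are assumed for illustration only.

def simulate_until_found(lam_12=1.0, lam_21=1.5,
                         search_rate=(5.0, 0.0),
                         search_cost=(1.0, 1.0),
                         time_cost=1.0, seed=None):
    """Simulate the object moving between boxes 1 and 2 as a two-state
    continuous-time Markov chain; box i is searched at constant intensity
    search_rate[i], and detection while the object sits in box i occurs
    as a Poisson event of rate search_rate[i].
    Returns (time_to_detection, accumulated_cost)."""
    rng = random.Random(seed)
    box = 0 if rng.random() < 0.5 else 1      # hidden location (0-indexed)
    rates_out = (lam_12, lam_21)              # rate of leaving each box
    t, cost = 0.0, 0.0
    while True:
        move_rate = rates_out[box]
        detect_rate = search_rate[box]
        total = move_rate + detect_rate
        dt = rng.expovariate(total)           # time to next event
        # cost accrues as search effort plus the 'real time' penalty
        effort = sum(c * u for c, u in zip(search_cost, search_rate))
        cost += (effort + time_cost) * dt
        t += dt
        if rng.random() < detect_rate / total:
            return t, cost                    # object detected
        box = 1 - box                         # object moved to the other box

if __name__ == "__main__":
    t, c = simulate_until_found(seed=42)
    print(f"found after {t:.3f} time units at cost {c:.3f}")
```

Averaging the returned cost over many replications gives a Monte Carlo estimate of the expected cost of a fixed bounded-intensity policy, which could serve as a baseline for comparison with the extreme-control policies discussed in the paper.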