The risk ratio quantifies the risk of disease in a study population relative to a reference population. Standard methods of estimation and testing assume a perfect diagnostic test, i.e., one with sensitivity and specificity of 100%. This assumption typically does not hold, however, and its violation may invalidate naive estimation and testing for the risk ratio. We propose procedures that control for the sensitivity and specificity of the diagnostic test, provided the risks are measured as proportions, as is the case in cross-sectional studies or studies with a fixed follow-up time. These procedures provide an exact unconditional test and confidence interval for the true risk ratio. The methods also cover the case in which sensitivity and specificity differ between the two groups (differential misclassification). The resulting test and confidence interval may be useful in epidemiological studies as well as in clinical and vaccine trials. We illustrate the method with real-life examples that demonstrate that ignoring the sensitivity and specificity of the diagnostic test may lead to considerable bias in the estimated risk ratio.
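To illustrate the basic idea of controlling for an imperfect test, the following minimal Python sketch applies the standard Rogan-Gladen-type correction to each group's observed proportion and recomputes the risk ratio from the corrected proportions. It shows only the adjusted point estimate, not the exact unconditional test or confidence interval proposed here; the sample counts and the sensitivity/specificity values are hypothetical, and the function names are illustrative.

```python
def corrected_risk(p_obs, se, sp):
    """Recover the true proportion from the observed (apparent) proportion,
    given test sensitivity (se) and specificity (sp)."""
    # The observed proportion satisfies p_obs = se * p_true + (1 - sp) * (1 - p_true),
    # so p_true = (p_obs + sp - 1) / (se + sp - 1), truncated to [0, 1].
    return min(max((p_obs + sp - 1.0) / (se + sp - 1.0), 0.0), 1.0)

def corrected_risk_ratio(x1, n1, x0, n0, se1, sp1, se0, sp0):
    """Risk-ratio point estimate after adjusting each group's observed proportion
    for its own sensitivity/specificity (this allows differential misclassification
    when se1, sp1 differ from se0, sp0)."""
    p1 = corrected_risk(x1 / n1, se1, sp1)
    p0 = corrected_risk(x0 / n0, se0, sp0)
    return p1 / p0 if p0 > 0 else float("inf")

# Hypothetical data: 30/200 test positive in the study group, 20/250 in the
# reference group; sensitivity 0.90 and specificity 0.95 in both groups.
naive_rr = (30 / 200) / (20 / 250)
adjusted_rr = corrected_risk_ratio(30, 200, 20, 250, 0.90, 0.95, 0.90, 0.95)
print(f"naive RR = {naive_rr:.2f}, misclassification-adjusted RR = {adjusted_rr:.2f}")
```

Even with these modest assumed error rates, the adjusted risk ratio differs noticeably from the naive one, illustrating the kind of bias that ignoring sensitivity and specificity can introduce.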