Many algorithms that compute approximate solutions of dynamic stochastic general equilibrium (DSGE) models employ the QZ factorization because it allows a flexible formulation of the model and exempts the researcher from identifying the equations that give rise to infinite eigenvalues. We show, by means of an example, that the policy functions obtained by this approach may differ from both the solution of a properly reduced system and the solution obtained from solving the system of nonlinear equations that arises from applying the implicit function theorem to the model's equilibrium conditions. As a consequence, simulation results may depend on the specific algorithm used and on the numerical values of parameters that are theoretically irrelevant. The source of this inaccuracy is ill-conditioned matrices, such as those that emerge in models with strong habits. Researchers should be aware of these effects, and we propose several ways to handle them.
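To fix ideas, the following is a minimal sketch of the QZ-based solution method in the spirit of Klein (2000), using SciPy's scipy.linalg.ordqz. It is not the paper's algorithm: the model partitioning A E_t[x_{t+1}] = B x_t with the first nk variables predetermined, the function name, and the conditioning threshold are all illustrative assumptions. The conditioning check on the Z11 block illustrates the kind of numerical diagnostic the abstract calls for.

    # Sketch of a Klein (2000)-style QZ solver for a linear rational
    # expectations model A E_t[x_{t+1}] = B x_t, where x_t stacks nk
    # predetermined states k_t on top of the non-predetermined jumps d_t.
    # Names and the 1e10 threshold are illustrative, not from the paper.
    import numpy as np
    from scipy.linalg import ordqz

    def solve_klein(A, B, nk):
        """Return (P, F) with k_{t+1} = P k_t and d_t = F k_t."""
        # Generalized Schur form of the pencil (A, B); sort='ouc' places
        # |alpha/beta| > 1 first, i.e. the stable transition eigenvalues
        # |beta/alpha| < 1 in the leading block.
        S, T, alpha, beta, Q, Z = ordqz(A, B, sort='ouc', output='complex')
        # Blanchard-Kahn count: stable eigenvalues must equal the number
        # of predetermined variables for a unique stable solution.
        n_stable = int(np.sum(np.abs(beta) < np.abs(alpha)))
        if n_stable != nk:
            raise ValueError(f"{n_stable} stable eigenvalues for "
                             f"{nk} predetermined variables")
        Z11, Z21 = Z[:nk, :nk], Z[nk:, :nk]
        # An ill-conditioned Z11 is exactly the numerical hazard at issue:
        # flag it rather than fail silently.
        if np.linalg.cond(Z11) > 1e10:
            print("warning: Z11 ill-conditioned; policy functions "
                  "may be inaccurate")
        S11, T11 = S[:nk, :nk], T[:nk, :nk]
        dyn = np.linalg.solve(S11, T11)    # stable block S11^{-1} T11
        Z11_inv = np.linalg.inv(Z11)
        P = np.real(Z11 @ dyn @ Z11_inv)   # law of motion of the states
        F = np.real(Z21 @ Z11_inv)         # policy function for the jumps
        return P, F

As a sanity check on the sketch, the decoupled test pencil A = np.eye(2), B = np.diag([0.9, 1.5]) with nk = 1 has one stable eigenvalue (0.9) and one unstable eigenvalue (1.5), and solve_klein returns P = [[0.9]] and F = [[0.]], as expected.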