The multiple-group categorical factor analysis (FA) model and the graded response model (GRM) are commonly used to examine polytomous items for differential item functioning (DIF) and thereby detect possible measurement bias in educational testing. In this study, the multiple-group categorical FA model (MC-FA) and the multiple-group normal-ogive GRM are unified under the common framework of discretization of a normal variate. We rigorously justify a set of identified parameters and determine the identifiability constraints needed to make the parameters just-identified and estimable within the common MC-FA framework. Under this framework, the categorical FA model and the normal-ogive GRM differ only in the particular set of identifiability constraints imposed, rather than being fundamentally distinct models. We therefore compare the performance of the categorical FA and GRM approaches to DIF assessment through simulation studies of MC-FA models under their respective sets of identifiability constraints. Our results show that, under scenarios with varying degrees of DIF for examinees of different ability levels, models with the GRM-type identifiability constraints generally perform better in DIF detection, with higher testing power. General guidelines on the choice of just-identified parameterization are also provided for practical use.
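For concreteness, the following is a minimal sketch of the standard underlying-variable (normal-variate discretization) formulation that links the two parameterizations; the notation (y*_{ij}, nu_j, lambda_j, tau_{jc}, eta_i, sigma_j) is illustrative and not taken from the original text.

% A hedged sketch of the normal-variate discretization underlying both the
% categorical FA model and the normal-ogive GRM (illustrative notation only).
\begin{align*}
  y^{*}_{ij} &= \nu_j + \lambda_j \eta_i + \varepsilon_{ij},
  \qquad \varepsilon_{ij} \sim N(0, \sigma_j^{2}), \\
  y_{ij} &= c \quad \text{iff} \quad \tau_{j,c-1} < y^{*}_{ij} \le \tau_{j,c},
  \qquad c = 1, \dots, C_j, \\
  P(y_{ij} \ge c \mid \eta_i)
  &= \Phi\!\left(\frac{\nu_j + \lambda_j \eta_i - \tau_{j,c-1}}{\sigma_j}\right).
\end{align*}
% Fixing the latent scale through the thresholds and residual variances yields a
% categorical-FA-style parameterization, whereas absorbing the scale into
% discrimination and location parameters yields a normal-ogive-GRM-style
% parameterization; the two differ only in the identifiability constraints imposed.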