Lie detection research comparing manual and automated coding of linguistic cues is limited. In Experiment 1, we attempted to extend this line of research by directly comparing veracity differences in the linguistic cue "total details" as measured by manual coding and by two coding software programs (Text Inspector and Linguistic Inquiry and Word Count [LIWC]) across eight published datasets. Mixed model analyses revealed that LIWC showed larger veracity differences in total details than Text Inspector and manual coding. Follow-up classification analyses showed that both automated and manual coding could accurately classify honest and false accounts. In Experiment 2, we examined whether LIWC's sensitivity to veracity differences resulted from honest accounts including more redundant (repeated) words than false accounts, because LIWC, unlike Text Inspector and manual coding, counts redundant words. Our prediction was supported, and the most redundant words were function words. The results indicate that automated coding can detect veracity differences in total details and redundancy, but it is not necessarily better than manual coding at accurately classifying honest and false accounts.
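The counting distinction at issue in Experiment 2 can be sketched in a few lines: a token count retains repeated words (as LIWC's word count does), whereas a unique-word count collapses them (closer to Text Inspector and manual detail coding). The function-word list and helper below are purely illustrative assumptions, not LIWC's actual dictionary or algorithm.

```python
from collections import Counter

# Illustrative function-word list (an assumption, not LIWC's dictionary).
FUNCTION_WORDS = {"the", "a", "an", "and", "i", "to", "of", "in", "was", "it"}

def detail_counts(text: str) -> dict:
    """Count tokens (repeats included), unique words (repeats collapsed),
    and redundant (repeated) tokens in a statement."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens)            # LIWC-style: every token counted
    unique = len(counts)           # repeats collapsed
    redundant = total - unique     # repeated tokens only
    # Redundant tokens contributed by function words specifically.
    redundant_function = sum(
        n - 1 for w, n in counts.items() if n > 1 and w in FUNCTION_WORDS
    )
    return {"total": total, "unique": unique,
            "redundant": redundant, "redundant_function": redundant_function}

statement = "i went to the shop and the man in the shop greeted me"
print(detail_counts(statement))
# → {'total': 13, 'unique': 10, 'redundant': 3, 'redundant_function': 2}
```

Under this sketch, two statements conveying the same unique details can diverge in total word count only through redundancy, which is the pattern the second experiment tested.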