With recent advances in natural language generation, the risks associated with the rapid proliferation and misuse of generative language models for malicious purposes, such as generating fake news and fabricated scientific article reviews, steadily increase. Artificial text detection (ATD) has emerged as a field that develops resources and computational methods to mitigate these risks. This paper introduces the Corpus of Artificial Texts (CoAT), a large-scale corpus of human-written and machine-generated texts for the Russian language. CoAT spans six domains and comprises outputs from 13 text generation models (TGMs), which differ in the number of parameters, architectural choices, pre-training objectives, and downstream applications. We describe the data collection methodology, conduct a linguistic analysis of the corpus, and present a detailed analysis of ATD experiments with widely used artificial text detectors. The results demonstrate that the detectors perform well on seen TGMs but fail to generalise to unseen TGMs and domains. We also find that identifying the author of a given text is challenging, and that human annotators significantly underperform the detectors. We release CoAT, the codebase, two ATD leaderboards, and other materials used in the paper.