Just how important is “big data” to humanists? Is it mostly hype and hot air, as argued by Timothy Brennan in “The Digital Humanities Bust”? Or is it a genuine new beginning, as argued by Sarah E. Bond, Hoyt Long, and Ted Underwood in response? And what exactly does “big” mean? Is the magnitude necessarily the result of algorithmic computation, a form of digital humanities based on data mining and yielding statistical observations about large corpora? Or could it be nonalgorithmic, arising from collaboration, say, rather than computation? Bond, Long, and Underwood point out that big data projects might look different in the future, transformed by new partnerships with libraries and museums. What might some of these projects be?