The exploration of distant language relationships reaching back to the Neolithic remains very demanding and is often perceived as controversial. Over the last two decades, significant advances in computational linguistics have produced alternatives to traditional methods of language classification, but comparatively little quantitative research has been devoted to long-range language relationships. Classical methods reach their limits at a depth where the signals they infer are more likely to be due to chance than to relatedness. We created a distance-based method that does not depend on human relatedness judgments to set up an automated language classification inference system. We used a backward selection method to optimize the key elements of the system against existing reference classifications: a word list, sound-change rules and other parameters. In study design and feature selection, we favored choices that benefit longer-range inference, if necessary at the cost of short-range performance. The scope of the project is global: the system processes 1962 languages from all families, the limit on including more languages being the availability of the material our system requires. After exploring the methodology, we show that it leads to a reliable language classification, closely matching the reference classifications. We then apply a series of statistical models to target middle- and long-range relationships and explore connections outside the well-established classifications. By shedding light on relationships between top-level language families in the context of chance interference, our results deliver strong support for two major long-range hypotheses: Eurasiatic (connecting Indo-European, Uralic, Turkic, Mongolic and Tungusic) and Austric (connecting Austronesian, Tai-Kadai, Austroasiatic and Hmong-Mien).
Further support is given to parts of the Nostratic macrofamily, to a dozen other long-range hypotheses, and to internal classifications of established language families.