Recent advances in recording technology provide unique opportunities to observe children’s everyday speech environments using daylong audio recordings. A number of studies have proposed that speech may support learning differently when it is directed to a child (target-child-directed speech, tCDS) than when it is directed to others (other-directed speech, ODS) (Shneidman & Goldin-Meadow, 2012; Weisleder & Fernald, 2013). To identify periods of tCDS and ODS, researchers typically rely on the time-consuming and laborious work of human listeners, who weigh numerous features when making judgments. Human listeners are also used to identify periods when children are asleep or awake. In this paper, we detail our efforts to automate these processes. We analyzed over 1,000 hours of audio from daylong recordings of 153 English- and Spanish-speaking families in the U.S. with 17- to 28-month-old children; these recordings had been previously coded for periods of sleep, tCDS, and ODS. We first explored the patterns of features that characterized periods of tCDS and ODS. We then evaluated two classifiers trained on automated measures generated by LENA™, including frequency measures (AWC, CTC, CVC) and duration measures (meaningful speech, distant speech, TV, noise, silence). Results revealed high sensitivity and specificity in classifying periods of sleep, and moderate sensitivity and specificity in classifying periods of tCDS and ODS. Model-derived predictions from our tCDS/ODS classifier yielded patterns of correlations similar to those in previously published findings, with variation in tCDS, but not ODS, positively linked to children’s later vocabularies (Weisleder & Fernald, 2013). This work offers promising tools for streamlining work with daylong recordings, thereby facilitating research that aims to better understand how children learn from their everyday speech environments.
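
To make the classification setup concrete, the sketch below shows how a classifier of this general kind could be trained and evaluated in Python with scikit-learn. It is a minimal illustration under stated assumptions, not the authors’ pipeline: the abstract does not name the models used, so the random-forest choice, the per-segment feature layout, and the synthetic data standing in for LENA-derived measures and human-coded labels are all hypothetical.

```python
# Minimal sketch (not the authors' method): classify audio segments as
# tCDS vs. ODS from LENA-style features, then report the sensitivity and
# specificity metrics the abstract describes. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Hypothetical per-segment features: three frequency measures (AWC, CTC,
# CVC) and five duration measures (meaningful speech, distant speech, TV,
# noise, silence), i.e., 8 columns per segment.
n_segments = 2000
X = rng.random((n_segments, 8))
y = rng.integers(0, 2, n_segments)  # 1 = tCDS, 0 = ODS (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Sensitivity = true-positive rate; specificity = true-negative rate.
tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(X_test)).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}")
print(f"specificity = {tn / (tn + fp):.2f}")
```

On real data, the random feature matrix would be replaced with per-segment LENA output and the random labels with the human-coded sleep/tCDS/ODS annotations described above.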