Due to its increasing ease of use and ability to quickly collect large samples, online behavioral research is currently booming. With this growing popularity, it is important that researchers are aware of who online participants are, and what devices and software they use to access experiments. While it is somewhat obvious that these factors can impact data quality, it remains unclear how large the problem is. To understand how these characteristics impact experiment presentation and data quality, we performed a battery of automated tests on a set of representative setups. We investigated how different web experiment-building platforms (Gorilla, jsPsych, Lab.js, and PsychoJS/PsychoPy3), browsers (Chrome, Edge, Firefox, and Safari), and operating systems (macOS and Windows 10) affect display time across 30 different frame durations for each software combination. In addition, we employed a robot actuator in representative setups to measure response recording across the aforementioned platforms, and between different keyboard types (desktop and integrated laptop). We then surveyed over 200,000 participants on their demographics, technology, and software to provide context for our findings. We found that modern web platforms provide reasonable accuracy and precision for display duration and manual response time, but we also identified specific combinations that produce unexpected variance and delays. While no single platform stands out as the best in all features and conditions, our findings can help researchers make informed decisions about which experiment-building platform is most appropriate for their situation, and what equipment their participants are likely to have.