Abstract
Many collisions between pedestrians and cars are caused by poor visibility, such as occlusion by a parked vehicle. Augmented reality (AR) could help to prevent this problem, but it is unknown to what extent the augmented information needs to be embedded into the world. In this virtual reality experiment with a head-mounted display (HMD), 28 participants were exposed to AR designs in a scenario where a vehicle approached from behind a parked vehicle. The experimental conditions included a head-locked live video feed of the occluded region, meaning it was fixed in a specific location within the view of the HMD (VideoHead), a world-locked video feed displayed across the street (VideoStreet), and two conformal diminished-reality designs: a see-through display on the occluding vehicle (VideoSeeThrough) and a solution in which the occluding vehicle was made semi-transparent (TransparentVehicle). A Baseline condition without augmented information served as a reference. Additionally, the VideoHead and VideoStreet conditions were each tested with and without a guiding arrow indicating the location of the approaching vehicle. Participants performed 42 trials, 6 per condition, during which they had to hold down a key whenever they felt safe to cross. The keypress percentages and responses to additional questionnaires showed that the diminished-reality TransparentVehicle and VideoSeeThrough designs came out most favourably, while the VideoHead solution caused some discomfort and dissatisfaction. An analysis of head yaw angle showed that VideoHead and VideoStreet divided attention between the screen and the approaching vehicle. The guiding arrows did not add demonstrable value. AR designs with a high level of local embeddedness are beneficial for addressing occlusion problems when crossing. However, the head-locked solutions should not be dismissed outright because, according to the literature, such solutions can serve tasks where a salient warning or instruction is beneficial.
Publisher
Springer Science and Business Media LLC
Cited by
1 article.