Exploring User Experience Evaluation and Commercial Applications of AI Technology in Multimodal Interaction Design for the Automotive Industry
DOI: https://doi.org/10.56982/dream.v4i12.323
Keywords: AI Technology, Commercial Application, Multi-Modal, Interaction Design, Automotive
Abstract
This paper explores the integration of artificial intelligence (AI) technology in multimodal interaction design within the automotive industry, focusing on user experience (UX) evaluation and commercial applications. By examining the role of AI in enhancing interaction modalities such as voice, touch, and gesture, this study highlights the potential for more intuitive and personalized in-car experiences. The paper proposes a theoretical framework to understand the relationship between AI-driven designs, UX optimization, and their commercial viability. Drawing from recent advancements, it underscores the importance of balancing technological innovation with user-centered design principles to address safety, usability, and market demands. The findings aim to provide a foundation for future empirical research, guiding automotive stakeholders in developing advanced and commercially viable human-machine interfaces that redefine the driving experience.
References
Aftab, A. R., von der Beeck, M., & Feld, M. (2020). You Have a Point There: Object Selection Inside an Automobile Using Gaze, Head Pose and Finger Pointing. arXiv preprint arXiv:2012.13449. https://doi.org/10.1145/3382507.3418836
Auto Connected Car News. (2024). @CES ACC Intros AI Multimodal Interaction. Retrieved from https://www.autoconnectedcar.com/2024/01/ces-acc-intros-ai-motimdal-interaction/
Ara, J., Sik-Lanyi, C., & Kelemen, A. (2024). Accessibility engineering in web evaluation process: a systematic literature review. Universal Access in the Information Society, 23, 653–686. https://doi.org/10.1007/s10209-023-00967-2
Bieniek, M., et al. (2024). Generative AI in Multimodal User Interfaces: Trends, Challenges, and Opportunities. arXiv preprint arXiv:2411.10234.
Choi, G. W., & Seo, J. (2024). Accessibility, Usability, and Universal Design for Learning: Discussion of Three Key LX/UX Elements for Inclusive Learning Design. TechTrends, 68, 936–945. https://doi.org/10.1007/s11528-024-00987-6
DigitalDefynd. (2024). Top 5 AI Use in Automotive Industry Case Studies. Retrieved from https://digitaldefynd.com/IQ/ai-in-automotive-industry-case-studies/
Ebel, P. (2024). Generative AI and Attentive User Interfaces: Five Strategies to Enhance Take-Over Quality in Automated Driving. arXiv preprint arXiv:2402.10664.
Gomaa, A. (2022). Adaptive User-Centered Multimodal Interaction towards Reliable and Trusted Automotive Interfaces. arXiv preprint arXiv:2211.03539. https://doi.org/10.1145/3536221.3557034
HTC Inc. (2023). AI-driven Multimodal Interfaces: The Future of User Experience (UX). Retrieved from https://www.htcinc.com/resources/ai-driven-multimodal-interfaces-the-future-of-user-experience-ux/
Investors Business Daily. (2024). SoundHound Rival Cerence AI Scores Nvidia Automotive Pact; Shares Surge. Retrieved from https://www.investors.com/news/technology/soundhound-stock-cerence-nvidia-automotive-ai-voice-recognition/
Jansen, P., Britten, J., Häusele, A., Segschneider, T., Colley, M., & Rukzio, E. (2023). AutoVis: Enabling Mixed-Immersive Analysis of Automotive User Interface Interaction Studies. arXiv preprint arXiv:2302.10531. https://doi.org/10.1145/3544548.3580760
Kaushik, N. (2024). Generative AI Creates Challenges and Opportunities for the Entire Automotive Industry. Forbes.
LeewayHertz. (2024). AI for Automotive: Use Cases, Technologies, and Future Prospects. Retrieved from https://www.leewayhertz.com/ai-use-cases-in-the-automotive-industry/
Li, Z., Liang, C., Wang, Y., Qin, Y., Yu, C., Yan, Y., Fan, M., & Shi, Y. (2023). Enabling Voice-Accompanying Hand-to-Face Gesture Recognition with Cross-Device Sensing. arXiv preprint arXiv:2303.10441. https://doi.org/10.1145/3544548.3581008
Nordhoff, S., de Winter, J., Kyriakidis, M., van Arem, B., & Happee, R. (2021). Acceptance of Automated Driving: An Overview of User-Related Issues. In Autonomous Driving (pp. 521-554). Springer, Berlin, Heidelberg.
OpenXcell. (2024). AI in Automotive Industry: Applications, Benefits, and Future Trends. Retrieved from https://www.openxcell.com/blog/ai-in-automotive-industry/
Pfleging, B., & Schmidt, A. (2012). Multimodal Interaction in the Car - Combining Speech and Gestures on the Steering Wheel. Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, 155-162. https://doi.org/10.1145/2390256.2390282
Pfleging, B., & Schmidt, A. (2011). SpeeT: A Multimodal Interaction Style Combining Speech and Touch Interaction in Automotive Environments. Proceedings of the 13th International Conference on Human-Computer Interaction with Mobile Devices and Services, 69-72.
Prabhakar, G., Rajkhowa, P., & Biswas, P. (2021). A Wearable Virtual Touch System for Cars. arXiv preprint arXiv:2106.05700. https://doi.org/10.1007/s12193-021-00377-9
Ramseook-Munhurrun, P., Seebaluck, V. N., & Naidoo, P. (2015). Examining the Structural Relationships of Destination Image, Perceived Value, Tourist Satisfaction, and Loyalty: Case of Mauritius. Procedia - Social and Behavioral Sciences, 175, 252–259. https://doi.org/10.1016/j.sbspro.2015.01.1198
Restackio. (2024). Voice Recognition In Automotive Tech. Retrieved from https://www.restack.io/p/ai-powered-autonomous-vehicles-answer-voice-recognition-cat-ai
Restackio. (2023). AI-Driven UX Design Challenges. Retrieved from https://www.restack.io/p/ai-driven-user-experience-answer-ux-design-challenges-cat-ai
Restackio. (2024). Importance Of User-Centered Design In AI. Retrieved from https://www.restack.io/p/human-centric-ai-design-answer-importance-user-centered-design-cat-ai
Reuters. (2024, October 22). Qualcomm, Alphabet team up for automotive AI; Mercedes inks chip deal. Retrieved from https://www.reuters.com/technology/artificial-intelligence/qualcomm-alphabet-team-up-automotive-ai-mercedes-inks-chip-deal-2024-10-22/
Saren, S., Mukhopadhyay, A., Ghose, D., & Biswas, P. (2024). Comparing alternative modalities in the context of multimodal human–robot interaction. Journal on Multimodal User Interfaces, 18, 69–85. https://doi.org/10.1007/s12193-023-00421-w
Schaffer, S. M., Böck, R., & Weis, T. (2016). Benefit, design and evaluation of multimodal interaction. In Proceedings of the 1st International Workshop on Designing Speech and Language Interactions (pp. 1-6).
Sun, W., Tang, S., & Liu, F. (2021). Examining Perceived and Projected Destination Image: A Social Media Content Analysis. Sustainability, 13(6), 3354. https://doi.org/10.3390/su13063354
Stappen, L., Dillmann, J., Striegel, S., Vögel, H.-J., Flores-Herr, N., & Schuller, B. W. (2023). Integrating Generative Artificial Intelligence in Intelligent Vehicle Systems. arXiv preprint arXiv:2305.17137. https://doi.org/10.1109/ITSC57777.2023.10422003
Turunen, M., Hakulinen, J., Melto, A., Heimonen, T., Laivo, T., & Hella, J. (2009). SUXES - user experience evaluation method for spoken and multimodal interaction. Interspeech 2009. https://doi.org/10.21437/Interspeech.2009-676
Unite.AI. (2022). When AI Meets User Experience: Challenges Linger, Opportunities Shine Ever Brighter. Retrieved from https://www.unite.ai/when-ai-meets-user-experience-challenges-linger-opportunities-shine-ever-brighter/
Valverde, F., et al. (2021). Towards a model-driven approach for multiexperience AI-based user interfaces. Software and Systems Modeling, 20, 1345–1363. https://doi.org/10.1007/s10270-021-00904-y
Wang, Y., Xue, Z., Li, J., Jia, S., & Yang, B. (2024). Multimodal Interaction Design in Intelligent Vehicles. In Human-Machine Interaction (HMI) Design for Intelligent Vehicles (pp. 161–188). Springer. https://doi.org/10.1007/978-981-97-7823-2_6
Wei, J., Zhou, L., & Li, L. (2024). A Study on the Impact of Tourism Destination Image and Local Attachment on the Revisit Intention: The Moderating Effect of Perceived Risk. PLOS ONE, 19(1), e0296524. https://doi.org/10.1371/journal.pone.0296524
Wechsung, I., & Naumann, A. B. (2008). Evaluation methods for multimodal systems: A comparison of standardized usability questionnaires. In International Conference on Multimodal Interfaces (pp. 276-283). Springer. https://doi.org/10.1007/978-3-540-69369-7_32
Yu, Y., Zhang, Y., & Burnett, G. (2023). "Tell me about that church": Exploring the Design and User Experience of In-Vehicle Multi-modal Intuitive Interface in the Context of Driving Scenario. arXiv preprint arXiv:2311.04160.
Zamani, M., Mikalef, P., & Zhu, Y. (2023). Artificial intelligence (AI) for user experience (UX) design: a systematic review. Information Technology & People.
Zhou, X., Williams, A. S., & Ortega, F. R. (2022). Eliciting Multimodal Gesture+Speech Interactions in a Multi-Object Augmented Reality Environment. arXiv preprint arXiv:2207.12566. https://doi.org/10.1145/3562939.3565637
License
Copyright (c) 2025 Mu Liyuan, Sharfika Raine, Shi Zhehan

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.


