Modeling Human-Algorithm Interaction to Improve Trust and Reliability of Intelligent Decision Support Systems in Data-Driven Organizations

Authors

  • Siska Narulita, Universitas Nasional Karangturi Semarang
  • Prihati Prihati, Institut Teknologi dan Bisnis Semarang
  • Ahmad Nugroho, Universitas Tidar

Keywords:

trust-building mechanisms, decision support systems, explainable AI, user engagement, human-algorithm interaction

Abstract

This research explores the role of human-algorithm interaction mechanisms in enhancing trust, reliability, and user confidence in Decision Support Systems (DSS). Traditional DSS models often focus solely on algorithmic accuracy and performance, neglecting factors such as transparency and user engagement that are essential for building trust. By incorporating explainable AI (XAI) techniques such as SHAP and LIME, real-time feedback mechanisms, and user-friendly interfaces, the study develops structured interaction models that improve the interpretability of AI-driven decisions. The results show that transparent decision-making processes and interactive features significantly enhance user trust, making DSS more reliable and easier to adopt. Users who interacted with systems that provided clear, understandable explanations of decisions, along with real-time updates on the system’s confidence, reported higher decision-making confidence, especially in high-stakes scenarios. These improvements lead to greater user engagement and adoption across domains, including healthcare and finance. The study also highlights the importance of balancing interpretability with efficiency in user interface design to ensure both trust and usability. The findings contribute to the design of more user-centric DSS that prioritize trust, interpretability, and cognitive factors, providing a framework for integrating intelligent decision support systems into complex decision-making environments. Future research should refine these interaction models and explore their applicability in other sectors.
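To make the interaction pattern described above concrete, the sketch below shows the kind of per-decision output such a DSS might surface to a user: a prediction, a confidence readout, and feature attributions. It is a minimal, self-contained illustration in the spirit of SHAP/LIME occlusion-style explanations, not the study's actual implementation; the toy loan-approval model, its feature names, and its weights are all hypothetical.

```python
# Hypothetical toy model: a logistic loan-approval score with
# occlusion-style per-feature attributions and a confidence signal.
import math

FEATURES = ["income", "debt_ratio", "years_employed"]
WEIGHTS = {"income": 0.5, "debt_ratio": -1.2, "years_employed": 0.3}
# Baseline values used when a feature is "masked out" for attribution.
BASELINE = {"income": 0.0, "debt_ratio": 0.0, "years_employed": 0.0}

def score(x):
    """Logistic score in [0, 1]."""
    z = sum(WEIGHTS[f] * x[f] for f in FEATURES)
    return 1.0 / (1.0 + math.exp(-z))

def explain(x):
    """Occlusion-style attribution: replace each feature with its
    baseline value and report how much the score changes."""
    full = score(x)
    attributions = {}
    for f in FEATURES:
        masked = dict(x)
        masked[f] = BASELINE[f]
        attributions[f] = full - score(masked)
    # Distance from the 0.5 decision boundary doubles as a crude
    # confidence readout: 0 = maximally uncertain, 1 = certain.
    confidence = abs(full - 0.5) * 2
    return full, confidence, attributions

applicant = {"income": 2.0, "debt_ratio": 0.4, "years_employed": 5.0}
s, conf, attr = explain(applicant)
print(f"score={s:.2f} confidence={conf:.2f}")
for f, a in sorted(attr.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {f:>15}: {a:+.3f}")
```

Presenting attributions sorted by magnitude, alongside the confidence value, mirrors the abstract's point that users trust a recommendation more when they can see both why it was made and how sure the system is.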

Published

2026-01-20