Engineering psychology in the era of artificial intelligence
XU Wei, GE Liezhong
Center for Psychological Sciences, Zhejiang University, Hangzhou 310058, China
Received: 2020-03-27; Online: 2020-09-15; Published: 2020-07-24
Corresponding author: XU Wei, E-mail: xuwei11@zju.edu.cn
Abstract: Intelligent technologies offer new opportunities for engineering psychology research and application in the intelligence era. To this end, this paper systematically proposes a working framework for engineering psychology in the intelligence era. The framework covers the objects of engineering psychology research and application, the core problem space, the disciplinary philosophy, research priorities, scope of application, and methods. The human-machine relationship in the intelligence era takes a new form: human-machine cooperation in the style of human-machine teaming. "Human-centered artificial intelligence" should be the disciplinary philosophy of engineering psychology in the intelligence era. In response to intelligent technologies, engineering psychology researchers have recently begun work on the theoretical framework and basic issues of this new human-machine relationship, on psychological constructs and decision control in human-machine teaming, and on human-machine interaction. To effectively support the development of intelligent systems, the paper summarizes several new or enhanced engineering psychology methods. Finally, specific recommendations are offered for the challenges currently facing engineering psychology.
Figures/Tables (7)
Table 1. The three waves of AI and the stage characteristics of their development (source: 许为, 2019b)

| | First wave (1950s-1970s) | Second wave (1980s-1990s) | Third wave (2006-present) |
|---|---|---|---|
| Main technologies and methods | Early "symbolism" and "connectionism" schools, production systems, knowledge reasoning, expert systems | Statistical models in speech recognition and machine translation research, initial applications of neural networks, expert systems | Breakthroughs of deep learning in speech recognition, data mining, natural language processing, pattern recognition, etc.; big data; computing power |
| User needs | Not met | Not met | Began to deliver useful AI solutions that solve real problems |
| Focus of work | Technology exploration | Technology improvement | Technology improvement, application scenarios, ethical design, front-end applications, human-computer interaction technologies, etc. |
| Stage characteristics | Academia-led | Academia-led | Technology improvement + applications + human-centered |
Table 2. Comparison of engineering psychology characteristics between automation and autonomy (adapted from: 许为, 2020)

| Engineering psychology characteristic | Automation | Semi-autonomy (for specific scenarios and tasks) | Full autonomy |
|---|---|---|---|
| Examples | General office software, automated production lines, automated flight decks | Smart speakers, intelligent decision systems, autonomous vehicles (L2 and above) | Skynet robots in the science-fiction film The Terminator |
| Ability to sense the environment | Rather limited | Advanced multimodal sensing | More advanced multimodal sensing |
| Cognitive abilities (perceptual integration, pattern recognition, learning, reasoning, decision making, etc.) | None | Partial | Yes (including autonomously setting goals, adjusting strategies, allocating resources, etc.) |
| Ability to execute operations | Operations activated manually and executed according to predefined, fixed rules | Operations activated manually and executed independently | Operations activated autonomously and executed independently, etc. |
| Adaptability to unpredictable environments | None | Partial | Yes |
| Outcome of system operations | Deterministic | Uncertain | Uncertain |
| Need for human operation during system operation | Needed (especially for operating scenarios unforeseen in design, and abnormal or emergency states) | Needed (operating scenarios unforeseen in design, abnormal or emergency states) | Generally not needed (humans should remain the final decision makers of the system) |
Figure 1. The evolution of the human-machine relationship across eras
Table 3. Comparison of engineering psychology characteristics between human-machine interaction and human-machine teaming

| Engineering psychology characteristic | Human-machine interaction | Human-machine teaming |
|---|---|---|
| Initiative | Only the human actively initiates tasks and actions; the machine passively responds | Both human and machine can actively initiate tasks and actions |
| Directionality | Only one-way trust, situation awareness, and decision making from the human toward the machine | Two-way trust, situation awareness, and intention between human and machine; shared decision control (the human should hold final control) |
| Complementarity | No complementarity of intelligence between human and machine | Complementarity between machine intelligence (pattern recognition, reasoning, etc.) and human biological intelligence (human information processing, etc.), optimizing intelligent system design |
| Predictability | Only the human operator has this characteristic | Both human and machine use behavioral, situation awareness, and other models to predict each other's behavior and the state of the environment and the system |
| Adaptability | Only the human operator has this characteristic | Human and machine mutually adapt to each other and to the operating scenario |
| Goal setting | Only the human operator has this characteristic | Both human and machine can set or adjust goals |
| Substitutability | Machines mainly replace human physical tasks through automation and related technologies | Machines can replace human cognitive as well as physical tasks (human and machine can actively or passively take over or delegate tasks in either direction) |
| Cooperativeness | Limited human-machine cooperation | Broader human-machine cooperation |
Figure 2. The conceptual space of core engineering psychology issues for intelligent systems (adapted from: Xu, 2021; 许为, 2020)
Figure 3. The conceptual model of "human-centered AI" (source: 许为, 2019b)
Table 4. Comparison of new or enhanced engineering psychology methods with traditional methods
References (95)
[1] | 范俊君, 田丰, 杜一, 刘正捷, 戴国忠. (2018). 智能时代人机交互的一些思考. 中国科学: 信息科学, 48(4), 361-375. |
[2] | 葛列众, 李宏汀, 王笃明. (2012) 工程心理学. 北京: 中国人民大学出版社. |
[3] | 葛列众, 李宏汀, 王笃明. (2017). 工程心理学. 上海: 华东师范大学出版社. |
[4] | 葛列众, 许为. (2020). 用户体验: 理论和实践. 北京: 中国人民大学出版社. |
[5] | 刘烨, 汪亚珉, 卞玉龙, 任磊, 禤宇明. (2018). 面向智能时代的人机合作心理模型, 中国科学: 信息科学, 48(4), 376-389. |
[6] | 李彦宏. (2017). 智能革命: 迎接人工智能时代的社会、经济与文化变革. 北京: 中信出版集团. |
[7] | 百度. (2019). 2019 AI -人机交互趋势研究, 百度人工智能交互设计院(AIID). |
[8] | 石玉生, 黄伟芬, 田志强. (2017). 团队情景意识的概念、模型及测量方法. 航天医学与医学工程. 6, 463-468. |
[9] | 王巍, 黄晓丹, 赵继军, 申艳光. (2014). 隐式人机交互. 信息与控制, 43(1), 101-109. |
[10] | 孙向红, 吴昌旭, 张亮, 瞿炜娜. (2011). 工程心理学作用、地位和进展. 中国科学院院刊, 26(6), 650-660. |
[11] | 许为. (2003a). 自动化飞机驾驶舱中人-自动化系统交互作用的心理学研究. 心理科学, 26(3), 523-524. |
[12] | 许为. (2003b). 以用户为中心设计: 人机工效学的机遇和挑战. 人类工效学, 9(4), 8-11. |
[13] | 许为. (2005). 人-计算机交互作用研究和应用新思路的探讨. 人类工效学, 11(4), 37-40. |
[14] | 许为. (2017). 再论以用户为中心的设计: 新挑战和新机遇. 人类工效学, 23(1), 82-86. |
[15] | 许为. (2019a). 三论以用户为中心的设计: 智能时代的用户体验和创新设计. 应用心理学, 25(1), 3-17. |
[16] | 许为. (2019b). 四论以用户为中心的设计: 以人为中心的人工智能. 应用心理学, 25(4), 291-305. |
[17] | 许为. (2020). 五论以用户为中心的设计: 从自动化到智能时代的自主化以及自动驾驶车. 应用心理学, 26(2), 108-129. |
[18] | 许为, 葛列众. (2018). 人因学发展的新取向. 心理科学进展, 26(9), 1521-1534. |
[19] | 岳玮宁, 董士海, 王悦, 汪国平, 王衡, 陈文广. (2002). 普适计算的人机交互框架研究. 计算机学报, 27(12), 1657-1664. |
[20] | 朱祖祥. (2003). 工程心理学教程. 北京: 人民教育出版社. |
[21] | 张小龙, 吕菲, 程时伟. (2018). 智能时代的人机交互范式. 中国科学: 信息科学, 48(4), 406-418. |
[22] | Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., … Horvitz, E. (2019). Guidelines for human-AI interaction. CHI 2019, May 4-9, 2019, Glasgow, Scotland, UK. |
[23] | Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775-779. doi: 10.1016/0005-1098(83)90046-8 |
[24] | Baker, A. L., Phillips, E. K., Ullman, D., & Keebler, J. R. (2018). Toward an understanding of trust repair in human-robot interaction: Current research and future directions. ACM Transactions on Interactive Intelligent Systems, 8(4), Article 30, 30 pages. https://doi.org/10.1145/3181671 |
[25] | Bathaee, Y. (2018). The artificial intelligence black box and the failure of intent and causation. Harvard Journal of Law & Technology, 31(2), 890-938. |
[26] | Brandt, S. L., Lachter, J., Russell, R., & Shively, R. J. (2018). A human-autonomy teaming approach for a flight-following task. In C. Baldwin (ed.), Advances in Neuroergonomics and Cognitive Engineering, Advances in Intelligent Systems and Computing, Springer International Publishing AG. doi: 10.1007/978-3-319-60642-22. |
[27] | Brill, J. C., Cummings, M. L., Evans, A. W. III., Hancock, P. A., Lyons, J. B., & Oden, K. (2018). Navigating the advent of human-machine teaming. Proceedings of the Human Factors and Ergonomics Society 2018 Annual Meeting (pp.455-459). |
[28] | Burns, C. M., & Hajdukiewicz, J.(2004). Ecological Interface Design. CRC Press. |
[29] | Calhoun, G. L., Ruff, H. A., Behymer, K. J., & Frost, E. M. (2018). Human-autonomy teaming interface design considerations for multi-unmanned vehicle control. Theoretical Issues in Ergonomics Science, 19(3), 321-352. doi: 10.1080/1463922X.2017.1315751 |
[30] | CARAVAN. (2018). CARAVAN public opinion poll: Driverless cars. Report from Advocates for Highway and Auto Safety: https://saferoads.org/wp-content/uploads/2018/01/AV-Poll-Report-January-2018-FINAL.pdf |
[31] | Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale: Lawrence Erlbaum Associates. |
[32] | Chen, J. Y. C., & Barnes, M. (2014). Human-agent teaming for multirobot control: A review of human factors issues. IEEE Transactions on Human-Machine Systems, 44(1), 13-29. |
[33] | Chen, J. Y. C., Lakhmani, S. G., Stowers, K., Selkowitz, A. R., Wright, J. L., & Barnes, M. (2017). Situation awareness-based agent transparency and human-autonomy teaming effectiveness. Theoretical Issues in Ergonomics Science, 19(3), 259-282. https://doi.org/10.1080/1463922X.2017.1315750 |
[34] | de Visser, E. J., Pak, R., & Shaw, T. H. (2018). From automation to autonomy: The importance of trust repair in human-machine interaction. Ergonomics, 61(10), 1409-1427. doi: 10.1080/00140139.2018.1457725 |
[35] | Donahoe, E. (2018). Human centered AI: Building trust, democracy and human rights by design. An overview of Stanford’s global digital policy incubator and the XPRIZE foundation’s June 11th Event. Stanford Global Digital Policy Incubator (GDPi). |
[36] | Endsley, M. R., & Jones, D. G. (2012). Designing for situation awareness: An approach to user-centered design (2nd edition). London: CRC Press. |
[37] | Endsley, M. R. (2015). Autonomous horizons: System autonomy in the air force - A path to the future (Autonomous Horizons No. AF/ST TR 15-01). Washington D.C. Approved for Public Release. |
[38] | Endsley, M. R. (2017). From here to autonomy: Lessons learned from human-automation research. Human Factors, 59(1), 5-27. doi: 10.1177/0018720816681350 |
[39] | Endsley, M. R. (2018). Situation awareness in future autonomous vehicles: Beware of the unexpected. Proceedings of the 20th Congress of the International Ergonomics Association (IEA 2018), IEA 2018, published by Springer. |
[40] | Foyle, D. C., & Hooey, B. L.(2007). Human Performance Modeling in Aviation. London: CRC Press. |
[41] | Fridman, L. (2018). Human-centered autonomous vehicle systems: Principles of effective shared autonomy. MIT HCAV Research Program: https://arxiv.org/pdf/1810.01835.pdf. |
[42] | Fu, X. L., Cai, L. H., Liu, Y., Jia, J., Chen, W. F., Yi, Z., … Wu, C. X. (2014). A computational cognition model of perception, memory, and judgment. Science China Information Science, 57, 1-15. |
[43] | Garlan, D., Siewiorek, D. P., Smailagic, A., & Steenkiste, P. (2002). Project aura: Toward distraction-free pervasive computing. IEEE Pervasive Computing, 1(2), 22-31. |
[44] | Grubb, P. L., Miller, L. C., Nelson, W. T., Warm, J. S., Dember, W. N., & Davies, D. R. (1994). Cognitive failure and perceived workload in vigilance performance. In M. Mouloua & R. Parasuraman (Eds.), Human performance in automated systems: Current research and trends,(pp. 115-121). Hillsdale, NJ: Lawrence Erlbaum. |
[45] | Gunning, D. (2017). Explainable Artificial Intelligence (XAI) at DARPA. https://www.darpa.mil/attachments/XAIProgramUpdate.pdf |
[46] | Hancock, P. A. (2013). In search of vigilance: The problem of iatrogenically created psychological phenomena. American Psychologist, 68, 97-109. doi: 10.1037/a0030214 |
[47] | Hancock, P. A. (2019). Some pitfalls in the promises of automated and autonomous vehicles. Ergonomics, 62(4), 479-495. doi: 10.1080/00140139.2018.1498136 |
[48] | HFES (Human Factors and Ergonomics Society). (2018). HFES policy statement on autonomous and semiautonomous vehicles. https://www.hfes.org/public-policy/hfes-public-policy/hfes-policy-statement-on-autonomous-and-semiautonomous-vehicles |
[49] | Ho, N., Johnson, W., Panesar, K., Wakeland, K., Sadler, G., Wilson, N., … Brandt, S. (2017). Application of human-autonomy teaming to an advanced ground station for reduced crew operations. 2017 IEEE/AIAA 36th Digital Avionics Systems Conference (DASC), August, 2017. doi: 10.1109/DASC.2017.8102124 |
[50] | Hoffman, R., Mueller, S. T., & Klein, G. (2017). Explaining explanation, part 2: Empirical foundations. IEEE Intelligent Systems, July/August, 78-86. |
[51] | Hollnagel, E., & Woods, D. (2005). Joint cognitive systems: Foundations of cognitive systems engineering. London: CRC Press. |
[52] | IEEE. (2019). Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems. The Institute of Electrical and Electronics Engineers (IEEE), Incorporated. |
[53] | ISO (International Organization for Standardization). (2019). Ergonomics of human-system interaction - Part 810: Human-system Issues of Robotic, Intelligent and Autonomous Systems (version for review). |
[54] | Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577-586. |
[55] | Kaber, D. B. (2018). A conceptual framework of autonomous and automated agents. Theoretical Issues in Ergonomics Science, 19(4), 406-430. doi: 10.1080/1463922X.2017.1363314 |
[56] | Kaur, H., Williams, A. C., & Lasecki, W. S. (2019). Building shared mental models between humans and AI for effective collaboration. CHI'19, May 2019, Glasgow, Scotland. |
[57] | Kistan, T., Gardi, A., & Sabatini, R. (2018). Machine learning and cognitive ergonomics in air traffic management: Recent developments and considerations for certification. Aerospace, 5, 103. doi: 10.3390/aerospace5040103. |
[58] | Kitchin, J., & Baber, C. (2016). A comparison of shared and distributed situation awareness in teams through the use of agent-based modelling. Theoretical Issues in Ergonomics Science, 17(1), 8-41. doi: 10.1080/1463922X.2015.1106616 |
[59] | Koene, A., Dowthwaite, L., & Seth, S. (2018). IEEE P7003TM standard for algorithmic bias considerations. 2018 ACM/ IEEE International Workshop on Software Fairness, FairWare’18, May 2018, Gothenburg, Sweden. 38-41. |
[60] | Li, F. F., & Etchemendy, J. (2018). A common goal for the brightest minds from Stanford and beyond: Putting humanity at the center of AI. Stanford Human-Centered AI Center Site: https://hai.stanford.edu/news/introducing-stanfords-human-centered-ai-initiative |
[61] | Lombrozo, T. (2012). Explanation and abductive inference. Oxford Handbook of Thinking and Reasoning, 260-276. |
[62] | Madhavan, P., & Wiegmann, D. A. (2007). Similarities and differences between human-human and human-automation trust: An integrative review. Theoretical Issues in Ergonomics Science, 8(4), 277-301. |
[63] | Madni, A. M., & Madni, C. C. (2018). Architectural framework for exploring adaptive human-machine teaming options in simulated dynamic environments. Systems, 6(44), 1-17. doi: 10.3390/systems6040044. |
[64] | McNeese, N. J., Demir, M., Chiou, E., & Cooke, N. J. (2019). Understanding the role of trust in human-autonomy teaming. Proceedings of the 52nd Hawaii International Conference on System Sciences (pp.254-263). |
[65] | Mercado, J. E., Rupp, M. A., Chen, J. Y. C., Barnes, M. J., Barber, D., & Procci, K. (2016). Intelligent agent transparency in human-agent teaming for multi-UxV management. Human Factors, 58(3), 401-415. doi: 10.1177/0018720815621206 |
[66] | Mumaw, R. J., Boonman, D., Griffin, J., & Xu, W. (1999). Training and design approaches for enhancing automation awareness (Boeing Document D6-82577), December, 1999. |
[67] | Muslim, H., & Itoh, M. (2019). A theoretical framework for designing human-centered automotive automation systems. Cognition, Technology & Work, 21, 685-697. doi: 10.1007/s10111-018-0509-8 |
[68] | Navarro, J. (2018). A state of science on highly automated driving. Theoretical Issues in Ergonomics Science, 20(3), 366-396. doi: 10.1080/1463922X.2018.1439544 |
[69] | NTSB. (2017). Collision between a car operating with automated vehicle control systems and a tractor-semitrailer truck near Williston, Florida, May 7, 2016. Accident report, National Transportation Safety Board (NTSB), Washington, DC. |
[70] | Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse and abuse. Human Factors, 39, 230-253. doi: 10.1518/001872097778543886 |
[71] | Parasuraman, R., & Rizzo, M. (2006). Neuroergonomics: The brain at work. Oxford: Oxford University Press. |
[72] | Prada, R., & Paiva, A. (2014). Human-agent interaction: Challenges for bringing humans and agents together. https://www.semanticscholar.org/paper/Human-Agent-Interaction-%3A-Challenges-for-Bringing-Prada-Paiva/ebe7774c91eaa3faa4009a58eb3087e930c7cdd5 |
[73] | Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., … Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477-486. doi: 10.1038/s41586-019-1138-y |
[74] | Ramaraj, P., Sahay, S., Kumar, S. H., Lasecki, W., & Laird, J. E. (2019). Towards using transparency mechanisms to build better mental models. Advances in Cognitive Systems, 7, 1-6. |
[75] | Salmon, P. M. (2019). The horse has bolted! Why human factors and ergonomics has to catch up with autonomous vehicles (and other advanced forms of automation). Ergonomics, 62(4), 502-504. doi: 10.1080/00140139.2018.1563333 |
[76] | Salvucci, D. D. (2006). Modeling driver behavior in a cognitive architecture. Human Factors, 48(2), 362-380. |
[77] | Sarter, N. B., & Woods, D. D. (1995). How in the world did we ever get into that mode? Mode error and awareness in supervisory control. Human Factors, 37(1), 5-19. doi: 10.1518/001872095779049516 |
[78] | Sarter, N. B., Wickens, C. D., Mumaw, R. J., Kimball, S., Marsh, R., & Xu, W. (2003). Modern flight deck automation: Pilots' mental model and monitoring patterns and performance. 12th International Symposium on Aviation Psychology, August 2003, Dayton, OH, United States. |
[79] | Santamaria, T., & Nathan-Roberts, D. (2017). Personality measurement and design in human-robot interaction: A systematic and critical review. Proceedings of the Human Factors and Ergonomics Society 2017 Annual Meeting (pp. 853-857). |
[80] | Scheutz, M., DeLoach, S. A., & Adams, J. A. (2017). A framework for developing and using shared mental models in human-agent teams. Journal of Cognitive Engineering and Decision Making, 11(3), 203-224. doi: 10.1177/1555343416682891 |
[81] | Shively, R. J., Lachter, J., Brandt, S. L., Matessa, M., Battiste, V., & Johnson, W. W. (2018). Why human-autonomy teaming? International Conference on Applied Human Factors and Ergonomics, May 2018. doi: 10.1007/978-3-319-60642-2_1 |
[82] | Stanton, N. A. (2016). Distributed situation awareness. Theoretical Issues in Ergonomics Science, 17(1), 1-7. doi: 10.1080/1463922X.2015.1106615. |
[83] | Stanton, N. A., Salmon, P. M., Walker, G. H., Salas, E., & Hancock, P. A. (2017). State-of-science: Situation awareness in individuals, teams and systems. Ergonomics, 60(4), 449-466. doi: 10.1080/00140139.2017.1278796 |
[84] | Strauch, B. (2017). Ironies of automation: Still unresolved after all these years. IEEE Transactions on Human-Machine Systems, 99, 1-15. doi: 10.1109/THMS.2017.2732506 |
[85] | van den Broek, J., Schraagen, J. M. C., te Brake, G. M., & van Diggelen, J. (2017). Approaching full autonomy in the maritime domain: Paradigm choices and human factors challenges. In Proceedings of the MTEC, Singapore, 26-28 April 2017. |
[86] | van den Bosch, K., & Bronkhorst, A. W. (2018). Human-AI cooperation to benefit military decision making. Technical Evaluation Report. NATO STO. https://www.researchgate.net/publication/325718292_Human-AI_Cooperation_to_Benefit_Military_Decision_Making |
[87] | Vicente, K. J. (1999). Cognitive Work Analysis: Toward Safe, Productive, and Healthy Computer-Based Work. Hillsdale, NJ: Erlbaum. |
[88] | Vu, K.-P. L., Lachter, J., Battiste, V., & Strybel, T. (2018). Single pilot operations in domestic commercial aviation. Human Factors, 60(6), 755-762. https://doi.org/10.1177/0018720818791372 |
[89] | Woods, D. D., Leveson, N., & Hollnagel, E. (2012). Resilience engineering: Concepts and precepts. Aldershot, UK: Ashgate Publishing. |
[90] | Wu, C. (2018). The five key questions of human performance modeling. International Journal of Industrial Ergonomics, 63, 3-6. https://doi.org/10.1016/j.ergon.2016.05.007 |
[91] | Xu, W. (2007). Identifying problems and generating recommendations for enhancing complex systems: Applying the abstraction hierarchy framework as an analytical tool. Human Factors, 49(6), 975-994. |
[92] | Xu, W., Furie, D., Mahabhaleshwar, M., Suresh, B., & Chouhan, H. (2019). Applications of an interaction, process, integration, and intelligence (IPII) design approach for ergonomics solutions. Ergonomics, 62(7), 954-980. https://doi.org/10.1080/00140139.2019.1588996 |
[93] | Xu, W. (2019). Toward human-centered AI: A perspective from human-computer interaction. ACM Interactions, 26(4), 42-46. |
[94] | Xu, W. (2021). From automation to autonomy and autonomous vehicles: Challenges and opportunities for human-computer interaction. ACM Interactions (to appear in No. 1). |
[95] | Zhao, G. Z., & Wu, C. X. (2013). Effectiveness and acceptance of the intelligent speeding prediction system (ISPS). Accident Analysis and Prevention, 52, 19-28. doi: 10.1016/j.aap.2012.12.013 |