
* Supported by the National Natural Science Foundation of China (31871106)

The integration of facial expression and vocal emotion and its brain mechanism

LI Ping, ZHANG Mingming, LI Shuaixia, ZHANG Huoyin, LUO Wenbo
Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China
Received: 2018-05-17  Online: 2019-07-15  Published: 2019-05-22
Contact: LUO Wenbo, E-mail: luowb@lnnu.edu.cn






Abstract


Abstract: In everyday life, effective emotion recognition often depends on integrating information across modalities (e.g., face and voice). Reviewing the relevant literature, we argue that facial expression and vocal emotion information interact as early as the perceptual stage, with primary sensory cortices encoding the two information streams; at the later decision stage, higher-order regions such as the amygdala and temporal lobe carry out the cognitive evaluation and integration of emotional content; in addition, functional coupling of neural oscillations across multiple frequency bands facilitates cross-modal integration of emotional information. Future research should examine whether this integration is related to emotional conflict, whether incongruent emotional information holds an advantage during integration, and how neural oscillations in different frequency bands support the integration of facial and vocal emotion, so as to better characterize the neurodynamic basis of audiovisual emotion integration.
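The "functional coupling of neural oscillations" mentioned above is commonly quantified in EEG/MEG studies with phase-based measures such as the phase-locking value (PLV). As an illustrative sketch only (not the analysis used in any study reviewed here; the signals, filter settings, and frequency band are all assumptions), the following Python code band-pass filters two simulated channels in the alpha band and computes their PLV:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_phase(signal, fs, low, high):
    """Band-pass filter a 1-D signal and return its instantaneous phase."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, signal)))

def plv(phase_a, phase_b):
    """Phase-locking value between two phase series (0 = no coupling, 1 = perfect)."""
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

# Simulated example: two noisy channels sharing a common 10 Hz (alpha) oscillation
fs = 500                                # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)             # 2 s of data
rng = np.random.default_rng(0)
common = np.sin(2 * np.pi * 10 * t)     # shared alpha-band component
ch1 = common + 0.5 * rng.standard_normal(t.size)
ch2 = common + 0.5 * rng.standard_normal(t.size)

alpha_plv = plv(band_phase(ch1, fs, 8, 12), band_phase(ch2, fs, 8, 12))
print(f"alpha-band PLV: {alpha_plv:.2f}")
```

Because both channels contain the same 10 Hz component, the alpha-band PLV comes out close to 1; repeating the computation in other bands (theta, beta, gamma) is how band-specific coupling is typically mapped.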










