Face-sensitive P1 and N170 components are related to the perception of two-dimensional and three-dimensional objects

Tanaka, Hideaki

doi: 10.1097/WNR.0000000000001003
COGNITIVE NEUROSCIENCE AND NEUROPSYCHOLOGY

Studies investigating event-related potentials have reported face-sensitive P1 and N170 components, as well as object-related N170 components. The face-sensitive N170 is also sensitive to face inversion, a phenomenon known as the face-inversion effect. This study directly compared the face-sensitive N170 elicited during face perception (upright and inverted faces) with the object-related N170 elicited during object perception (two-dimensional and three-dimensional objects). More specifically, the purpose was to clarify whether the face-sensitive P1 and N170 components are related to the perception of two-dimensional and three-dimensional objects. Electroencephalography was performed while participants viewed one of four types of stimuli: upright faces, inverted faces, two-dimensional objects, or three-dimensional objects. The results revealed that P1 latency was significantly longer for three-dimensional than for two-dimensional objects, N170 latency was significantly longer for three-dimensional than for two-dimensional objects, and N170 latency was significantly longer for inverted than for upright faces. These findings suggest that the face-sensitive P1 and N170 components are related to the perception of two-dimensional and three-dimensional objects. Moreover, just as the face-inversion effect on the face-sensitive N170 is thought to reflect mental rotation of the face, the object-related N170 for three-dimensional objects appears to be affected by mental rotation of two-dimensional representations. This raises the novel possibility that the face-sensitive P1 and N170 components can be used as an index of the perception of two-dimensional and three-dimensional objects.

Department of Psychology, Faculty of Psychology, Otemon Gakuin University, Ibaraki, Osaka Prefecture, Japan

Correspondence to Hideaki Tanaka, PhD, Department of Psychology, Faculty of Psychology, Otemon Gakuin University, 2-1-15 Nishiai, Ibaraki 567-8502, Osaka Prefecture, Japan. Tel: +81 72 641 9694; fax: +81 72 643 9432; e-mail: tanahide@otemon.ac.jp

This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CCBY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal. http://creativecommons.org/licenses/by-nc-nd/4.0/

Received February 15, 2018

Accepted February 16, 2018

Introduction

Studies investigating event-related potentials (ERPs) have reported face-sensitive P1 and N170 components 1,2. According to Bentin et al. 1, the N170 reflects high-level face perception. The N170 is a sharp negativity with a posterior temporal scalp distribution that peaks ∼170 ms after the presentation of a human face, and its amplitude is larger for human faces than for nonface objects and scrambled faces 3–10. Functional MRI experiments have estimated that the neural generators of the N170 lie in the fusiform area of the brain 11,12. The P1, an early positive ERP component that emerges over medial occipital electrodes, appears ∼100 ms after face presentation and has been reported to reflect the processing of low-level visual features, such as color and contrast 2. Moreover, the N170 is sensitive to face inversion, a phenomenon known as the face-inversion effect (FIE): the N170 latency for inverted faces is delayed relative to that for upright faces, and the N170 amplitude for inverted faces is larger than that for upright faces 1,3,13–15. The FIE disrupts configural face processing by altering the familiar configuration of facial features and their spatial relationships 15.

The face-sensitive N170 component has also been reported to reflect the perception of two-dimensional (2D) and three-dimensional (3D) objects 16–19. According to Wang and Kameda 16, the object-related N170 is significantly sensitive to view association, which clarifies the difference between the neuronal activity underlying 3D object recognition and that underlying 2D image identification. That study supported the view that objects may be recognized by mental transformation (i.e. rotation) of the input view, the stored views, or both 16,20–22. Furthermore, face-sensitive N170 amplitude has been reported to vary with the rotation angle of the face 14: Magnuski and Gola 14 found the largest N170 amplitude for faces rotated by 90°, an intermediate amplitude for inverted faces, and the smallest amplitude for upright faces. Therefore, similar to the object-related N170, it may be presumed that the FIE on the face-sensitive N170 reflects the mental transformation of upright faces, inverted faces, or both, by mental rotation.

However, the previous study 16 did not directly compare the N170 component elicited by faces (upright and inverted) with that elicited by 3D objects, or the components elicited by 2D and 3D objects. Therefore, the relationship between the face-sensitive N170 and the object-related N170, as well as that between the FIE and the perception of 2D and 3D objects, remains unclear. Accordingly, this study directly compared the face-sensitive N170 during face perception (upright and inverted faces) with the object-related N170 during object perception (2D and 3D objects). More specifically, the purpose of this study was to clarify whether the face-sensitive P1 and N170 components are related to the perception of 2D and 3D objects. If the face-sensitive N170 is related to the perception of 2D and 3D objects, the results should demonstrate that the object-related N170 for 3D objects is influenced by the mental rotation of 2D representations, just as the FIE on the face-sensitive N170 is influenced by the mental rotation of faces.

Participants and methods

Participants

Twenty-one healthy, right-handed, East Asian individuals [eight women; aged 18–24 years (mean age 21.0 years)] participated in this study. All participants had normal or corrected-to-normal vision, were naive to the purpose of the experiment, and were students of Otemon Gakuin University. All participants provided written informed consent before the study, in accordance with the Declaration of Helsinki, and were compensated for their participation. This study was approved by the Ethics Committee of Otemon Gakuin University.

Stimuli

The face stimuli were photographs of five young East Asian women and five young East Asian men, downloaded from various websites; all were unfamiliar to the study participants. A total of 20 face stimuli (10 faces in two orientations: upright and inverted) were prepared using Photoshop 12 (Adobe Systems Inc., San Jose, California, USA). The object stimuli were 10 geometric shapes (e.g. triangles and squares), each rendered in a 2D and a 3D version (20 stimuli in total), created using the Paint and Paint 3D applications of Windows 10 (Microsoft Corporation, Redmond, Washington, USA).

All stimuli (upright faces, inverted faces, 2D objects, and 3D objects) were airbrushed using Photoshop 12 to eliminate any outstanding features or blemishes and were presented on a white background. All stimuli were shown in a front-on view and in grayscale, and their mean luminance was equated across stimuli (13.9 cd/m²) using Photoshop 12 (see Fig. 1 for examples). All face and object stimuli subtended a visual angle of ∼6.9×7.2° and were presented at the center of a 22-inch cathode ray tube monitor (Diamondtron M2, RDF223G; Mitsubishi, Tokyo, Japan) placed 100 cm in front of the participants. The screen resolution was 1280×1024 pixels with a refresh rate of 100 Hz.
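
For readers reproducing the stimulus geometry, size, viewing distance, and visual angle are related by θ = 2·arctan(s/2d). The following minimal Python sketch (the function names are ours, not part of the original materials) recovers the approximate on-screen size implied by the reported ∼6.9×7.2° at 100 cm:

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle (degrees) subtended by a stimulus at a given viewing distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def size_cm_for_angle(angle_deg, distance_cm):
    """Inverse relation: on-screen size needed to subtend a given visual angle."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

# At the 100 cm viewing distance used here, ~6.9 x 7.2 deg corresponds to a
# stimulus of roughly 12.1 x 12.6 cm on the screen.
print(size_cm_for_angle(6.9, 100))  # ~12.06
print(size_cm_for_angle(7.2, 100))  # ~12.58
```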

Fig. 1. Examples of the face and object stimuli.

Procedure

The participants were seated comfortably 1.0 m in front of a 22-inch cathode ray tube monitor, on which stimuli were presented using a Multi Trigger System (Medical Try System, Tokyo, Japan). Each trial was conducted as follows: (i) a fixation mark (+) presented for 500 ms; (ii) a stimulus presented for 500 ms; and (iii) a judgment screen presented for 1000 ms. The intertrial interval varied between 1000 and 1500 ms. On the judgment screen, the four types of stimuli (all faces stimuli with two types of upright or inverted faces and all object stimuli with two types of 2D and 3D objects) were randomly assigned a number (1, 2, 3, or 4) for each participant. The participant was instructed to identify the type of all stimuli as quickly and accurately as possible. They responded by pressing one of the four buttons that corresponded to 1, 2, 3, or 4 with their right index finger to indicate whether all stimuli had upright or inverted faces, and were 2D or 3D objects. The reaction time was measured using a digital timer accurate to 1 ms, beginning with the onset of the stimulus presentation and stopping once the participants responded to the stimuli. The participants performed 10 practice trials, followed by 300 experimental trials separated into three blocks. The four types of all stimuli were presented in a random order and with equal probability.
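
As an illustration only, the trial structure described above can be sketched in Python as follows; the condition labels, seed, and constants are hypothetical, and the actual experiment used a Multi Trigger System rather than Python:

```python
import random

STIM_TYPES = ["upright_face", "inverted_face", "object_2d", "object_3d"]
N_TRIALS = 300        # experimental trials, split into three blocks
FIXATION_MS = 500     # fixation cross duration
STIMULUS_MS = 500     # stimulus duration
JUDGMENT_MS = 1000    # judgment screen duration

def build_trial_list(seed=None):
    """Return 300 trials with the four stimulus types in random order
    and with equal probability (75 trials per type)."""
    rng = random.Random(seed)
    trials = STIM_TYPES * (N_TRIALS // len(STIM_TYPES))
    rng.shuffle(trials)
    return trials

trials = build_trial_list(seed=1)
for trial in trials:
    # Each trial: fixation (500 ms), stimulus (500 ms), judgment screen
    # (1000 ms), then a jittered intertrial interval of 1000-1500 ms.
    # Reaction time runs from stimulus onset to the button press.
    iti_ms = random.randint(1000, 1500)
```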

Recording and analysis

Electroencephalography (EEG) was performed using a 128-channel Sensor Net (Electrical Geodesics Inc., Eugene, Oregon, USA), and recordings were analyzed using the standard EGI Net Station 5.2.01 package. Electrooculography (EOG) electrodes were placed above and below both eyes and at the outer canthi of both eyes to detect movement artifacts. EEG and EOG data were recorded with Ag/AgCl electrodes positioned according to the 10–5 system 23,24. Each electrode was referenced to Cz and subsequently re-referenced offline to the common average. EEG and EOG were recorded with a band-pass filter of 0.01–30 Hz and with electrode impedances below 5 kΩ, and signals were digitized at a sampling rate of 500 Hz. Periods containing artifacts exceeding 140 μV in the vertical and horizontal EOG channels were excluded from the averages.
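
A minimal preprocessing sketch using MNE-Python, assuming the EGI recording has been exported to a file MNE can read (the filename below is hypothetical):

```python
import mne

# Load a 128-channel EGI recording (hypothetical filename); the data were
# originally referenced to Cz.
raw = mne.io.read_raw_egi("sub01.raw", preload=True)

# Band-pass filter at 0.01-30 Hz, matching the acquisition settings above.
raw.filter(l_freq=0.01, h_freq=30.0)

# Re-reference offline to the common average, as described in the text.
raw.set_eeg_reference("average")
```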

Stimulus-locked ERPs were extracted separately for upright faces, inverted faces, 2D objects, and 3D objects from 200 ms before to 400 ms after stimulus presentation and were baseline corrected using the 200-ms prestimulus window. For the P1 analyses, electrode sites O1 and O2 were selected, and the latency and amplitude of the positive peak of the EEG signal in the window from 60 to 110 ms after stimulus presentation were quantified at these electrodes. For the N170 analyses, electrode sites P7, PO7, PO8, and P8 were selected, and the latency and amplitude of the negative peak in the window from 110 to 180 ms after stimulus presentation were quantified at these electrodes. The mean reaction time and the ERP latencies and amplitudes (peak to peak) were analyzed for each participant and each stimulus type.
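
Continuing the sketch above, the epoching and peak quantification might look as follows in MNE-Python. The event codes are hypothetical, the EOG rejection uses MNE's peak-to-peak criterion as an approximation of the 140 μV rule described above, and the 10–5 channel names assume the sensor-net channels have been renamed accordingly:

```python
import mne

# Hypothetical event codes for the four conditions.
event_id = {"upright_face": 1, "inverted_face": 2, "object_2d": 3, "object_3d": 4}
events = mne.find_events(raw)

# Epoch from -200 to +400 ms around stimulus onset, baseline correct on the
# 200-ms prestimulus window, and drop epochs whose EOG peak-to-peak amplitude
# exceeds 140 uV.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.4,
                    baseline=(-0.2, 0.0), reject=dict(eog=140e-6), preload=True)

# P1: positive peak, 60-110 ms, at O1/O2. N170: negative peak, 110-180 ms,
# at P7/PO7/PO8/P8. get_peak returns the peaking channel and its latency (s).
evoked = epochs["object_3d"].average()
p1_ch, p1_lat = evoked.copy().pick(["O1", "O2"]).get_peak(
    tmin=0.06, tmax=0.11, mode="pos")
n170_ch, n170_lat = evoked.copy().pick(["P7", "PO7", "PO8", "P8"]).get_peak(
    tmin=0.11, tmax=0.18, mode="neg")
```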

Statistical analysis

Reaction times were analyzed using a one-way repeated-measures analysis of variance (ANOVA) over stimulus type (2D objects, 3D objects, upright faces, and inverted faces). P1 latency and amplitude were analyzed using two-way (4×2) repeated-measures ANOVAs over stimulus type and electrode (O1 and O2), and N170 latency and amplitude were analyzed using three-way (4×2×2) repeated-measures ANOVAs over stimulus type, hemisphere (left and right), and electrode (P7 vs. PO7 and P8 vs. PO8), with post-hoc comparisons performed using Bonferroni correction. Greenhouse–Geisser corrections were applied to P values for repeated-measures comparisons with multiple degrees of freedom.
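
The ANOVA structure can be sketched with the pingouin library, assuming the peak measures have been assembled into a long-format table; the file and column names are hypothetical, and pingouin's Greenhouse–Geisser handling stands in for whatever software was actually used:

```python
import pandas as pd
import pingouin as pg

# Long-format table: one row per participant x stimulus type x electrode,
# with columns ["participant", "stimulus", "electrode", "latency"].
df = pd.read_csv("p1_latency_long.csv")  # hypothetical file

# Two-way (4 stimulus types x 2 electrodes) repeated-measures ANOVA with
# Greenhouse-Geisser correction, mirroring the P1 analysis.
aov = pg.rm_anova(data=df, dv="latency", within=["stimulus", "electrode"],
                  subject="participant", correction=True)
print(aov)

# Bonferroni-corrected post-hoc comparisons across stimulus types.
posthoc = pg.pairwise_tests(data=df, dv="latency", within="stimulus",
                            subject="participant", padjust="bonf")
print(posthoc)
```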

Results

Behavioral results

There was no significant main effect of stimulus type on reaction time (mean±SD): 2D objects, 360.51±82.07 ms; 3D objects, 354.60±84.35 ms; upright faces, 338.28±51.64 ms; and inverted faces, 355.31±67.84 ms [F(3, 60)=0.98; P=0.393; ηp²=0.05].
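
As a consistency check on the reported statistics, partial eta squared can be recovered from an F value and its degrees of freedom via ηp² = F·df₁/(F·df₁ + df₂). A small sketch (ours, not from the paper) reproduces the value above:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared recovered from an F statistic and its degrees of freedom."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Reaction-time ANOVA reported above: F(3, 60) = 0.98 gives eta_p^2 ~ 0.047,
# which rounds to the reported 0.05.
print(round(partial_eta_squared(0.98, 3, 60), 3))  # 0.047
```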

Event-related potential results

The grand-averaged ERP waveforms (P1 component) for each stimulus type are presented in Fig. 2. P1 latency for 2D object stimuli was 87.43±12.16 ms at O1 and 87.43±12.85 ms at O2; for 3D object stimuli, 95.14±13.52 ms at O1 and 94.00±11.43 ms at O2; for upright face stimuli, 93.14±13.59 ms at O1 and 89.52±7.95 ms at O2; and for inverted face stimuli, 90.57±14.72 ms at O1 and 88.10±13.61 ms at O2. There was a significant main effect of stimulus type on P1 latency [F(3, 60)=4.27, P=0.012, ηp²=0.18]; post-hoc comparisons revealed that P1 latency for 3D object stimuli was longer than that for 2D object stimuli (P<0.05). P1 amplitude for 2D object stimuli was 6.36±3.97 μV at O1 and 6.01±3.32 μV at O2; for 3D object stimuli, 6.57±4.14 μV at O1 and 6.47±2.99 μV at O2; for upright face stimuli, 7.12±4.44 μV at O1 and 6.24±3.17 μV at O2; and for inverted face stimuli, 8.09±4.66 μV at O1 and 7.08±3.37 μV at O2. There was a significant main effect of stimulus type on P1 amplitude [F(3, 60)=4.26, P=0.016, ηp²=0.18]; post-hoc comparisons revealed that P1 amplitude for inverted face stimuli was greater than that for 2D object stimuli (P<0.05).

Fig. 2. Grand-averaged ERP waveforms (P1 and N170 components) for each stimulus type.

The grand-averaged ERP waveforms (N170 component) for each stimulus type are also presented in Fig. 2. Table 1 presents the mean N170 peak latencies for all stimulus types at the four electrode sites (P7/P8 and PO7/PO8). There was a significant main effect of stimulus type [F(3, 60)=21.00, P=0.000001, ηp²=0.51]; post-hoc comparisons revealed that N170 latency for 3D object stimuli was longer than that for 2D object stimuli (P<0.01) and that N170 latency for inverted face stimuli was longer than that for upright face stimuli (P<0.01). Table 1 also reports the mean N170 peak amplitudes for all stimulus types at the four electrode sites. There was a significant main effect of stimulus type [F(3, 60)=14.03, P=0.001, ηp²=0.41]; post-hoc comparisons revealed that N170 amplitude was greater for upright face stimuli than for 2D and 3D object stimuli (P<0.01) and greater for inverted face stimuli than for 2D and 3D object stimuli (P<0.01).

Table 1. Mean N170 peak latencies and amplitudes for all stimulus types at four electrode sites (P7/P8 and PO7/PO8).

Discussion

The results of this study clearly indicate that the face-sensitive P1 and N170 components are related to the perception of 2D and 3D objects. Moreover, N170 latency was longer for 3D object stimuli than for 2D object stimuli, and longer for inverted face stimuli than for upright face stimuli. These results suggest that, just as the FIE on the face-sensitive N170 is influenced by the mental rotation of faces, the object-related N170 for 3D objects is influenced by the mental rotation of 2D representations. This raises the novel possibility that the face-sensitive P1 and N170 components can be used as an index of the perception of 2D and 3D objects.

A previous study compared the N170 components of six upright and inverted visual categories (human faces, cars, chairs, shoes, houses, and greebles) and reported that only inverted human faces delayed and enhanced the N170; the N170 components for the other five (nonface) categories did not differ between upright and inverted presentations 19. Because the N170 is more sensitive to human faces than to nonface objects 3–10, it might be assumed that the inversion effect on the N170 arises only for human faces. However, the findings of this study indicate that the N170 differs between nonface 2D and nonface 3D objects. To perceive a 3D object, a 2D representation must be mentally rotated through various angles and directions, a complex transformation 16,20–22. In contrast, because the inversion of a nonface 2D object is simply a rotation in the picture plane, the mental rotation of an upside-down nonface 2D object is simple. Taken together, these results suggest that the workload of mental rotation from 2D to 3D objects is larger than that of mental rotation from upright to inverted nonface 2D objects, which may explain why the object-related N170 differed between 2D and 3D objects. Because the neural generators of the N170 have been estimated to lie in the fusiform area of the brain 11,12, the results of the present study and a previous study 16 suggest that the fusiform area is involved in the perception of 2D and 3D objects.

Conclusion

The findings of this study clearly indicate that the face-sensitive P1 and N170 components are related to the perception of 2D and 3D objects. This is important because it raises the novel possibility that these components can be used as an index of the perception of 2D and 3D objects.

Acknowledgements

This study was supported by Aki Matsumi, a student in the Department of Psychology, Otemon Gakuin University; it reanalyzed data from her undergraduate thesis.

Conflicts of interest

There are no conflicts of interest.

References

1. Bentin S, Allison T, Puce A, Perez E, McCarthy G. Electrophysiological studies of face perception in humans. J Cogn Neurosci 1996; 8:551–565.
2. Rossion B, Caharel S. ERP evidence for the speed of face categorization in the human brain: Disentangling the contribution of low-level visual cues from face perception. Vision Res 2011; 51:1297–1311.
3. Itier RJ, Taylor MJ. N170 or N1? Spatiotemporal differences between object and face processing using ERPs. Cereb Cortex 2004; 14:132–142.
4. Eimer M. Does the face-specific N170 component reflect the activity of a specialized eye processor? Neuroreport 1998; 9:2945–2948.
5. Eimer M, McCarthy RA. Prosopagnosia and structural encoding of faces: evidence from event-related potentials. Neuroreport 1999; 10:255–259.
6. Eimer M. The face-specific N170 component reflects late stages in the structural encoding of faces. Neuroreport 2000; 11:2319–2324.
7. George N, Evans J, Fiori N, Davidoff J, Renault B. Brain events related to normal and moderately scrambled faces. Brain Res Cogn Brain Res 1996; 4:65–76.
8. Jemel B, George N, Chaby L, Fiori N, Renault B. Differential processing of part-to-whole and part-to-part face priming: an ERP study. Neuroreport 1999; 10:1069–1075.
9. Bentin S, Deouell LY. Structural encoding and identification in face processing: ERP evidence for separate mechanisms. Cogn Neuropsychol 2000; 17:35–54.
10. Niina M, Okamura JY, Wang G. Electrophysiological evidence for separation between human face and non-face object processing only in the right hemisphere. Int J Psychophysiol 2015; 98:119–127.
11. Puce A, Allison T, Gore JC, McCarthy G. Face-sensitive regions in human extrastriate cortex studied by functional MRI. J Neurophysiol 1995; 74:1192–1199.
12. Sadeh B, Podlipsky I, Zhdanov A, Yovel G. Event-related potential and functional MRI measures of face-selectivity are highly correlated: a simultaneous ERP-fMRI investigation. Hum Brain Mapp 2010; 31:1490–1501.
13. Rossion B, Delvenne JF, Debatisse D, Goffaux V, Bruyer R, Crommelinck M, et al. Spatio-temporal localization of the face inversion effect: an event-related potentials study. Biol Psychol 1999; 50:173–189.
14. Magnuski M, Gola M. It’s not only in the eyes: nonlinear relationship between face orientation and N170 amplitude irrespective of eye presence. Int J Psychophysiol 2013; 89:358–365.
15. Munk AJ, Hermann A, El Shazly J, Grant P, Hennig J. The idea is good, but…: Failure to replicate associations of oxytocinergic polymorphisms with face-inversion in the N170. PLoS One 2016; 11:e0151991.
16. Wang G, Kameda S. Event-related potential component associated with the recognition of three-dimensional objects. Neuroreport 2005; 16:767–771.
17. Thorpe S, Fize D, Marlot C. Speed of processing in the human visual system. Nature 1996; 381:520–522.
18. Johnson JS, Olshausen BA. Timecourse of neural signatures of object recognition. J Vis 2003; 3:499–512.
19. Rossion B, Gauthier I, Tarr MJ, Despland P, Bruyer R, Linotte S, et al. The N170 occipito-temporal component is delayed and enhanced to inverted faces but not to inverted objects: an electrophysiological account of face-specific processes in the human brain. Neuroreport 2000; 11:69–74.
20. Bülthoff HH, Edelman S. Psychophysical support for a two-dimensional view interpolation theory of object recognition. Proc Natl Acad Sci USA 1992; 89:60–64.
21. Riesenhuber M, Poggio T. Models of object recognition. Nat Neurosci 2000; 3:1199–1204.
22. Tarr MJ. Rotating objects to recognize them: a case study on the role of viewpoint dependency in the recognition of three-dimensional objects. Psychon Bull Rev 1995; 2:55–82.
23. Jurcak V, Tsuzuki D, Dan I. 10/20, 10/10, and 10/5 systems revisited: their validity as relative head-surface-based positioning systems. Neuroimage 2007; 34:1600–1611.
24. Oostenveld R, Praamstra P. The five percent electrode system for high-resolution EEG and ERP measurements. Clin Neurophysiol 2001; 112:713–719.
Keywords:

event-related potential; face-inversion effect; face-sensitive N170; object-related N170; P1; two-dimensional and three-dimensional objects

© 2018 Wolters Kluwer Health | Lippincott Williams & Wilkins