Orthographic transparency modulates the functional asymmetry in the fusiform cortex: an artificial language training study.

Abstract

The laterality difference in the occipitotemporal region between Chinese (bilaterality) and alphabetic languages (left laterality) has been attributed to their difference in visual appearance. However, these languages also differ in orthographic transparency. To disentangle the effect of orthographic transparency from visual appearance, we trained subjects to read the same artificial script either as an alphabetic (i.e., transparent orthography) or a logographic (i.e., nontransparent orthography) language. Consistent with our previous results, both types of phonological training enhanced activations in the left fusiform gyrus. More interestingly, the laterality in the fusiform gyrus (especially the posterior region) was modulated by the orthographic transparency of the artificial script (more left-lateralized activation after alphabetic training than after logographic training). These results provide an alternative account (i.e., orthographic transparency) for the laterality difference between Chinese and alphabetic languages, and may have important implications for the role of the fusiform in reading.


Introduction
A longstanding question in the neurobiology of language is whether there are specific neural networks for different language systems (e.g., Chen, Xue, Mei, Chen, & Dong, 2009; Paulesu, Démonet, et al., 2001; Paulesu, McCrory, et al., 2000; Siok, Perfetti, Jin, & Tan, 2004; Tan, Laird, Li, & Fox, 2005). One way to address this question is to use the contrast of logographic (e.g., Chinese) and alphabetic (e.g., English) languages, because of their dramatic differences in visual appearance (Bolger, Perfetti, & Schneider, 2005; Perfetti et al., 2007). Chinese characters possess a number of intricate strokes that are packed into a square shape, whereas alphabetic languages have linear combinations of letters. Based on this difference, researchers have hypothesized that, compared with alphabetic languages, reading Chinese characters might involve more visuospatial analysis, and consequently recruit more regions in the right hemisphere (Liu, Dunlap, Fiez, & Perfetti, 2007; Tan et al., 2000).
Currently, the prevailing explanation for this laterality difference in the occipitotemporal region is that more visuospatial analysis is needed for processing Chinese characters than for alphabetic writing (Tan et al., 2000). However, in addition to visual appearance, Chinese and alphabetic languages also differ significantly in orthographic transparency (Chen et al., 2009; Perfetti et al., 2007). Alphabetic languages typically use letter-phoneme mapping, and reading in alphabetic languages can be achieved through grapheme-to-phoneme correspondence (GPC) rules, although there are variations between shallow (e.g., Italian) and deep orthographies (e.g., English). In contrast, Chinese is a nontransparent orthography: there is no letter-phoneme mapping in Chinese. Although most Chinese characters have a phonetic radical that can provide clues to the pronunciation, only a small proportion sound the same as their phonetic radicals. Thus, reading Chinese characters mainly relies on the association between whole characters and sounds (Tan et al., 2005).
Since Chinese (or other logographic languages) and alphabetic languages differ in both visual appearance and orthographic transparency, studies relying on the contrast between them have difficulty testing whether their differences in orthographic transparency can account for the different laterality patterns in the occipitotemporal region. One way to tease apart the effect of orthographic transparency from visual appearance is to use the artificial language training paradigm, which allows researchers to manipulate the unit size of the orthography-to-phonology mapping (i.e., orthographic transparency) while controlling for visual appearance (i.e., using the same set of words). In a recent ERP study, Yoncheva, Blau, Maurer, and McCandliss (2010) trained two groups of subjects to read an artificial script (letter-like figures) either as an alphabetic or a logographic (i.e., non-alphabetic) language for about 20 min. ERP recordings during a reading verification task (visual-auditory matching) after training showed a left-lateralized N170 response in the alphabetic condition, but a right-lateralized response in the logographic condition. This result suggests an important role of orthographic transparency in shaping laterality in the occipitotemporal region in reading tasks.
Due to the limited spatial resolution of ERP, however, it is unclear from Yoncheva et al. (2010) whether subregions in the occipitotemporal cortex are differentially modulated by the script's orthographic transparency. Several studies have suggested that the anterior and posterior parts of the occipitotemporal region are engaged in lexico-semantic and visuo-perceptual processing, respectively (Simons, Koutstaal, Prince, Wagner, & Schacter, 2003; Xue & Poldrack, 2007). Our previous studies have revealed that, although bilaterality in the middle fusiform was found for novel logographic characters, i.e., Korean Hangul (Xue, Chen, Jin, & Dong, 2006a), processing familiar logographic characters such as Chinese showed bilaterality in the posterior regions but left laterality in the middle and anterior fusiform cortex (Xue et al., 2005, 2006a). Consistent with this idea, one recent study suggested that the functional asymmetry in the anterior and posterior fusiform cortex is determined by semantic and visuospatial factors, respectively (Seghier & Price, 2011).
Disentangling the effects of orthographic transparency and visual appearance on laterality patterns would also provide important clues to the functional role of the left occipitotemporal region in reading. There are currently two prevailing perspectives. The visual word form area (VWFA) perspective (Cohen & Dehaene, 2004; Cohen et al., 2002) proposes that the left mid-fusiform is specialized in processing abstract visual word forms. It predicts no effect of orthographic transparency on fusiform laterality if the visual forms are the same. In contrast, the interactive perspective (Price & Devlin, 2011) posits that the VWFA integrates low-level visuospatial features with higher-level associations (e.g., phonology and semantics), and that its activity emerges from the interaction between bottom-up sensory inputs and top-down predictions. In support of the interactive perspective, our previous artificial language training results clearly isolated the role of phonological association in modulating fusiform activity (Xue, Chen, Jin, & Dong, 2006b). The present study aimed to extend this line of research by examining how different phonological access routes, as determined by orthographic transparency, differentially modulate fusiform activity.
To examine the effect of orthographic transparency on occipitotemporal laterality in reading, the present study used (1) fMRI, with its higher spatial resolution, to examine the effect of orthographic transparency in different subregions of the occipitotemporal cortex, (2) a perceptual task (i.e., underline detection, see Fig. 1), administered both before and after training, to control for potential laterality differences before training, and (3) a longer training period (8 days, 1 h per day) than Yoncheva et al.'s study (20 min) to reach a higher level of reading automaticity. The artificial language used in our study was created based on the visual forms and sounds of 60 Korean Hangul characters (see Fig. 1A for examples), because the design principle of Korean Hangul, i.e., logographic visual appearance but alphabetic orthography, is ideal for our purposes. We trained two matched groups of subjects to read the artificial language either as an alphabetic (letter-to-phoneme mapping) or a logographic (word-to-sound mapping) language. Training-related changes in neural activity were compared between the two training conditions to examine whether occipitotemporal laterality was modulated by the script's orthographic transparency, and in which subregions such modulation occurred.

Methods

Subjects
Forty-four Chinese college students (23 males; mean age = 22.04 ± 1.82 years, range 19-25 years) participated in this study. They were divided into two groups to receive either alphabetic or logographic training. The two groups were matched on age, gender (12 males and 10 females in the logographic group; 11 males and 11 females in the alphabetic group), nonverbal intelligence, and performance on Chinese reading tasks (see Table 1). All subjects had normal or corrected-to-normal vision, with no previous history of neurological or psychiatric disease and no previous experience with Korean, and were strongly right-handed as judged by Snyder and Harris's handedness inventory (Snyder & Harris, 1993). Informed written consent was obtained from the subjects before the experiment. This study was approved by the IRBs of the University of California, Irvine, the National Key Laboratory of Cognitive Neuroscience and Learning at Beijing Normal University, and the University of Southern California.

Fig. 1. Two matched groups of subjects received alphabetic and logographic training (A) for 8 days (1 h per day). Before and after training, subjects were scanned while performing a perceptual task (B), in which they were asked to respond to the underlined words.

Materials

Fig. 1 illustrates the materials and experimental design. In total, 60 Chinese words and 60 artificial language words were used in the study. The Chinese words were medium- to high-frequency words (higher than 50 per million according to the Chinese word frequency dictionary, mean = 498.50 per million) (Wang & Chang, 1985), with 2-9 strokes (mean = 5.98) and 2-3 units (mean = 2.70) according to the definition by Chen, Allport, and Marshall (1996). The artificial language words were constructed from 22 Korean Hangul letters, including 12 consonants and 10 vowels. All the phonemes we chose are present in Chinese because of our specific focus on form-sound association rather than on new phonemes. To confirm our judgment, three Chinese college students were asked to rate the ease of pronouncing the phonemes; the average scores were higher than 3 on a 5-point scale (1 = very difficult to pronounce; 5 = very easy to pronounce) for all phonemes. The artificial words were matched with the Chinese words in visual complexity (mean number of units = 2.67; mean number of strokes = 6.15). All stimuli were presented in gray-scale at 151 × 151 pixels.
The sounds of the Chinese and artificial language words were recorded from a native Chinese female speaker and a native Korean female speaker, respectively. All the sounds were denoised and normalized to the same length (600 ms) and loudness using Audacity 1.3 (audacity.sourceforge.net).

Training procedure and behavioral task
All subjects underwent an 8-day training program (about 1 h per day) on the associations between the visual forms and sounds of the 60 artificial language words. Two training conditions (i.e., alphabetic and logographic training) were designed to examine the effect of orthographic transparency on the laterality of the fusiform cortex (see Fig. 1A). As noted before, two matched groups of subjects received the two types of training, respectively, to avoid interference between training conditions. In the logographic group, subjects were asked to memorize the association between each whole word and its pronunciation. The original correspondences between the visual forms and sounds were shuffled to prevent implicit acquisition of the grapheme-phoneme correspondence (GPC) rules. In the alphabetic group, subjects were first taught the pronunciations of the letters and then trained to assemble the phonology of the words from their letters. To facilitate learning of the GPC rules, 30 new words were tested at the end of each training session. For both groups, we used a combination of learning tasks to maximize efficiency and to ensure that subjects could acquire the respective phonology by the end of the training. The learning tasks included letter learning (memorizing the sounds of letters in the alphabetic condition), whole word learning (learning the sounds of whole words), naming, naming with feedback, fast naming (reading ten words randomly selected from the 60 trained words as fast as possible), and a phonological choice task (selecting the correct pronunciation for a word from four potential pronunciations). It should be noted that, except for the type of training, all other intervening variables such as time-on-task were controlled across the two groups.
After 8 days of training, a naming task was adopted to test the effect of training. For both groups, all 60 trained words were tested. For the alphabetic group, 60 new words were tested to evaluate transfer of learning. In each trial, a word was presented for 3 s, followed by a 1 s blank interval. Subjects were asked to read each word aloud as fast and accurately as possible. Subjects' oral responses were recorded through a microphone connected to the computer.

fMRI task
Before and after training, subjects were scanned while performing a perceptual task (Fig. 1B) (Chen et al., 2007; Cohen et al., 2002; Xue et al., 2006b) that consisted of four types of stimuli, namely Chinese words, English words, English pseudowords, and artificial words. Each type of material contained 60 items. A rapid event-related design was used, with the four types of materials pseudo-randomly mixed. Trial sequences were optimized with OPTSEQ (http://www.surfer.nmr.mgh.harvard.edu/optseq/) (Dale, 1999). The English materials were included to address other research questions and were thus excluded from the data analysis in this paper. Stimulus presentation and response collection were programmed using Matlab (Mathworks) with Psychtoolbox extensions (http://www.psychtoolbox.org) on an IBM laptop.
The fMRI task consisted of two runs. Each run lasted for 10 min 10 s. During each run, the stimuli were presented either in the visual, auditory, or audiovisual modality. The stimuli in the auditory and audiovisual modalities were included to address other research questions, and were consequently excluded from the data analysis in the current paper. Each trial lasted for 600 ms, followed by a blank interval that varied randomly from 1.4 to 6.4 s (mean = 1.9 s) to improve design efficiency. Subjects were asked to carefully view and/or listen to the stimuli. To ensure that subjects were awake and attentive, they were instructed to press a key whenever they noticed that the visual word was underlined. This happened six times per run. The task has at least two advantages: (1) it puts relatively low demand on phonological access, and consequently reduces the effect of the top-down process; and (2) it can be administered both before and after training, allowing us to examine training-related neural changes. Subjects correctly responded to 10.8 ± 1.0 of 12 underlined words at the pre-training stage and 11.3 ± 1.1 at the post-training stage, suggesting that they were attentive to the stimuli during the perceptual task.

Table 1 note: Numbers inside the parentheses represent standard deviations. The scores for the rapid color and object naming tasks are the total number of seconds taken to name all the items, and those for all other tests are the number of correct items. The Chinese word efficiency and identification tasks were designed by the authors of this study; the visual-auditory learning was a subtest of the Woodcock Reading Mastery Tests-Revised (WRMT-R); the rapid color and object naming, and memory of digits were subtests of the Comprehensive Test of Phonological Processing (CTOPP).

MRI data acquisition
Data were acquired with a 3.0 T Siemens MRI scanner in the MRI Center of Beijing Normal University. A single-shot T2*-weighted gradient-echo EPI sequence was used for functional imaging with the following parameters: TR/TE/flip angle = 2000 ms/25 ms/90°, FOV = 192 × 192 mm, matrix = 64 × 64, and slice thickness = 3 mm. Forty-one contiguous axial slices parallel to the AC-PC line were obtained to cover the whole cerebrum and part of the cerebellum. Anatomical MRI was acquired using a T1-weighted, three-dimensional, gradient-echo pulse sequence (MPRAGE) with TR/TE/flip angle = 2530 ms/3.09 ms/10°, FOV = 256 × 256 mm, matrix = 256 × 256, and slice thickness = 1 mm. Two hundred and eight sagittal slices were acquired to provide a high-resolution structural image of the whole brain.

Image preprocessing and statistical analysis
Initial analysis was carried out using tools from the FMRIB Software Library (FSL, http://www.fmrib.ox.ac.uk/fsl), version 4.1.2. The first three volumes in each time series were automatically discarded by the scanner to allow for T1 equilibrium effects. The remaining images were then realigned to compensate for small head movements (Jenkinson & Smith, 2001). Translational movement parameters never exceeded 1 voxel in any direction for any subject or session. All data were spatially smoothed using a 5-mm full-width-half-maximum Gaussian kernel. The smoothed data were then filtered in the temporal domain using a nonlinear high-pass filter with a 60-s cutoff. A two-step registration procedure was used whereby the EPI images were first registered to the MPRAGE structural image and then to the standard Montreal Neurological Institute (MNI) space (avg152 T1 template) using affine transformations with FLIRT (Jenkinson & Smith, 2001).
At the first level, the data were modeled with the general linear model within the FILM module of FSL for each subject and each session. Events were modeled at the time of the stimulus presentation. These event onsets and their durations were convolved with a canonical hemodynamic response function (double-gamma) to generate the regressors used in the general linear model. Temporal derivatives and the six motion parameters were included as covariates of no interest to improve statistical sensitivity. Null events were not explicitly modeled, and therefore constituted an implicit baseline. In this analysis, the underlined words were modeled as nuisance variables to avoid the effect of other confounding factors. Three contrast images (Chinese words - baseline, artificial words - baseline, and artificial words - Chinese words) were computed for each session and for each subject.
A second-level (fixed-effects) model created a cross-run contrast to examine the training effect for each subject. The training effect was calculated across the four sessions (two at the pre-training stage and two at the post-training stage) for each condition and each subject using the contrast of post-training minus pre-training. These contrasts were then entered into the third-level analysis to compute group differences using the contrast of artificial words - Chinese words in the alphabetic group versus that in the logographic group. Group activations were computed using a random-effects model (treating subjects as a random effect) with FLAME stage 1 only (Beckmann, Jenkinson, & Smith, 2003; Woolrich, 2008; Woolrich, Behrens, Beckmann, Jenkinson, & Smith, 2004). Unless otherwise indicated, group images were thresholded with a height threshold of z > 2.3 and a cluster probability of P < 0.05, corrected for whole-brain multiple comparisons using Gaussian random field theory.

Region of interest analysis
The fusiform gyrus was defined as the region of interest (ROI) to assess functional laterality. Following Xue and Poldrack (2007), we split the fusiform region into three equal-sized subregions, namely the anterior fusiform region (MNI center: −40, −48, −18), middle fusiform region (MNI center: −40, −60, −18), and posterior fusiform region (MNI center: −40, −72, −18). It should be noted that the center of the middle fusiform region is near the VWFA region (−42, −57, −15) defined by Cohen et al. (2002). Each ROI was defined as a 6-mm-radius sphere around the center. The right homologues of these regions were also defined. In each ROI, we extracted parameter estimates (betas) of each event type from the fitted GLM model and averaged them across all voxels in the cluster for each subject. Percent signal changes were calculated using the following formula: [contrast image/(mean of run)] × ppheight × 100%, where ppheight is the peak height of the hemodynamic response versus the baseline level of activity (Mumford, 2007).
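The spherical ROI definition above can be sketched as follows. The 2-mm isotropic grid and its affine are illustrative assumptions (standard FSL MNI152 dimensions), not the study's actual registration target; only the three left-hemisphere ROI centers come from the text.

```python
import numpy as np

def sphere_mask(shape, affine, center_mm, radius_mm=6.0):
    """Boolean mask of voxels whose world-space coordinates lie within
    radius_mm of center_mm (center and radius in millimeters)."""
    ijk = np.indices(shape).reshape(3, -1)            # all voxel indices
    xyz = affine[:3, :3] @ ijk + affine[:3, 3:4]      # voxel -> mm coordinates
    dist = np.linalg.norm(xyz - np.asarray(center_mm, float)[:, None], axis=0)
    return (dist <= radius_mm).reshape(shape)

# assumed 2-mm isotropic MNI-like grid (91 x 109 x 91, standard FSL dimensions)
affine = np.diag([2.0, 2.0, 2.0, 1.0])
affine[:3, 3] = [-90.0, -126.0, -72.0]

centers = {
    "anterior": (-40, -48, -18),
    "middle": (-40, -60, -18),
    "posterior": (-40, -72, -18),
}
masks = {name: sphere_mask((91, 109, 91), affine, c) for name, c in centers.items()}
# per-subject ROI value: beta_map[masks["posterior"]].mean() for a given beta map
```

The right homologues would be obtained the same way by flipping the sign of the x coordinate of each center.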
To quantify the laterality of the three subregions in the fusiform gyrus, we calculated the laterality index (LI) using the formula LI = L − R, where L and R represent the percent signal changes in the left and right ROIs, respectively (Vigneau et al., 2005). A positive LI indicates left-hemispheric lateralization and a negative LI indicates right-hemispheric lateralization. It should be noted that the percent signal changes used to calculate the LIs were extracted using the contrast between artificial words and Chinese words to control for the test-retest variability of the BOLD response. As a result, the LI is not an absolute measure of laterality; rather, it is sensitive to relative differences in laterality change between the two training methods.
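As a minimal sketch, the LI computation is a simple subtraction; the percent-signal-change values below are made up for illustration.

```python
import numpy as np

def laterality_index(left, right):
    """LI = L - R (Vigneau et al., 2005): positive = left-lateralized."""
    return np.asarray(left, dtype=float) - np.asarray(right, dtype=float)

# hypothetical percent signal changes (artificial words - Chinese words)
# for two subjects in the left and right posterior fusiform ROIs
li = laterality_index([0.25, 0.10], [0.05, 0.18])
# li -> [0.20, -0.08]: subject 1 left-lateralized, subject 2 right-lateralized
```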

Results

Behavioral performance after training
For both groups, subjects correctly named more than 95% of the trained words after training (the alphabetic group: 96.6% ± 2.7; the logographic group: 95.8% ± 6.1), suggesting that the training was effective (Fig. 2A). There was no significant between-group difference for the trained words (t(42) = 0.53, n.s.). Subjects in the alphabetic group also correctly named 83.7% of the untrained words, suggesting that they had learned the GPC rules.

Training enhanced neural activities in the fusiform cortex for both groups
In this analysis, we first examined between-group differences before training. For both Chinese and artificial language words, no region exhibited significant between-group differences in neural activities, suggesting the two groups of subjects were well matched.
We then examined the training effect by comparing the BOLD activations at the post-training stage with those at the pre-training stage. In this analysis, data from the Chinese word condition were used as the baseline to control for test-retest variability of the BOLD response. Results showed that training significantly enhanced neural activities for the artificial language words in the left fusiform cortex for both groups (MNI center: −48, −54, −20, Z = 3.89 for the alphabetic group, and −48, −58, −14, Z = 3.60 for the logographic group, see Fig. 3 and Table 2). It should be noted that the peak coordinates of activations in the left fusiform cortex were close to the VWFA reported in previous studies (Bolger et al., 2005; Cohen & Dehaene, 2004; Cohen et al., 2002).
Finally, we examined between-group differences in terms of the training effect. There were no significant differences across the two groups in the bilateral fusiform cortex with a threshold of Z > 2.3 (whole-brain corrected).

Differential effects of alphabetic and logographic training on fusiform laterality
In this section, we first extracted the percent signal changes from the six pre-defined ROIs (i.e., the bilateral anterior, middle, and posterior fusiform regions) for the Chinese words to evaluate the test-retest variability of the BOLD responses. Results showed no significant differences between the two scans for the Chinese words in any of the six ROIs (all ps > .1), although the neural activities decreased slightly after training (Fig. S1). We then extracted the percent signal changes from the six ROIs (see Fig. S2) to examine the effect of orthographic transparency on laterality. As described in the ''Methods'' section, fusiform laterality was calculated by subtracting the neural activities in the right ROI from those in its left homologue. We first performed a three-way (region: anterior, middle, and posterior; group: alphabetic and logographic; test: pre- and post-training) analysis of variance (ANOVA) to examine between-group differences in fusiform laterality. This analysis revealed a trend toward a group-by-test interaction (F(1, 42) = 2.80, p = .102), suggesting that alphabetic and logographic training resulted in left- and right-lateralized activations, respectively. None of the main effects or other interactions was significant. We then performed a two-way (i.e., group and test) ANOVA for each region to examine between-group differences in the three subregions of the fusiform cortex (Fig. 4). Results showed that fusiform laterality demonstrated a gradient of sensitivity to the learning method (i.e., orthographic transparency) from the anterior to the posterior regions. Specifically, compared with the pre-training stage, neural activities for the artificial language words in the posterior fusiform region were more left-lateralized after alphabetic training, but more right-lateralized after logographic training (group-by-test interaction: F(1, 42) = 4.92, p < .05).
The neural activities in the middle fusiform region showed the same trend, but it was not statistically significant (group-by-test interaction: F(1, 42) = 1, p = .324). In contrast, in the anterior region, both groups showed a trend toward more left-lateralized activations after training, and consequently demonstrated no significant group-by-test interaction (F(1, 42) = 0.10, n.s.). In none of the three regions was any main effect significant (all ps > .1).
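In a 2 (group) × 2 (pre/post) design with one between-subjects and one within-subjects factor, the group-by-test interaction reduces to comparing the pre-to-post LI change between groups. A sketch with made-up per-subject LI changes (the numbers are illustrative, not the study's data):

```python
import numpy as np
from scipy import stats

# hypothetical per-subject LI changes (post-training minus pre-training)
change_alphabetic = np.array([0.12, 0.05, 0.20, 0.08])
change_logographic = np.array([-0.10, -0.02, -0.15, -0.06])

# independent-samples t-test on the change scores; for a 2 x 2 mixed design
# this is equivalent to testing the group-by-test interaction (F = t**2)
t, p = stats.ttest_ind(change_alphabetic, change_logographic)
```

With 22 subjects per group, as in the study, the corresponding interaction statistic would be F(1, 42) = t².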

Discussion
Using an artificial language training paradigm, the present study examined the effect of orthographic transparency on the laterality of the subregions in the fusiform gyrus. Consistent with one previous study (Xue et al., 2006b), we found that phonological training resulted in increased activations in the fusiform gyrus. More importantly, we found that the laterality of fusiform activation was significantly modulated by the orthographic transparency of the artificial language, with more left-lateralized activation after alphabetic training than after logographic training. This difference manifested in the posterior portion of the fusiform gyrus, decreased in the middle portion, and diminished in the anterior portion. This result provides clear evidence for the effect of orthographic transparency on fusiform laterality, and improves our understanding of the functions of the fusiform cortex in reading.

The effect of orthographic transparency on fusiform laterality found in this study provides an alternative account of the observed difference in occipitotemporal laterality between Chinese and alphabetic languages. As discussed in the ''Introduction'' section, the different laterality in the occipitotemporal cortex between Chinese and alphabetic languages has been attributed to their difference in visual appearance (Tan et al., 2000). In contrast, using the artificial language training paradigm, our study showed that functional laterality in the fusiform cortex was modulated by the scripts' orthographic transparency (another important difference between Chinese and alphabetic languages) after controlling for visual appearance. Specifically, neural activities in the fusiform gyrus were more left-lateralized after alphabetic training, but more right-lateralized after logographic training. Consistent with our results, one previous ERP study revealed a similar dissociation in N170 laterality following alphabetic and logographic training (Yoncheva et al., 2010).
These results suggest that the laterality difference between Chinese and alphabetic languages in the occipitotemporal region may at least partially be accounted for by their difference in orthographic transparency.
Our results also have important implications for understanding the functions of the fusiform gyrus. Two rival perspectives (the VWFA perspective and the interactive perspective) have been proposed regarding fusiform functions. Our phonological training study provides evidence against the VWFA perspective (Cohen & Dehaene, 2004; Cohen et al., 2002). On the one hand, we found that phonological training enhanced neural activations in the VWFA for both groups in a passive viewing task (i.e., one involving little effort). Although our paradigm involved a combination of orthographic and phonological training, the increased activations in the VWFA were probably caused by the phonological training rather than the orthographic training, because our previous studies revealed decreased activation in the fusiform after orthographic training (Xue et al., 2006b; Xue & Poldrack, 2007), but increased activation after phonological training (Xue et al., 2006b). On the other hand, the laterality of fusiform activation was modulated by the script's orthographic transparency. These two lines of evidence suggest an important role of phonology in shaping VWFA activations, and argue against the VWFA hypothesis that the mid-fusiform is specialized for processing abstract visual word forms and is thus not influenced by other linguistic features such as phonology. Instead, our results support the interactive account of VWFA function, that is, that activation of the VWFA results from interactions between the processing of low-level visuospatial features (the bottom-up process) and that of higher-level associations such as phonology and semantics (the top-down process) (Price & Devlin, 2011).
From this perspective, the phonological pathway determined by orthographic transparency might shape the visual processing of scripts. Specifically, part information was emphasized during learning under the alphabetic condition, whereas global information was emphasized by whole-word learning under the logographic condition. Therefore, after training, the alphabetic and logographic groups would adopt part- and whole-based processing strategies during reading, respectively, either because of the differential instructions during training or because of differential top-down requirements determined by the different phonological access routes. As a result, our observation is consistent with the interactive model of reading (Price & Devlin, 2011) as well as the hemispheric specialization view (Hellige, Laeng, & Michimata, 2010), which posits that the left and right hemispheres are specialized for processing, respectively, high- versus low-spatial-frequency information (Kitterle & Selig, 1991), parts versus wholes (Robertson & Lamb, 1991), and features versus holistic information (Grill-Spector, 2001). Consistent with our results, one neuroimaging study revealed a right-fusiform advantage for processing faces as wholes and a left-fusiform advantage for processing facial features (Rossion et al., 2000). Similarly, another recent study reported that successful episodic memory encoding of faces relied on the left fusiform cortex because of the involvement of feature/part information processing (Mei et al., 2010).
We also found that laterality in the fusiform subregions was differentially modulated by the artificial language's orthographic transparency. In particular, the effect of orthographic transparency on laterality manifested in the posterior fusiform region, but not in the anterior region. The lack of an effect in the anterior fusiform region probably reflects the fact that the anterior fusiform region is mainly responsible for semantic processing (Seghier & Price, 2011; Simons et al., 2003; Xue & Poldrack, 2007), and no semantic component was included in our training. The involvement of the anterior fusiform region in semantic processing has also been suggested by many other studies, which found more activation in the anterior fusiform region for materials with semantics (e.g., words) than for those without semantics (e.g., pseudowords) (Herbster, Mintun, Nebes, & Becker, 1997; Mechelli et al., 2005), or for semantic tasks than for perceptual/phonological tasks (Binder, Desai, Graves, & Conant, 2009; Sharp et al., 2010).

In sum, using the artificial language training paradigm to control for the effect of visual appearance, our study clearly demonstrated that orthographic transparency affected fusiform laterality and that the effect varied across subregions of the fusiform. These results provide an alternative account (i.e., orthographic transparency) for the differences between Chinese and alphabetic language processing in fusiform laterality, and support the interactive account of the functions of the fusiform in reading.

Fig. 4. The between-group (alphabetic and logographic) differences in fusiform laterality for the artificial words. Percent signal changes were extracted from the left and right anterior, middle, and posterior fusiform regions at the pre- and post-training stages. The laterality index (vertical coordinate) was calculated by comparing the percent signal changes in the left and right regions. Error bars represent the standard error of the mean.

Table 2. Brain regions showing training-related increases for the artificial words in the logographic and alphabetic groups.