A new approach to detect and study spatial–temporal intracranial EEG frames

Studies on electroencephalography have revealed evidence of repetitive phase transitions in neocortex at frame rates in the theta and alpha bands. Phase transitions are the first step in frame emergence. Within a frame, brain activity is synchronized and amplitude and phase modulation patterns materialize. The phase patterns have radial symmetry resembling a cone. Cone fitting has been used to detect and study frames in different brain states, including task, awake, sleep and the transition into epileptic seizure, revealing signs of disorganization before the seizure episode started. In this paper a new methodology to detect frames is presented. It is faster than the cone fitting previously used to study frames. The results are compared with frames obtained from cone fitting in the awake, task, sleep and seizure states. The frames detected by cone fitting and by the new method showed high levels of coincidence in time. The disorganization signs observed between the pre-ictal period and the normal brain state were also observed in the frames detected with the new method. Finally, amplitude modulation patterns related to the different behavioral states were clearly distinguishable using the new method's frames as time markers for pattern extraction.


Introduction
Studies on animal and human electroencephalographic signals recorded with high-density electrode arrays have shown that spatiotemporal patterns often occur in cerebral activity [1][2][3][4]. These patterns usually have chaotic carrier waves in the beta or gamma range, which are spatially modulated in amplitude (AM) and phase (PM) [5][6][7][8]. They are described as frames, which recur at rates in the theta or alpha band [6,9]. They emerge after sudden jumps in cortical activity called phase transitions.
Different approaches, such as the Fourier transform (FFT) and the Hilbert transform (HT), have been used by Freeman and his colleagues to detect cerebral activity patterns [1,5,8,10,11]. Using the HT, two main structures are revealed in the analytic signals. One, in the time domain, is called "coordinated analytic phase differences" (CAPD) and is used to bracket the time samples having a stable phase increment, which means a nearly constant frequency [1,6]. The other, in the spatial domain, is called the "phase cone" [1,[6][7][8]10]. When a phase cone persists over a certain number of time samples, it defines a frame. Those structures often precede the emergence, by phase transitions, of AM patterns that relate to behavior [2,5,9,12,13].
In previous works, cone fitting was the procedure employed to detect frames and estimate the instantaneous gradient [7,8,10]; but cone fitting is a time-consuming tool. The reduction of the processing time needed to study brain electrical activity patterns is an important research goal [13][14][15][16]. Reducing the processing time may allow real-time implementations for brain–computer interfaces (BCI), neuroprosthetic devices, prediction of abnormal brain activity and so on [17][18][19][20][21][22][23][24].
In this work we developed a new technique to detect frames and estimate the instantaneous gradient. The instantaneous gradient was estimated by line fitting instead of cone fitting, in order to reduce the processing time. Cone fitting is a more complex fitting process than line fitting: the former requires the optimization of four parameters and is carried out twice for the phase structure at each time sample [1,8,10], while the latter requires the optimization of only two parameters and is carried out once per time sample.
The coincidence level in time of the frames detected using the cone fitting method and the new method was high. The disorganization signs observed in the frames detected by the cone fitting method during the pre-ictal periods [7,8] were also observed with the new method. Additionally, classification of brain patterns related to three different behavioral states (task, awake and sleep) was performed using frames as the time markers of pattern emergence. Higher classification rates were achieved based on the frames detected using the new method than on those detected using the cone fitting method.

Experimental subject and ECoG recordings
The electrocorticograms (ECoG) were recorded from a 34-year-old woman with a history of medically refractory complex partial seizures who was a candidate for surgical therapy. Before surgery the patient gave informed consent for placement of the clinical electrodes and the high-density electrode array. The high-density array (8 × 8 electrodes separated by 1.25 mm) was placed at the same time as the clinical electrodes onto the surface of the right inferior temporal gyrus. The Cz and Pz scalp locations were, respectively, the reference and the ground. The data were collected in the EEG and Clinical Neurophysiology Laboratory of Harborview Hospital, University of Washington, Seattle. The data collection and management were governed by protocols approved by the Institutional Review Boards of the institution.
The recordings were made during epochs of NREM sleep (behavioral sleep accompanied by large slow waves in the EEG, without rapid eye movements; sleep stage), epochs of resting wakefulness (wake stage), epochs in which the subject named objects presented as visual images (task stage) and epochs in which the subject went into an epileptic seizure episode (seizure stage).
A Nicolet BMSI 5000 system, with a fixed amplification of 1628× and analog filters set at 0.5 Hz high-pass and 120 Hz low-pass, was used to record the signals. The ADC provided 12 bits, with a least significant bit of 0.9 µV and a maximal range of ±2048 × 0.9 µV. Data were digitized at 420 Hz and down-sampled to 200 Hz.

Signal processing
Four one-minute segments, recorded during the task, wake and sleep states and during a typical seizure, were selected as the dataset to compare the cone fitting method and the new frame detection method. Three minutes of signal during the task, awake and sleep states were chosen for classification.
The ECoG signals from each behavioral state were preprocessed by subtracting the channel means of each entire recording segment to remove channel bias, and by dividing all 64 ECoGs by the global standard deviation of each segment to normalize them to unit standard deviation. A spatial low-pass filter with a cutoff at 0.2 c/mm was applied to remove channel noise, and a temporal band-pass filter was applied to extract the band of interest, in this work 12-30 Hz, following the results obtained in Refs. [7,8]. The Hilbert transform was applied to the ECoG signals to study both the analytic amplitude A_j(t) (the square root of the sum of the squares of the real and imaginary parts of the analytic signal) and the analytic phase ϕ_j(t) (the arctangent of the ratio of the imaginary part to the real part of the analytic signal) [7,8,10,25,26].
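The preprocessing pipeline above (channel-mean removal, global normalization, temporal band-pass, Hilbert transform) can be sketched in Python as follows. This is only an illustration: the spatial low-pass filter at 0.2 c/mm is omitted, and the Butterworth design and filter order are assumptions, since the filter type is not specified in the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def preprocess(ecog, fs=200.0, band=(12.0, 30.0)):
    """Normalize and band-pass a (n_samples, 64) ECoG segment, then
    apply the Hilbert transform per channel.  The paper's spatial
    low-pass filter (0.2 c/mm cutoff) is omitted in this sketch."""
    x = ecog - ecog.mean(axis=0)           # remove per-channel bias
    x = x / x.std()                        # normalize by the global SD
    b, a = butter(4, np.array(band) / (fs / 2.0), btype="band")
    x = filtfilt(b, a, x, axis=0)          # temporal band-pass, 12-30 Hz
    z = hilbert(x, axis=0)                 # analytic signal per channel
    amplitude = np.abs(z)                  # analytic amplitude A_j(t)
    phase = np.angle(z)                    # analytic phase phi_j(t)
    return amplitude, phase
```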
Each time the ECoG signal goes to zero, the analytic phase jumps from π to −π. This makes the analytic phase waveform look like a sawtooth. These jumps can be eliminated by adding 2π to the analytic phase at every such jump, giving a waveform similar to a ramp. This procedure is called unwrapping [6,[25][26][27]. After unwrapping the analytic phase, the instantaneous frequency was estimated as the successive differences of the unwrapped analytic phase divided by the digitizing step and by 2π (see Eq. (1)) [7,8,25], where N represents the number of electrodes.
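The unwrapping and instantaneous-frequency estimate of Eq. (1) reduce, per channel, to a short computation; numpy's `unwrap` adds 2π at each sawtooth jump, and the successive differences are scaled by the sampling rate over 2π to give Hz:

```python
import numpy as np

def instantaneous_frequency(phase, fs=200.0):
    """Unwrap the analytic phase of each channel and estimate the
    instantaneous frequency in Hz from successive differences of the
    unwrapped phase, as in Eq. (1).  phase: (n_samples, n_channels)."""
    unwrapped = np.unwrap(phase, axis=0)           # remove sawtooth jumps
    freq = np.diff(unwrapped, axis=0) * fs / (2.0 * np.pi)
    return freq
```

For a pure 10 Hz oscillation the estimate recovers 10 Hz at every sample, which is a quick sanity check of the scaling.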
A new technique to estimate the instantaneous gradient, based on the phase differences among electrodes, was used in this report. The instantaneous gradient was estimated as the slope of the line fitted to the differences between each analytic phase value and every other phase value, as a function of interelectrode distance.
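Our reading of this line-fitting step can be sketched as follows. The use of absolute pairwise phase differences and of all electrode pairs are assumptions of this sketch; the text does not spell out these details.

```python
import numpy as np

def instantaneous_gradient(phase_t, coords):
    """Estimate the spatial phase gradient at one time sample by fitting
    a line (2 parameters: slope and intercept) to pairwise phase
    differences versus interelectrode distance.
    phase_t: (n_electrodes,) unwrapped phases at one time sample.
    coords:  (n_electrodes, 2) electrode positions in mm.
    Returns the fitted slope in rad/mm."""
    i, j = np.triu_indices(len(phase_t), k=1)     # all electrode pairs
    dphase = np.abs(phase_t[i] - phase_t[j])      # pairwise phase differences
    dist = np.linalg.norm(coords[i] - coords[j], axis=1)
    slope, _intercept = np.polyfit(dist, dphase, 1)
    return slope
```

Because only two parameters are fitted, and only once per time sample, this is substantially cheaper than optimizing the four cone parameters twice per sample.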

Frames detection
There are theories and clinical evidence suggesting that cortical dynamics in perception operates in steps that resemble frames in cinema [9,[28][29][30]. Spatial patterns take place in discrete time steps separated by phase transitions [29,31,32]. Spatial pattern formation occurs in a sequence: phase re-initialization of the carrier wave, emergence of a coherence domain and stabilization of the AM pattern [1,31,33]. In this paper the materialization of the coherence domain was studied using the covariance over the 64 values of analytic phase and analytic amplitude, calculated with the Matlab covariance function "cov". The manifestation of a coherence domain at one frequency can be expressed as a decrease in phase variance over multiple channels recorded simultaneously [26,34,35]. Before AM pattern stabilization, the analytic amplitude also changes in order to form or transit to a new pattern. This induces an increase in the analytic amplitude covariance.
First, a technical criterion for selecting time periods as frame candidates was applied as follows. Thresholds were set for each covariance, te1 for the analytic phase covariance (APC) and te2 for the analytic amplitude covariance (AAC), and a time sample was considered a frame candidate if the APC was lower than te1, the AAC was higher than te2, the sign of the instantaneous gradient did not change from one sample to the next, and the instantaneous frequency was within the temporal pass band used.
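The four conditions above amount to a per-sample boolean mask. A minimal sketch, assuming the APC and AAC have already been reduced to one scalar per time sample (the exact covariance summary is not specified in the text):

```python
import numpy as np

def frame_candidates(apc, aac, freq_ok, grad_sign, te1=0.175, te2=0.00065):
    """Mark time samples as frame candidates.
    apc, aac:  per-sample covariance of analytic phase / amplitude
               over the 64 channels (scalars per sample, an assumption).
    freq_ok:   boolean, instantaneous frequency inside the pass band.
    grad_sign: sign of the instantaneous gradient per sample.
    Threshold defaults are the values selected later in the paper."""
    # gradient sign must not change from one sample to the next
    stable_sign = np.concatenate(([False], grad_sign[1:] == grad_sign[:-1]))
    return (apc < te1) & (aac > te2) & freq_ok & stable_sign
```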
Second, for the time periods in which phase structures were selected as frame candidates, the frame frequency and frame gradient were calculated using Eqs. (2) and (3) [7,8,10], where n is the number of time steps over which a frame candidate was defined, w_i is the instantaneous frequency (Eq. (1)) and γ is the instantaneous gradient (from line fitting).
Finally, supplemental anatomical and physiological evaluations of the acceptable parameter ranges were made in order to exclude spurious frames from the analysis. The frame velocity has to be within the range of conduction velocities of cortical axons (1-10 m/s), the duration should be larger than 6 ms, and the diameter has to be smaller than the width of the cerebrum (200 mm) [7,8]. Frame duration was given by the number of digitizing steps over which the frame candidate was detected, multiplied by the digitizing step. Two parameters were defined in order to evaluate the frame detection process and to select covariance threshold values that yield a higher overlap between the frames detected with the new method and with the cone fitting method. The first parameter was the coincidence rate (CR) between the frames detected with the new method (N_LF) and the frames detected with the cone fitting method (N_CF) (see Eq. (8)).
The second parameter, Q, was defined as the ratio of the number of frames from the new method that coincided with frames from the cone fitting method (N_CLF) to the total number of frames obtained with the new method (see Eq. (9)). A frame from the new method was considered coincident with a frame from the cone fitting method if (a) the frame from the new method begins or ends close to the time sample at which the cone fitting frame begins, (b) the frame from the new method begins or ends close to the time sample at which the cone fitting frame ends, (c) the frame from the new method occurs at the same time samples as the cone fitting frame but its duration is shorter, or (d) it occurs at the same time samples but its duration is longer. For (a) and (b), the difference between the beginnings or ends of the frames should be lower than 4 samples (see Appendix A).

Feature selection and classification
Three minutes of ECoG during the task, awake and sleep states were processed and the frames were detected. The time series in each behavioral state were partitioned into segments of 10 s. There were a total of 54 segments, 18 for each behavioral state.
The frames in each segment were used as the time markers for feature extraction. Features were composed of the root mean square (rms) of the analytic amplitude, normalized to zero mean and unit standard deviation, over the time periods in which frames were detected. Feature vectors had 64 dimensions, one value per ECoG channel.
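As a sketch, one feature vector per segment can be built as follows; the order of normalization and rms, and the pooling of all frame samples in a segment before normalizing, are assumptions of this reading:

```python
import numpy as np

def frame_feature(amplitude, frame_slices):
    """Build one 64-dimensional feature vector: the rms, per channel, of
    the analytic amplitude normalized to zero mean and unit SD over the
    samples where frames were detected.
    amplitude:    (n_samples, 64) analytic amplitude of one segment.
    frame_slices: list of (start, stop) sample indices of the frames."""
    idx = np.concatenate([np.arange(a, b) for a, b in frame_slices])
    a = amplitude[idx]                       # keep only in-frame samples
    a = (a - a.mean()) / a.std()             # zero mean, unit SD
    return np.sqrt((a ** 2).mean(axis=0))    # rms per channel -> (64,)
```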
Features were extracted using the frames detected by the cone fitting method and by the new method. Euclidean Distance (ED) was then used to perform the classification. The objective was to compare the separability of the features extracted from the frames detected by each method. In addition, Sammon maps were used to transform the high-dimensional space (here 64) to a low-dimensional space (here 2) and display the clusters of points that represent the differing spatial patterns corresponding to each behavioral state [2,9,36,37].
For the ED classifier, the segments were divided into two subsets, one for training and the other for testing. The centers of gravity of each class were calculated from the training set. Classification was performed by calculating the distance of each feature vector to the centers of gravity. A classification was 'correct' when the distance to the vector's own class center of gravity was shorter than the distance to the other centers of gravity. The two subsets were then swapped in cross validation. The Sammon map iteratively mapped the location of points defined in a high-dimensional space to a low-dimensional space, preserving the relative distances between points [2,9,36,37].
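The ED classifier described above is a nearest-centroid rule, which can be sketched as:

```python
import numpy as np

def ed_classify(train_X, train_y, test_X):
    """Euclidean Distance (nearest-centroid) classifier: assign each
    test vector to the class whose training center of gravity is
    closest."""
    classes = np.unique(train_y)
    # center of gravity of each class in the training set
    centers = np.array([train_X[train_y == c].mean(axis=0) for c in classes])
    # distance of every test vector to every center
    d = np.linalg.norm(test_X[:, None, :] - centers[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```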

Detecting frames by the new method and cone fitting
Frames were detected by the cone fitting method following Refs. [7,8]. The frames detected by the cone fitting method usually showed up at time intervals with low APC and high AAC (see Fig. 1). The covariance thresholds were varied from low covariance values up to around 4% of the maximal covariance, or 45% of the mean covariance, and the coincidence rate (CR) and Q value were calculated (see Figs. 2 and 3).
On the one hand, for te1 values higher than 0.20 the coincidence rate became almost constant. On the other hand, for te2 values higher than 0.00075 the coincidence rate was lower. Values of CR between 70% and 95% were obtained when te1 and te2 were properly selected. However, the Q value was around 0.5, because the number of frames detected with the new method was almost twice the number obtained using the cone fitting method. Fig. 4 shows some examples of the phase structures of frames detected with the new method that were not detected with the cone fitting method. Those phase structures were also smooth and similar during the frame; they may have been rejected by the cone fitting method due to its technical constraints.
Choosing te1 = 0.175 and te2 = 0.00065, the coincidence percentage was higher than 80% for the task, awake and seizure states and around 65% for sleep, and the Q value was around 0.5 for all behavioral states. Frame parameter analyses and classification were performed using these threshold values.

Study on frame parameters
The frame parameters were studied during the task, awake and sleep states and during the transition into seizure. In each state, frame parameters were calculated from 12 five-second segments using the cone fitting method and the new method.
Frame parameters estimated with both methods for the task, awake and sleep states are shown in Tables 1 and 2. Frequency and temporal wavelength values were very similar in both cases. Gradients estimated by the new method were lower, producing an increase in the spatial wavelength, diameter and velocity values, but they still have fractal properties. The duration of the frames obtained using the new method was also higher. Means and standard deviations for the task and wake states were similar. Slightly lower frequency and gradient values were obtained for sleep. The parameters of the seizure state were comparable to those of the task, awake and sleep states in some segments, but higher gradient values showed up in others, especially in the frames starting from the 5th and 30th seconds before seizure. Those high gradient values reflect a decrease in velocity and diameter (see Fig. 5 and Table 3). Similar changes in frame parameters were observed using the cone fitting method [7].

Brain state classification
The "rms" values of analytic amplitude, at the time samples when the frames were detected, were used to form a 64dimensional feature vectors and they were classified by Euclidean Distance classifier. The classification rate of the features extracted from the frames detected by cone fitting and new method was 75.93% and 85.18% respectively showing that AM patterns are better distinguished using the new method frames as time marker for feature extraction. Table 3 Comparison of frame parameters with the subject going into seizure (mean ± SD). Projection of the 64-dimensional feature vectors in a 2D plane, which was made using Sammon mapping [2,9,36,37] also shows that the feature vectors obtained with the new method were more clearly clustered so that they were easier to be classified (see Fig. 6).

Conclusions
Using the methodology presented in this paper to detect frames and estimate the instantaneous gradient, the processing time is significantly reduced, while the coincidence level between the frames from the cone fitting method and the new method remains high.
The reduction in processing time is achieved for two reasons: (a) the time samples with high covariance in phase and low covariance in amplitude are excluded from the fitting process; (b) the instantaneous gradient is estimated by line fitting, which is simpler than cone fitting. Using line fitting, just two parameters are optimized, and the fitting process is carried out only once for the phase differences at each time sample. In contrast, using cone fitting, four parameters have to be optimized, and the fitting process is repeated twice for the phase structure at each time sample [7,8,10].
The most striking difference between the frame parameters derived from the cone fitting and the new method is the gradient; however, the differences between brain states are maintained. The new method can detect significant differences in velocity and diameter that emerged in the frames starting from the 5th and 30th seconds before seizure, as does the cone fitting method [7,8]. The reduction in gradient values is arguably due to the discarding of some differences in the analytic phases during the cone fitting process, because a single cone is used to represent the whole phase structure.
AM patterns were clearly distinguishable using the new method. The use of frames detected with the new method as time markers for pattern extraction allows higher classification rates to be achieved. The new methodology detects frames quickly; consequently, it is a promising tool for studying ECoG in real time.
The following algorithm was used to determine whether two frames were coincident in time, where N_CF and N_LF are the numbers of frames detected by the cone fitting method and the new method respectively, F_CF and F_LF are the initial positions of the frames (sample number) from each method, T_CF and T_LF are the durations of the detected frames from each method, t is an error parameter that measures how exact the coincidence in time is, and N_CLF is the number of coincident frames; in our experiments t = 4 samples.
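The coincidence rules (a)-(d) and the two evaluation parameters can be sketched as follows. Since Eqs. (8) and (9) are not printed here, CR = N_CLF / N_CF is our assumption for Eq. (8), consistent with CR being high (70-95%) while Q = N_CLF / N_LF stays near 0.5 when N_LF is roughly twice N_CF.

```python
def count_coincident(frames_cf, frames_lf, t=4):
    """Count frames from the new (line-fitting) method that coincide in
    time with cone-fitting frames, per rules (a)-(d) of Appendix A.
    frames_cf, frames_lf: lists of (start_sample, duration) pairs.
    Returns (N_CLF, CR, Q); CR = N_CLF/N_CF is an assumption of this
    sketch, Q = N_CLF/N_LF as defined in the text."""
    n_clf = 0
    for s_lf, d_lf in frames_lf:
        e_lf = s_lf + d_lf
        for s_cf, d_cf in frames_cf:
            e_cf = s_cf + d_cf
            near_start = min(abs(s_lf - s_cf), abs(e_lf - s_cf)) < t  # rule (a)
            near_end = min(abs(s_lf - e_cf), abs(e_lf - e_cf)) < t    # rule (b)
            overlap = s_lf < e_cf and s_cf < e_lf                     # rules (c), (d)
            if near_start or near_end or overlap:
                n_clf += 1
                break
    return n_clf, n_clf / len(frames_cf), n_clf / len(frames_lf)
```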