Methods: six groups of 21 participants each will be included: five groups of participants with psychiatric disorders (at-risk mental state [ARMS], first-episode psychosis [FEP], schizophrenia [SCZ], major depressive disorder [MDD], and autism spectrum disorder [ASD]) and one group of healthy controls. Our main objective is to test differences in group means, at rest and during sleep, for each of the variables characterizing the microstates (mean duration, occurrence, and coverage) as well as, secondarily, EEG measures of connectivity (somatosensory evoked potentials), cortical excitability (alpha-band power), and prosodic and conversational linguistic measures.
Regarding the microstate measures: a five-minute eyes-closed resting-state EEG with 64 channels will be recorded (as part of the larger session, which includes the sensorimotor task described below). Minimal preprocessing will be performed with the MNE-Python software, including a band-pass filter between 0.5 and 40 Hz, re-referencing to the average reference, and visual and automatic artifact correction. Each recording will be visually reviewed by clinical neurophysiologists to verify that it shows an alpha-dominant resting rhythm without residual artifacts.
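In practice these steps will be carried out with MNE-Python's own filtering and referencing routines; purely as a minimal illustration of the two operations (band-pass filtering and average re-referencing), the sketch below applies a naive FFT brick-wall filter to a channels-by-samples array. The function name and array layout are ours, not MNE's API.

```python
import numpy as np

def preprocess(eeg, sfreq, l_freq=0.5, h_freq=40.0):
    """Band-pass filter each channel, then re-reference to the average.

    eeg: array of shape (n_channels, n_samples); sfreq: sampling rate in Hz.
    Illustration only; the actual pipeline uses MNE-Python.
    """
    n = eeg.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    spectrum = np.fft.rfft(eeg, axis=1)
    # Brick-wall band-pass: zero every frequency bin outside [l_freq, h_freq].
    spectrum[:, (freqs < l_freq) | (freqs > h_freq)] = 0.0
    filtered = np.fft.irfft(spectrum, n=n, axis=1)
    # Average reference: subtract the mean across channels at each sample.
    return filtered - filtered.mean(axis=0, keepdims=True)
```

A brick-wall filter causes ringing on real data, which is why MNE's FIR/IIR filters are preferred in practice; the sketch only shows which bins the 0.5-40 Hz band retains.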
Microstate analysis will be done using the Pycrostates package. Global field power (GFP) will be computed for each participant, and only the EEG topographies at GFP peaks will be retained to determine the microstate topographies, through a modified K-means clustering. For each subject the same number of GFP peaks will be extracted and concatenated into a single data set for clustering. A combined score will be used to select the optimal number of clusters. The resulting cluster maps will be back-fitted to each individual recording. Temporal smoothing will be applied so that low-GFP periods of inter-peak noise do not interrupt the sequences of quasi-stable segments. For each subject, three parameters will be computed for each microstate class: frequency of occurrence ("occurrence"), temporal coverage ("coverage"), and mean duration. Occurrence is the average number of times a given microstate occurs per second. Coverage (in %) is the percentage of total analysis time spent in a given microstate. Mean duration (in ms) is the average time during which a given microstate is present in an uninterrupted manner (after temporal smoothing).
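Pycrostates computes these statistics directly; as an illustration of the three definitions above, the sketch below derives occurrence, coverage, and mean duration from a back-fitted label sequence (one microstate label per sample, after temporal smoothing). The function name and input format are ours, not Pycrostates' API.

```python
import numpy as np

def microstate_parameters(labels, sfreq):
    """Occurrence (1/s), coverage (%) and mean duration (ms) per microstate class.

    labels: 1-D sequence with one microstate label per EEG sample (after
    back-fitting and temporal smoothing); sfreq: sampling rate in Hz.
    """
    labels = np.asarray(labels)
    total_s = labels.size / sfreq
    # Split the sequence into runs of identical labels (quasi-stable segments).
    boundaries = np.flatnonzero(np.diff(labels)) + 1
    runs = np.split(labels, boundaries)
    stats = {}
    for cls in np.unique(labels):
        seg_lengths = [len(r) for r in runs if r[0] == cls]
        stats[cls] = {
            "occurrence": len(seg_lengths) / total_s,                # segments per second
            "coverage": 100.0 * sum(seg_lengths) / labels.size,      # % of samples
            "mean_duration": 1000.0 * np.mean(seg_lengths) / sfreq,  # ms per segment
        }
    return stats
```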
Regarding the linguistic measures: each participant undergoes a semi-structured interview with a trained experimenter. Both the participant and the interviewer wear headset AKG C544L condenser microphones, connected via AKG MPA VL phantom adaptors to a Zoom H4n Pro Handy recorder. Speech is digitally recorded at a sampling rate of 44,000 Hz (16-bit). The distance between the mouth and the microphone is kept as constant as possible (2 cm) to ensure consistent levels of vocal loudness. The interviews take place in a quiet room to limit environmental noise, and the two interactants are seated as far apart as possible to prevent crosstalk (i.e. speech of the interviewer captured by the participant's microphone and vice versa). The .wav files obtained from the recordings are annotated using the Praat software and subsequently analysed with Praat and R. Prosodic features are extracted using the open-source Prosogram tool (a set of Praat scripts) and a newly modified version of its scripts. Turn-taking variables are extracted with new combined Praat and R scripts.
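The turn-taking scripts themselves are written in Praat and R; as a language-neutral illustration of the kind of variables they extract, the sketch below computes between-speaker gaps and overlaps from annotated turn intervals. The tuple format and function name are our own, not the actual script interface.

```python
def turn_taking_variables(turns):
    """Gaps and overlaps (in seconds) at turn transitions between two speakers.

    turns: list of (speaker, start, end) tuples sorted by start time, e.g. as
    exported from a Praat annotation tier.
    """
    gaps, overlaps = [], []
    for (spk_a, _, end_a), (spk_b, start_b, _) in zip(turns, turns[1:]):
        if spk_a == spk_b:
            continue  # same-speaker pause, not a turn transition
        latency = start_b - end_a
        # Positive latency = silent gap; negative latency = overlapping speech.
        (gaps if latency >= 0 else overlaps).append(abs(latency))
    return {"gaps": gaps, "overlaps": overlaps}
```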
Regarding the sensorimotor integration measures: sensorimotor integration is investigated using a visuo-haptic task. On each trial, the participant, seated in front of a screen, receives a visual instruction (a point to the right or left of the screen). The task consists of pressing one of two buttons positioned on each side of the body with the index finger of the corresponding hand, according to the visual instruction. A vibrotactile stimulator (small speakers wired to an Arduino board and driven by an amplifier) is applied to the first dorsal interosseous muscle of both hands. 400 ms before the visual instruction, one of the two hands receives a tactile cue (a 100 ms vibration). This cue is more or less reliable depending on the block. In some blocks it is quite reliable, since on 90% of the trials the vibration and the visual instruction are congruent (indicating the same hand). Another condition contains only 50% congruent trials, in which case the tactile cue is not reliable. Two intermediate blocks with 70% congruent trials are also carried out. Finally, a baseline block without any tactile cues is presented at the beginning and the end of the task. The order of the 90% and 50% blocks is randomized. The tactile and visual stimuli are generated with a MATLAB script. Each block consists of 100 trials, for 500 trials in total. Electroencephalographic (EEG) data are recorded throughout the task using a 64-channel EEG cap (Biosemi). The setup is coupled to an eye tracker to verify that the participant fixates the cross at the center of the screen during each block.
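The stimuli are generated with a MATLAB script; as a simplified sketch of the block design (our own illustration, not the actual script), the following builds a randomized trial list with a given proportion of congruent tactile cues:

```python
import random

def make_block(n_trials=100, p_congruent=0.9, seed=None):
    """Trial list for one block: each trial is a (visual_side, cue_side) pair.

    In congruent trials the tactile cue and the visual instruction indicate
    the same hand; in incongruent trials they indicate opposite hands.
    """
    rng = random.Random(seed)
    n_congruent = round(n_trials * p_congruent)
    congruency = [True] * n_congruent + [False] * (n_trials - n_congruent)
    rng.shuffle(congruency)  # randomize where the incongruent trials fall
    trials = []
    for congruent in congruency:
        visual = rng.choice(["left", "right"])
        cue = visual if congruent else ("right" if visual == "left" else "left")
        trials.append((visual, cue))
    return trials
```

Calling `make_block(100, 0.9)`, `make_block(100, 0.7)` and `make_block(100, 0.5)` would yield the three cue-reliability conditions; the baseline blocks simply omit the cue.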
Regarding the multidimensional self and episodic memory task (task design: Laboratoire Mémoire, Cerveau et Cognition): at baseline, participants will complete self-report questionnaires assessing their sense of minimal self on 8 domains (Multidimensional Assessment of Interoceptive Awareness, Version 2) and their sense of narrative self on 5 domains (Tennessee Self Concept Scale, Short Form, Present). They will undergo a neuropsychological test assessing their visual episodic memory performance (Family Pictures from the Wechsler Memory Scale-III) and will rate their current emotional state on a visual analogue scale covering 4 domains (Mood Visual Analogue Scale). Following each of the two navigation sessions in virtual reality, which consist of a walk through a virtual city where participants encounter daily-life events, associated with different levels of self-reference, that are meant to be incidentally encoded in episodic memory, participants will complete self-report questionnaires assessing their sense of embodiment on 4 domains (Embodiment Questionnaire), their sense of presence on 4 domains (Igroup Presence Questionnaire), and their cybersickness on 2 domains (Simulator Sickness Questionnaire). They will again rate their current emotional state on a visual analogue scale (Mood Visual Analogue Scale). Finally, participants will undergo two episodic memory tests: a free recall task and a recognition task. The free recall will be based on a 20-minute verbal interview, during which participants will be asked to recall all the events that they remember encountering in the virtual city. For each event, they will be asked to recall systematically and as precisely as possible: what the event was; where and when it happened during the navigation; in which of the two navigations it happened (source); who was the referent according to whom the personal significance of the event was assessed; objective (perceptive) and subjective (phenomenological) details of the event; and whether the event was vividly relived or felt merely familiar (Remember/Know procedure).
The recognition test will be performed on a computer and programmed using the Python module Neuropsydia. All 32 encountered events, mixed with 16 lures that were not encountered, will be displayed successively in a random order on a computer screen. For each event, several questions will be asked successively, and participants will click on what they consider the correct answer among several propositions: whether they encountered the event (Yes/No), and if yes, where it happened (among several possible locations on a picture of the zone where the event occurred), when it happened (placing the event in chronological order relative to two other events), in which navigation it happened (first or second), and who was the referent (Me/Other). For each event, participants will also rate on scales ranging from 0 to 100: the degree of reliving or familiarity of the event (100 = Remember, 0 = Know), the perspective of the memory (100 = first-person perspective, 0 = third-person perspective), its vividness, fidelity, emotional intensity, strength of associated bodily sensations, episodic self-reference, and semantic self-reference.
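Performance on the Yes/No recognition question is conventionally summarized as hit and false-alarm rates over the 32 targets and 16 lures. The sketch below is our own illustration of that scoring, not part of the Neuropsydia task code; the response format is assumed.

```python
def recognition_rates(responses):
    """Hit and false-alarm rates from Yes/No recognition responses.

    responses: list of (is_target, said_yes) boolean pairs, one per displayed
    item (targets = the 32 encountered events, lures = the 16 new events).
    """
    targets = [said_yes for is_target, said_yes in responses if is_target]
    lures = [said_yes for is_target, said_yes in responses if not is_target]
    return {
        "hit_rate": sum(targets) / len(targets),        # "yes" to encountered events
        "false_alarm_rate": sum(lures) / len(lures),    # "yes" to lures
    }
```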