Bilingual Disparities Review Meeting Notes

 Meeting minutes

Spreadsheet Key:

@Rylee Manning and @Maria Quinones Vieta to add spreadsheets and descriptions here

Reliability Ratings Sheet: https://utexas.box.com/s/ptixovlw80lp63kmi25cf6vqeeh6vo0a
Paper distribution sheet: https://utexas.box.com/s/k75reyq7bsxad71b4xtbiwq0xhv1inc1

 

Date

Attendees

Agenda

Notes, decisions and action items

Sep 6, 2024

@Stephanie Grasso @Maria Quinones Vieta @Rylee Manning

Manuscript:
Methods Section

  • Target journal: AJSLP

    • AJSLP journal guidelines are in Box folder

  • Organize the Methods section similar to the Scoping review and the ANCDS review

Criteria Section

  • Create a table with search terms (similar to the one in Scoping Review)

    • Use Excel to create the tables; use different tabs in the spreadsheet for each table

Data Items Section

  • Reference the ANCDS review and only briefly summarize the main items that were rated; mention that it is already published and open access. Then describe the items that were rated for our review in more detail.

 

 

Results Section

  • Report general features of the studies and our newly extracted info

 

 

Tasks

@Rylee Manning will finish double-checking her ratings for papers 146-149.
Once all data for the ratings are included in the spreadsheet, @Maria Quinones Vieta and @Rylee Manning can check final reliability scores for all reviewers
@Rylee Manning and @Maria Quinones Vieta will establish mutual consensus for papers where reviewers showed discrepancies for Explicit vs. Implied Criteria
Next week, @Stephanie Grasso @Maria Quinones Vieta @Rylee Manning will look at examples of papers with ambiguity regarding eligibility criteria (i.e. whether it was implied or explicit) and decide on tie-breakers for these instances
If we see a pattern in which differences in ratings between explicit and implied criteria cannot be clearly attributed to specific raters, we will need to re-review the explicit vs. implied criteria for all studies, since this is a central component of our study
We consider criteria explicit when the authors state that the features are part of their inclusion/exclusion criteria OR they discuss their inclusion/exclusion criteria immediately preceding or following the description of these features
e.g., Participants were monolingual English speakers. In addition, other inclusion criteria were…
e.g., Inclusion criteria included the absence of another neurological condition. Participants were also all monolingual English speakers.
Implied: All participants were monolingual, right-handed, and below 80 years of age.

Sep 13, 2024

@Rylee Manning @Maria Quinones Vieta @Stephanie Grasso

  • Rylee created a rough draft of some tables to include in the manuscript

    • Marifer to verify search terms, Dr. Grasso to review as well

  • Discrepancies & Reliability

  • Preliminary results to be included with ICAA poster

@Rylee Manning will work on Methods section of ICAA poster
  • Reference Abstract and Manuscript draft

  • Use KC & GY’s poster as a guide

@Maria Quinones Vieta will compile data from individual reviewers into one sheet and add a column for Rater Number
  • We identified the problem of redundancy from our initial consensus review

    • Compile reviews into one sheet; in that sheet, identify the double ratings.

    • Copy that sheet and delete the double ratings.

  • Columns BE - BL (highlighted in blue)

    • Rylee will review for redundancy

      • where the data in these columns is redundant, Rylee will edit, highlight and then copy-paste the new data into the Consensus sheet
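The compile-and-deduplicate workflow above was done by hand in Excel; as a point of reference, the same steps can be sketched in code. The data, column names, and use of pandas below are all illustrative assumptions, not the team's actual procedure:

```python
# Illustrative sketch of the compile-and-deduplicate steps described above.
# The data and column names are invented stand-ins for the reviewer sheets.
import pandas as pd

# Stand-ins for the individual reviewer sheets
sheets = {
    1: pd.DataFrame({"Paper": [101, 102, 103], "Rating": ["Y", "N", "Y"]}),
    2: pd.DataFrame({"Paper": [103, 104], "Rating": ["Y", "N"]}),
}

# Step 1: compile reviews into one sheet, adding a Rater Number column
combined = pd.concat(
    [df.assign(Rater=rater) for rater, df in sheets.items()],
    ignore_index=True,
)

# Step 2: copy that sheet and delete the double ratings
# (here, keeping the first rating recorded for each paper)
no_doubles = combined.drop_duplicates(subset="Paper", keep="first")
```

Paper 103 appears twice in `combined` (a double rating) and once in `no_doubles`.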

  • New papers to rate (28)

    • Initially we had considered all of the studies, including Ana Volkmer's, as ANCDS because they were in the master table. Then the newly found studies were assigned to Lucas with the video instructions, and Ana's studies were later identified as not part of the initial review. Papers that did not qualify based on our criteria were deleted.

      Lucas will rate 105, 108, 111, 112, 113, 114, 115, 116, 117, 118, 119
      @Rylee Manning will rate 121, 122, 123, 124, 125, 128, 129, 130, 131, 132, 133
      @Maria Quinones Vieta will rate 135, 136, 137, 138, 139, 140, 141, 142
      @Rylee Manning took over MQ’s papers (135, 136, 137, 138, 139, 140, 141, 142)
  • Get in touch with Lucas to inform him about the redundancy pattern and make sure he will enter data in the correct way

Sep 16, 2024

@Rylee Manning @Stephanie Grasso

Preparing the spreadsheet without Double Ratings for data analysis (to present preliminary findings at ICAA)

  • Dr. Grasso edited the No Double Ratings spreadsheet

  • @Rylee Manning will delete red papers (i.e., excluded papers indicated in red in the Master Table sheet) from the spreadsheet without double ratings

    • then, Rylee will insert data for the following variables in the papers that do not already have it in the sheet:

      • Number of Participants

      • Age

      • Years Post-Onset

Sep 20, 2024

@Rylee Manning @Maria Quinones Vieta

ICAA Poster

  • Dr. Grasso created the figures for Number_Participants, Languages_Spoken, and figure by Country

    • figure for Race_Ethnicity to be ready on Sunday 9/22

  • Rylee updated Languages_Spoken to indicate the number of studies (n=149)

    • also inserted white text box to cover “final adjustments” in figure by Country

  • Rylee and Marifer made additional poster edits

    • formatting and captions

    • Rylee will discuss with Dr. Grasso on Monday 9/23 before sending the poster to the printer

Oct 4, 2024

 

  1. Rylee’s reviews are nearly complete; Lucas is making progress; Marifer is just getting started

  2. Rylee worked on methods

    1. Notes on introduction as well

  3. Rylee will be working on redundancy and we will touch base Oct 11, 2024 on progress

  4. Inter-rater reliability: Marifer will calculate this between raters when the final ratings are completed, which should account for redundancies being removed; in other words, we want each rater’s “final ratings” to be free of redundancies prior to conducting the IRR.

    1. Copy and paste columns we identified as having redundancies and then Marifer will recalculate (in columns BE - BL)

    2. After we work through IRR, we establish final consensus, and use those consensus ratings to replace the final ratings used for the data reported in the paper

      1. Final datasheet will be the No Double Ratings sheet used from the poster BUT we are re-creating it to have all the ratings and changes to ratings made during the consensus process (so we will delete the old No Double Ratings spreadsheet, in order to replace it with the updated version from the steps outlined above)
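For reference, the inter-rater reliability described above amounts to simple percent agreement computed after redundancies are removed. A minimal sketch; the function name and ratings below are made up, and only the cell-by-cell comparison idea comes from these notes:

```python
# Illustrative percent-agreement calculation between a pair of raters.
# The ratings are invented examples, not study data.

def percent_agreement(ratings_a, ratings_b):
    """Percentage of items on which two raters gave the same rating."""
    if len(ratings_a) != len(ratings_b):
        raise ValueError("raters must rate the same set of items")
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return 100 * matches / len(ratings_a)

# Hypothetical ratings for one variable (e.g., Explicit vs. Implied criteria)
rater_a = ["Explicit", "Implied", "Explicit", "Implied", "Explicit"]
rater_b = ["Explicit", "Implied", "Implied", "Implied", "Explicit"]

score = percent_agreement(rater_a, rater_b)  # 80.0
```

Pairs scoring below the agreed reliability threshold would then be flagged for consensus review.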

  1. Rylee has copy/pasted the corrected redundancies from the Double Ratings sheet into the Consensus spreadsheet.
    *Note: Columns BE - BL in the Double Ratings sheet correspond to columns BD - BK in the Consensus sheet.
  2. This has been done for the following papers.
  • Beales et al. 2021
    Making the right connections…

    • Reviewers 1 & 2

  • Cartwright et al. 2009
    Promoting strategic television…

    • Reviewers 1 & 2

  • Cotelli et al. 2014
    Treatment of primary progressive…

    • Reviewers 3 & 4

  • Cotelli et al. 2016
    Grey Matter Density…

    • Reviewers 3 & 4

  • de Aguiar et al. 2020
    Cognitive and language…

    • Reviewers 5 & 6

  • de Aguiar et al. 2020
    Brain volumes as…

    • Reviewers 5 & 6

  • Dial et al. 2019
    Investigating the utility…

    • Reviewers 10 & 4

  • Farrajota et al. 2012
    Speech Therapy in…

    • Reviewers 10 & 4

    • Could not find Reviewer 10’s data in their individual sheet, so I left the original data in the Consensus spreadsheet but pasted in the corrected redundancies of Reviewer 4 for the relevant columns

  • Fenner et al. 2019
    Written Verb Naming…

    • Reviewers 7 & 8

  • Ficek et al. 2019
    The effect of tDCS…

    • Reviewers 7 & 8

  • Flurie et al. 2020
    Evaluating a maintenance-based…

    • Reviewers 1 & 2

  • Harris et al. 2019
    Reductions in GABA…

    • Reviewers 1 & 2

  • Henry et al. 2018
    Retraining speech production…

    • Reviewers 1 & 2

  • Themistocleous et al. 2021
    Effects of tDCS…

    • Reviewers 2 & 3

  • Tsapkini et al. 2018
    Electrical Brain Stimulation…

    • Reviewers 2 & 3

  • Zhao et al. 2021
    White matter integrity…

    • Reviewers 2 & 3

  • Croot et al. 2019
    Lexical Retrieval Treatment…

    • Reviewers 3 & 4

  • de Aguiar et al. 2021
    Treating Lexical Retrieval…

    • Reviewers 3 & 4

  • Dewar et al. 2009
    Reacquisition of person-know…

    • Reviewers 3 & 4

  • Heredia et al. 2009
    Relearning and retention…

    • Reviewers 4 & 5

  • Hoffman et al. 2015
    Vocabulary Relearning in…

    • Reviewers 4 & 5

  • Jafari et al. 2018
    The Effect of…

    • Reviewers 4 & 5

  • Mahendra et al. 2020
    Nonfluent Primary Progressive…

    • Reviewers 5 & 6

  • Marcotte et al. 2010
    The neural correlates…

    • Reviewers 5 & 6

  • Mayberry et al. 2011
    An emergent effect…

    • Reviewers 5 & 6

  • Rebstock et al. 2020
    Effects of a Combined…

    • Reviewers 6 & 7

  • Reilly et al. 2016
    How to Constrain…

    • Reviewers 6 & 7

  • Robinson et al. 2009
    The Treatment of Object…

    • Reviewers 6 & 7

  • Suarez-Gonzalez et al. 2018
    Successful short-term…

    • Reviewers 7 & 8

  • Taylor-Rubin et al. 2021
    Exploring the effects…

    • Reviewers 7 & 8

  • Thompson & Shapiro, 1994
    A linguistic-specific…

    • Reviewers 7 & 8

  • Nissim et al. 2022
    Through Thick and Thin…

    • Reviewers 4 & 10

  • Richardson et al. 2022
    Feasibility of Remotely…

    • Reviewers 4 & 10

  • Lerman et al. 2023
    Preserving Lexical Retrieval…

    • Reviewers 4 & 10

  • McConathey et al. 2017
    Baseline Performance Predicts…

    • Reviewers 10 & 1

  • Nickels et al. 2023
    Positive changes to written…

    • Reviewers 11 & 4

  • Meyers et al. 2024
    Baseline Conceptual-Semantic…

    • Reviewers 11 & 4

  • Jokel et al. 2002
    Therapy for anomia…

    • Reviewers 11 & 4

  • Savage et al. 2023
    No negative impact…

    • Reviewers 11 & 4

Nov 22, 2024

 

  • Yesterday I was double-checking that all of the corrected redundancies were copy/pasted in properly from the Double Ratings sheet into the Consensus sheet. I noticed that some of the paper ratings for some of the reviewers (3, 5 & 10) had not been correctly pasted into the Double Ratings spreadsheet. It looks like some of the rows/columns that had been hidden in individual sheets did not get copied over into the Double Ratings sheet.

  • I have gone back to ensure that all papers required for Consensus have been checked/corrected for redundancies and that these corrected data have been pasted into the Consensus sheet.

  • Papers that were not in the double ratings sheet have been pasted in only if they were included for Consensus.
    These updated corrections are in the Double Ratings spreadsheet in bold text.

  • The Consensus spreadsheet should now be up to date with the redundancies removed. We can continue with Consensus / inter-rater reliability as planned.

 

Dec 4, 2024 and Dec 11, 2024

 

Dec 4, 2024

  1. Marifer completed consensus ratings; see the table below

  2. Rylee recalls that we set an 80% threshold for reliability, but we decided to also review the accuracy of specific rating pairs on the lower end (70s to low 80s) to ensure accuracy for implied vs. explicit ratings

Dec 11, 2024

  1. Reviewed progress on prior action items and documented decisions.

  2. Next steps broadly are as follows:

    1. Re-do IRR

    2. Ensure data is organized for analysis

      1. Finalize spreadsheets that are being stored as original/master sheets for the study

        1. Spreadsheet WITH double-ratings

          1. This includes the final double-ratings with any edits required from the consensus sheet, but does not include the final consensus ratings that were established between the reviewer pairs

        2. Consensus Sheet

          1. Final consensus ratings

          2. Row that contains consensus ratings for each pair

        3. NEED TO CREATE: Spreadsheet without double-ratings, that replaces the double-rated consensus-reviewed papers, with the final consensus row from the consensus ratings spreadsheet

 

Reliability between raters 5 and 6 is low. Next steps are as follows:

  1. Only a couple of cells were incorrect in the ratings, and these were corrected by Rylee. Nothing significant in terms of correction.
  2. In the Consensus and Double Ratings spreadsheets, Rylee switched columns BD-BK for Reviewer 5’s ratings of the two papers by de Aguiar et al., 2020, where info had been transposed. For Reviewer 5’s ratings of these papers, only columns BD-BK were updated because the other columns were correctly aligned with the data reported in each paper.
  3. We decided it would be easiest to update and highlight the updates simultaneously in the Double Ratings sheet and the Consensus sheet, so that consensus is updated at the same time and we no longer need to wait for it to be re-done.
  4. Rylee reviewed reviewer pairs with agreement scores below 85%. For these rater pairs with low agreement, she examined the papers used for Consensus and identified/updated any incorrect data. Updates are highlighted in orange in both the Consensus and Double Ratings sheets. She also highlighted places where IRR was marked incorrectly (e.g., “United Kingdom” vs. “England” marked as disagreement).

Dec 11, 2024

  1. Rylee updated IRR for all data on Dec 11, 2024. @Maria Quinones Vieta will double-check this on Dec 13, 2024 to ensure that no mistakes were made.

 

  1. I (Rylee) created a new sheet of No Double Ratings. I noticed 16 papers had two reviewers but no Consensus had been established. I can either establish consensus for these or choose to use only one of the reviewers' ratings. Waiting to hear from Dr. Grasso as to which of these options is best at this point

 
Dec 12, 2024

 

RM meets with Dr. Grasso

Rylee met with Dr. Grasso to discuss IRR pairs and why some ratings overlap but were not included in Consensus.

  • We remembered that Julia initially rated the incorrect set of papers, which explains the overlap between Reviewer 2 (Julia) and Reviewers 1, 3, & 4

  • Rater 9’s reviews had been excluded and their papers were redistributed. This explains why there is overlap of papers 111, 153 and 155.

    • where there is overlap with Reviewer 4 (Rylee), use Reviewer 4 ratings

We identified some points that will need to be confirmed with MFQ:

  • We have questions about consensus, including decisions we made about the # of papers per reviewer pair. I think we decided 3 were needed, but I want to confirm with you, so please look at your notes and the spreadsheet you used to determine double-rating assignments

  • We see there are some papers (three) that were double-rated between rater 4 (Rylee) and either 8 (Jessica, n=1) or 11 (Lucas, n=2). We think these were redistributed after removing rater 9’s ratings, but since these did not go through consensus, we are confused about the decision-making there (which the double-rating assignments will help us to understand)

  • Can you confirm that rater 10 and rater 1 currently have only one double-rating because we had to remove rater 9? We want to be sure this is the case.

  • We also see that at present some pairs are missing from reliability, including 4 & 8 and 8 & 10. Didn’t we need to adjust this prior to removing reviewer 9 so that all reviewers have pairwise comparisons, or do you have notes indicating we made a different decision?

  • I also think that for Lucas (rater 11), the 15 papers he has were those that needed to be FULLY rated (all cells in the table, not just for our study), and so we only needed the pairwise comparison between him and Rylee to ensure that we felt his ratings were stable. Is this correct also?

MFQ response:

  1. All of these papers (16, 18, 19, 48, 49, 50, 52, 57, 58, 60, 62, 63, 66, 111, 153, 155) were double rated by Julia by mistake. Yes, we decided on 3 papers per reviewer pair.
  2. Papers 111, 153 and 155 were indeed redistributed after removing rater 9’s ratings, so raters 4 and 8 (Rylee and Jessica) do not have a pairwise comparison.
  3. For papers 153 and 155, we should use Rylee’s reviews. They were partially double rated by Rylee to check some inconsistencies I was noticing in Lucas’s reviews. She did not start from scratch but went in and made edits to Lucas’s ratings, so hers are the most accurate.
  4. I can confirm that raters 10 and 1 only have one pairwise comparison, but it is because the papers double rated did not qualify for our study; we went in and realized that papers from Ana V's studies were not included in the original ANCDS review.
  5. I don’t have any notes on adjusting the missing reliability pairs. Rylee and I discussed that we can either include the missing pairs in our consensus sheet or trust the reviewers’ rating skills (Rylee, Jessica and myself). We can chat about what may be best during our meeting later!
  6. Yes, that’s correct; the 15 papers he has were those that needed to be fully rated. Rylee double rated 146-149; I don’t have any notes on why we decided to have her double rate four instead of three. I assume that because it was a larger set of data, we wanted an extra rating.
  • Regarding Consensus, we were internally concerned about a number of rater pairs who scored below 85%; however, this did not play a role in the Consensus methodology.

  • On 12/13/2024, Rylee updated the percentage reported for papers included in Consensus and updated the total agreement percentage across raters' reliability.

  1. Next steps due to the snag we identified with missing data for the old/ANCDS variables (both for papers included in the ANCDS review and for those added after it)
  1. Identify which papers are missing data, which rater would have completed that data, and for which variables

    1. Keeping in mind that ANCDS papers have their FULL review of OLD/ORIGINAL variables here: https://utexas.app.box.com/integrations/officeonline/openOfficeOnline?fileId=1304243663973&sharedAccessCode=

      1. Studies 1-103 were included in ANCDS review

    2. Some papers were discovered not to have the OLD variables entered / were not included in the original ANCDS review. This was discovered after the new variables had been rated by one rater, so a second rater had to review and enter data for the old/ANCDS variables. These are documented below:

      1. *Look at Marifer’s response re: which papers we went with Rylee’s ratings over Lucas’s ratings

        1. Study #, Rater of New, Rater of Old


 

 

Dec 19, 2024

Rylee to supplement data for old variables based on which reviewer completed them

Any data for old variables that were missing from the following papers were completed by:

105 - 119: Lucas

120-142: Rylee

146-149: Lucas (both Rylee & Lucas completed full ratings but Lucas’s are included in CAC datasheet)

150-162: Lucas

158: Wenfu

163: Sonia

  1. Rylee identified which reviewers rated old variables for all the new papers (#104-163).
  2. Rylee copy/pasted old variables into the CAC sheet of Double Ratings.
  3. After this, Rylee created a new CAC sheet with No Double Ratings. This sheet includes data for all variables for all papers.
  4. Rylee then made a copy of the sheet with No Double Ratings and renamed it to indicate that we will use this sheet for Analysis.
  1. Conventions for frequently reported Country Where Study Conducted:
    - “United States of America”
    - “United Kingdom”
    - “Canada”
  2. Conventions for frequently reported Race of Participants:
    - “White”
    - “Asian”
    - “Mixed”
  3. Conventions for frequently reported Ethnicity of Participants:
    - “Hispanic or Latino”
    - “Not Hispanic or Latino”
  4. Explicitly Stated: Languages Spoken:
    If multiple languages were spoken across participants, data is reported by participant (e.g., “P1: English, P2: French, P3: English, etc.”)
  5. Conventions for Languages:
    - Mandarin → “Chinese” (Is this OK? Check with Dr. Grasso)
    - N/A → “NA”
    - English and another language → “English, Other”
  6. *Note: If more than one item is included per variable, the variables are separated by a comma (e.g. for Country: “France, Spain”)
  7. Conventions for Implied Inclusionary/Exclusionary Criteria: each item is enumerated
    (e.g., “Inclusionary: 1. ___ , 2. ___ , 3. ___ ; Exclusionary: 1. ___, 2. ___, 3. ___”)
    Or listed individually if not reported or not applicable
    (e.g., ”Inclusionary: NA; Exclusionary: NA”)
  8. Conventions for Implied: Do Inclusionary/Exclusionary Criteria Require the Patient Speak a Specific Language:
    Only 4 options: “Y / N / NR / NA”
  9. Conventions for Implied: If Yes, Which Language?
    - ASL → “American Sign Language”
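The conventions above can also be applied programmatically before comparing raters, so that equivalent labels (e.g., “England” vs. “United Kingdom”) are not scored as disagreements. A minimal sketch; the mapping and function are illustrative and drawn only from examples in these notes:

```python
# Illustrative normalization of raw cell values to the agreed conventions.
# The mapping is an assumption built from examples in these notes.
CONVENTIONS = {
    "england": "United Kingdom",
    "uk": "United Kingdom",
    "usa": "United States of America",
    "mandarin": "Chinese",  # pending confirmation with Dr. Grasso
    "n/a": "NA",
    "asl": "American Sign Language",
}

def normalize(value):
    """Map a raw value to its conventional form, leaving unknown values as-is."""
    key = value.strip().lower()
    return CONVENTIONS.get(key, value.strip())
```

For example, `normalize("England")` yields “United Kingdom”, while values without a convention, like “France”, pass through unchanged.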

 
