
📝 Meeting minutes

Spreadsheet Key:

Rylee Manning and Maria Quinones Vieta to add spreadsheets and descriptions here

Reliability Ratings Sheet: https://utexas.box.com/s/ptixovlw80lp63kmi25cf6vqeeh6vo0a

Date

Attendees

Agenda

Notes, decisions and action items

Stephanie Grasso, Maria Quinones Vieta, Rylee Manning

Manuscript:
Methods Section

  • Target journal: AJSLP

    • AJSLP journal guidelines are in Box folder

  • Organize the Methods section similar to the Scoping review and the ANCDS review

Criteria Section

  • Create a table with search terms (similar to the one in Scoping Review)

    • Use Excel to create the tables; use different tabs in the spreadsheet for each table

Data Items Section

  • Reference the ANCDS review and only briefly summarize the main items that were rated. Mention that it is already published and open access. Then, describe the items that were rated for our review in more detail.

Results Section

  • Report general features of the studies and our newly extracted info

Tasks

  • Rylee Manning will finish double-checking her ratings for papers 146-149.
  • Once all data for the ratings are included in the spreadsheet, Maria Quinones Vieta and Rylee Manning can check final reliability scores for all reviewers
  • Rylee Manning and Maria Quinones Vieta will establish mutual consensus for papers where reviewers showed discrepancies for Explicit vs. Implied Criteria
  • Next week, Stephanie Grasso, Maria Quinones Vieta, and Rylee Manning will look at examples of papers with ambiguity regarding eligibility criteria (i.e., whether it was implied or explicit) and decide on tie-breakers for these instances
    • If we see a pattern wherein differences in ratings between explicit and implied criteria cannot be clearly attributed to specific raters, we will need to re-review the explicit vs. implied criteria for all studies, since this is a central component of our study
    • Examples of what we consider explicit: the authors state that the features are part of their inclusion/exclusion criteria, OR they discuss their inclusion/exclusion criteria immediately preceding or following the description of these features
      • e.g., Participants were monolingual English speakers. In addition, other inclusion criteria were…
      • e.g., Inclusion criteria included the absence of another neurological condition. Participants were also all monolingual English speakers.
      • Implied: All participants were monolingual, right-handed, and below 80 years of age.

Rylee Manning Maria Quinones Vieta Stephanie Grasso

  • Rylee created a rough draft of some tables to include in the manuscript

    • Marifer to verify search terms, Dr. Grasso to review as well

  • Discrepancies & Reliability

  • Preliminary results to be included with ICAA poster

  • Reference Abstract and Manuscript draft

  • Use KC & GY’s poster as a guide

  • Maria Quinones Vieta will compile data from individual reviewers into one sheet and add a column for Rater Number
  • We identified the problem of redundancy from our initial consensus review

    • Compile reviews into one sheet; in that sheet, identify the double ratings and delete the redundancy.

    • Copy that sheet and delete the double ratings.

  • Columns BE - BL (highlighted in blue)

    • Rylee will review for redundancy

      • Where the data in these columns are redundant, Rylee will edit, highlight, and then copy-paste the new data into the Consensus sheet

  • New papers to rate (28)

    • Initially, we had considered all of the studies, including Ana Volkmer's, as part of the ANCDS review because they were in the master table; the newly found studies were then assigned to Lucas along with the video instructions. Ana's studies were later identified as not part of the initial review, and papers that did not qualify based on our criteria were deleted.

      • Lucas will rate 105, 108, 111, 112, 113, 114, 115, 116, 117, 118, 119
      • Rylee Manning will rate 121, 122, 123, 124, 125, 128, 129, 130, 131, 132, 133
      • Maria Quinones Vieta will rate 135, 136, 137, 138, 139, 140, 141, 142
  • Get in touch with Lucas to inform him about the redundancy pattern and make sure he enters data correctly
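The compile-and-deduplicate steps above could be sketched in Python; this is an illustrative sketch only, assuming each reviewer's sheet is read in as a list of row dictionaries, and the column names (paper_id, rater, rating) are hypothetical, not the real spreadsheet headers:

```python
# Illustrative sketch of compiling reviewer sheets into one and removing
# redundant double entries (the same rater rating the same paper twice).
# Column names here are assumptions, not the actual spreadsheet headers.

def compile_ratings(sheets):
    """Merge per-reviewer rating rows into one list, keeping only the
    first rating each rater gave a paper; later duplicates are dropped."""
    seen = set()
    combined = []
    for sheet in sheets:  # one list of row-dicts per reviewer
        for row in sheet:
            key = (row["paper_id"], row["rater"])
            if key in seen:  # redundant double entry -> skip
                continue
            seen.add(key)
            combined.append(row)
    return combined

# Example: rater 1 entered paper 105 twice; only the first copy is kept,
# while rater 2's independent rating of the same paper is retained.
sheet_a = [{"paper_id": 105, "rater": 1, "rating": "explicit"},
           {"paper_id": 105, "rater": 1, "rating": "explicit"}]
sheet_b = [{"paper_id": 105, "rater": 2, "rating": "implied"}]
merged = compile_ratings([sheet_a, sheet_b])
print(len(merged))  # 2
```

Note this drops only exact rater-and-paper duplicates; the two independent ratings per paper needed for consensus are preserved.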

Rylee Manning, Stephanie Grasso

Preparing the spreadsheet without Double Ratings for data analysis (to present preliminary findings at ICAA)

  • Dr. Grasso edited the No Double Ratings spreadsheet

  • Rylee Manning will delete red papers (i.e., excluded papers indicated in red in the Master Table sheet) from the spreadsheet without double ratings

    • Then, Rylee will insert data for the following variables in the papers that do not already have it in the sheet:

      • Number of Participants

      • Age

      • Years Post-Onset

Rylee Manning, Maria Quinones Vieta

ICAA Poster

  • Dr. Grasso created the figures for Number_Participants, Languages_Spoken, and figure by Country

    • figure for Race_Ethnicity to be ready on Sunday 9/22

  • Rylee updated Languages_Spoken to indicate the number of studies (n=149)

    • also inserted white text box to cover “final adjustments” in figure by Country

  • Rylee and Marifer made additional poster edits

    • formatting and captions

    • Rylee will discuss with Dr. Grasso on Monday 9/23 before sending the poster to the printer

  1. Rylee’s reviews are nearly complete; Lucas is making progress; Marifer is just getting started

  2. Rylee worked on methods

    1. Notes on introduction as well

  3. Rylee will be working on redundancy and we will touch base on progress

  4. Inter-rater reliability: Marifer will calculate this between raters once the final ratings are completed. This should account for redundancies being removed; in other words, we want each rater's "final ratings" to be free of redundancies prior to conducting the IRR.

    1. Copy and paste columns we identified as having redundancies and then Marifer will recalculate (in columns BE - BL)

    2. After we work through IRR, we will establish final consensus and use those consensus ratings to replace the final ratings used for the data reported in the paper

      1. The final datasheet will be the No Double Ratings sheet used for the poster, BUT we are re-creating it to have all the ratings and changes to ratings made during the consensus process (so we will delete the old No Double Ratings spreadsheet and replace it with the updated version from the steps outlined above)
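As a point of reference for the IRR step, a minimal sketch of simple percent agreement between two raters follows. The minutes do not specify the exact IRR method beyond the 80% reliability threshold, so percent agreement is an assumption here, and the two example rating lists are hypothetical:

```python
# Illustrative sketch: percent agreement between two raters, computed
# after redundant double entries have been removed. The rating values
# and the 80% threshold check are examples, not real study data.

def percent_agreement(ratings_a, ratings_b):
    """Share of aligned items on which two raters gave the same value,
    as a percentage (0-100)."""
    if len(ratings_a) != len(ratings_b):
        raise ValueError("rater lists must align item-for-item")
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return 100.0 * matches / len(ratings_a)

# Hypothetical implied/explicit ratings for one pair of raters:
rater_5 = ["explicit", "implied", "NA", "explicit", "implied"]
rater_6 = ["explicit", "explicit", "NA", "explicit", "implied"]
score = percent_agreement(rater_5, rater_6)
print(score)          # 80.0
print(score >= 80.0)  # True -> meets the reliability threshold
```

Pairs that land in the 70s to low 80s would then be flagged for the accuracy review described in these minutes.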

  • Rylee has copy/pasted the corrected redundancies from the Double Ratings sheet into the Consensus spreadsheet.
    *Note: Columns BE - BL in the Double Ratings sheet correspond to columns BD - BK in the Consensus sheet.
  • This has been done for the following papers.
  • Beales et al. 2021
    Making the right connections…

    • Reviewers 1 & 2

  • Cartwright et al. 2009
    Promoting strategic television…

    • Reviewers 1 & 2

  • Cotelli et al. 2014
    Treatment of primary progressive…

    • Reviewers 3 & 4

  • Cotelli et al. 2016
    Grey Matter Density…

    • Reviewers 3 & 4

  • de Aguiar et al. 2020
    Cognitive and language…

    • Reviewers 5 & 6

  • de Aguiar et al. 2020
    Brain volumes as…

    • Reviewers 5 & 6

  • Dial et al. 2019
    Investigating the utility…

    • Reviewers 10 & 4

  • Farrajota et al. 2012
    Speech Therapy in…

    • Reviewers 10 & 4

    • Could not find Reviewer 10’s data in their individual sheet, so I left the original data in the Consensus spreadsheet but pasted in the corrected redundancies of Reviewer 4 for the relevant columns

  • Fenner et al. 2019
    Written Verb Naming…

    • Reviewers 7 & 8

  • Ficek et al. 2019
    The effect of tDCS…

    • Reviewers 7 & 8

  • Flurie et al. 2020
    Evaluating a maintenance-based…

    • Reviewers 1 & 2

  • Harris et al. 2019
    Reductions in GABA…

    • Reviewers 1 & 2

  • Henry et al. 2018
    Retraining speech production…

    • Reviewers 1 & 2

  • Themistocleous et al. 2021
    Effects of tDCS…

    • Reviewers 2 & 3

  • Tsapkini et al. 2018
    Electrical Brain Stimulation…

    • Reviewers 2 & 3

  • Zhao et al. 2021
    White matter integrity…

    • Reviewers 2 & 3

  • Croot et al. 2019
    Lexical Retrieval Treatment…

    • Reviewers 3 & 4

  • de Aguiar et al. 2021
    Treating Lexical Retrieval…

    • Reviewers 3 & 4

  • Dewar et al. 2009
    Reacquisition of person-know…

    • Reviewers 3 & 4

  • Heredia et al. 2009
    Relearning and retention…

    • Reviewers 4 & 5

  • Hoffman et al. 2015
    Vocabulary Relearning in…

    • Reviewers 4 & 5

  • Jafari et al. 2018
    The Effect of…

    • Reviewers 4 & 5

  • Mahendra et al. 2020
    Nonfluent Primary Progressive…

    • Reviewers 5 & 6

  • Marcotte et al. 2010
    The neural correlates…

    • Reviewers 5 & 6

  • Mayberry et al. 2011
    An emergent effect…

    • Reviewers 5 & 6

  • Rebstock et al. 2020
    Effects of a Combined…

    • Reviewers 6 & 7

  • Reilly et al. 2016
    How to Constrain…

    • Reviewers 6 & 7

  • Robinson et al. 2009
    The Treatment of Object…

    • Reviewers 6 & 7

  • Suarez-Gonzalez et al. 2018
    Successful short-term…

    • Reviewers 7 & 8

  • Taylor-Rubin et al. 2021
    Exploring the effects…

    • Reviewers 7 & 8

  • Thompson & Shapiro, 1994
    A linguistic-specific…

    • Reviewers 7 & 8

  • Nissim et al. 2022
    Through Thick and Thin…

    • Reviewers 4 & 10

  • Richardson et al. 2022
    Feasibility of Remotely…

    • Reviewers 4 & 10

  • Lerman et al. 2023
    Preserving Lexical Retrieval…

    • Reviewers 4 & 10

  • McConathey et al. 2017
    Baseline Performance Predicts…

    • Reviewers 10 & 1

  • Nickels et al. 2023
    Positive changes to written…

    • Reviewers 11 & 4

  • Meyers et al. 2024
    Baseline Conceptual-Semantic…

    • Reviewers 11 & 4

  • Jokel et al. 2002
    Therapy for anomia…

    • Reviewers 11 & 4

  • Savage et al. 2023
    No negative impact…

    • Reviewers 11 & 4

  • Yesterday I was double-checking that all of the corrected redundancies were copy/pasted in properly from the Double Ratings sheet into the Consensus sheet. I noticed that some of the paper ratings for some of the reviewers (3, 5 & 10) had not been correctly pasted into the Double Ratings spreadsheet. It looks like some of the rows/columns that had been hidden in individual sheets did not get copied over into the Double Ratings sheet.

  • I have gone back to ensure that all papers required for Consensus have been checked/corrected for redundancies and that these corrected data have been pasted into the Consensus sheet.

  • Papers that were not in the double ratings sheet have been pasted in only if they were included for Consensus.
    These updated corrections are in the Double Ratings spreadsheet in bold text.

  • The Consensus spreadsheet should now be up to date with the redundancies removed. We can continue with Consensus / inter-rater reliability as planned.


  1. Marifer completed consensus ratings; see table below

  2. Rylee recalls that we set an 80% threshold for reliability, but we decided to also review the accuracy of specific rating pairs on the lower end (70s to low 80s) to ensure accuracy for implied vs. explicit ratings

  1. Reviewed progress on prior action items and documented decisions.

  2. Next steps broadly are as follows:

    1. Re-do IRR

    2. Ensure data is organized for analysis

      1. Finalize spreadsheets that are being stored as original/master sheets for the study

        1. Spreadsheet WITH double-ratings

          1. This includes the final double ratings with any edits required from the consensus sheet, but does not include the final consensus ratings themselves

        2. Consensus Sheet

          1. Final consensus ratings

          2. Row that contains consensus ratings for each pair

        3. NEED TO CREATE: a spreadsheet without double ratings that replaces the double-rated, consensus-reviewed papers with the final consensus row from the consensus ratings spreadsheet

Reliability is low between raters 5 and 6. Next steps are as follows:

  • Rylee Manning to correct an error where the wrong paper was used for reliability (same author/year, but wrong paper included). Once this is corrected, reliability needs to be recalculated between raters 5 and 6
  • Only a couple of cells were incorrect in the ratings and these were corrected by Rylee. Nothing significant in terms of correction
  • Rylee Manning to check all of the implied/explicit ratings of Rater 5, as there is one clear case where the rater put NA but the languages were clearly stated. Any such issue will be overridden by Rylee with the correct rating, but she will note which papers required updating
  • In the consensus and double ratings spreadsheets, Rylee switched columns BD-BK for Reviewer 5’s ratings of the two papers by de Aguiar et al., 2020, where the info had been transposed. For Reviewer 5’s ratings of these papers, only columns BD-BK were updated, because the other columns appeared to align correctly with the data reported in each paper
  • Rylee Manning to also check pairs 6 & 7, 1 & 2, and 4 & 5, since these are on the lower end, specifically looking at any issues with implied vs. explicit categories. Take note of which papers and raters are problematic
    • Establish a highlighting key to indicate whether the first rater was correct, the second rater was correct, or neither; this way we can track the changes for our discussion
  • We decided it would be easiest to update and highlight the updates simultaneously in the Double Ratings sheet and the consensus sheet so that consensus can be updated simultaneously and we no longer need to wait for that to be re-done.
  • Rylee reviewed reviewer pairs with agreement scores below 85%. For other rater pairs with low agreement scores, she examined the papers used for Consensus and identified/updated any incorrect data. Updates are highlighted in orange in both the Consensus and Double Ratings sheets. She also highlighted places where IRR was marked incorrectly (e.g., United Kingdom vs. England marked as disagreement)

  • Rylee will check IRR for all data as a double-check and update by

  • Rylee Manning to attempt to make the No Double Ratings sheet based on the graph we drafted together by
    • IRR first.
      • Start by making a copy of the Double Ratings sheet; THEN, for papers included in Consensus, insert the Agreement row for each paper and delete the double-ratings data for those papers → we should end up with the Agreement rating for each of those papers, and for papers not included in Consensus, the rest of that data
      • There should be one rating per paper at the end: 149 ratings.
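The construction just described could be sketched as follows; the data shapes, paper IDs, and row contents are illustrative assumptions, not the real spreadsheet layout:

```python
# Illustrative sketch of building the No Double Ratings sheet: start from
# the Double Ratings data and, for papers that went through Consensus,
# replace the pair of ratings with the single Agreement row.

def build_no_double_ratings(double_ratings, consensus_rows):
    """double_ratings: {paper_id: [rating_row, ...]} (one or two per paper).
    consensus_rows: {paper_id: agreement_row} for consensus-reviewed papers.
    Returns exactly one rating row per paper."""
    final = {}
    for paper_id, rows in double_ratings.items():
        if paper_id in consensus_rows:
            # Consensus-reviewed: the Agreement row replaces both ratings.
            final[paper_id] = consensus_rows[paper_id]
        else:
            # Not in Consensus: keep the existing (single) rating as-is.
            final[paper_id] = rows[0]
    return final

# Hypothetical example: paper 101 was double-rated and went to Consensus;
# paper 102 was rated once and stays unchanged.
doubles = {101: ["rating_r1", "rating_r2"], 102: ["rating_r3"]}
consensus = {101: "agreement_101"}
result = build_no_double_ratings(doubles, consensus)
print(len(result))   # 2  -> one rating per paper
print(result[101])   # agreement_101
```

With the full dataset, the same invariant would hold: one rating per paper, 149 in total.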

Sign up for the ISTAART program; look into this.
