Organize the Methods section similar to the Scoping review and the ANCDS review
Criteria Section
Create a table with search terms (similar to the one in Scoping Review)
Use Excel to create the tables; use different tabs in the spreadsheet for each table
Data Items Section
Reference the ANCDS review and only briefly summarize the main items that were rated; mention that it is already published and open access. Then describe the items that were rated for our review in more detail.
Results Section
Report general features of the studies and our newly extracted info
Tasks
Rylee Manning will finish double-checking her ratings for papers 146-149.
Once all data for the ratings are included in the spreadsheet, Maria Quinones Vieta and Rylee Manning can check final reliability scores for all reviewers
Rylee Manning and Maria Quinones Vieta will establish mutual consensus for papers where reviewers showed discrepancy for Explicit vs. Implied Criteria
Next week, Stephanie Grasso, Maria Quinones Vieta, and Rylee Manning will look at examples of papers with ambiguity regarding eligibility criteria (i.e., whether it was implied or explicit) and decide on tie-breakers for these instances
If we see a pattern wherein differences in ratings between explicit and implied criteria cannot be clearly attributed to specific raters, we will need to re-review the explicit vs. implied criteria for all studies, since this is a central component of our study
We consider criteria explicit if the authors state that the features are part of their inclusion/exclusion criteria OR if they discuss their inclusion/exclusion criteria immediately preceding or following the description of these features
e.g., Participants were monolingual English speakers. In addition, other inclusion criteria were…
e.g., Inclusion criteria included the absence of another neurological condition. Participants were also all monolingual English speakers.
Implied: All participants were monolingual, right-handed, and below 80 years of age.
Rylee created a rough draft of some tables to include in the manuscript
Marifer to verify search terms, Dr. Grasso to review as well
Discrepancies & Reliability
Preliminary results to be included with ICAA poster
Rylee Manning will work on Methods section of ICAA poster
Reference Abstract and Manuscript draft
Use KC & GY’s poster as a guide
Maria Quinones Vieta will compile data from individual reviewers into one sheet and add a column for Rater Number
We identified the problem of redundancy from our initial consensus review
Compile reviews into one sheet. In that sheet, identify the double ratings and delete the redundancy.
Copy that sheet and delete the double ratings.
Columns BE - BL (highlighted in blue)
Rylee will review for redundancy
Where the data in these columns are redundant, Rylee will edit, highlight, and then copy-paste the new data into the Consensus sheet
New papers to rate (28)
Initially we had considered all of the studies, including Ana Volkmer's, as ANCDS because they were in the master table; the newly found studies were then assigned to Lucas with the video instructions. Ana's studies were later identified as not part of the initial review, and papers that did not qualify based on our criteria were deleted.
Lucas will rate 105, 108, 111, 112, 113, 114, 115, 116, 117, 118, 119
Rylee Manning will rate 121, 122, 123, 124, 125, 128, 129, 130, 131, 132, 133
Dr. Grasso created the figures for Number_Participants, Languages_Spoken, and figure by Country
figure for Race_Ethnicity to be ready on Sunday 9/22
Rylee updated Languages_Spoken to indicate the number of studies (n=149)
Also inserted a white text box to cover “final adjustments” in the figure by Country
Rylee and Marifer made additional poster edits
formatting and captions
Rylee will discuss with Dr. Grasso on Monday 9/23 before sending the poster to the printer
Rylee’s reviews are nearly complete; Lucas is making progress; Marifer is just getting started
Rylee worked on methods
Notes on introduction as well
Rylee will be working on redundancy and we will touch base on progress
Inter-rater reliability: Marifer will calculate this between raters once the final ratings are completed. This should account for redundancies being removed; in other words, we want each rater's final ratings to not contain redundancies prior to conducting the IRR.
Copy and paste columns we identified as having redundancies and then Marifer will recalculate (in columns BE - BL)
After we work through IRR, we establish final consensus, and use those consensus ratings to replace the final ratings used for the data reported in the paper
The final datasheet will be the No Double Ratings sheet used for the poster, BUT we are re-creating it to include all the ratings and the changes made during the consensus process (so we will delete the old No Double Ratings spreadsheet and replace it with the updated version from the steps outlined above)
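The notes do not spell out how IRR is computed; assuming simple percent agreement between paired raters per variable (consistent with the 80% threshold discussed below), a minimal sketch of the calculation Marifer will run is:

```python
# Sketch of per-paper percent agreement between two raters.
# ASSUMPTION: IRR here is simple percent agreement across variables;
# the rating values and variable set below are hypothetical examples.

def percent_agreement(ratings_a, ratings_b):
    """Percent of variables on which two raters gave the same rating."""
    if len(ratings_a) != len(ratings_b):
        raise ValueError("Raters must rate the same set of variables")
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return 100.0 * matches / len(ratings_a)

# Hypothetical ratings for one paper (redundancies already removed)
rater5 = ["Y", "NA", "Explicit", "English"]
rater6 = ["Y", "N",  "Explicit", "English"]
score = percent_agreement(rater5, rater6)
print(score)  # 75.0 -> below the 80% threshold, so this pair would be flagged
```

Pairs scoring below the agreed threshold would then go to consensus review, as described in the steps above.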
Rylee has copy/pasted the corrected redundancies from the Double Ratings sheet into the Consensus spreadsheet. *Note: Columns BE - BL in the Double Ratings sheet correspond to columns BD - BK in the Consensus sheet.
This has been done for the following papers:
Beales et al. 2021 "Making the right connections…"
Reviewers 1 & 2
Cartwright et al. 2009 "Promoting strategic television…"
Reviewers 1 & 2
Cotelli et al. 2014 "Treatment of primary progressive…"
Reviewers 3 & 4
Cotelli et al. 2016 "Grey Matter Density…"
Reviewers 3 & 4
de Aguiar et al. 2020 "Cognitive and language…"
Reviewers 5 & 6
de Aguiar et al. 2020 "Brain volumes as…"
Reviewers 5 & 6
Dial et al. 2019 "Investigating the utility…"
Reviewers 10 & 4
Farrajota et al. 2012 "Speech Therapy in…"
Reviewers 10 & 4
Could not find Reviewer 10's data in their individual sheet, so I left the original data in the Consensus spreadsheet but pasted in the corrected redundancies of Reviewer 4 for the relevant columns
Fenner et al. 2019 "Written Verb Naming…"
Reviewers 7 & 8
Ficek et al. 2019 "The effect of tDCS…"
Reviewers 7 & 8
Flurie et al. 2020 "Evaluating a maintenance-based…"
Reviewers 1 & 2
Harris et al. 2019 "Reductions in GABA…"
Reviewers 1 & 2
Henry et al. 2018 "Retraining speech production…"
Reviewers 1 & 2
Themistocleous et al. 2021 "Effects of tDCS…"
Reviewers 2 & 3
Tsapkini et al. 2018 "Electrical Brain Stimulation…"
Reviewers 2 & 3
Zhao et al. 2021 "White matter integrity…"
Reviewers 2 & 3
Croot et al. 2019 "Lexical Retrieval Treatment…"
Reviewers 3 & 4
de Aguiar et al. 2021 "Treating Lexical Retrieval…"
Reviewers 3 & 4
Dewar et al. 2009 "Reacquisition of person-know…"
Reviewers 3 & 4
Heredia et al. 2009 "Relearning and retention…"
Reviewers 4 & 5
Hoffman et al. 2015 "Vocabulary Relearning in…"
Reviewers 4 & 5
Jafari et al. 2018 "The Effect of…"
Reviewers 4 & 5
Mahendra et al. 2020 "Nonfluent Primary Progressive…"
Reviewers 5 & 6
Marcotte et al. 2010 "The neural correlates…"
Reviewers 5 & 6
Mayberry et al. 2011 "An emergent effect…"
Reviewers 5 & 6
Rebstock et al. 2020 "Effects of a Combined…"
Reviewers 6 & 7
Reilly et al. 2016 "How to Constrain…"
Reviewers 6 & 7
Robinson et al. 2009 "The Treatment of Object…"
Reviewers 6 & 7
Suarez-Gonzalez et al. 2018 "Successful short-term…"
Reviewers 7 & 8
Taylor-Rubin et al. 2021 "Exploring the effects…"
Reviewers 7 & 8
Thompson & Shapiro, 1994 "A linguistic-specific…"
Reviewers 7 & 8
Nissim et al. 2022 "Through Thick and Thin…"
Reviewers 4 & 10
Richardson et al. 2022 "Feasibility of Remotely…"
Reviewers 4 & 10
Lerman et al. 2023 "Preserving Lexical Retrieval…"
Reviewers 4 & 10
McConathey et al. 2017 "Baseline Performance Predicts…"
Reviewers 10 & 1
Nickels et al. 2023 "Positive changes to written…"
Reviewers 11 & 4
Meyers et al. 2024 "Baseline Conceptual-Semantic…"
Reviewers 11 & 4
Jokel et al. 2002 "Therapy for anomia…"
Reviewers 11 & 4
Savage et al. 2023 "No negative impact…"
Reviewers 11 & 4
Yesterday I was double-checking that all of the corrected redundancies were copy/pasted in properly from the Double Ratings sheet into the Consensus sheet. I noticed that some of the paper ratings for some of the reviewers (3, 5 & 10) had not been correctly pasted into the Double Ratings spreadsheet. It looks like some of the rows/columns that had been hidden in individual sheets did not get copied over into the Double Ratings sheet.
I have gone back to ensure that all papers required for Consensus have been checked/corrected for redundancies and that these corrected data have been pasted into the Consensus sheet.
Papers that were not in the double ratings sheet have been pasted in only if they were included for Consensus. These updated corrections are in the Double Ratings spreadsheet in bold text.
The Consensus spreadsheet should now be up to date with the redundancies removed. We can continue with Consensus / inter-rater reliability as planned.
Maria Quinones Vieta will complete inter-rater reliability for the papers in the Consensus document
Marifer completed consensus ratings (see table below)
Rylee recalls that we set an 80% threshold for reliability, but we decided to also review the accuracy of specific rating pairs on the lower end (70s to low 80s) to ensure accuracy for implied vs. explicit ratings
Reviewed progress on prior action items and documented decisions.
Next steps broadly are as follows:
Re-do IRR
Ensure data is organized for analysis
Finalize spreadsheets that are being stored as original/master sheets for the study
Spreadsheet WITH double-ratings
This includes the final double ratings with any edits required from the consensus sheet, but does not include the final consensus rating that was conducted for those studies
Consensus Sheet
Final consensus ratings
Row that contains consensus ratings for each pair
NEED TO CREATE: Spreadsheet without double ratings that replaces the double-rated, consensus-reviewed papers with the final consensus row from the consensus ratings spreadsheet
Reliability is low between raters 5 and 6. Next steps are as follows:
Rylee Manning to correct an error where the wrong paper was used for reliability (same author/year, but wrong paper included). Once this is corrected, reliability needs to be recalculated between raters 5 and 6
Only a couple of cells were incorrect in the ratings and these were corrected by Rylee. Nothing significant in terms of correction
Rylee Manning to check all of the implied/explicit ratings of Rater 5, as there is one clear case where the rater put NA but the languages were clearly stated. Rylee will override any such issues with the correct rating but will note which papers required updating
In the Consensus and Double Ratings spreadsheets, Rylee switched columns BD-BK for Reviewer 5's ratings of the two papers by de Aguiar et al., 2020, where info had been transposed. For Reviewer 5's ratings of these papers, only columns BD-BK were updated because the other columns appeared to align correctly with the data reported in each paper
Rylee Manning to also check pairs 6 & 7, 1 & 2, and 4 & 5, since these are on the lower end, looking specifically at any issues with implied vs. explicit categories. Take note of which papers and raters are problematic
Establish a highlighting key to indicate whether the first rater was correct, the second rater was correct, or neither, so that we can track the changes for our discussion
We decided it would be easiest to update and highlight the updates simultaneously in the Double Ratings sheet and the consensus sheet so that consensus can be updated simultaneously and we no longer need to wait for that to be re-done.
Rylee reviewed reviewer pairs with agreement scores below 85%. For other rater pairs with low agreement scores, I examined the papers used for Consensus and identified/updated any incorrect data. Updates are highlighted in orange in both the Consensus and Double Ratings sheets. I also highlighted places where IRR was marked incorrectly (e.g., United Kingdom vs. England marked as disagreement)
Rylee will check IRR for all data as a double-check and update by
Rylee updated IRR for all data on . Maria Quinones Vieta will double-check this on to ensure that no mistakes were made.
Rylee Manning to attempt to make the No Double Ratings sheet based on the graph we drafted together by
IRR first.
Start by making a copy of the Double Ratings sheet. THEN, for papers included in Consensus, insert the Agreement row for each paper and delete the double-ratings data for those papers → we should end up with the Agreement rating for each of those papers; for papers not included in Consensus, we will have the rest of that data
There should be one rating per paper at the end. Count to ensure there are 149 ratings total.
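The steps above can be sketched as a small script. All paper IDs, row contents, and the dictionary layout are hypothetical stand-ins for the spreadsheet; only the logic (replace consensus-reviewed double ratings with the single Agreement row, then verify one rating per paper) mirrors the notes:

```python
# Sketch of building the No Double Ratings sheet from the Double Ratings data.
# ASSUMPTION: data layout (dict of paper ID -> list of rating rows) and the
# example values are hypothetical; the real work happens in the spreadsheet.

double_ratings = {
    101: ["rater 1 row", "rater 2 row"],  # double-rated, went through Consensus
    102: ["rater 3 row"],                 # single-rated, kept as-is
}
agreement_rows = {101: "agreement row"}   # consensus output per double-rated paper

no_double_ratings = {}
for paper, rows in double_ratings.items():
    if paper in agreement_rows:
        # Replace the two raters' rows with the single Agreement row
        no_double_ratings[paper] = [agreement_rows[paper]]
    else:
        # Papers not in Consensus keep their single rating
        no_double_ratings[paper] = rows

# Sanity check: exactly one rating per paper (149 total in the real sheet)
assert all(len(rows) == 1 for rows in no_double_ratings.values())
print(len(no_double_ratings))
```

The final count printed for the real sheet should be 149, matching the check described above.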
I (Rylee) created a new No Double Ratings sheet. I noticed 16 papers had two reviewers but no consensus had been established. I can either establish consensus for these or use only one of the reviewers' ratings. Waiting to hear from Dr. Grasso as to which option is best at this point
Sign up for the ISTAART program; look into this
RM meets with Dr. Grasso
Rylee met with Dr. Grasso to discuss IRR pairs and why some ratings overlap but were not included in Consensus.
We remembered that Julia initially rated the incorrect set of papers, which explains the overlap between Reviewer 2 (Julia) with 1, 3, & 4
Rater 9's reviews had been excluded and their papers were redistributed. This explains the overlap for papers 111, 153, and 155.
Where there is overlap with Reviewer 4 (Rylee), use Reviewer 4's ratings
We identified some points that will need to be confirmed with MFQ:
We have questions about consensus, including decisions we made about the number of papers per reviewer pair; I think we decided 3 were needed, but I want to confirm with you, so please look at your notes and the spreadsheet you used to determine double-rating assignments
We see there are some papers (three) that were double-rated between rater 4 (Rylee) and either rater 8 (Jessica, n=1) or rater 11 (Lucas, n=2). We think these were redistributed after removing rater 9's ratings, but since these did not get assigned through consensus, we are confused about the decision-making there (which the double-rating assignments will help us to understand)
Can you confirm that rater 10 and rater 1 currently have only one double-rating because we had to remove rater 9? We want to be sure this is the case
We also see that at present some pairs are missing from reliability, including 4 & 8 and 8 & 10. Didn't we need to adjust this prior to removing reviewer 9, so that all reviewers have pairwise comparisons, or do you have notes indicating we made a different decision?
I also think that for Lucas (rater 11), the 15 papers he has were those that needed to be FULLY rated (all cells in the table, not just those for our study), so we only needed the pairwise comparison between him and Rylee to ensure that we felt his ratings were stable. Is this correct also?
MFQ response:
All of these papers (16, 18, 19, 48, 49, 50, 52, 57, 58, 60, 62, 63, 66, 111, 153, 155) were double rated by Julia by mistake. Yes, we decided on 3 papers per reviewer pair.
Papers 111, 153, and 155 were indeed redistributed after removing rater 9's ratings, so raters 4 and 8 (Rylee and Jessica) do not have a pairwise comparison.
For papers 153 and 155, we should use Rylee's reviews. They were partially double-rated by Rylee to check some inconsistencies I was noticing in Lucas's reviews. She did not start from scratch but went in and made edits to Lucas's ratings, so hers are the most accurate.
I can confirm that raters 10 and 1 only have one double-rating, but it is because the double-rated papers did not qualify for our study: we went in and realized that papers from Ana V's studies were not included in the original ANCDS review.
I don't have any notes on adjusting the missing reliability pairs. Rylee and I discussed that we can either include the missing pairs in our consensus sheet or trust the reviewers' rating skills (Rylee, Jessica, and myself). We can chat about what may be best during our meeting later!
Yes, that's correct; the 15 papers he has were those that needed to be fully rated. Rylee double-rated 146-149; I don't have any notes on why we decided to have her double-rate four instead of three. I am assuming that because it was a larger set of data, we wanted an extra rating.
Regarding Consensus, we were internally concerned about a number of rater pairs who scored below 85%; however, this did not play a role in the Consensus methodology.
On 12/13/2024, Rylee updated the percentage reported for papers included in Consensus and updated the total agreement percentage across raters' reliability.
Next steps due to the snag we identified with missing data for the old/ANCDS variables (both for papers included in the ANCDS review and for those added post ANCDS review)
Identify which papers are missing data, which rater would have completed that data, and for which variables
Some papers were discovered to not have the OLD variables entered / were not included in the original ANCDS review. This was discovered after the new variables had been rated by one rater, so a second rater had to review and enter data for the old/ANCDS variables. These are documented below:
*Look at Marifer's response re: which papers we went with Rylee's ratings over Lucas's ratings
Make a new sheet that does NOT have the double ratings, from the sheet described in step 2, so that the new sheet without double ratings has all of the old variables as well (i.e., CAC Final _ No Double Ratings Sheet)
Then we copy the updated sheet without the double ratings to create the new CAC Final No Double Ratings Analysis Sheet, where we will clean the data per our discussion on 12/16 (the old version of this sheet is located in the ARCHIVE folder)
Rylee to supplement data for old variables based on which reviewer completed them
Any data for old variables that were missing from the following papers were completed by:
105 - 119: Lucas
120-142: Rylee
146-149: Lucas (both Rylee & Lucas completed full ratings but Lucas’s are included in CAC datasheet)
150-162: Lucas
158: Wenfu
163: Sonia
Rylee identified which reviewers rated old variables for all the new papers (#104-163).
Rylee copy/pasted old variables into the CAC sheet of Double Ratings.
After this, Rylee created a new CAC sheet with No Double Ratings. This sheet includes data for all variables for all papers.
Rylee then made a copy of the sheet with No Double Ratings and renamed it to indicate that we will use this sheet for Analysis.
Rylee will clean the data in the Analysis sheet per her conversation with Dr. Grasso. We will start by cleaning data for the following variables in a new ‘Clean’ column.
Diagnostic label given to participants by original study authors
Diagnostic category assigned during the review
Country Where Study Conducted
Race of Participants
Ethnicity of Participants
Explicitly stated: Language(s) Spoken by Participants (Rylee Manning to double-check)
Implied: Do Inclusionary/Exclusionary Criteria Require the Patient Speak a Specific Language?
Implied: If Yes, Which Language?
Conventions for frequently reported Country Where Study Conducted: "United States of America", "United Kingdom", "Canada"
Conventions for frequently reported Race of Participants: "White", "Asian", "Mixed"
Conventions for frequently reported Ethnicity of Participants: "Hispanic or Latino", "Not Hispanic or Latino"
Explicitly Stated: Languages Spoken: If multiple languages were spoken across participants, data is reported by participant (e.g., “P1: English, P2: French, P3: English, etc.”)
Conventions for Languages: Mandarin → "Chinese" (Is this OK? Check with Dr. Grasso); N/A → "NA"; English and another language → "English, Other"
*Note: If more than one item is included per variable, the variables are separated by a comma (e.g. for Country: “France, Spain”)
Conventions for Implied Inclusionary/Exclusionary Criteria: each item is enumerated (e.g., "Inclusionary: 1. ___, 2. ___, 3. ___; Exclusionary: 1. ___, 2. ___, 3. ___"), or listed individually if not reported or not applicable (e.g., "Inclusionary: NA; Exclusionary: NA")
Conventions for Implied: Do Inclusionary/Exclusionary Criteria Require the Patient Speak a Specific Language: only 4 options: "Y / N / NR / NA"
Conventions for Implied: If Yes, Which Language? - ASL → “American Sign Language”
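The cleaning conventions above can be captured as simple lookup tables so the 'Clean' columns stay consistent. The target values come from the conventions in these notes; the source variants (e.g., "USA", "UK") and function names are hypothetical examples, not an agreed-on list:

```python
# Sketch of the cleaning conventions as lookup-based normalizers.
# ASSUMPTION: the raw-value variants on the left are hypothetical; only the
# canonical values on the right come from the conventions in the notes.

LANGUAGE_MAP = {
    "Mandarin": "Chinese",            # pending confirmation from Dr. Grasso
    "N/A": "NA",
    "ASL": "American Sign Language",
}
COUNTRY_MAP = {
    "USA": "United States of America",
    "UK": "United Kingdom",
    "England": "United Kingdom",      # per the IRR note on United Kingdom vs. England
}
VALID_IMPLIED_FLAGS = {"Y", "N", "NR", "NA"}  # the only 4 allowed options

def clean_value(value, mapping):
    """Map a raw cell value to its convention; pass through unknowns for manual review."""
    return mapping.get(value.strip(), value.strip())

def check_implied_flag(value):
    """Validate the implied-criteria flag against the 4 allowed options."""
    if value not in VALID_IMPLIED_FLAGS:
        raise ValueError(f"Flag must be one of {sorted(VALID_IMPLIED_FLAGS)}, got {value!r}")
    return value

print(clean_value("Mandarin", LANGUAGE_MAP))  # Chinese
print(clean_value("England", COUNTRY_MAP))    # United Kingdom
print(check_implied_flag("NR"))               # NR
```

Keeping the maps in one place would make it easy to update if, e.g., Dr. Grasso decides against collapsing Mandarin into "Chinese".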