Outcomes-based assessment (generally, in academic libraries & cultural heritage institutions, and for linked data or Wikidata projects)
Agenda
Introductions and agenda overview
Discussion: Outcomes-based assessment with Krystal Wyatt-Baxter, Head of Assessment and Communication at UT Libraries
Looking ahead
Discussion items
Introductions, agenda overview (Melanie); notes provided by Beth Dodd
Discussion: Outcomes-based assessment (Krystal, Melanie)
Discussion highlights shared with Krystal in advance:
We have a collective interest in building institutional buy-in so that linked data projects move from side-project status to work that is integrated into work expectations and job duties. Assessing and communicating the impact of such projects in a way that resonates with administrators seems like the logical first step in that direction, but how to do that effectively is the question on everyone’s mind.
The following are some internal and external impact areas of interest that keep coming up in our conversations:
Internal – establish comfort and proficiency with linked data generally, Wikidata specifically, among staff
External – understand impact from a public services perspective (e.g. improved discovery of resources/collections for users, changes in Google search results and knowledge graph)
External – understand impact of contributions to a shared authority control resource like Wikidata (e.g. contributing to development of resource for use by other institutions, akin to Program for Cooperative Cataloging (PCC) work we do for LoC)
Looking for models to match our needs. None found yet.
KWB: reviewed the notes; very helpful. This is her introduction to Wikidata, so it provides a fresh perspective (which will likely be the case for our intended audiences as well).
KWB: using her TSLAC webinar slides to frame our outcomes-based assessment discussion today
How to get started:
Ask yourself foundational questions (What are the main goals? Do we want to revise job descriptions?…) and what it will take to achieve those answers/goals.
Anticipate your big decisions, prepare for strategic planning cycles, and make strong connections to the institutional mission.
What assessments do we already conduct?
For Krystal: MC- springboard question: what do we already assess?
Arch- currently only submits data on what UTL requests and measures. Our form has been customized to match SAA’s standard practices (https://www2.archivists.org/groups/saa-acrlrbms-joint-task-force-on-public-services-metrics). No formal data collection (locally) for online use/presence. TARO offers some, but not at the granularity we need. We would like data at least at the collection level so we can make management decisions based on user needs (essentially to target staff time for processing).
HRC- not a lot of qualitative data; the marketing department has stronger data. The reading room also has its own data (access, reference; more common library stats). Would like more related to description and access.
KWB- would it be fair to say there are two research questions: what is being accessed, and how are people getting there?
BJD- we have a good foundation for in-person use, but not online. We are missing baseline data for our group’s comparison purposes, i.e., showing the effect of linked data.
KWB- we can see what people are looking at (can Google Analytics be traced backwards in time?)
MA- interested in what success looks like. How is it increasing access to our resources, rather than just counting how much linked data we add? (please refer to the recording)
MC- unpacking the assumption of LD’s impact: how do we do this? We understand that LD facilitates more access by providing more paths to the content, but how do we prove that?
KPM- one direct way could include understanding the impact of the “archives at” tag.
PGP- other measures might include efficiencies in workflows, e.g., name authorities on Wikidata vs. NACO: terms may already exist on Wikidata and not in NACO. KPM- authority management. BJD- on that point, the impact goes beyond local efficiencies to immediate global impact on terms related to unique collections.
KWB- brainstorming how to assess workflow impacts, e.g., test the current workflow (time it takes, people involved, roles) and then test the Wikidata workflow, or conduct targeted interviews (structured questions with comparative analysis for common threads or pitfalls). These feel more exploratory than quantitative.
PGP- interested in how the interviews would feed into decision making.
MC- making the assumption that administrators would be more interested in access issues; demonstrating workflow efficiencies would be another good advantage to support the argument. BJD- a more direct link to the UT Libraries mission and the UT mission.
KWB- always encourages multiple methods, because they will answer more of our questions and interest a broader audience. Be really intentional about what you are asking so that you can set up the appropriate methods for measurement.
MC- suggests setting up a test situation. Controlled testing of a framework and system (running controlled searches to test, then make changes…)
MS- Google results are based on IP location, so uncertain about reliability across locations. MC- incorporate this as a factor.
MA- tried using screenshots but this is not sustainable.
KWB- flip from an advocacy-based view to the counterpoint: what would we find that would show it is not meeting our goals/outcomes? PGP- staff time lost (processing vs. Wikidata). How do we find the balance?
MC- for those who have been doing it, is there a cost? KPM- almost all of our work has been conducted by endowment-funded GRAs. We feel like the efficiencies are there; the challenge is how to sustain it. How can we transition it to permanent staff? BJD- by deeming it essential work.
BJD- perhaps start small with a case study using the work of a fairly recent project at Arch (which actually spun off of HRC’s project with the same GRA).
KPM- also include Josh’s project, which added the broad quantitative Wikidata points, whereas the Arch/HRC project was a bit deeper dive (qualitative).
KWB- best practice is to incorporate assessment (Google Analytics framework and reflections) into a project before it begins. LOTS OF NODS. Think about comparative aspects (e.g., minor “first pass” access vs. granular-access finding aids).
BJD- Krystal, have you been involved in the TARO project? Not really. ST- there are some legacy statistics, but not Google Analytics, nor at the level we need.
MC- very much appreciate Krystal’s pragmatic approach to project planning. All agree.
MS- assessment tool/scripts used for this project to automate data gathering might be helpful for other projects.
MS- he is assessing our contributions in Wikidata right now (Josh’s work!). Looking for how to track pages we have edited: when and how have page views increased, and how often are pages we create edited by others over time? Example of information for a Wikidata item: https://www.wikidata.org/w/index.php?title=Q2078042&action=info
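Not something we walked through in the meeting, but a minimal sketch of pulling that page information programmatically via the MediaWiki API (assumes Python with the requests package; the item ID is the example from the link above):

```python
# Minimal sketch: fetch the same metadata shown on the action=info screen,
# plus recent revisions, so edits by others can be tracked over time.
import requests

API = "https://www.wikidata.org/w/api.php"

def item_info(qid):
    """Return basic page info and the last few revisions for a Wikidata item."""
    params = {
        "action": "query",
        "titles": qid,                       # e.g. "Q2078042"
        "prop": "info|revisions",
        "rvprop": "timestamp|user|comment",  # who edited, when, and why
        "rvlimit": 10,                       # last 10 revisions
        "format": "json",
    }
    pages = requests.get(API, params=params).json()["query"]["pages"]
    return next(iter(pages.values()))

info = item_info("Q2078042")
print(info["touched"], len(info.get("revisions", [])))
```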
PGP- how do you get to the Wikidata stats? Go to the item’s main page (the state capitol example above) and add action=info to the URL, or click Page information under Tools (lower left menu).
JC/PGP/MS- discussion of SPARQL or the Wikidata API. Parsing the page itself is one option (hard to maintain when the page changes). MS just discovered this today because this conversation piqued his interest. MC- other measures of use too.
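As a rough illustration of the SPARQL route (assumptions: Python with requests, and that the “archives at” tag discussed above corresponds to Wikidata property P485; the institution QID is a placeholder, not one of our actual items):

```python
# Minimal sketch: count items whose archives are held at a given institution,
# queried against the Wikidata Query Service.
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

query = """
SELECT (COUNT(?item) AS ?count) WHERE {
  ?item wdt:P485 wd:Q_INSTITUTION .   # replace with the institution's QID
}
"""

resp = requests.get(
    SPARQL_ENDPOINT,
    params={"query": query, "format": "json"},
    headers={"User-Agent": "ld-assessment-sketch/0.1"},  # WDQS asks for a UA
)
print(resp.json()["results"]["bindings"][0]["count"]["value"])
```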
KWB- this is really cool and promising for a framework. Looking through Wikidata, for Google Analytics… can imagine setting up a toolkit for assessments.
MC- we’ve covered a lot of ground and thank you Krystal for helping us think more concretely on assessment strategies. We will certainly keep in touch as we move forward with our project. KWB signed off.
MA- re: the link dropped in the chat, Wikidata is watching. MC- since they are already tracking it, it would be a good data point for painting the bigger picture.
MS- a Wikimedia API is available (just found it). The page-view part of the API seems pretty robust and nice for retrieving info (editing and linked pages would be nice too). This is something we should explore a bit more. Wikimedia REST API - https://wikimedia.org/api/rest_v1/#/
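A minimal sketch against the pageviews endpoint of that REST API (assumes Python with requests; the project identifier, item ID, and date range are example values to verify and adjust):

```python
# Minimal sketch: monthly page-view counts for one Wikidata item page.
import requests

def monthly_views(qid, start="2021010100", end="2021063000"):
    """Fetch monthly page views for a Wikidata item from the Wikimedia REST API."""
    url = (
        "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
        f"www.wikidata.org/all-access/user/{qid}/monthly/{start}/{end}"
    )
    resp = requests.get(url, headers={"User-Agent": "ld-assessment-sketch/0.1"})
    resp.raise_for_status()
    return [(item["timestamp"], item["views"]) for item in resp.json()["items"]]

for ts, views in monthly_views("Q2078042"):
    print(ts, views)
```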
MC- we now have new ideas on:
Wikidata assessment
Google Analytics to track traffic and referral sources for digitized content, and how search results change over time (see the sketch after this list)
Maybe set up assessment strategies for workflow efficiencies
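Not discussed in the meeting, but as a starting point for the Google Analytics idea above: a rough sketch assuming a GA4 property and the google-analytics-data Python client with application-default credentials configured; the property ID, date range, dimension, and metric names are placeholders to adjust.

```python
# Rough sketch: sessions by traffic source from the GA4 Data API
# (pip install google-analytics-data).
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange,
    Dimension,
    Metric,
    RunReportRequest,
)

client = BetaAnalyticsDataClient()
request = RunReportRequest(
    property="properties/000000000",               # placeholder GA4 property ID
    dimensions=[Dimension(name="sessionSource")],  # where traffic came from
    metrics=[Metric(name="sessions")],
    date_ranges=[DateRange(start_date="2021-01-01", end_date="2021-06-30")],
)
# A dimension filter on pagePath could narrow this to digitized-content pages.
for row in client.run_report(request).rows:
    print(row.dimension_values[0].value, row.metric_values[0].value)
```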
KPM- interest in qualitative surveys? Structured questions? We need to come up with questions. Two targets: Josh and maybe Alyssa, and those who are not UTL (Mary, Paloma).
MC- good topics for next meeting or following meeting.
KPM- start a list for qualitative and quantitative questions.
MC- thank you Michael for sharing the LD list
Looking ahead (Melanie)
Next meeting will be July 28. Katie facilitating, Beth notetaking, and Josh presenting. Katie will incorporate the list of questions into the meeting if possible.
Fall meeting schedule- continue with the 4th Wednesday of the month (8/25)
Next round of topics & facilitators?
Action items
KPM: create a seed list of questions (use a Google Doc as a working document linked to our wiki)
PGP/KPM revise the wiki to provide enhanced permissions. NOTE: Mary will need to initially log in so that Confluence can find her.