
AI and Information Literacy: Frames

Frames

Authority is based on context, which is to say that the bias and point of view of the author of a piece of media often influence the messaging.

Algorithmic bias, which is often baked into AI through the bias found in training sets (James & Filgo, 2023), can be a real-world way to broach how bias becomes part of AI. For example, how does AI change its tone from prompt to prompt, and what words and phrases does it use? Can changing prompts reveal stereotypes?
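As a hands-on illustration, a short script can send paired prompts to a model and compare the tone and word choice that come back. Below is a minimal sketch assuming the OpenAI Python library and an API key in the environment; the model name and prompts are illustrative placeholders, not a fixed lesson plan.

```python
# Minimal sketch: probe how a model's tone and word choice shift
# across paired prompts. Assumes the OpenAI Python library
# (pip install openai) and an OPENAI_API_KEY in the environment;
# the model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# Prompts that differ in a single detail, so shifts in tone or
# vocabulary are easier to attribute to that detail.
prompts = [
    "Describe a typical nurse starting their shift.",
    "Describe a typical surgeon starting their shift.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {prompt}\n{response.choices[0].message.content}\n")
```

Comparing the outputs side by side, and swapping in other professions or demographics, can surface the kinds of stereotypes described above.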

Deepfakes and the creation of mis- (not just dis-) information through AI tools also help students learn about the illusory nature of authority in a world where information can be created from whole cloth. Having students create misinformation intentionally, as in this lesson shared through the AI Pedagogy Project, can be a hands-on way of exploring this tension.

Information can be shared in different ways: as text, image, video, audio, and more! The process of creating the information (and its subsequent forms) can affect how that information is perceived.

As seen in the massive lists of AI tools, there are many ways in which information can be created and co-created, with just as many aims. Seen positively, AI helps differentiate learning for students, increases interest through customized content, and more. Seen negatively, it creates information with ill intent or obfuscates learning in favor of flashiness. AI also doesn't always distinguish between correct and incorrect (see hallucinations), so a tool like Globe Explorer, which surfaces image-based information in search, may help more than hinder.

[Image: Screenshot of a search through Globe Explorer, showing image-heavy results for the search terms "ai and information literacy."]

Many tools for differentiation, such as Diffit, are geared toward younger students, but prompting in ChatGPT can do similar things, such as creating sample texts or changing rhetorical style. Ethan Mollick, who writes the One Useful Thing newsletter, created a guide to prompting in 2023 exploring that use case.
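A sketch of that kind of differentiation prompt, under the same assumptions as the earlier example (OpenAI Python library, illustrative model name, placeholder passage and levels):

```python
# Minimal sketch of a differentiation prompt: rewrite one passage
# at several reading levels. Same assumed OpenAI setup as above;
# the passage and levels are placeholders.
from openai import OpenAI

client = OpenAI()

passage = "Photosynthesis converts light energy into chemical energy."

for level in ["5th grade", "9th grade", "undergraduate"]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Rewrite this passage at a {level} reading level, "
                f"keeping the key vocabulary intact:\n\n{passage}"
            ),
        }],
    )
    print(f"--- {level}\n{response.choices[0].message.content}\n")
```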

Information is big business, and it has value beyond just informing: it can influence policy, create and destroy relationships, and otherwise change behavior. For this reason, there is a need to assess how information is affected by copyright and money.

With AI in the mix, things become even more challenging. A constant concern regarding generative AI is how it is trained, and whether the datasets that serve as its foundation are unethically sourced or in breach of copyright. Some companies are working to create ethical datasets, such as Adobe drawing on Adobe Stock to provide "clean" training data that minimizes bias. Some artists use tools such as Nightshade to poison their works against use in training datasets.

This "Generative AI & Legal Research" guide from Widener University digs a bit more into copyright and AI, but teaching students with recent news can be helpful to put AI in context. Even citing AI with guides from APA and MLA raises the question of whether the citation credits an entity or serves as an acknowledgment, which could beget interesting conversation on citation practices more broadly!

Research is an iterative process in which questions must be reassessed, rephrased, and sometimes rewritten. Information can be framed by the ways in which that questioning, that inquiry, supports retrieval and aids understanding.

Just as prompt engineering can help with rewriting search terms or clarifying intention, more specialized research AI tools are becoming available to the public. Many are behind paywalls, which highlights digital inequity concerns, but others are freely available (such as Perplexity, which uses natural language rather than Boolean search).
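As a sketch of prompt engineering applied to search, the snippet below asks a model to translate a natural-language question into Boolean search strings. The setup again assumes the OpenAI Python library; the question and wording are placeholders.

```python
# Minimal sketch: ask a model to translate a natural-language
# research question into Boolean search strings for a library
# database. Same assumed OpenAI setup; the question is a placeholder.
from openai import OpenAI

client = OpenAI()

question = "How does social media use affect teenagers' sleep habits?"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Rewrite this research question as three Boolean search "
            "strings (using AND/OR, quotation marks, and truncation) "
            f"suitable for a library database:\n\n{question}"
        ),
    }],
)
print(response.choices[0].message.content)
```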

Research AI can be a part of each iterative action, from prompts to expanding search terms to summarization. Concerns can be raised about whether the act of scaffolding research should be done by AI, or whether summarization is helpful when non-experts can't assess if the summaries are actually correct.

This sort of summarization problem can also be connected to the issues Google had when integrating AI snippets into search (not to mention the strong connection to the previous frame, given that writers may no longer see ad revenue for the writing those snippets draw on).

Scholarship, or the ways in which information is studied and transformed via theses and analysis, does not exist in a vacuum. Scholarship is a conversation between texts and authors, authors and authors, and so on. The relationships between scholars can be seen in citation practices and the way that seminal researchers pave the way for future scholars' insights.

Citation can be a powerful act in how it uplifts scholars whose works may be overlooked. AI tools such as ResearchRabbit, which is free to use, create what are known as citation maps, which can be used to see where connections between scholars lie. This visualizes conversation in a very literal way! For more resources, check out the results of this "citation mapping lesson ai" search.
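For a glimpse of what citation mapping looks like under the hood, the sketch below builds a tiny citation graph from the public OpenAlex API using requests and networkx. This is an illustration only, not how ResearchRabbit itself works (it exposes no public API), and the seed work ID is a placeholder.

```python
# Minimal sketch: build a tiny citation map from the public OpenAlex
# API (pip install requests networkx). Illustrative only; the seed
# work ID is a placeholder, and ResearchRabbit's own method may differ.
import requests
import networkx as nx

seed = "W2741809807"  # placeholder OpenAlex work ID
graph = nx.DiGraph()

work = requests.get(f"https://api.openalex.org/works/{seed}").json()
graph.add_node(seed, title=work.get("display_name"))

# Draw an edge from the seed work to each work it cites.
for ref_url in work.get("referenced_works", [])[:20]:
    ref_id = ref_url.rsplit("/", 1)[-1]
    graph.add_edge(seed, ref_id)

print(f"{graph.number_of_nodes()} works, {graph.number_of_edges()} citation links")
```

Expanding outward from each cited work, level by level, is what turns a list of references into the visual "conversation" described above.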

As AI matures, we'll likely see more custom GPTs, or generative AI that uses custom instructions. ChatGPT has opened custom GPTs up to the wider public, and with proper configuration and training, these could become assistants, allowing users to "converse" with data.
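The core idea behind a custom GPT, fixed instructions plus supplied data, can be sketched with an ordinary API call. Below, a system message and a small CSV stand in for custom instructions and uploaded data; the dataset, model name, and assistant role are illustrative assumptions.

```python
# Minimal sketch of the idea behind a custom GPT: fixed instructions
# (the system message) plus supplied data, so a user can "converse"
# with it. Assumes the OpenAI Python library; the CSV is a stand-in.
from openai import OpenAI

client = OpenAI()

table = "year,circulation\n2021,8400\n2022,9100\n2023,10250"

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a library data assistant. Answer questions "
                    "using only the CSV data the user provides."},
        {"role": "user",
         "content": f"Here is the data:\n{table}\n\n"
                    "How did circulation change from 2021 to 2023?"},
    ],
)
print(response.choices[0].message.content)
```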

Searching for information, particularly when doing more in-depth research, requires not only iterative prompting but also strategy. Strategic exploration is the way in which someone navigates information to find an answer.

With more research-focused AI tools becoming available, students can take advantage of networked information to find more results than ever. Note, however, that AI only facilitates access: unless search results are open access, the ability to retrieve materials may be stymied by access rights and a lack of database subscriptions (although articles can often be requested through avenues such as Interlibrary Loan, or even from the researcher directly).

For the search strategy itself, ChatGPT can provide research plans when prompted, creating scaffolded outlines that serve as bespoke syllabi. The same is true of other generative text AI such as Gemini and Copilot. This open textbook chapter written by Mary Landry provides a roadmap for how students can use ChatGPT for research papers.
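A research-plan prompt might be as simple as the sketch below, again assuming the OpenAI Python library; the topic, paper length, and plan details are placeholders, and Gemini or Copilot could stand in for the model.

```python
# Minimal sketch of a research-plan prompt; the topic, length, and
# plan details are placeholders. Same assumed OpenAI setup as the
# earlier sketches.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Create a week-by-week research plan for a 10-page undergraduate "
    "paper on urban heat islands. Include guiding questions, suggested "
    "search terms, source types, and checkpoints for revising the topic."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```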

Using AI for research plans is more akin to asking for a table of contents than for the book itself, alleviating some concerns over bad data. There is also value, however, in having students create their own search strategies, so this approach may be better suited to upper-level students than to those in the foundational stages of learning to research.