Project NoCap: AI-Powered Fact-Checking for Instagram
September 2024 - Present
About This Project
An AI-powered fact-checking assistant for Instagram that helps users quickly assess the credibility of posts and reels. By forwarding content to the @project_nocap account, users receive an automated analysis that highlights potential misinformation and bias and links to more reliable sources, making it easier to navigate the information overload on social media.
Project Details
Project NoCap is an AI-powered fact-checking tool designed to help Instagram users verify the accuracy of posts and reels by leveraging large language models and carefully engineered prompts. The core idea is to make fact-checking as easy as sharing a post: users simply forward content to the @project_nocap account on Instagram, and in return they receive a structured, plain-language assessment of how trustworthy that content appears to be, along with additional context and sources.

Problem & Motivation
Misinformation spreads rapidly on social media, and studies show that false political content can be significantly more likely to be shared than true information. Many people struggle to judge whether what they see online is reliable. Project NoCap aims to lower the barrier to fact-checking by meeting users where they are, inside Instagram, and providing an on-demand credibility check that fits naturally into existing sharing workflows.

How It Works (High Level)
- Instagram users share a post or reel to the @project_nocap account.
- The backend service retrieves the content and its associated caption and metadata.
- A fact-checking pipeline built around large language models analyzes the claim, checks it for internal consistency, and searches for corroborating or contradicting information from reputable sources.
- The system generates a response that:
  1. Flags misleading framing and other warning signs
  2. Highlights bias or emotional manipulation where applicable
  3. Suggests more trustworthy sources or neutral summaries
  4. Explains the reasoning in accessible, non-technical language

From a technical perspective, the project combines modern LLM capabilities with prompt engineering patterns tailored for fact-checking: claim extraction, evidence gathering, reasoning, and explanation are separated into distinct stages, and the model's reasoning is presented to users in a transparent, friendly way (without exposing raw prompts to end users).
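The staged pipeline described above can be sketched roughly as follows. This is an illustrative sketch, not the project's actual code: the call_llm stub, the function names, and the FactCheckResult structure are all assumptions, and a real deployment would replace the stub with an actual LLM API call and a search step.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class FactCheckResult:
    claim: str
    sources: List[str] = field(default_factory=list)
    explanation: str = ""


def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call; returns a canned string
    # so this sketch runs without network access.
    return "stub response for: " + prompt[:40]


def extract_claim(caption: str) -> str:
    # Stage 1: distill the post's caption into a single checkable claim.
    return call_llm("Extract the main factual claim from this caption:\n" + caption)


def gather_evidence(claim: str) -> List[str]:
    # Stage 2: look for corroborating or contradicting material.
    # A real system would query a search API or curated knowledge base here.
    return [call_llm("List reputable sources relevant to: " + claim)]


def assess(claim: str, evidence: List[str]) -> FactCheckResult:
    # Stage 3: reason over claim + evidence, then explain in plain language.
    explanation = call_llm(
        "Given this claim and evidence, flag misleading framing and explain "
        f"your reasoning in simple terms.\nClaim: {claim}\nEvidence: {evidence}"
    )
    return FactCheckResult(claim=claim, sources=evidence, explanation=explanation)


def fact_check(caption: str) -> FactCheckResult:
    # Each stage gets its own prompt, keeping the reasoning inspectable.
    claim = extract_claim(caption)
    evidence = gather_evidence(claim)
    return assess(claim, evidence)


result = fact_check("Breaking: scientists confirm the moon is made of cheese!")
```

Keeping the stages as separate prompts makes each step easier to test and lets the explanation stage reference intermediate outputs without exposing the raw prompts to the end user.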
The backend is implemented as a lightweight API service that can scale as usage grows.

Future Directions
We plan to add Retrieval-Augmented Generation (RAG) and the Model Context Protocol (MCP) to improve the system's accuracy and efficiency. RAG will let the system retrieve relevant, up-to-date information from trusted knowledge bases before generating a fact-checking response, which should reduce hallucinations and improve the quality of the evidence cited. MCP will standardize how the model accesses external tools and contextual information, which should make the fact-checking pipeline faster to run and easier to extend. The project is currently progressing more slowly than last academic year due to time constraints, as team members balance university work and other commitments, but we remain committed to advancing it as best we can.
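The planned retrieve-then-generate flow can be illustrated with a minimal sketch. Everything here is a stand-in: the toy corpus, the word-overlap ranking (a placeholder for embedding-based vector search), and the call_llm stub are assumptions for illustration only.

```python
from typing import List

# Toy corpus standing in for a trusted knowledge base; a real RAG system
# would use a vector store of fact-checks and reference articles.
CORPUS = [
    "NASA lunar samples show the Moon is composed of rock and dust.",
    "Fact-checkers have repeatedly rated the cheese-moon claim as false.",
    "Instagram reels can spread viral claims within hours.",
]


def retrieve(query: str, corpus: List[str], k: int = 2) -> List[str]:
    # Rank documents by simple word overlap with the query -- a crude
    # stand-in for embedding similarity search in a production system.
    q = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call.
    return "Assessment grounded in the retrieved sources."


def rag_fact_check(claim: str) -> str:
    # Retrieve first, then constrain the model to the retrieved evidence,
    # which is what reduces hallucinated citations.
    docs = retrieve(claim, CORPUS)
    prompt = (
        "Using ONLY the sources below, assess the claim and cite them.\n"
        "Sources:\n" + "\n".join(f"- {d}" for d in docs)
        + f"\nClaim: {claim}"
    )
    return call_llm(prompt)
```

The key design point is that retrieval happens before generation, so the prompt can instruct the model to ground its answer in the fetched documents rather than in its parametric memory.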
Technologies
Project Information
Category
LLMs & Prompt Engineering
Timeframe
September 2024 - Present