Truth Sleuth

Visualizing the Research Methodology

To complement the detailed methodology, these visualizations provide a clear, graphical representation of the system's core components and performance. The diagrams below illustrate the underlying model architecture, the system's analytical workflow, the bias reduction framework, and the real-time learning loop that enables continuous adaptation.

Figure 1: The Transformer - Model Architecture
This diagram illustrates the architecture of the Transformer model, which underpins the language processing capabilities of the system.

[Diagram: the standard encoder-decoder Transformer. Encoder: input embedding followed by Nx stacked layers, each with multi-head attention and a feed-forward sublayer, each sublayer followed by Add & Norm. Decoder: output embedding (outputs shifted right) followed by Nx stacked layers of masked multi-head attention, encoder-decoder multi-head attention, and a feed-forward sublayer, again with Add & Norm, ending in a linear layer and a softmax over output probabilities.]
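To make the core operation of this architecture concrete, the following is a minimal sketch of scaled dot-product attention, the computation inside each multi-head attention block, written over plain number arrays. It is illustrative only and not the production model code, which runs inside Gemini rather than in this codebase.

```ts
// Minimal sketch of scaled dot-product attention, the core operation of each
// Multi-Head Attention block in the diagram above. Illustrative only: real
// implementations use tensor libraries, batching, and multiple heads.

type Matrix = number[][]; // rows = sequence positions, cols = feature dims

function softmax(row: number[]): number[] {
  const max = Math.max(...row); // subtract max for numerical stability
  const exps = row.map((x) => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

function matmul(a: Matrix, b: Matrix): Matrix {
  return a.map((row) =>
    b[0].map((_, j) => row.reduce((acc, v, k) => acc + v * b[k][j], 0)),
  );
}

function transpose(m: Matrix): Matrix {
  return m[0].map((_, j) => m.map((row) => row[j]));
}

// Attention(Q, K, V) = softmax(Q · Kᵀ / sqrt(d_k)) · V
function scaledDotProductAttention(q: Matrix, k: Matrix, v: Matrix): Matrix {
  const dk = k[0].length;
  const scores = matmul(q, transpose(k)).map((row) =>
    row.map((x) => x / Math.sqrt(dk)),
  );
  const weights = scores.map(softmax); // one distribution per query position
  return matmul(weights, v);           // weighted sum of value vectors
}
```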

Figure 2: System Architecture Workflow
The multi-stage pipeline from input to final verdict.

Input Processing

The system deconstructs the user-submitted text into its core claims.

Language Analysis

A Transformer model analyzes text for linguistic and contextual cues.

Bias Reduction

The model is trained on balanced datasets and tested with counterfactual examples.

Explainable AI (XAI)

SHAP is used to determine which features influenced the final verdict; a sketch of the resulting output appears after this pipeline.

Final Verdict

An explained verdict is generated with a confidence score.
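To illustrate the last two stages, here is a hypothetical sketch of the explained verdict object such a pipeline might return. The field names and the `FeatureAttribution` shape are assumptions for illustration, not the project's actual schema; SHAP values themselves are typically computed in a Python backend, and only the resulting attributions are shown here.

```ts
// Hypothetical shape of the explained verdict (illustrative only, not the
// project's actual schema). SHAP-style attributions indicate how strongly
// each feature pushed the verdict toward "fake" or "real".

interface FeatureAttribution {
  feature: string;   // e.g. a phrase or linguistic cue
  shapValue: number; // positive pushes toward "fake", negative toward "real"
}

interface Verdict {
  label: "real" | "fake" | "unverified";
  confidence: number; // 0..1
  attributions: FeatureAttribution[];
}

const example: Verdict = {
  label: "fake",
  confidence: 0.91,
  attributions: [
    { feature: "sensationalist phrasing", shapValue: 0.34 },
    { feature: "unnamed sources", shapValue: 0.22 },
    { feature: "quotes verified outlet", shapValue: -0.08 },
  ],
};
```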

Figure 3: Technical Implementation Workflow
A high-level overview of the data flow and system interactions.

1. Frontend Interaction

Component: `InputForm.tsx`

Action: User submits text

Trigger: `analyzeArticle(text)`

2. Backend Orchestration

File: `actions.ts`

Function: `analyzeArticle()`

Calls: Parallel AI flows

3. AI Analysis Layer

Framework: Genkit

Model: Gemini (via API)

Process: Tool-augmented generation
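To make the orchestration step concrete, below is a minimal sketch of how `analyzeArticle()` in `actions.ts` could fan out to parallel AI flows. The flow names (`detectClaimsFlow`, `analyzeLanguageFlow`) and their signatures are hypothetical stand-ins for the project's Genkit flows; Genkit's own flow-definition API is deliberately not shown.

```ts
// actions.ts -- hypothetical sketch of the orchestration step.
// The imported flow functions are stand-ins for Genkit flows backed by
// Gemini; their names and signatures are assumptions for illustration.

"use server";

import { detectClaimsFlow, analyzeLanguageFlow } from "./flows"; // hypothetical

export async function analyzeArticle(text: string) {
  // Run independent AI flows in parallel rather than sequentially, so total
  // latency is roughly that of the slowest flow, not the sum of all flows.
  const [claims, language] = await Promise.all([
    detectClaimsFlow(text),
    analyzeLanguageFlow(text),
  ]);
  return { claims, language };
}
```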

Figure 4: Bias Reduction Framework
A two-pronged approach to mitigate model bias.

1. Dataset Balancing

Initial datasets are analyzed for demographic and ideological gaps.

Political Statements
Cultural Context
Regional Languages

Gaps are filled with additional data to ensure a balanced and equitable representation.
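As an illustration of the gap analysis, the sketch below counts examples per category and flags under-represented categories for additional data collection. The `Example` shape, the category names, and the 10% threshold are assumptions for illustration, not values from the project.

```ts
// Hypothetical gap analysis for dataset balancing (illustrative only).

interface Example {
  text: string;
  category: "political" | "cultural" | "regional-language";
}

function findUnderrepresented(data: Example[], minShare = 0.1): string[] {
  const counts = new Map<string, number>();
  for (const ex of data) {
    counts.set(ex.category, (counts.get(ex.category) ?? 0) + 1);
  }
  // Flag any category whose share of the dataset falls below the threshold.
  return [...counts.entries()]
    .filter(([, n]) => n / data.length < minShare)
    .map(([category]) => category);
}
```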

2. Adversarial & Counterfactual Testing

Original Article (claim about "Location A") → Model → Verdict: Fake

Counterfactual Article (same claim, but about "Location B") → Model → Verdict: Fake

The verdict remains consistent, indicating that the model judges the content of a claim rather than protected attributes such as location.
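This check can be expressed as a simple consistency test. In the sketch below, the classifier is a hypothetical stand-in for the real model call; the function swaps the protected attribute and reports whether the verdict is unchanged.

```ts
// Hypothetical counterfactual consistency check (illustrative only).

type Label = "real" | "fake";
type Classifier = (article: string) => Promise<Label>;

async function counterfactualConsistent(
  classify: Classifier, // stand-in for the real model call
  article: string,
  original: string,     // e.g. "Location A"
  replacement: string,  // e.g. "Location B"
): Promise<boolean> {
  // Swap only the protected attribute; the claim itself is unchanged.
  const counterfactual = article.split(original).join(replacement);
  const [v1, v2] = await Promise.all([
    classify(article),
    classify(counterfactual),
  ]);
  return v1 === v2; // a consistent verdict means the attribute did not drive it
}
```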

Figure 5: Real-Time Learning & Adaptation
A continuous improvement loop based on Reinforcement Learning from Human Feedback (RLHF).

New Misinformation

A novel fake news tactic emerges.

Incorrect Classification

The model initially fails to detect the new tactic.

Error Flagged

The error is identified and fed back into the system.

Model Update

Reinforcement learning adjusts model weights.

Improved Model

The system now correctly identifies the new tactic.
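As a sketch of the feedback mechanics, the snippet below accumulates flagged misclassifications into a queue and hands off a batch for retraining once a threshold is reached. The `FeedbackItem` shape, the batch size, and the retraining hook are hypothetical illustrations of the loop, not the system's actual implementation.

```ts
// Hypothetical sketch of the human-feedback loop (illustrative only).
// Flagged errors accumulate until a batch is large enough to trigger
// a reinforcement-learning update.

interface FeedbackItem {
  text: string;
  modelLabel: "real" | "fake";
  humanLabel: "real" | "fake"; // the corrected verdict from a reviewer
}

const BATCH_SIZE = 100; // assumed threshold for a retraining run

const queue: FeedbackItem[] = [];

function flagError(
  item: FeedbackItem,
  retrain: (batch: FeedbackItem[]) => void,
) {
  if (item.modelLabel !== item.humanLabel) {
    queue.push(item); // only genuine misclassifications enter the queue
  }
  if (queue.length >= BATCH_SIZE) {
    retrain(queue.splice(0, BATCH_SIZE)); // hand off a batch, keep the rest
  }
}
```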

Figure 6: Model Performance Evaluation