Visualizing the Research Methodology
To complement the detailed methodology, these visualizations provide a clear, graphical representation of the system's core components and performance. The diagrams below illustrate the architectural workflow, the bias reduction framework, and the real-time learning loop that enables continuous adaptation.
Figure: Standard Transformer architecture. Stacked encoder and decoder blocks (Nx) apply multi-head attention and feed-forward layers over the input and output embeddings (outputs shifted right), producing output probabilities.
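The multi-head attention blocks in the Transformer diagram above are built from scaled dot-product attention. The following is a minimal illustrative sketch of that single operation on tiny matrices, not the system's actual model code:

```typescript
// Minimal sketch of scaled dot-product attention, the core operation
// inside each Multi-Head Attention block in the diagram above.
// Dimensions are tiny and illustrative.

type Matrix = number[][];

function softmax(row: number[]): number[] {
  const max = Math.max(...row);
  const exps = row.map((x) => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

function matmul(a: Matrix, b: Matrix): Matrix {
  return a.map((row) =>
    b[0].map((_, j) => row.reduce((acc, v, k) => acc + v * b[k][j], 0))
  );
}

function transpose(m: Matrix): Matrix {
  return m[0].map((_, j) => m.map((row) => row[j]));
}

// Attention(Q, K, V) = softmax(Q Kᵀ / sqrt(d_k)) V
function attention(q: Matrix, k: Matrix, v: Matrix): Matrix {
  const dk = k[0].length;
  const weights = matmul(q, transpose(k)).map((row) =>
    softmax(row.map((score) => score / Math.sqrt(dk)))
  );
  return matmul(weights, v); // each output row is a weighted mix of V's rows
}
```

Each output row is a convex combination of the value rows, weighted by how strongly the corresponding query attends to each key.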
1. Input Processing: The system deconstructs the core claims from the user-submitted text.
2. Language Analysis: A Transformer model analyzes the text for linguistic and contextual cues.
3. Bias Reduction: The model is trained on balanced datasets and tested with counterfactuals.
4. Explainable AI (XAI): SHAP determines which features influenced the final verdict.
5. Final Verdict: An explained verdict is generated with a confidence score.
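The five stages above can be sketched as a single pipeline. Everything below is an illustrative stand-in (the type `Verdict`, the functions `extractClaims`, `analyzeClaim`, `runPipeline`, and the keyword heuristic are invented for this sketch, not the project's actual code):

```typescript
// Hypothetical end-to-end shape of the analysis pipeline above.
// The real system uses a Transformer + SHAP; this stub only shows
// the data flow from raw text to explained verdicts.

interface Verdict {
  label: "fake" | "unverified";
  confidence: number;      // 0..1
  topFeatures: string[];   // SHAP-style feature attributions
}

function extractClaims(text: string): string[] {
  // 1. Input Processing: a naive sentence split stands in for claim extraction.
  return text.split(/[.!?]/).map((s) => s.trim()).filter(Boolean);
}

function analyzeClaim(claim: string): Verdict {
  // 2-5. A keyword heuristic stands in for the model + explainer stages.
  const suspicious = /shocking|miracle|they don't want you to know/i.test(claim);
  return {
    label: suspicious ? "fake" : "unverified",
    confidence: suspicious ? 0.9 : 0.5,
    topFeatures: suspicious ? ["sensational wording"] : [],
  };
}

function runPipeline(text: string): Verdict[] {
  return extractClaims(text).map(analyzeClaim);
}
```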
1. Frontend Interaction
   Component: `InputForm.tsx`
   Action: User submits text
   Trigger: `analyzeArticle(text)`
2. Backend Orchestration
   File: `actions.ts`
   Function: `analyzeArticle()`
   Calls: Parallel AI flows
3. AI Analysis Layer
   Framework: Genkit
   Model: Gemini (via API)
   Process: Tool-augmented generation
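The fan-out from `analyzeArticle()` to parallel AI flows can be sketched as below. The flow names (`credibilityFlow`, `biasFlow`, `factCheckFlow`) are hypothetical placeholders; in the real system they would be Genkit flow definitions calling Gemini:

```typescript
// Sketch of the backend orchestration step: `analyzeArticle` fans
// out to independent AI flows concurrently instead of awaiting
// each one in sequence. The stub flows below just echo the input
// length where a Genkit/Gemini call would go.

type Flow = (text: string) => Promise<string>;

const credibilityFlow: Flow = async (text) => `credibility:${text.length}`;
const biasFlow: Flow = async (text) => `bias:${text.length}`;
const factCheckFlow: Flow = async (text) => `facts:${text.length}`;

async function analyzeArticle(text: string): Promise<string[]> {
  // Promise.all runs the independent flows in parallel and
  // resolves once all of them have finished.
  return Promise.all([credibilityFlow(text), biasFlow(text), factCheckFlow(text)]);
}
```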
1. Dataset Balancing
Initial datasets are analyzed for demographic and ideological gaps, which are then filled with additional data to ensure balanced, equitable representation.
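A simple way to detect such representation gaps is to compare each group's example count against an equal-share target. This is an illustrative check, with made-up group labels and a made-up tolerance, not the project's actual balancing procedure:

```typescript
// Illustrative gap check for Dataset Balancing: report any group
// whose example count falls more than `tolerance` below an equal
// share of the total.

function findGaps(
  counts: Record<string, number>,
  tolerance = 0.1
): string[] {
  const groups = Object.keys(counts);
  const total = groups.reduce((sum, g) => sum + counts[g], 0);
  const target = total / groups.length; // equal representation per group
  return groups.filter((g) => counts[g] < target * (1 - tolerance));
}
```

Groups returned by `findGaps` would then be topped up with additional data before training.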
2. Adversarial & Counterfactual Testing
The same claim is submitted twice: once in the original article (a claim about "Location A") and once in a counterfactual article (the identical claim, but about "Location B"). Both versions are passed to the model; a consistent verdict across the pair indicates the model is judging the content of the claim, not protected attributes.
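The counterfactual check can be sketched as a small harness that swaps the protected attribute and compares verdicts. The `Model` type and both example models below are trivial stand-ins invented for this sketch:

```typescript
// Sketch of counterfactual testing: replace a protected attribute
// in the text, re-run the model, and require the verdict to match.

type Model = (text: string) => "real" | "fake";

function counterfactualConsistent(
  model: Model,
  article: string,
  attribute: string,
  replacement: string
): boolean {
  // Swap every occurrence of the attribute (e.g. "Location A" -> "Location B").
  const counterfactual = article.split(attribute).join(replacement);
  return model(article) === model(counterfactual);
}
```

A model that keys on the claim's content passes this check; one that keys on the location itself fails it.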
1. New Misinformation: A novel fake news tactic emerges.
2. Incorrect Classification: The model initially fails to detect the new tactic.
3. Error Flagged: The error is identified and fed back into the system.
4. Model Update: Reinforcement learning adjusts the model's weights.
5. Improved Model: The system now correctly identifies the new tactic.
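The feedback direction of this loop can be illustrated with a deliberately tiny model: a one-feature linear scorer whose weight is nudged whenever a flagged error comes back. The real system updates a Transformer via reinforcement learning; this sketch only shows how repeated error feedback moves the decision boundary:

```typescript
// Minimal sketch of the flag-and-update loop: a one-feature linear
// scorer that initially misses a novel tactic, then learns to flag
// it after its errors are fed back.

interface Scorer {
  weight: number; // weight on the "novel tactic" feature
  bias: number;
}

function classify(s: Scorer, feature: number): "fake" | "real" {
  return s.weight * feature + s.bias > 0 ? "fake" : "real";
}

function updateOnError(
  s: Scorer,
  feature: number,
  shouldBeFake: boolean,
  lr = 0.5
): Scorer {
  // Nudge the decision boundary toward the correct label for the
  // flagged example (a perceptron-style correction).
  const direction = shouldBeFake ? 1 : -1;
  return {
    weight: s.weight + lr * direction * feature,
    bias: s.bias + lr * direction,
  };
}
```

Starting from a scorer that labels everything "real", a couple of flagged errors on the new tactic are enough to flip its verdict to "fake".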