
As large language models become increasingly prevalent in society, their lack of transparency raises critical questions about trustworthiness. LLMs operate as black boxes, offering no visibility into their decision-making processes and creating an urgent need for external trust assessment mechanisms. This work introduces "trustparency"™, a novel approach to evaluating LLM trustworthiness through public sentiment analysis. Our system combines two complementary models: the open-source "Prasadrao/xlm-roberta-large-go-emotions-v3" model, which extracts scores for 28 human emotions from text, and our proprietary neural network, trained on 4 million software reviews, which predicts trust scores from those emotional patterns. The pipeline continuously scans Reddit discussions about popular LLMs, including ChatGPT, Claude, Gemini, and Grok, transforming raw public discourse into quantified trust metrics that are updated daily. By analyzing the emotional undertones of user experiences and opinions, our system provides an independent measure of public confidence in AI systems, offering a data-driven complement to the transparency problem that plagues modern artificial intelligence. This approach enables stakeholders to monitor real-time public perception and trust dynamics in the rapidly evolving AI landscape.
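The second stage of the pipeline, mapping per-text emotion scores to a trust score, is proprietary, so the sketch below is only a hypothetical stand-in: a toy linear weighting over a few GoEmotions labels (the label names are from the public GoEmotions taxonomy; the choice of which labels count as positive or negative, and the equal weighting, are assumptions made here for illustration).

```python
# Toy stand-in for the proprietary emotion-to-trust model described above.
# The real system is a neural network trained on 4M software reviews; this
# sketch just shows the shape of the mapping: 28 emotion intensities in,
# one trust score in [0, 1] out.

# Assumed groupings over GoEmotions labels (illustrative, not the real model's).
POSITIVE = ("admiration", "approval", "gratitude", "joy", "optimism", "relief")
NEGATIVE = ("anger", "annoyance", "disappointment", "disapproval", "disgust", "fear")

def trust_score(emotions: dict) -> float:
    """Map emotion intensities (each in 0..1) to a trust score in [0, 1].

    Positive emotional mass minus negative mass, rescaled from [-1, 1]
    to [0, 1]; missing labels default to intensity 0.
    """
    pos = sum(emotions.get(e, 0.0) for e in POSITIVE) / len(POSITIVE)
    neg = sum(emotions.get(e, 0.0) for e in NEGATIVE) / len(NEGATIVE)
    return (pos - neg + 1.0) / 2.0
```

A purely neutral text (no emotion detected) lands at 0.5 under this scheme, with strongly positive discourse pushing toward 1.0 and strongly negative discourse toward 0.0.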

D: Shows the raw, daily calculated trust score.
W: Shows a 7-day sliding moving average to smooth out daily noise.
M: Shows a 30-day sliding moving average for a broader trend view.
ALL: Shows the all-time cumulative average trust score.
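The chart views above can be reproduced from the daily scores with two small helpers, sketched here in plain Python (the trailing-window convention, where early points average only the days available so far, is an assumption; the site may handle the warm-up period differently):

```python
def sliding_average(scores, window):
    """Trailing moving average: each point averages the last `window` daily
    scores (fewer at the start, before a full window has accumulated).
    window=7 gives the W view, window=30 the M view."""
    out = []
    for i in range(len(scores)):
        lo = max(0, i - window + 1)
        chunk = scores[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def cumulative_average(scores):
    """All-time running mean of the daily scores: the ALL view."""
    out, total = [], 0.0
    for i, s in enumerate(scores, start=1):
        total += s
        out.append(total / i)
    return out
```

For example, daily scores `[0.5, 0.75, 0.25, 1.0]` smoothed with a 2-day window become `[0.5, 0.625, 0.5, 0.625]`, while the D view is simply the raw input.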

Contact: roman@trustparency.ai

Privacy Settings
