
Making News Recommendations Explainable with Large Language Models

2025-02-25

DER SPIEGEL explores using Large Language Models (LLMs) to improve news article recommendations. An offline experiment assessed an LLM's ability to predict reader interest based on reading history.

Methodology:

Reader survey data provided a ground truth of preferences. Each participant's reading history and article interest ratings were used. Anthropic's Claude 3.5 Sonnet LLM, acting as a recommendation engine, received each reader's history (title and summary) to predict interest in new articles (scored 0-1000). A JSON output format ensured structured results. The LLM's predictions were compared to actual survey ratings. A detailed methodology is available in:

A Mixed-Methods Approach to Offline Evaluation of News Recommender Systems
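The pipeline described above can be sketched as a prompt-construction and response-parsing step. The prompt wording and the `build_prompt`/`parse_scores` helpers below are illustrative assumptions, not DER SPIEGEL's published code; only the title-and-summary inputs, the 0-1000 scoring scale, and the JSON output format come from the article.

```python
import json

def build_prompt(history, candidates):
    """Assemble a prompt asking an LLM to score candidate articles 0-1000
    for one reader. Field names and wording are illustrative."""
    lines = ["You are a news recommendation engine.",
             "The reader's recent articles (title: summary):"]
    lines += [f"- {a['title']}: {a['summary']}" for a in history]
    lines.append("Score each candidate below from 0 to 1000 for this reader.")
    lines.append('Respond with JSON only: {"scores": {"<article_id>": <score>}}')
    lines += [f"[{c['id']}] {c['title']}: {c['summary']}" for c in candidates]
    return "\n".join(lines)

def parse_scores(llm_reply):
    """Parse the model's structured JSON reply into {article_id: score}."""
    return json.loads(llm_reply)["scores"]
```

The reply from the model (e.g. via Anthropic's Messages API) would then be fed straight into `parse_scores`, so every prediction is machine-readable.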

Key Findings:

Impressive results were achieved. Precision@5 reached 56% – when recommending 5 articles, nearly 3 were among a user's top-rated articles. For 24% of users, 4 or 5 top articles were correctly predicted; for another 41%, 3 out of 5 were correct. This significantly outperforms random recommendations (38.8%), popularity-based recommendations (42.1%), and a previous embedding-based approach (45.4%).
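Precision@5 here measures how many of the five recommended articles fall within the reader's five top-rated ones. A minimal sketch of the metric (the function name and dict-based data layout are assumptions):

```python
def precision_at_k(predicted, actual, k=5):
    """Fraction of the top-k predicted articles that also appear
    among the reader's top-k actually-rated articles.

    predicted: {article_id: LLM score (0-1000)}
    actual:    {article_id: survey rating}
    """
    top_pred = sorted(predicted, key=predicted.get, reverse=True)[:k]
    top_actual = set(sorted(actual, key=actual.get, reverse=True)[:k])
    return sum(1 for a in top_pred if a in top_actual) / k
```

For example, if four of the five highest-scored articles are in the reader's actual top five, the metric returns 0.8; the 56% figure is this value averaged over all surveyed users.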

The chart illustrates the performance uplift of the LLM approach over the other methods.

Spearman correlation, a second metric, reached 0.41, substantially exceeding the embedding-based approach (0.17), indicating a superior understanding of preference strength.
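Spearman correlation compares the rank order of predicted scores with the rank order of actual ratings, so 0.41 versus 0.17 means the LLM recovers much more of each reader's preference ordering, not just their top picks. A self-contained sketch (average ranks for ties; in practice `scipy.stats.spearmanr` would do the same job):

```python
def spearman(xs, ys):
    """Spearman rank correlation of two equal-length score lists."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):                      # group tied values
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1                  # average rank for the tie group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Perfectly matching orderings give 1.0, perfectly reversed ones give -1.0; the reported 0.41 sits in between but well above the embedding baseline.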

Explainability:

The LLM's explainability is a key advantage. An example shows how the system analyzes reading patterns and justifies recommendations:

<code>User has 221 articles in reading history

Top 5 Predicted by Claude:
... (List of articles with scores and actual ratings)

Claude's Analysis:
... (Analysis of reading patterns and scoring rationale)</code>

This transparency enhances trust and personalization.

Challenges and Future Directions:

High API costs ($0.21 per user) and processing time (several seconds per user) pose scalability challenges. Exploring open-source models and prompt engineering could mitigate these. Incorporating additional data (reading time, article popularity) could further improve performance.
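A back-of-envelope calculation shows why per-user API calls dominate at scale. Only the $0.21 cost comes from the experiment; the 3-second latency (a mid-range reading of "several seconds") and the user counts are illustrative assumptions:

```python
COST_PER_USER_USD = 0.21   # measured API cost per user (from the experiment)
SECONDS_PER_USER = 3.0     # assumed midpoint of "several seconds per user"

def batch_cost_and_hours(users, workers=1):
    """Total API cost (USD) and wall-clock hours to score `users` readers,
    assuming `workers` parallel API requests."""
    cost = users * COST_PER_USER_USD
    hours = users * SECONDS_PER_USER / workers / 3600
    return cost, hours
```

Scoring 100,000 readers with 50 parallel requests would cost $21,000 and take roughly 1.7 hours per refresh, which is why cheaper open-source models are an attractive direction.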

Conclusion:

The strong predictive power and explainability of LLMs make them valuable for news recommendation. Beyond recommendations, they offer new ways to analyze user behavior and content journeys, enabling personalized summaries and insights.

Acknowledgments

This research utilized anonymized, aggregated user data. Further discussion is welcome via LinkedIn.


