add RefChecker (#595)
zhimin-z authored Sep 6, 2024
1 parent 6b28616 commit 459fc1e
Showing 1 changed file with 1 addition and 0 deletions.
1 change: 1 addition & 0 deletions README.md
@@ -365,6 +365,7 @@ Please review our [CONTRIBUTING.md](https://github.com/EthicalML/awesome-production-machine-learning/blob/main/CONTRIBUTING.md)
* [Ragas](https://github.com/explodinggradients/ragas) ![](https://img.shields.io/github/stars/explodinggradients/ragas.svg?style=social) - Ragas is a framework to evaluate RAG pipelines.
* [RAGChecker](https://github.com/amazon-science/RAGChecker) ![](https://img.shields.io/github/stars/amazon-science/RAGChecker.svg?style=social) - RAGChecker is an advanced automatic evaluation framework designed to assess and diagnose Retrieval-Augmented Generation (RAG) systems.
* [Rageval](https://github.com/gomate-community/rageval) ![](https://img.shields.io/github/stars/gomate-community/rageval.svg?style=social) - Rageval is a tool to evaluate RAG systems.
* [RefChecker](https://github.com/amazon-science/RefChecker) ![](https://img.shields.io/github/stars/amazon-science/RefChecker.svg?style=social) - RefChecker provides a standardized assessment framework to identify subtle hallucinations present in the outputs of large language models (LLMs).
* [RewardBench](https://github.com/allenai/reward-bench) ![](https://img.shields.io/github/stars/allenai/reward-bench.svg?style=social) - RewardBench is a benchmark designed to evaluate the capabilities and safety of reward models.
* [TensorFlow Model Analysis](https://github.com/tensorflow/model-analysis) ![](https://img.shields.io/github/stars/tensorflow/model-analysis.svg?style=social) - TensorFlow Model Analysis (TFMA) is a library for evaluating TensorFlow models on large amounts of data in a distributed manner, using the same metrics defined in their trainer.
* [Tonic Validate](https://github.com/TonicAI/tonic_validate) ![](https://img.shields.io/github/stars/TonicAI/tonic_validate.svg?style=social) - Tonic Validate is a high-performance evaluation framework for LLM/RAG outputs.
