You signed in with another tab or window. Reload to refresh your session.You signed out in another tab or window. Reload to refresh your session.You switched accounts on another tab or window. Reload to refresh your session.Dismiss alert
Language models enable companies to build and launch innovative applications to improve productivity and increase customer satisfaction.
However, it’s been known that LLMs can hallucinate, generate adversarial responses that can harm users, and even expose private information that they were trained on when prompted or unprompted. It's more critical than ever for ML and software application teams to minimize these risks and weaknesses before launching LLMs and NLP models. As a result, it’s important for you to include a process to audit language models thoroughly before production.
The Fiddler Auditor enables you to test LLMs and NLP models, identify weaknesses in the models, and mitigate potential adversarial outcomes before deploying them to production.
Features and Capabilities
Fiddler Auditor supports
Red-teaming LLMs for your use-case with prompt perturbation
Integration with LangChain
Custom evaluation metrics
Generative and Discriminative NLP models
Comparison of LLMs
An example report generated by the Fiddler Auditor for text-davinci-003.
Installation
From PyPI
Auditor is available on PyPI and we test on Python 3.8 and above. We recommend creating a virtual python environment and installing using the following command
pip install fiddler-auditor
From source
You can install from source after cloning this repo using the following command