🦍 Gorilla: Large Language Model Connected with Massive APIs
Shishir G. Patil*, Tianjun Zhang*, Xin Wang, Joseph E. Gonzalez
UC Berkeley
sgp@berkeley.edu, tianjunz@berkeley.edu
Systems and Algorithms for Integrating LLMs with Applications, Tools, and Services
Gorilla Used at
Teaching LLMs tool use
In OpenFunctions-v2, we natively train the model to support parallel functions (generating multiple function calls at a time) and multiple functions (selecting one or more functions from several candidates). Java/REST/Python APIs are also supported for the first time, with extended data types.
- Read More: Blog
- How well do other function-calling models perform? Berkeley Function Calling Leaderboard
- Play with the model online: Gorilla OpenFunctions-v2 Web Demo
- Check out the project: GitHub Code
- Model (6.91B) on HuggingFace: gorilla-llm/gorilla-openfunctions-v2
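To make the "parallel functions" idea above concrete, the sketch below parses a model output that contains one or more Python-style function calls. The output format (semicolon-separated calls) and the function names are illustrative assumptions, not the exact serialization OpenFunctions-v2 emits:

```python
import ast

def parse_calls(generated: str):
    """Parse a model output containing one or more Python-style calls,
    e.g. 'get_weather(city="Berlin"); get_time(tz="CET")'.
    Returns a list of (function_name, keyword_arguments) pairs."""
    calls = []
    for part in generated.split(";"):
        node = ast.parse(part.strip(), mode="eval").body
        if not isinstance(node, ast.Call):
            raise ValueError(f"not a function call: {part!r}")
        kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords}
        calls.append((node.func.id, kwargs))
    return calls

# A parallel-call output yields two separate calls to dispatch.
calls = parse_calls('get_weather(city="Berlin"); get_time(tz="CET")')
```

Parsing with `ast` rather than regexes keeps quoting and nested literals correct, and `ast.literal_eval` safely converts argument values without executing code.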
Benchmarking LLMs on function calling capabilities
The Berkeley Function-Calling Leaderboard (BFCL) aims to provide a thorough study of the function-calling capability of different LLMs. It consists of 2k question-function-answer pairs spanning multiple languages (Python, Java, JavaScript, REST API), diverse application domains, and complex use cases (multiple and parallel function calls). We also investigate function relevance detection, to determine how the model reacts when the provided functions are not suitable to answer the user's question.
- Read More: Blog
- Live Leaderboard: Website
- BFCL Evaluation Dataset: HuggingFace Dataset
- Gradio Demo: HuggingFace Space
- Reproducibility: GitHub Code
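Checking whether a generated call matches the ground truth is naturally done by comparing abstract syntax trees rather than raw strings, so that argument order and whitespace do not matter. The sketch below shows that idea in miniature; it is a simplified illustration, not BFCL's actual checker (which also handles type coercion and multiple acceptable answers):

```python
import ast

def call_signature(call_str: str):
    """Normalize a Python-style call string to (name, kwargs) so that
    formatting and keyword-argument order do not affect comparison."""
    node = ast.parse(call_str.strip(), mode="eval").body
    if not isinstance(node, ast.Call):
        raise ValueError(f"not a function call: {call_str!r}")
    kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords}
    return node.func.id, kwargs

def ast_match(generated: str, expected: str) -> bool:
    """True if two call strings denote the same call."""
    return call_signature(generated) == call_signature(expected)
```

With this normalization, `f(a=1, b=2)` and `f(b=2, a=1)` count as the same answer, while a wrong function name or wrong argument value does not.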
A better way to do RAG
RAFT: Retrieval-Augmented Fine-Tuning for domain-specific RAG. Drawing parallels between LLMs in open-book (RAG) and closed-book (SFT) exam settings, we present a better recipe for fine-tuning a base LLM for RAG-focused challenges. Discover how RAFT prepares LLMs to excel on a specific document set, mirroring how students prepare for finals!
- Read More: Blog
- Read More: MSFT-META Blog
- Paper: https://arxiv.org/abs/2403.10131
- Reproducibility: GitHub Code
Runtime for executing LLM-generated actions
Gorilla Execution Engine (GoEX) is a runtime for LLM-generated actions such as code, API calls, and more. It features "post-facto validation" for assessing LLM actions after execution. Key to our approach are "undo" and "damage confinement" abstractions for managing unintended actions and risks. This paves the way for fully autonomous LLM agents, enhancing interaction between apps and services with humans out of the loop.
- Read More: Blog
- Paper: https://arxiv.org/abs/2404.06921
- Try it out: Web Demo
- Reproducibility: GitHub Code
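The "undo" and "post-facto validation" abstractions can be sketched as pairing every action with a compensating action, executing first, and rolling back if validation rejects the result. This is an illustration of the idea only, not the GoEx API (names like `ReversibleAction` are invented here):

```python
class ReversibleAction:
    """Pair an action with its undo, so an LLM-proposed action can be
    executed first and validated after the fact, then rolled back if
    the outcome is rejected."""
    def __init__(self, do, undo):
        self.do = do
        self.undo = undo

def run_with_validation(action: ReversibleAction, validate):
    """Execute the action, then apply post-facto validation; on rejection,
    invoke the undo to revert the side effects."""
    result = action.do()
    if validate(result):
        return result, True
    action.undo()
    return result, False

# Usage: appending to shared state, with removal as the compensating undo.
state = []
act = ReversibleAction(do=lambda: state.append("email_sent") or "sent",
                       undo=lambda: state.pop())
result, ok = run_with_validation(act, validate=lambda r: False)  # validator rejects
```

Real side effects (a sent email, a POST request) are not always perfectly reversible, which is why GoEx adds damage confinement alongside undo rather than relying on undo alone.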
News
GoEx: A runtime for executing LLM-generated actions like code & API calls. GoEx presents "undo" and "damage confinement" abstractions for mitigating the risk of unintended actions taken in LLM-powered systems. Release blog, Paper.
Berkeley Function Calling Leaderboard! How do models stack up for function calling? Read more in our Release Blog.
Gorilla OpenFunctions v2 sets a new SoTA for open-source LLMs: on par with GPT-4, with support for more languages.
Gorilla OpenFunctions! A drop-in replacement! Examples
Try Gorilla in 60 seconds! No sign-ups, no installs, just Colab!
With Apache 2.0-licensed LLM models, you can use Gorilla commercially without any obligations!
We are excited to hear your feedback, and we welcome API contributions as we build this open-source project. Join us on Discord, or feel free to email us!
Gorilla for your CLI and Spotlight Search
Gorilla powered CLI
Get started with pip install gorilla-cli
Gorilla Powered Spotlight Search
Gorilla-Spotlight
Signup
Vision
Contact Us
Citation
@inproceedings{patil2024gorilla,
  title={Gorilla: Large Language Model Connected with Massive APIs},
  author={Patil, Shishir G. and Zhang, Tianjun and Wang, Xin and Gonzalez, Joseph E.},
  booktitle={Advances in Neural Information Processing Systems},
  year={2024},
}