Replies: 16 comments · 7 replies
-
Nice
-
Good information
-
When will Google models be added?
-
Nice work. Feedback / feature requests:
I've experimented with this. My concern is that too many (or overly long) inline prompts in code start to look like prompt spaghetti, or like web development before we had conventions that separated concerns with MVC frameworks. Plus, giving non-developer roles the ability to contribute to and test prompts is important.
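To make the concern concrete, here's a rough sketch of inline prompt strings versus pulling the prompt from a separate file; the paths and helper below are made up for illustration:

```python
from pathlib import Path

# Inline version: the prompt text is interleaved with application logic,
# which is what starts to feel like "prompt spaghetti" as it grows.
def build_summary_prompt_inline(ticket_text: str) -> str:
    return (
        "You are a support analyst. Summarize the ticket below in three "
        "bullet points, flag any security concerns, and suggest a priority.\n\n"
        f"Ticket:\n{ticket_text}"
    )

# Externalized version: the prompt lives in its own file (here
# prompts/summarize_ticket.txt, a made-up path) so non-developers can edit
# and review it without touching code.
def build_summary_prompt_from_file(ticket_text: str, prompt_dir: Path = Path("prompts")) -> str:
    template = (prompt_dir / "summarize_ticket.txt").read_text(encoding="utf-8")
    return template.format(ticket=ticket_text)
```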
-
Hey @dljones555 - Thanks so much for the thoughtful feedback! Lots of great ideas here, and we're excited to see you're already thinking along the same lines we are!
Keep the feedback coming! This is super helpful as we continue to improve the experience!
-
Kate, thanks for replying. I've explored the prompt management ecosystem a little further.

To me, it seems there are prompt creation and eval tools geared toward engineers, but not workflows that are friendly to letting non-engineers work on prompts. There are also more consumer-friendly prompt management tools that don't integrate into a prompts-as-code approach or CI/CD workflows. A blend of both approaches might be the sweet spot. Are you positioning GitHub Models and the prompt tools as friendly for non-engineers to help with the prompt development lifecycle? Prompts are becoming key knowledge assets for organizations, and other roles like QA, business, and operations people should be involved in authoring and testing them, and they should be seamlessly reusable in agent code by name.

I took a first attempt at a file-based PromptLoader (pair coded mostly with GitHub Copilot) that I want to develop further to integrate with the MCP SDKs and support the roots and prompts specs. https://github.com/dljones555/promptloader

David L. Jones
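For context, the general shape of the idea is something like this (an illustrative sketch only, not the actual PromptLoader API from the repo above):

```python
from pathlib import Path

class FilePromptStore:
    """Illustrative file-backed prompt store: prompts are named after their
    files, so agent code can reuse them by name while other roles edit and
    review the files directly."""

    def __init__(self, root: str = "prompts"):
        self.root = Path(root)

    def get(self, name: str, **variables: str) -> str:
        # One prompt per file, e.g. prompts/triage_issue.txt (hypothetical layout).
        template = (self.root / f"{name}.txt").read_text(encoding="utf-8")
        return template.format(**variables)

# Agent code then refers to prompts by name only:
# store = FilePromptStore()
# prompt = store.get("triage_issue", issue_title="Login fails on Safari")
```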
-
Hi David, really appreciate you sharing such a thoughtful and well-researched response! You've clearly dug in here! While we're still in the early stages of exploring where GitHub Models can go, your points about MCP, other prompt formats, and more are incredibly helpful context as we think about where we go next. I also totally agree on involving non-engineers in the prompt development lifecycle; that's something we've been hearing more broadly from large enterprises too. I've shared your post internally with the team, and we're actively shaping our thoughts on MCP now, so the timing is great. Really appreciate you calling those out. Thanks again for contributing to the discussion here - please keep it coming!
-
Love this!
-
When the daily rate limits are reached, be it via the Playground or the API, will there be any kind of notification warning us of this and that we will be billed at $0.04/req for the rest of that day? If not, I'd request that this be added in the UI or somewhere.
-
Hey @krishnanmk, great question! To clarify: once you enable billing, all usage (whether via the API or the in-repo playground/compare tabs) is immediately billed according to your pricing plan. There's no longer a free tier that gets used up first once you enable billing. This is because paid users get higher rate limits right away (e.g., more requests per minute), so you're getting added value from the start.

That said, usage in the public playground at github.com/marketplace/models is always free! It's not tied to any org, so it's not billed, but the tradeoff is that rate limits are much lower.

Totally fair point about messaging. We just added a banner to the public playground to help clarify this, and we're happy to keep improving the experience. Hope that helps! Feel free to reach out with any other questions!
-
How can I log the rate limit error?
-
Hey @Nassergmr, can you tell us more about what you're trying to do here? With a few more specifics I can be more helpful!
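In general, a rate-limit response comes back as HTTP 429, so one way to log it is simply to check the status code. A minimal sketch follows; the endpoint URL and model ID here are illustrative placeholders based on the GitHub Models docs, so check the documentation for the current values:

```python
import logging
import os

import requests

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("github-models")

# Illustrative endpoint and model ID; treat both as placeholders.
URL = "https://models.github.ai/inference/chat/completions"
payload = {
    "model": "openai/gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello!"}],
}
headers = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

resp = requests.post(URL, json=payload, headers=headers, timeout=30)
if resp.status_code == 429:
    # Rate limited: log the details, including Retry-After if the server sent it.
    log.warning(
        "Rate limit hit (429). Retry-After=%s Body=%s",
        resp.headers.get("Retry-After"),
        resp.text,
    )
else:
    resp.raise_for_status()
    log.info(resp.json()["choices"][0]["message"]["content"])
```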
-
nice |
-
Hey GitHub community! We're beyond excited to unveil something we've been working on that's about to change how you build with AI. GitHub Models has arrived in public preview, bringing AI developer tooling directly into your repositories!
Your AI Workflow, Simplified
Imagine building AI features without constantly switching between tools or losing context. We're committed to making that dream a reality! GitHub Models weaves AI capabilities seamlessly into your existing GitHub workflow.
Prompts as Code: Version, Track, Review
Gone are the days of prompts living in random text files or lost in chat histories! With GitHub Models:
- `*.prompt.yml` files let you treat prompts like any other code asset
- The built-in editor means everyone on your team can contribute to prompt engineering, even those who've never written a line of code!
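For illustration, a minimal prompt file might look something like the sketch below; the file path and values are made up, and the field names are illustrative, so see the documentation for the authoritative schema:

```yaml
# prompts/summarize.prompt.yml (illustrative path and contents)
name: Ticket summarizer
description: Summarizes a support ticket into three bullet points
model: openai/gpt-4o-mini
modelParameters:
  temperature: 0.3
messages:
  - role: system
    content: You are a support analyst. Be concise and factual.
  - role: user
    content: "Summarize this ticket: {{ticket}}"
```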
Side-by-Side Model Comparison
No more guesswork when choosing models or tuning prompts! With GitHub Models, you can compare models and prompt variations side by side, right in your repository.
This structured approach means faster iterations and more confident decision-making for your AI projects.
Enterprise-Grade Security & Control
We built GitHub Models with security and governance at the forefront.
Available Now for Everyone
GitHub Models is built into every repo, for every developer, whether you're working solo or as part of a large organization. Getting started is as simple as adding a `.prompt.yml` file to your repository.

Want to learn more? Check out our comprehensive documentation on GitHub Models.
Help Shape What's Next
This is just the beginning and we are still very early in our journey! Your feedback will directly influence our roadmap as we continue to enhance GitHub Models. Join us in the comments below to share your ideas and connect with other developers who are building the future of AI on GitHub.
What AI features are you excited to build with GitHub Models? Let us know in the comments below!