If you have trouble running the command, try running npx lmstudio install-cli to add it to your PATH.
To check if the bootstrapping was successful, run the following in a 👉 new terminal window 👈:
lms
Usage
You can use lms --help to see a list of all available subcommands.
For details about each subcommand, run lms <subcommand> --help.
Here are some frequently used commands:
lms status - To check the status of LM Studio.
lms server start - To start the local API server.
lms server stop - To stop the local API server.
lms ls - To list all downloaded models.
lms ls --detailed - To list all downloaded models with detailed information.
lms ls --json - To list all downloaded models in machine-readable JSON format.
lms ps - To list all loaded models available for inferencing.
lms ps --json - To list all loaded models available for inferencing in machine-readable JSON format.
lms load - To load a model.
lms load <model path> -y - To load a model with maximum GPU acceleration, without a confirmation prompt.
lms unload <model identifier> - To unload a model.
lms unload --all - To unload all models.
lms create - To create a new project with the LM Studio SDK.
lms log stream - To stream logs from LM Studio.
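Because lms ls --json and lms ps --json emit machine-readable JSON, their output can be piped into a script. Below is a minimal Python sketch; note that the "path" field name and the sample payload are assumptions for illustration only, since the exact schema may vary between LM Studio versions (run lms ls --json yourself to inspect the real output):

```python
import json

def downloaded_model_names(raw_json: str) -> list[str]:
    """Extract model identifiers from `lms ls --json` output.

    The "path" field name is hypothetical; inspect the actual JSON
    emitted by your LM Studio version before relying on it.
    """
    return [entry["path"] for entry in json.loads(raw_json)]

# Made-up sample payload; in practice the JSON would come from e.g.
#   raw = subprocess.run(["lms", "ls", "--json"],
#                        capture_output=True, text=True, check=True).stdout
sample = '[{"path": "llama-3.1-8b"}, {"path": "qwen2.5-7b"}]'
print(downloaded_model_names(sample))  # → ['llama-3.1-8b', 'qwen2.5-7b']
```

The same approach works for lms ps --json to enumerate currently loaded models.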
Contributing
You can build the project locally by following these steps:
Note: make sure the version of Node you have installed matches the one specified in the engines field of package.json.
git clone https://github.com/lmstudio-ai/lms.git
cd lms
npm install
npm run watch
# To test your changes and run commands:
node ./dist/index.js <subcommand>