- r/LMStudio - Reddit
Using SillyTavern with Kobold is dead easy, no problems there. But why doesn't SillyTavern support LM Studio? LM Studio's interface is extremely basic; it doesn't support character cards and many of the nicer features that KoboldCpp and Faraday do. And there don't seem to be any other proxy front ends out there for Windows other than SillyTavern. I have seen a suggestion on Reddit
- Failed to load model running LM Studio? : r/LocalLLaMA - Reddit
"Failed to load" in LM Studio is usually down to a handful of things: your CPU is old and doesn't support AVX2 instructions; your C++ redistributables are out of date and need updating; or there isn't enough memory to load the model.
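The AVX2 point is easy to verify yourself on Linux by inspecting `/proc/cpuinfo`. A minimal sketch; the `has_avx2` helper is my own name, not part of LM Studio, and on macOS you would query `sysctl -a` instead:

```python
# Check whether the CPU advertises the AVX2 instruction set (Linux only).
# `has_avx2` is a hypothetical helper for illustration.

def has_avx2(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line in /proc/cpuinfo lists avx2."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and "avx2" in line.split(":", 1)[-1].split():
            return True
    return False

if __name__ == "__main__":
    with open("/proc/cpuinfo") as f:
        print("AVX2 supported:", has_avx2(f.read()))
```

If AVX2 is missing, look for a build of the backend compiled without AVX2 (llama.cpp can be built for older instruction sets) rather than the default LM Studio binary.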
- Why do people say LM Studio isn't open-sourced? - Reddit
LM Studio's GitHub does not contain the application itself; that part is closed source. In my opinion this is kinda gross since it's built on the open-source llama.cpp, but that library's license allows it, so they aren't offside legally.
- Re-use already downloaded models? : r/LMStudio - Reddit
In the course of testing many AI tools I have already downloaded lots of models and saved them to a dedicated location on my computer. I would like to re-use them instead of re-downloading them. Some tools offer a settings file where a source folder can be assigned, but I haven't found anything like that in LM Studio, and I wonder if that is at all possible or if I am overlooking it.
- Privacy? : r/LMStudio - Reddit
Yeah, I have this question too. The UI and the general search/download mechanism for models are awesome, but I've stuck with Ooba until someone sheds some light on whether there's any data collected by the app or if it's 100% local and private.
- r/LMStudio Lounge : r/LMStudio - Reddit
Hi, I'm using LM Studio to try out my `system_prompts` and settings, with the intention of implementing them later in my LlamaIndex RAG application. My models somewhat ignore the `system_prompt` and don't obey it.
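One thing worth checking when a system prompt seems ignored is whether it actually reaches the model as a separate `system` role message, since some chat templates fold or drop system text that is merely prepended to the user turn. LM Studio's local server speaks the OpenAI-compatible chat API; a minimal sketch, assuming the default port 1234 (the `build_payload` helper is my own):

```python
import json
from urllib import request

# LM Studio's local server default; adjust if you changed the port.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_payload(system_prompt: str, user_msg: str) -> dict:
    """Build an OpenAI-style chat payload with the system prompt
    as its own 'system' role message, not glued to the user turn."""
    return {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_msg},
        ],
        "temperature": 0.2,
    }

if __name__ == "__main__":
    payload = build_payload("Answer only in JSON.", "List three GGUF quant types.")
    req = request.Request(
        LMSTUDIO_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    print(request.urlopen(req).read().decode())
```

The same message structure carries over to LlamaIndex, which lets you set a system prompt on the LLM object rather than baking it into the query text.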
- Why is ollama faster than LM Studio? : r/LocalLLaMA - Reddit
I just tested Mistral 7B Instruct v0.2 in both LM Studio and ollama. In LM Studio they only allow the GGUF format, so I knew it was running on CPU, hence so slow (it took around 90 minutes to generate what Gemini generates in 2 minutes). I tried the same model in ollama today, and to my surprise it was really fast; the model was only 4 GB (same size as in LM Studio), so
- How to change the directory of the LM Studio cache? : r/LMStudio - Reddit