#%env/templates/metas.template%# #%env/templates/header.template%# #%env/templates/submenuAI.template%#

LLM Selection

Here you can pick models from an LLM service and select them as production models. In the "Production Models" matrix you can then assign each selected model a function inside YaCy.

Install your local LLM service! You need either a local Ollama or LM Studio instance running on localhost or inside your intranet.
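To verify that the service is reachable you can list its installed models: Ollama answers on `GET /api/tags`, while LM Studio exposes the OpenAI-compatible `GET /v1/models`. The sketch below assumes the services run on their usual default ports (11434 and 1234) and relies only on the documented response shapes; the helper names are illustrative.

```python
import json
from urllib.request import urlopen

# Assumed default endpoints (both services may be configured differently)
OLLAMA_TAGS = "http://localhost:11434/api/tags"      # Ollama model list
LMSTUDIO_MODELS = "http://localhost:1234/v1/models"  # LM Studio (OpenAI-compatible)

def model_names(payload: dict) -> list[str]:
    """Extract model names from either response shape."""
    if "models" in payload:
        # Ollama shape: {"models": [{"name": "llama3:latest", ...}, ...]}
        return [m["name"] for m in payload["models"]]
    # OpenAI-compatible shape: {"data": [{"id": "...", ...}, ...]}
    return [m["id"] for m in payload.get("data", [])]

def list_models(url: str) -> list[str]:
    """Fetch the model list from a running local LLM service."""
    with urlopen(url, timeout=5) as resp:
        return model_names(json.load(resp))
```

If the call fails, the service is not running or listens on a different hoststub than the one configured here.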

Service Selection
service
  Selecting a service presets the hoststub value below
hoststub
  You can usually leave this at the default value
api_key
  (not required for Ollama or LM Studio)
max_tokens
  You must set the Context Length in the LLM service so that it covers your selected max_tokens; in Ollama you find a Context Length slider in the settings
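When calling Ollama directly, the context length can also be passed per request via the `num_ctx` option of `/api/chat` (and `num_predict` caps the output tokens). The helper below builds such a request body and rejects a max_tokens value that does not fit into the context window; the function name and values are illustrative, but the request fields are Ollama's documented options.

```python
import json

def build_chat_request(model: str, prompt: str, max_tokens: int, num_ctx: int) -> str:
    """Build an Ollama /api/chat request body; num_ctx must cover max_tokens."""
    if num_ctx < max_tokens:
        raise ValueError(
            f"context length {num_ctx} is smaller than max_tokens {max_tokens}; "
            "raise the Context Length in the LLM service settings"
        )
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # num_ctx: context window size; num_predict: maximum tokens to generate
        "options": {"num_ctx": num_ctx, "num_predict": max_tokens},
    })
```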
 
Production Models #{productionmodels}# #{/productionmodels}#
For each function you can select a service, model, hoststub, api_key and max_tokens, plus tooling and vision capabilities and row actions:

search-answers
  This model creates answers for search requests
chat
  This model is used in the chat interface and as the default for the RAG proxy
translation
  This model can be used to translate the web UI
classification
  This model is used to classify prompts to find out what they demand
search-query
  This model produces YaCy search queries from prompts in RAG or chat
qa-pairs
  This model can be used to produce query-answer pairs which enhance search from chat prompts
tldr-shortener
  This model is used to create summaries of web content
#[service]# #[model]# #[hoststub]# #[api_key]# #[max_tokens]#
#%env/templates/footer.template%#