Hi all,
A couple of questions related to AI employees, if you don’t mind:
Is it possible to pin either users or AI employees to a specific LLM provider and/or model when more than one is configured? Some models are more expensive than others: for example, I benefit from using GPT-5.3-Codex to debug JS blocks, but plain users of the interface should only be able to use GPT-5.2 or something cheaper (but good enough). Also, some AI employees (e.g. the translator) do not require Codex to operate, so why have this model enabled for such an assistant? I cannot find any way to restrict switching between all of the configured providers and models in the chat dialog.
The “skills” tab for any AI employee (whether built-in or added manually) is always read-only, and it’s not possible to turn these skills on or off. Is this a bug, or am I doing something incorrectly?
Thanks, I know that it’s possible to pick custom models. But these settings apply to all enabled AI agents, and if more than one model is assigned, end users are able to switch between them in the chat dialog.
I was pointing to the idea that it would be good to assign different LLM providers or models to different AI agents based on their functionality. Or, alternatively, to create an access list for Nocobase users that limits which LLMs/models they can access when using AI agents.
Unfortunately not. I can set up multiple LLM configurations and activate different models within them, but there is no option to assign a specific LLM configuration to an AI agent (or to a user) - all active LLM configurations and their activated models remain selectable by users.