Flexible choice of model and provider - why?
In our experience, it is important for AI projects to remain flexible in terms of which Large Language Model (LLM) and which provider they offer to their users. That way, AI service providers avoid putting all of their eggs in one basket and remain attractive to a wider range of customers.

But why is this so important to us? Beyond that attractiveness, free choice of LLM and provider allows for:
Flexibility depending on use case and context: some models are more proficient in particular situations than others, so you can always generate the most useful responses within one and the same piece of software.
Time and resource efficiency: switch to a more affordable or a more powerful model within seconds, saving you and your company time and money. A variety of models also enables multi-step agents, where different steps can rely on different models.
Reactivity to political and legal changes: having several different models and providers at hand means that you can quickly adapt to external factors.
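In practice, this kind of flexibility usually comes from a thin abstraction layer between the application and the model backends. The following is a minimal sketch of such a layer, not our AI Suite's actual implementation: the model names are illustrative, and the stub lambdas stand in for real provider SDK calls (OpenAI, Mistral, Azure, Anthropic).

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ModelInfo:
    """Metadata plus a completion callable for one model (hypothetical sketch)."""
    provider: str
    complete: Callable[[str], str]

# In a real system each entry would wrap a provider SDK; here stubs echo the prompt.
REGISTRY: Dict[str, ModelInfo] = {
    "gpt-4o": ModelInfo("OpenAI", lambda prompt: f"[gpt-4o] {prompt}"),
    "mistral-large": ModelInfo("Mistral", lambda prompt: f"[mistral-large] {prompt}"),
}

def complete(model: str, prompt: str) -> str:
    """Route a prompt to whichever model is currently configured."""
    return REGISTRY[model].complete(prompt)
```

With a registry like this, switching the model becomes a one-line configuration change rather than a rewrite of the application code.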
Which models and providers are currently supported?
Our AI Suite currently supports over 27 different models from providers such as OpenAI, Mistral, Azure, or Anthropic. As a result, users can choose which provider they trust the most, which model they prefer, and what kinds of data they want to integrate. Additionally, the AI Suite already provides a practical overview of the important characteristics of and differences between the models, such as supported input media or costs per input and output, which makes it easier for an individual user or an admin to choose a fitting model.
Recently, we have also experimented a lot with Gemini. Those models are fast and affordable, and they generate very helpful responses. Our team has observed that Gemini models support reasoning nicely, are very reliable in tool calling, and produce much better structured outputs than GPT models.
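One reason structured outputs matter in practice is that the application can validate a model's reply before using it, no matter which provider produced it. The following is a hypothetical sketch of such a check; the field names are illustrative and not part of our AI Suite:

```python
import json

# Fields our (hypothetical) application expects in every structured reply.
EXPECTED_FIELDS = {"answer": str, "confidence": float}

def parse_structured(raw: str) -> dict:
    """Parse a JSON model reply and verify the expected fields and types."""
    data = json.loads(raw)
    for field, expected_type in EXPECTED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"missing or mistyped field: {field}")
    return data

# A well-formed reply passes validation; a malformed one raises early,
# instead of causing errors deeper in the application.
reply = parse_structured('{"answer": "42", "confidence": 0.9}')
```

Validating at the boundary like this keeps the rest of the application independent of which model generated the reply.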

Our conclusion
We observe that AI models are increasingly becoming a "commodity", which makes them easier to exchange. As always, the devil is in the detail: seamlessly integrating an individual model often requires some fine-tuning. But in our experience, it is definitely worth it!