com.vaadin.uitest.ai
Interface LLMService
An LLM service that generates a response based on a prompt. All responsibilities related to model usage have to be implemented in this service: for example, API key provisioning, parameter setting, and prompt template generation.
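As a rough sketch of where those responsibilities could live, consider the following hypothetical implementation. The class name, the environment variable, the String return type of requestAI (its full signature is not shown on this page), and the sendChatCompletion helper are all assumptions rather than part of this API:

import com.vaadin.uitest.ai.AiArguments; // package location assumed
import com.vaadin.uitest.ai.LLMService;

public class MyLLMService implements LLMService {

    // API key providing: read the key from the environment rather than
    // hard-coding it (the variable name is an assumption).
    private final String apiKey = System.getenv("LLM_API_KEY");

    @Override
    public String getModel() {
        // Parameter setting: pin the model ID used for completions.
        return "gpt-4";
    }

    @Override
    public String requestAI(AiArguments aiArguments) {
        // Model usage: expand the prompt template and call the vendor API
        // with the configured parameters.
        String prompt = getPromptTemplate(aiArguments);
        return sendChatCompletion(apiKey, getModel(), getTemperature(),
                getMaxTokens(), prompt);
    }

    // Hypothetical helper; the vendor-specific HTTP call is elided.
    private String sendChatCompletion(String key, String model,
            double temperature, int maxTokens, String prompt) {
        return "";
    }
}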
Field Summary
Fields

Modifier and Type                Field
static final String              GENERATE_HEADING_PLAYWRIGHT_JAVA
static final String              GENERATE_HEADING_PLAYWRIGHT_REACT
static final String              GENERATE_IMPORTS
static final String              GENERATE_IMPORTS_PLAYWRIGHT_JAVA
static final String              GENERATE_IMPORTS_PLAYWRIGHT_REACT
static final String              GENERATE_LOGIN_PLAYWRIGHT_JAVA
static final String              GENERATE_LOGIN_PLAYWRIGHT_REACT
static final String              GENERATE_SNIPPETS
static final org.slf4j.Logger    LOGGER
static final String              PARSER_ASSISTANT_QUESTION
static final String              PARSER_SYSTEM_MSG_FLOW
static final String              PARSER_SYSTEM_MSG_LIT
static final String              PARSER_SYSTEM_MSG_REACT
Method Summary

static String extractCode(String markdown)

default String getChatAssistantSystemMessage(String name, Framework framework)

default String getGeneratedResponse(AiArguments aiArguments)

default String getGeneratedResponse(String prompt, Framework framework, String gherkin)
    Generates a response based on the input prompt from the AI module.

default String getImports(TestFramework testFramework)

default int getMaxTokens()
    The maximum number of tokens to generate in the completion.

default String getModel()
    ID of the model to use.

default String getPromptTemplate(AiArguments aiArguments)

default double getTemperature()
    What sampling temperature to use, between 0 and 2.

static int getTimeout()
    Timeout for the AI module response, in seconds.

default AiPrompts prompts()

requestAI(AiArguments aiArguments)
Field Details

PARSER_ASSISTANT_QUESTION
static final String PARSER_ASSISTANT_QUESTION
See Also:
Constant Field Values

PARSER_SYSTEM_MSG_LIT
static final String PARSER_SYSTEM_MSG_LIT
See Also:
Constant Field Values

PARSER_SYSTEM_MSG_REACT
static final String PARSER_SYSTEM_MSG_REACT
See Also:
Constant Field Values

PARSER_SYSTEM_MSG_FLOW
static final String PARSER_SYSTEM_MSG_FLOW
See Also:
Constant Field Values

GENERATE_LOGIN_PLAYWRIGHT_REACT
static final String GENERATE_LOGIN_PLAYWRIGHT_REACT
See Also:
Constant Field Values

GENERATE_LOGIN_PLAYWRIGHT_JAVA
static final String GENERATE_LOGIN_PLAYWRIGHT_JAVA
See Also:
Constant Field Values

GENERATE_IMPORTS_PLAYWRIGHT_REACT
static final String GENERATE_IMPORTS_PLAYWRIGHT_REACT
See Also:
Constant Field Values

GENERATE_IMPORTS_PLAYWRIGHT_JAVA
static final String GENERATE_IMPORTS_PLAYWRIGHT_JAVA
See Also:
Constant Field Values

GENERATE_SNIPPETS
static final String GENERATE_SNIPPETS
See Also:
Constant Field Values

GENERATE_IMPORTS
static final String GENERATE_IMPORTS
See Also:
Constant Field Values

GENERATE_HEADING_PLAYWRIGHT_REACT
static final String GENERATE_HEADING_PLAYWRIGHT_REACT
See Also:
Constant Field Values

GENERATE_HEADING_PLAYWRIGHT_JAVA
static final String GENERATE_HEADING_PLAYWRIGHT_JAVA
See Also:
Constant Field Values
LOGGER
static final org.slf4j.Logger LOGGER
Method Details
getGeneratedResponse
default String getGeneratedResponse(String prompt, Framework framework, String gherkin)
Generates a response based on the input prompt from the AI module.
Parameters:
prompt - the prompt to be used by the AI module
gherkin -
Returns:
the generated response from the AI module
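For orientation, a minimal usage sketch. MyLLMService is the hypothetical implementation sketched in the interface description above, and Framework.FLOW is inferred from the PARSER_SYSTEM_MSG_FLOW field name rather than confirmed by this page:

public class GeneratedResponseExample {
    public static void main(String[] args) {
        LLMService service = new MyLLMService();
        String gherkin = String.join("\n",
                "Scenario: user logs in",
                "  Given the login view is open",
                "  When the user submits valid credentials",
                "  Then the main view is shown");
        // Ask the AI module for a test for this scenario ...
        String markdown = service.getGeneratedResponse(
                "Generate a UI test for this scenario", Framework.FLOW, gherkin);
        // ... then strip the markdown wrapping, keeping only the code.
        String code = LLMService.extractCode(markdown);
        System.out.println(code);
    }
}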
getGeneratedResponse
default String getGeneratedResponse(AiArguments aiArguments)
requestAI
requestAI(AiArguments aiArguments)
getModel
default String getModel()
ID of the model to use.
Returns:
the model ID
prompts
default AiPrompts prompts()
getTemperature
default double getTemperature()
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
Returns:
temperature
getMaxTokens
default int getMaxTokens()
The maximum number of tokens to generate in the completion.
Returns:
the maximum token limit
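Since getModel(), getTemperature(), and getMaxTokens() are default methods, an implementation can override them to tune completions. A sketch with illustrative values, extending the hypothetical MyLLMService from above:

public class DeterministicLLMService extends MyLLMService {

    @Override
    public double getTemperature() {
        // A low temperature keeps generated tests focused and reproducible.
        return 0.2;
    }

    @Override
    public int getMaxTokens() {
        // Cap completion length to bound cost and latency.
        return 2048;
    }
}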
getTimeout
static int getTimeout()
Timeout for the AI module response, in seconds.
Returns:
the timeout in seconds
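Because getTimeout() is static, callers read the shared limit directly from the interface. A sketch applying it to a java.net.http client; this wiring is an assumption, not part of the API:

import java.net.http.HttpClient;
import java.time.Duration;

class TimeoutExample {
    static HttpClient newClient() {
        // Bound the connection wait by the AI module's response timeout.
        return HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(LLMService.getTimeout()))
                .build();
    }
}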
extractCode
static String extractCode(String markdown)
getPromptTemplate
default String getPromptTemplate(AiArguments aiArguments)
getChatAssistantSystemMessage
default String getChatAssistantSystemMessage(String name, Framework framework)
getImports
default String getImports(TestFramework testFramework)