M42 Intelligence Actions Admin Manual
Introduction
M42 Intelligence Actions (previously called Effie AI Summarizer) is a configurable, generative AI feature that helps agents quickly extract information from data cards, produce high-quality contextual text content with a single click, translate important content, or summarize ticket content for a specific audience. M42 Intelligence Actions comes with predefined use cases that can easily be adapted to other use cases. It can be used with any template or process, such as Incident management, Change management, Problem management, HR service management, or an IGA (Identity Governance and Administration) solution.
M42 Intelligence Actions is an early-access feature first available in ESM 2024.1 that does not require a license in the 2024.1 version, but the Generative AI powering the feature requires a paid subscription. The subscription can be bought from Efecte or directly from the supported providers (OpenAI or Azure OpenAI). For trial access to Efecte GenAI, please contact your Efecte representative. The feature will be part of M42 Intelligence for Agents starting in 2024.2.

M42 Intelligence Actions is available in the new ESM Agent UI. It is available on all templates for which actions have been configured. Up to 1000 generated responses can be triggered in the early access phase.

When you click the M42 Intelligence button, you can see the actions configured in M42 Intelligence Actions for that template.

When the user clicks any of these Action buttons, the Action is triggered in the background with the following information:
- The prompt defined for that use case as defined by admin (predefined or custom)
- For example: “As an AI assistant for an IT service desk agent, you are provided with data from a support ticket. Produce a summary of the ticket, including the following points …”
- The context attributes as defined by the admin
- For example, subject, details, internal comments, etc.
- Note that all attributes used with M42 Intelligence require an attribute code.
If a user does not have permission to access a context attribute, the attribute's value is ignored in the system prompt that is sent to the large language model.
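The filtering described above can be sketched roughly as follows. This is an illustrative assumption of the mechanism, not ESM's actual internal API; the function and attribute names are hypothetical:

```python
# Illustrative sketch (hypothetical names, not ESM's actual API):
# context attributes the user may not read are dropped before the
# prompt is sent to the large language model.

def build_context(attributes, readable_attribute_codes):
    """Keep only context attributes the current user is allowed to read."""
    context = {}
    for code, value in attributes.items():
        if code in readable_attribute_codes:  # permission check per attribute code
            context[code] = value
        # else: the value is silently ignored, as described above
    return context

ticket = {"subject": "Printer offline", "internal_comments": "VIP customer"}
allowed = {"subject"}  # this user lacks access to internal comments
print(build_context(ticket, allowed))  # {'subject': 'Printer offline'}
```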
The purpose of M42 Intelligence Actions is to extract and process critical information from data cards to generate meaningful content for desired use cases, such as:
- Summarizing, for example, a ticket or an asset
- Creating a new subject for a ticket
- Creating content to be copied to a knowledge base article
- For IGA purposes, suggesting a friendly name and description for entitlements and business roles

This article will explain the necessary steps to enable M42 Intelligence Actions in ESM version 2024.1, utilizing Efecte GenAI, OpenAI, or Azure OpenAI integration. When Efecte GenAI is used, data is not sent outside Efecte Cloud.
Prerequisites
During the early access, you can have up to 5 configurable use cases that can be used up to 1000 times. In the 2024.2 release, the limits on the number of actions and uses can be removed with the M42 Intelligence for Agents license. The number of available requests is subject to your subscription with the chosen Generative AI provider.
Availability of default use cases
In the 2024.1 version, the default use cases are only available if your configuration has the same templates and attribute names as the baseline configuration. In 2024.2, the default configurations became available regardless of the configuration in use.
In 2025.1, we are bringing a set of new default actions that are used if M42 Intelligence Actions has not been used yet. The new Actions do not replace configurations that are already in use, so they are only available out-of-the-box for new customers. Existing customers can easily refer to this documentation to add more Actions to their configuration.
M42 Intelligence Prompt Management in M42 Professional
In M42 Professional, the behavior of both M42 Intelligence Writing Assistance and Actions is customized by creating and adjusting prompts.
Writing assistance helps users improve their input on supported attributes.
Actions help users extract key information and create useful content using contextual data and configurable prompts. They are available in a dedicated user interface or can be triggered automatically through workflows.
Customizing the Behavior
You can further customize the behavior and availability of M42 Intelligence Writing Assistance and Actions by creating and adjusting prompts for each feature.
Create a New Prompt
To create a new prompt, click on the "+Add" button. This opens the new prompt configuration window:

- Unique name - Name of the prompt. Must be unique.
  - Example: Complete_1
- User title - Name shown to the user. Should be in a more human-readable form.
  - Example: Complete the text
- Description - Description for the prompt.
  - Example: Complete user's text with a detailed resolution
- Prompt instructions - The actual prompt, which is sent to the AI.
  - Example: You are an AI assistant on an enterprise service management platform assisting a support agent. Your task is to rewrite the email draft into a clear, polished message in fluent language. Use a neutral ending to offer help if the issue continues.
- Writing assistance mode - Only for the “Writing assistance” feature.
  - Text improvement - Improving existing text.
  - Text creation - Creating new text from context (templates, attributes, data cards, etc.).
- Visibility
  - Visible: Action is shown in the data card for the target attribute.
  - Hidden: Action is not shown in the data card for the target attribute.
- Template - Template where the prompt is used.
  - Example: Ticket
- Context attributes
  - Example: Support Email
- Target attribute
  - Example: E-mail discussions
Prompt Guidelines
When providing the instructions, give the language model clear guidelines on the format and language it should use in its responses. Use these settings to adjust the agent experience based on, for example, the following factors:
- Goals and Objectives: Clearly define the primary goals and objectives of M42 Intelligence AI Writing Assistance. What kind of support do you want it to provide? Who is the person writing the email? Understand what kind of information or responses you want from the model. You can start with the default instructions and adjust as you see fit based on the rest of the factors below. For example, when used by IT support agents, you should mention it in the prompt, “Act as an IT support agent.” Please note that the same instructions apply to all configured templates.
- Brand tone and voice: Specify the tone and voice used in responses. Is it formal, informal, professional, friendly, or technical? The guidelines should reflect the brand's personality in customer communication. For example, you can instruct the generative AI to keep the answers short and straight to the point.
- Cultural Sensitivity and Courtesy: Specify cultural sensitivity and courtesy guidelines. Make it clear how the model should handle sensitive topics, controversial issues, or potentially offensive content. You can instruct the model to be respectful and avoid bias.
- Specific use cases: Define any specific use case or behavior you want to enforce in the responses. For example, you can instruct the model to ask clarifying questions to further help with troubleshooting.
- Answer format details: Based on user expectations and organizational communication habits, it might be a good idea to instruct the language model to produce more compact or lengthier responses with varying levels of detail. Additionally, the available data the agent selects can vary each time a response is generated, leading to varied experiences. You can also give guidance on the format of the responses and what kind of structure should be followed.
- Relevance and Accuracy: Emphasize the importance of providing accurate and relevant information. Instruct the model to prioritize accuracy over creativity and to ask for more details if the required information is unavailable in the ticket data.
If you want to customize the prompts, involving users in discussions early on is important to ensure the instructions meet their needs.
Test and Adjust
Test all the M42 Intelligence Writing Assistance functionalities (generate, correct, and complete) to see how well the results are in line with expectations. Try using different context attributes. Listen to feedback from the users and adjust the instructions to fit the required communication style. For using M42 Intelligence AI Writing Assistance, please see the following user guide: https://docs.Matrix42.com/effie-ai-email/how-to-use-effie-ai-email/
Troubleshooting
Problem: Responses cut short
If responses generated by the generative AI are cut short, a workaround is adding a limitation to the response size. This can be done by prompting, for example: “Limit the response to 1000 characters”.
Provider Configuration
To enable the M42 Intelligence Actions (Actions in 2024.1), select “Enable M42 Intelligence Actions.” This setting is not needed in 2024.2 and newer environments.
First, select the provider.

For bring-your-own AI subscriptions, OpenAI and Azure OpenAI options are available if you want to use your existing subscription to these service providers. If you wish to utilize your existing OpenAI or Azure OpenAI subscription, you are responsible for managing the account securely and for any costs incurred by the use of M42 Intelligence. Efecte is not responsible for the quality and availability of third-party services connected to M42 Intelligence.
Set the AI provider API password and URL pointing to the used language model.

In 2024.2, there will be an option to change the used model.

Warning
When using Efecte GenAI, changing the language model in use without permission from Efecte R&D is not supported. Any changes can cause the application to stop working or behave unexpectedly.
Important Provider Configuration Information
OpenAI
With OpenAI, use the following API URL: https://api.openai.com/v1/chat/completions
Create an API key in the OpenAI platform to be used as the password.
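As a hedged sketch, a request against that endpoint carries the API key as a bearer token and a JSON body with the model name and messages. The model name and prompt texts below are placeholders, not ESM's exact values:

```python
import json

# Minimal sketch of a request body for the OpenAI Chat Completions API
# (https://api.openai.com/v1/chat/completions). The model name and prompt
# texts are placeholders, not the exact values ESM sends.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "Act as an IT support agent."},
        {"role": "user", "content": "Summarize the ticket: printer offline ..."},
    ],
}

# The API key created in the OpenAI platform is sent as a bearer token:
headers = {
    "Authorization": "Bearer <YOUR_API_KEY>",  # placeholder, keep the key secret
    "Content-Type": "application/json",
}

print(json.dumps(payload, indent=2))
```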
Azure OpenAI
With Azure OpenAI, get the API URL and API key from your Azure tenant administration. For more details on how to set up Azure OpenAI, check the Azure OpenAI documentation: https://learn.microsoft.com/en-us/azure/ai-services/openai/overview
See details for prerequisites below.
Efecte GenAI
Efecte GenAI is Efecte's own large language model, which is currently being piloted. Efecte GenAI runs in a European cloud to make sure your data and information are handled with care. To utilize Efecte GenAI with M42 Intelligence, please contact your Efecte representative.
Provider connection configuration with Azure OpenAI
To start using M42 Intelligence Email with Azure OpenAI, you will need to set up the Azure OpenAI services following the Azure OpenAI guide: https://learn.microsoft.com/en-us/azure/ai-services/openai/overview
Before you can configure M42 Intelligence Email with Azure OpenAI, you will need to have the following:
- Access to Azure OpenAI services
- Azure OpenAI service running on your selected region
- Azure OpenAI deployments for Text generation and Text completion using your preferred GPT model or a custom model
- API URLs for Completion and Chat Completion API pointing to your deployed models (see Constructing API URLs below)
- API keys to gain access to your deployed models
| Setting | Example | Additional information |
|---|---|---|
| AI provider API password: | Create in Azure OpenAI studio | API key to allow access, track usage and manage costs. Make sure to save the API key in a secure location, in order to retrieve it for future use. It is not possible to view the key in ESM. |
| Actions API URL: | https://yourtestenv.openai.azure.com/openai/deployments/gpt-instruct/chat/completions?api-version=2023-07-01-preview | Used for Generate feature |
| AI provider health check API: | Not supported | Used for showing the status of Efecte GenAI, not supported with Azure OpenAI |
Note: M42 Intelligence Email defaults to GPT 3.5-Turbo-instruct for the Correct and Complete features and GPT-3.5-Turbo for the Generate feature. If you wish to use a different ready-made or custom model, you need to change the model name in the following platform setting (starting in 2024.2, you can use the Model setting in the UI instead).
| Platform setting | Default value(s) | Information |
|---|---|---|
| ai. | gpt-3.5-turbo | Large language model used to power Actions. |
Constructing API URLs
After deploying your OpenAI service and model, you can construct API links as follows:
Text generation API:
https://<OPENAI_DEPLOYMENT_NAME>.openai.azure.com/openai/deployments/<GENERATION_MODEL_DEPLOYMENT_NAME>/chat/completions?api-version=<API_VERSION_NAME>
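A small helper can make the template concrete. This is an illustrative sketch; the resource, deployment, and API version values are taken from the example table above:

```python
# Illustrative helper that fills in the Azure OpenAI URL template above.
# The example values come from the configuration table earlier in this article.
def azure_chat_completions_url(resource, deployment, api_version):
    """Build the Chat Completions URL for a deployed Azure OpenAI model."""
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

url = azure_chat_completions_url(
    "yourtestenv", "gpt-instruct", "2023-07-01-preview"
)
print(url)
```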
Additional steps such as budget alerts, enhanced network security, and identity management might be necessary for a production setup. Please refer to the Microsoft Azure documentation for best practices for managing Azure deployments. Please note that you are responsible for any Azure OpenAI costs incurred using M42 Intelligence Email. M42 Intelligence Email contacts the Azure OpenAI service only when the agent uses the feature to generate, correct, or complete messages.
We recommend frequently consulting the official Microsoft Azure documentation, as it is the most reliable and up-to-date source of information on how Azure OpenAI services work.
You can read more about API versioning here: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference
M42 Intelligence (Actions & Writing Assistance) General Information and Prompts Guidance
Information on data privacy
We prioritize integrity in our service to safeguard our customers' data. Regardless of the provider, the data processed by Large Language Models is never automatically collected to train any Generative AI services. Additionally, if there are concerns about the location of the data, using Matrix42 GenAI will ensure that all data stays within the EU. With OpenAI, we are leveraging an industry-standard solution and a reputable company to provide additional capabilities, such as multi-language support and the ability to adopt the latest models. Customer data will not be used to train the model, as we use the commercial API. You can read more about the OpenAI API privacy policy here: https://openai.com/enterprise-privacy.
Please note that no data is anonymized; the processed data includes only the data selected by the agent, and email content if the administrator allows it to be selected.
Supported Generative AI providers
Matrix42 GenAI
Matrix42 GenAI is a large language model provided and hosted by Matrix42, fine-tuned for ITSM use cases.
Matrix42 GenAI enables you to harness the power of generative AI without the need to set up and maintain separate services. See up-to-date information about language support in the M42 Intelligence solution description. You will need a separate agreement with Matrix42 to use Matrix42 GenAI. Please ask your Matrix42 representative for more details on gaining access to Matrix42 GenAI.
Matrix42 GenAI can be used with M42 Intelligence Writing Assistance only in English. Matrix42 fully manages and hosts the language models used in Matrix42 GenAI.
OpenAI (Bring your own)
If your organization already has an OpenAI account, you can create an API key to connect M42 Intelligence Writing Assistance to that account. For further details on how to set up M42 Intelligence Writing Assistance with OpenAI, please look at the instructions below in the M42 Intelligence Writing Assistance settings.
OpenAI hosts and manages the language models. You are responsible for setting up and managing the OpenAI account.
Since 2025.2, 4o, 4o-mini, o-series, and newer models are supported.
Azure OpenAI (Bring your own)
If you already have Azure OpenAI services, you can create a GPT model deployment in Azure OpenAI Studio using any GPT model. This deployment can then be used as the LLM for M42 Intelligence Writing Assistance. Custom models can also be used with M42 Intelligence Writing Assistance. Please check the instructions later in this article for details on setting up the Azure OpenAI connection.
The language models used in Azure OpenAI are hosted and managed in your Azure tenant. You are responsible for setting up and managing the Azure tenant and related OpenAI services.
Since 2025.2, 4o, 4o-mini, o-series, and newer models are supported.
Building requests for large language models
Understanding Large Language Models
M42 Intelligence Writing Assistance is technically easy to set up but requires some understanding of the Large Language Models to optimize it for your use case. Here are a few key instructions:
- Always use context attributes that are relevant for you - the use cases below show examples of context attributes.
- Be very concrete in your instructions and avoid ambiguity.
- Keep sentences short to make sure your intention is grasped by the large language models.
- Provide a role and context with sufficient background from your configuration. Think about how the message needs to be formed to be useful for support agents: with M42 Intelligence Writing Assistance, the AI acts as the agent, even though human users are always in control.
- Let's break down a shorter example for Generate prompt here:
- Provide a role and context - for example: “Act as an IT support service desk agent handling issues related to workstations and printers."
- Introduce a background - for example: “You might be provided with the ongoing email conversation and with the data about the support ticket the agent is working on."
- Add general instructions - for example: “Using provided data, generate a polite email response. End with a polite greeting.”
- Remember to always adjust the instructions based on your context
- Before starting to work with M42 Intelligence with OpenAI, please have a look at the prompt engineering guide by OpenAI for further instructions: https://platform.openai.com/docs/guides/prompt-engineering
When large language models produce responses, they take input from multiple levels that affect the eventual outcome.
- Platform setting system prompts - general instructions applied to all generated responses with the different features
- Use case configuration prompts - use case specific prompts that define the behavior with individual actions
- Context attributes - contextual data defined by the admin, such as ticket data
- User language (writing assistance only) - user's selection of language output
Additionally, for example, knowledge discovery with the AI Core component might have additional instructions that affect the responses.
To avoid issues with conflicting prompts, make sure that the prompts on different levels do not contradict.
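A rough sketch of how these levels might combine into a single request (the function name and the concatenation format are illustrative assumptions, not ESM internals):

```python
# Illustrative sketch (assumed names and format, not ESM internals) of how
# the prompt levels described above combine into one request to the model.
def assemble_prompt(system_prompt, action_prompt, context_attributes):
    """Merge the platform-level prompt, the use case prompt, and contextual data."""
    context_block = "\n".join(
        f"{name}: {value}" for name, value in context_attributes.items()
    )
    return f"{system_prompt}\n\n{action_prompt}\n\nTicket data:\n{context_block}"

prompt = assemble_prompt(
    "Act as an IT support agent. Keep answers short.",        # platform setting
    "Summarize the ticket, including next steps.",            # use case prompt
    {"Subject": "Printer offline", "Status": "In progress"},  # context attributes
)
print(prompt)
```

If the platform-level and use-case-level instructions pull in different directions (for example, "keep answers short" versus "produce a detailed report"), the model receives both and the result becomes unpredictable, which is why the levels must not contradict.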
Prompting guidelines
To get the most out of large language models, ensure your prompt instructions state specifically what you want to achieve. Large language models also tend to add extra structure or formatting, such as parentheses around the produced content or a lead-in phrase ending with a colon; if you want plain output, instruct the model to omit these.
Prompt Length
Starting from 2025.1, the maximum length of a single prompt is 1000 characters.
In 2025.3, the maximum length is 4000 characters.
Response Length
Starting from 2025.1, the default length of a single response is 1000 characters.
Response window size
To modify the size of the response window in characters, use the Platform setting:
- Matrix42 GenAI: ai.provider.genai.generation.response.size
- OpenAI: ai.provider.openai.generation.response.size
- Azure: ai.provider.azure.generation.response.size
Context window size
The context window size defines the full size of the request (including system prompts, admin prompts, and contextual data) and the generated response in characters.
To modify the size of the context window in characters, use the Platform setting:
- Matrix42 GenAI: ai.provider.genai.model.context.size
- OpenAI and Azure OpenAI: the context size is fixed at 16,000 characters and cannot be changed.
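Under these limits, the request (prompts plus contextual data) and the response must together fit the context window. The sketch below illustrates the budgeting idea with a naive truncation strategy; the names and the strategy are assumptions, not ESM's actual behavior:

```python
# Hedged sketch of context budgeting: with a 16,000-character window and a
# 1,000-character default response, the contextual data must fit the rest.
# The function name and truncation strategy are illustrative assumptions.
CONTEXT_WINDOW = 16_000   # fixed size for OpenAI / Azure OpenAI
RESPONSE_BUDGET = 1_000   # default response length since 2025.1

def trim_context(system_prompt, action_prompt, context, window=CONTEXT_WINDOW):
    """Naively truncate contextual data so the full request fits the window."""
    budget = window - RESPONSE_BUDGET - len(system_prompt) - len(action_prompt)
    return context[:max(budget, 0)]

trimmed = trim_context("a" * 500, "b" * 500, "c" * 20_000)
print(len(trimmed))  # 14000
```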
Be Clear and Specific:
Clearly define the purpose and audience of the prompt, providing specific instructions for the desired response.
Ensure clarity by outlining the intended outcome and expectations clearly.
Adopt a Structured Approach:
Organize the prompt into well-defined sections or bullet points, covering all pertinent aspects of the use case. This makes it easier for the generative AI to capture individual instructions separately. Mention the availability of accompanied data, as the contextual data makes the feature much more powerful than just talking with a generative AI chatbot.
Tailor the prompt to the Use Case:
Customize the prompt to suit the specific requirements and objectives of the use case or task.
Align the content with the context and goals of the intended application or scenario. Is the generated content meant only for consuming information, or should it be used for sharing knowledge? Explain this in the prompt.
Convey Concisely and Clearly:
Keep the prompt concise and straightforward, avoiding unnecessary complexity or verbosity.
Use clear and precise language to communicate instructions effectively.
Consider the Audience:
When writing the prompt, consider the knowledge level and expertise of the audience. Is the use case written for an IT support person or an HR representative? Provide guidance and context appropriate for the users to act based on the responses.
Prioritize Actionability and Usability:
Ensure the prompt leads to practical, actionable responses that can be readily implemented or utilized.
Emphasize clarity and usability to facilitate efficient decision-making or problem-solving based on the generated output.
Align with your organization's standards and processes:
Where applicable, ensure the prompt adheres to your standards, processes, or best practices relevant to the use case. Maintain consistency and quality by aligning the generated responses with established guidelines and principles.
Encourage Feedback and Iteration:
Solicit user feedback on the prompt's effectiveness and the quality of the generated responses.
Iterate on the prompt based on user input and real-world usage to continuously enhance its effectiveness and relevance.
System prompts
You can adjust general instructions for AI in system prompts to reduce repeating the exact instructions regarding style and tone. There are three different platform settings to adjust.
ai.system.prompt - This setting gives a system-level prompt to all AI-generated responses. It is recommended that this setting be used with the default value.
ai.actions.prompt - This setting allows you to fine-tune the responses for the Actions.
ai.writingAssistant.prompt - This setting allows you to fine-tune the Writing assistance responses, so they are ready for use in communication and documentation.
AI Actions
Example configurations with prompts
Use the examples below as a starting point for configuring M42 Intelligence Actions. These examples provide prompts that you can configure according to your environment's needs. The attributes mentioned in some examples are illustrative only, as their value depends on which attributes are used and how.
Remember that you can use the Actions to get an idea of what any data card in your ESM is about - like getting to the root of a Problem ticket, understanding the status of a Change, or communicating the state of an identified Information security incident to non-technical stakeholders.
Depending on the use case, the configuration might heavily rely on the data card's contextual data. Select relevant attributes that usually hold helpful content for your purposes.
Tip
It is easy to adjust the prompts to your specific use case. Just add your own instructions and remember to test often with real-life data.
Use Cases by Domain
Incident management
Summarize Content
You can use M42 Intelligence to, for example, summarize data card content to quickly get an idea of what a ticket is about, what has been done so far to solve an issue, and what the next steps are. This helps in handover situations to quickly grasp the context and understand the situation.
Below is an example configuration that you can use as a baseline to start exploring the possibilities of M42 Intelligence using Generative AI.
Full example configuration:
Unique name (name of the Action for the admin to recognize it): Ticket summarization
User title (title of the Actions shown for the user): Summarize ticket
Description (description of the Action to instruct the user): Provide a concise summary of the ticket
Prompt instruction: Using key details from a service management support ticket, summarize the core issue, actions taken, causes identified, and current resolution status. Ensure the support agent understands the urgency, progress made, and next steps needed. Keep the overview clear and structured, without using introductory or concluding phrases, focusing solely on critical ticket information.
Context attribute suggestions (select attributes relevant for you): Subject, Details, Status, Customer, Team, E-mail latest body, Internal comments, External comments
Create a New Subject
You can get a suggestion from M42 Intelligence to replace a poorly written, vague, or inaccurate subject on a ticket with an improved version.
Example prompt:
Based on the provided support ticket data, generate a clear and concise subject line that accurately summarizes the ticket's issue or request in one brief sentence.
Example context attributes:
Internal comments, Subject, Details, Resolution
Create Resolution
Generate a precise resolution summary to document ticket resolutions for future reference.
Example prompt:
Using the service management data related to the ticket, generate a concise and clear resolution text. Include the steps taken to resolve the issue, any relevant troubleshooting actions, and the final solution applied. Ensure the text is suitable for documentation and can be referenced for future similar issues.
Example context attributes:
Subject, External comments, Details, Priority, Resolution, Related assets
Generate Content for a KB Article
Use M42 Intelligence to make sure your knowledge is kept up to date by structuring known information about how an issue was solved into a predefined format. You can adjust the prompt to align with your KB article format.
Example prompt:
As a Knowledge Manager, use provided service management data to create a knowledge base article for Service Desk Agents. Include:
- Title: Clear summary.
- Overview: Issue intro from data.
- Symptoms: Key indicators from data.
- Troubleshooting: Steps and tools from data.
- Resolution: Recommended fix.
- Prevention: Best practices.
- References: Related links.
Ensure clarity and actionability.
Example context attributes:
Internal comments, Subject, Details, Resolution
Categorization
You can use M42 Intelligence Actions to suggest categorization of your data as well, such as ticket category or related services. For now, you need to maintain the available categories, services, or other classifiable information as a list in the prompt. Make sure you adjust the prompt below based on which type of classification you want to use, and insert the list of possible values.
Example prompt:
Based on the given IT issue, categorize the ticket into one of the relevant categories: (Insert your categories here). Then, suggest a service that aligns with the issue and your selected category. Use this list of available services: (Insert your list of services here).
Example context attributes:
Internal comments, Subject, Details, Resolution + attributes to be used in categorization
Next Steps
Ask for help on what should be done next and get a detailed list of possible next steps and actions to resolve the issue.
Example prompt:
Review the service management support ticket, focusing on the core issue, actions taken, and identified causes. Suggest actionable next steps for the support agent, considering the ticket's urgency and progress.
Example context attributes:
Ticket type, Service, Subject, Email, External comments, Details
Root Cause analysis
Root cause analysis aims to identify the underlying cause of an issue by analyzing available service management data. It helps to prevent recurring incidents by pinpointing the source of a problem, allowing teams to address the root cause rather than just the symptoms and solve problems proactively, eventually leading to improved service quality.
Example prompt:
Using the provided service management data, analyze and identify the root cause of the issue. Summarize key factors contributing to the problem and suggest the most likely cause, supported by the data.
Example context attributes:
Subject, Service, Description, Worklog, Related incident, Category
Change management
Change - Draft a Test Plan
Description:
Outline key test steps and acceptance criteria to ensure the change works as expected before go-live.
Example prompt:
You are assisting in drafting a test plan for a technical change. Based on the application, environment type, and number of installations, provide: 1. Key test scenarios to validate success 2. Test steps (e.g., simulate failover, validate app status) 3. Acceptance criteria for successful validation Respond in this format: **Test Plan:** - Scope: [e.g., test environment, HA node, etc.] - Steps: 1. [Step 1] 2. [Step 2] - Acceptance Criteria: [Pass/fail criteria]
Example context attributes:
Service, Business criticality of affected CI(s), Subject, Description, Details for AI
Change - Create Justification for Change Authority Board (CAB)
Description:
Creates a clear and concise justification letter for the Change Advisory Board (CAB), based on the impacted applications and business drivers.
Example prompt:
You are assisting in drafting a justification for the Change Advisory Board (CAB). Based on the impacted applications, services, and business drivers, provide: 1. The business reason for the change 2. The expected benefits and the risks of not implementing it 3. A summary of implementation and rollback readiness Respond in this format: **Change Justification:** - Business Driver: [Reason for the change] - Impacted Services: [List] - Benefits: [Expected outcomes] - Risk of Inaction: [Consequences if not implemented]
Example context attributes:
Test plan, Service, Category, Description, Change size, Details for AI, Justification, Implementation plan, Rollback plan
Change - Analysis from Affected CI Details
Description:
Analyzes the scope and dependencies of the change using related configuration items to assess potential impact and risk.
Example prompt:
You are assisting with a change request review. Based on the affected configuration item (CI) data, perform the following: 1. Summarize the affected applications, versions, environments, and installation counts. 2. Identify dependent services and data confidentiality levels. 3. Assess potential operational risk based on environment type and dependencies. 4. Recommend any risk mitigations or actions. Respond in the following markdown format: **Change Scope Summary:** [Summary of affected applications and environments] **Dependencies and Risk Considerations:** [Key services or systems impacted, including confidentiality] **Risk & Impact Assessment:** [Concise summary of the potential risk or business impact] **Recommended Actions:** [Mitigation, rollback, stakeholder comms, etc.]
Example context attributes:
Affected CIs, Service, Business criticality of affected CI(s), Subject, Category, Description, Change size, Details for AI, Justification
Change - Plan the Implementation
Description:
Create a step-by-step implementation plan, including required actions and involved roles.
Example prompt:
You are assisting in drafting a technical implementation plan for a change request. Use the CI data (e.g., application name, version, environment, installation count) to provide: 1. A brief description of the deployment 2. A list of ordered implementation steps 3. Required roles or participants Respond in this format: **Implementation Plan:** - Target: [Application name/version] - Steps: 1. [Step 1] 2. [Step 2] - Required Personnel: [List of roles involved]
Example context attributes:
Service,Business criticality of affected CI(s),Subject,Category,Description,Details for AI,Justification
Change - Prepare Rollback Instructions
Description:
Describes how the change can be safely rolled back if needed, with triggers and recovery steps.
Example prompt:
You are assisting in drafting a rollback plan in case the change fails. Based on the CI and environment information, describe: 1. When rollback should be triggered 2. Step-by-step rollback actions 3. Estimated time and dependencies Respond in this format: **Rollback Plan:** - Trigger: [Failure symptoms or thresholds] - Steps: 1. [Rollback step 1] 2. [Rollback step 2] - Estimated Downtime: [Minutes] - Dependencies: [e.g., backup/snapshot required]
Example context attributes:
Test plan, Subject, Description, Details for AI, Justification,Implementation plan
Change - Risk Analysis
Description:
Evaluates the potential risks of a planned change by analyzing the affected Configuration Items (CIs), their criticality, historical incident records, and dependency relationships.
Example prompt:
You are performing a change risk analysis for a planned change. You have been provided with affected Configuration Item (CI) details, including application name, environment type, version, installation count, dependent services, and data confidentiality classification. Your response must: 1. Identify potential technical, operational, and business risks specifically in relation to the provided CI details. 2. Consider dependencies, historical incidents, and compliance or regulatory constraints. 3. Assign a qualitative risk rating (Low / Medium / High) with justification. 4. Suggest risk mitigation measures tailored to the CIs. Respond in this markdown format: Change Risk Summary: [One paragraph explaining the main risks, their causes, and their potential impact, explicitly referencing the provided CIs — e.g., “Because SAPHanaSR is in a production environment with 5 installations supporting Facilities…”] Risk Rating: [Low / Medium / High] Risk Factors: Mitigation recommendations:
Example context attributes:
Service,Business criticality of affected CI(s),Subject,Category,Description,Details for AI,Justification
Device Lifecycle Status Update
In IT asset management, getting a device lifecycle status update involves updating the current lifecycle stage of a device based on available service management data. It helps to keep IT asset management data accurate, ensuring devices are tracked correctly. It can also help to identify outdated or faulty devices before they cause disruptions.
Example prompt:
Based on the current service management data, update the lifecycle status of the specified device. Ensure the status reflects its most recent activities and any upcoming actions.
Example context attributes:
Days in use, Model, Related tickets, Name, End of warranty, Applications, Status
Identity Governance and Administration
Suggest entitlement information (IGA)
When managing entitlements, AI Actions (Actions) can assist the IGA admin by suggesting friendly names, descriptions, categories, etc. It can be used for new entitlements lacking information like description or to make existing information more professional or easier to understand.
Example prompt for a friendly name
You are provided with information about one Entitlement that is a single access right group. As an IGA Admin, you can manage entitlements. Suggest a friendly name for the entitlement based on the categories, application, and owner info in other entitlements. The name should make it easy for the end user to understand what this access right is used for and what rights it gives them.
Example context attributes for friendly name
Friendly name, Technical name
Example prompt for description based on titles
You are provided with information about one Entitlement that is a single access right group. As an IGA Admin, you can manage entitlements. Suggest a description for the entitlement based on the application, cost center, organization, and titles of the users in the entitlement. It is always access to a target system, which can be anything, not just a support or ticket system.
Example context attributes for description based on titles
Application, Cost center, Internal Subcategory, Organization, Internal Category, Description, Title
Example prompt for categories
You are provided with information about one Entitlement that is a single access right group. As an IGA Admin, you can manage entitlements. Suggest a category for the entitlement based on the categories, application, and owner info in other entitlements.
Example context attributes for categories
Owner, Technical owner, Application, Internal Subcategory, Internal Category
Summarize identity information (IGA)
Summarizing identity information provides a quick way to review the most important information about an identity and makes it easy to understand.
Example prompt:
This is not a support request; it is a user Action for displaying user data. Identity storage displays one user's data. The identity storage data card is generated for the user based on primary work period information. The IGA identity storage is used for: collecting all information related to the user's access rights, work period(s) and responsibilities inside the IGA solution, such as owner or approver responsibilities. Holistic view for IGA admins to
Example context attributes
Risk Value, Manager of, Last Logon date, Created, All related business roles, Access to applications, All related entitlements, Password last changed
Describe processes (IGA)
IGA processes can be complex and always contain a lot of settings and rules that affect them (sometimes these are documented, but the documentation is often not up to date). To get an understandable picture of, for example, the departing user process, AI Actions can summarize and describe it.
Example prompt for departing user process
This is not a support ticket; do not use that term. This view summarizes how the departing process of the account is designed based on the IGA-set Account attributes. The departing user use case refers to the process initiated when a user's employment or contract is ending. The IGA solution starts the offboarding process, which can take several days to complete, depending on the account management settings, such as when accounts are disabled, email license
Example context attributes for departing user process
Email licenses removed after, Remove access rights, User type, Target system, Set as disabled, Departing user information receiver, User information send, Restore account's access rights if returns
Writing Assistance
Example configurations with prompts
Below are some examples of prompts to be used with M42 Intelligence Writing Assistance. These provide a good starting point, and with testing, you will find opportunities to customize them further based on the desired communication style.
Improve text
Text improvement can be used to spellcheck and improve the text in any text input. Look at the examples below, and adjust based on your configuration.
Ticket - Improve comment input
User title:
Improve text
Description:
Improves the text selected by the user in comments.
Prompt:
You are an AI writing assistant for an IT support agent in an IT department. You are provided with a comment draft that will be sent to a self-service portal user who has reported an issue. Improve the spelling and grammar of the provided text. Do not add any additional improvements. Return only the generated answer.
Mode:
Text improvement
Target attribute:
Internal comments
Ticket - Improve internal comment input
User title:
Improve text
Description:
Improves the text selected by the user in internal comments.
Prompt:
You are an AI writing assistant for an IT support agent in an IT department. You are provided with an internal comment draft about an issue reported by a self-service portal user. Improve the spelling and grammar of the provided text. Do not add any additional improvements. Return only the generated answer.
Mode:
Text improvement
Target attribute:
Internal comments
Ticket - Improve resolution input
User title:
Improve text
Description:
Improves the text selected by the user in the resolution text.
Prompt:
You are an AI writing assistant. You are provided with a draft of a resolution to a support ticket. Improve the spelling and grammar of the provided text. Do not add any additional improvements. Return only the generated answer.
Mode:
Text improvement
Target attribute:
Resolution
Ticket - Improve email input
User title:
Improve text
Description:
Improves the text selected by the user in email.
Prompt:
You are an AI writing assistant for an IT support agent in an IT department. You are provided with an email draft that will be sent to a user who has reported an issue. Improve the spelling and grammar of the provided text. Do not add any additional improvements. Return only the generated answer.
Mode:
Text improvement
Target attribute:
E-mail messages
Ticket - Improve details input
User title:
Improve text
Description:
Improves the text selected by the user in details.
Prompt:
You are an AI writing assistant. You are provided with a draft of the details of a support ticket. Improve the spelling and grammar of the provided text. Do not add any additional improvements. Return only the generated answer.
Mode:
Text improvement
Target attribute:
Details
Email writing assistance
Ticket - Ask for more details email
User title:
Ask for more details
Description:
Generate a contextual email message draft asking for more details.
Context attribute examples:
Assignee, Service, Subject, Details, Related assets, E-mail messages, Customer
Mode:
Text creation
Prompt instruction
You are an attentive, empathic and professional IT support agent with a customer-centric attitude in an IT department. You are responsible for handling a support ticket from a customer. You are provided with details about the ticket. Write an email asking for more details to improve your understanding of the issue. Instructions for writing the email: 1. Start with an informal and personalized greeting. 2. Ask clarifying questions to assist you with the investigation details that are not available. 3. Mention availability for further help. 4. End with a professional greeting without closing remarks. 5. Avoid too much courtesy. 6. Return only the generated answer.
Ticket - Status update in email
User title:
Provide a status update
Description:
Generates a brief status update draft in email based on the latest information.
Context attribute examples:
Subject,Details,All ESS2 comments,Resolution,E-mail messages
Mode:
Text creation
Prompt instruction
You are an AI assistant on a service management platform for a support agent. You are provided with the latest information about a support ticket the agent is handling. Provide a brief, straight-to-the-point status update the agent can send to the user who reported the issue. Do not add any signature. Do not add any corporate jargon but maintain professionalism. Do not include "subject:" or other pretext, include only the response.
Portal comment writing assistance
Ticket - Ask for more details ticket comment
User title:
Ask for more details
Description:
Generate a contextual comment message draft asking for more details.
Context attribute examples:
Assignee, Service, Subject, Details, Related assets, E-mail messages, Customer
Mode:
Text creation
Prompt instruction
You are an attentive, empathic and professional IT support agent with a customer-centric attitude in an IT department. You are responsible for handling a support ticket from a customer. You are provided with details about the ticket. Write a comment to the self-service portal asking for more details to improve your understanding of the issue. Instructions for writing the comment: 1. Start with an informal and personalized greeting. 2. Ask clarifying questions to assist you with the investigation details that are not available. 3. Mention availability for further help. 4. End with a professional greeting without closing remarks. 5. Avoid too much courtesy. 6. Return only the generated answer.
Ticket - Status update in Portal Comment
User title:
Provide a status update
Description:
Generates a brief status update draft in self-service portal comments based on the latest information.
Context attribute examples:
Assignee,Subject,Details,All ESS2 comments,Status
Mode:
Text creation
Prompt instruction
You are an AI assistant on a service management platform for a support agent. You are provided with the latest information about a support ticket the agent is handling. Provide a brief, straight-to-the-point status update the agent can send to the user who reported the issue. Use the "support_person" information in the signature if available. Do not add any signature if the "support_person" data is not available. Do not add any corporate jargon but maintain professionalism. Do not include "subject:" or other pretext, include only the response.
Documentation
Ticket - Resolution draft
User title:
Draft a resolution
Description:
Generates a resolution draft using the knowledge base as its basis. Requires AI Knowledge Discovery to be set up.
Context attribute examples:
Subject,AI service suggestion,AI ticket type suggestion,Details,AI Team suggestion,E-mail messages,Worklog
Mode:
Text creation
Prompt instruction
Create a concise (max 2 very short paragraphs) resolution text to document the service management ticket resolution according to the provided context information from internal comments and other ticket details. You also have access to the company knowledge base, which you can use to suggest a resolution. When referring to specific articles, use only their "solution_name".
Note: for easy access for the users writing resolutions, use this with Writing assistance and set the target attribute to Resolution.
Ticket - Draft resolution note
User title:
Draft resolution note
Description:
Produces a professional closing statement based on ticket resolution.
Context attribute examples:
Subject,Details,Resolution
Mode:
Text creation
Prompt instruction
You are an AI assistant on a service management platform for a support agent. You are provided with the latest information about a support ticket the agent is handling. Provide a brief, professional closing statement about resolution of ticket the agent can send to the user who reported the issue. Do not add any signature. Do not add any corporate jargon but maintain professionalism. Do not include "subject:" or other pretext, include only the response.
Ticket - Summarize ticket as a comment
User title:
Ticket summarization
Description:
Provide a concise summary of the ticket.
Context attribute examples:
Team,Subject,External comments,All ESS2 comments
Mode:
Text creation
Prompt instruction
Using key details from a service management support ticket, summarize the core issue, actions taken, causes identified, and current resolution status. Ensure the support agent understands the urgency, progress made, and next steps needed. Keep the overview clear and structured, without using introductory or concluding phrases, focusing solely on critical ticket information.
Knowledge discovery
The following actions require the Knowledge Discovery feature to be set up. Knowledge Discovery for support agents is a new beta feature available for piloting in M42 Pro version 2025.3. If you would like to learn more, please contact your sales representative.
Setting up M42 Intelligence to utilize Knowledge Discovery
After Knowledge Discovery has been set up and you have indexed your documents, the following configurations need to be made in the M42 Intelligence admin settings on the M42 Pro platform:
1. Choose a compatible generative AI provider (M42 GenAI with RAG (BETA))

2. After the provider has been selected, you need to set each configuration to use RAG

3. Make sure your prompt instructs the AI to take into account that it has access to the knowledge base, and whether you would like that fact to be reflected in the responses. For example, you might want responses to state whether they are based on 1. stored company knowledge or 2. general knowledge the AI is aware of from its training data. See the examples below.
Ticket - Find answers for a comment
User title:
Search for an answer from knowledge base
Description:
This functionality requires AI Knowledge Discovery.
Mode:
Text creation
Prompt instruction
You are provided with an IT support ticket. You are helping the support agent to write a comment to the user who reported the issue. You have access to the company knowledge base to help address the issue at hand. Using existing knowledge, search for a correct answer to be communicated in a response to the user reporting the issue as a comment to the self-service portal. Do not refer to a specific knowledge base article. If the knowledge base does not contain relevant content, provide generic assistance for the support agent on what should be done instead. Provide only the suggested response to be sent to the user as-is, without any pretext or additional remarks.
Context attribute examples:
Subject,Details,Resolution
Target attribute:
Attribute used for Self-service portal commenting
Select: Use predefined data sources for responses
Ticket - Resolution draft
User title:
Draft a resolution
Description:
Generates a resolution draft using the knowledge base as its basis. Requires AI Knowledge Discovery to be set up.
Mode:
Text creation
Prompt instruction
Create a concise (max 2 very short paragraphs) resolution text to document the service management ticket resolution according to the provided context information from internal comments and other ticket details. You also have access to the company knowledge base, which you can use to suggest a resolution. When referring to specific articles, use only their "solution_name".
Do not include a "Resolution draft" or other header for your response. Keep the resolution text straight to the point and avoid excessive jargon.
Context attribute examples:
Subject,AI service suggestion,AI ticket type suggestion,Details,AI Team suggestion,E-mail messages,Worklog
Target attribute:
Resolution
Select: Use predefined data sources for responses
AI Agent for Ticket Preparations
Implementation guide
Deploying the AI Agent for Ticket Preparations consists of the following steps. Each step is explained separately below:
| # | Step | Details |
|---|------|---------|
| 1 | Basic configurations | Provider configurations (URL, API key), technical product license. |
| 2 | Definitions | Lightweight definition session for confirming the desired process and use cases. Review the customer’s existing ticketing process and plan how to incorporate the AI nodes. |
| 3 | Technical class and attributes | Add the necessary hidden technical attributes where the generated values are set by the workflow. |
| 4 | Actions configurations | Configuration of default actions. |
| 5 | Workflow configurations and process logic | Adding 7 nodes (one node per AI action) to point towards the 7 actions mentioned above. Add the necessary workflow script nodes to set the values to the actual target attributes. Note: an existing workflow is required; if there is no workflow, it must be built. |
| 6 | Testing | End-to-end testing. |
Basic configurations
- Fill in the “Provider configuration”

- Install technical product license
Definitions
- Lightweight definition session for confirming the desired process and use cases.
- Review the customer’s existing ticketing process and plan how to incorporate the AI nodes.
- The template used for the ticketing process must have a Workflow implemented in order to take “Actions” into use.
Technical class and attributes
NOTE:
The configurations below represent the default solution setup available in M42 Baseline 2025.2. Configurations may not fit directly into an existing environment as-is and might need to be implemented differently to suit the target environment.
- Add the necessary hidden technical attributes where the generated values are set by the workflow. The following classes are available in M42 Professional baseline 2025.2:
- Ticket -template (workflow setting values into attributes)

- Knowledge article -template (listener on Ticket -template copying values into these attributes)

Workflow configurations and process logic
NOTE:
The configurations below represent the default solution setup available in M42 Baseline 2025.2. Configurations may not fit directly into an existing environment as-is and might need to be implemented differently to suit the target environment.
The following instructions explain which nodes need to be added to the workflow, as well as the listener that copies details from the Ticket to the Knowledge article. The following logic is available in M42 Professional baseline 2025.2:
- Ticket -template
- Related nodes need to be added into Ticket workflow. These nodes are included in M42 Professional baseline 2025.2
- Add listener to copy details to Knowledge article (Knowledge article creation while resolving the Ticket)
```xml
<listener>
  <name>postsave.CREATE Knowledge article automatically while ticket is resolved 2025.2</name>
  <trigger>post save</trigger>
  <source_conditions boolean="AND">
    <source_condition>
      <value>
        <attribute>
          <code>related_solution</code>
          <current_value>true</current_value>
        </attribute>
        <operator>IS NULL</operator>
        <compared_value/>
      </value>
    </source_condition>
    <source_condition>
      <value>
        <attribute>
          <code>resolution</code>
          <current_value>true</current_value>
        </attribute>
        <operator>IS NOT NULL</operator>
        <compared_value/>
      </value>
    </source_condition>
    <source_condition>
      <value>
        <attribute>
          <code>create_knowledgearticle</code>
          <current_value>true</current_value>
        </attribute>
        <operator>IS NOT NULL</operator>
        <compared_value/>
      </value>
    </source_condition>
  </source_conditions>
  <action_chain>
    <name>Create knowledge article and clear selection</name>
    <action>
      <name>Clear knowledge article</name>
      <class>com.efecte.datamodel.entity.action.implementations.CreateDataCardAction</class>
      <configuration_item>
        <name>ticket_details</name>
        <value>$details$</value>
      </configuration_item>
      <configuration_item>
        <name>ticket_subject</name>
        <value>$subject$</value>
      </configuration_item>
      <configuration_item>
        <name>ticket_resolution</name>
        <value>$resolution$</value>
      </configuration_item>
      <configuration_item>
        <name>listener_flag</name>
        <value>Check</value>
      </configuration_item>
      <configuration_item>
        <name>Reference from source</name>
        <value>related_solution</value>
      </configuration_item>
      <configuration_item>
        <name>Folder</name>
        <value>knowledge_base</value>
      </configuration_item>
      <configuration_item>
        <name>ticket_service_string</name>
        <value>$service$</value>
      </configuration_item>
      <configuration_item>
        <name>Template</name>
        <value>knowledge_base_article</value>
      </configuration_item>
    </action>
    <action>
      <name>Clear checkbox</name>
      <class>com.efecte.datamodel.entity.action.implementations.ChangeDataCardValuesAction</class>
      <configuration_item>
        <name>Value</name>
        <value/>
      </configuration_item>
      <configuration_item>
        <name>Code</name>
        <value>create_knowledgearticle</value>
      </configuration_item>
    </action>
  </action_chain>
</listener>
```
- Knowledge article -template
- Create a workflow with the following structure:
Testing
- End-to-end testing.
AI Action configurations
The following “Actions” are used in the Ticket workflow for ticket data preparation. The configuration is based on the baseline solution, which might require changes based on individual environments:
Ticket - Semantic classification: Ticket type
Unique name (name of the Action for the admin to recognize it): Ticket - Semantic classification: Ticket type
User title (title of the Actions shown for the user): Change ticket type
Description (description of the Action to instruct the user): Sometimes users may report their issue as a problem even though it is something else: e.g. a query or request.
Prompt instruction: You are an AI assistant analyzing service management tickets. Your task is to classify the ticket type based solely on the content of the Details attribute. Rules: Incident: Use this if the Details describe: - A disruption, outage, or malfunction (e.g., 'The system is down,' 'I can’t log in'). - A problem requiring urgent resolution (e.g., 'Error 500 when submitting a form'). - Any issue impacting normal operations. Request for Information: Use this if the Details describe: - A question or inquiry (e.g., 'How do I reset my password?', 'What are the office hours?'). - A request for guidance, documentation, or clarification. - No active problem or disruption is mentioned. Output Requirements: - Respond with only one word: Either Incident or Request for Information. - No additional text, explanations, or quotation marks—just the classification.
Context attribute suggestions: Details
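When tuning a classification prompt like the one above, it can be helpful to exercise it outside ESM first. The sketch below is a minimal illustration only, assuming direct access to an OpenAI-compatible chat-completion API; the helper function, model name, and payload shape are hypothetical and not part of the ESM configuration.

```python
# Hypothetical helper for testing a classification prompt outside ESM.
# The system prompt mirrors the action configuration above; the payload
# assumes an OpenAI-compatible chat-completion request format.

CLASSIFICATION_PROMPT = (
    "You are an AI assistant analyzing service management tickets. "
    "Classify the ticket type based solely on the content of the Details attribute. "
    "Respond with only one word: Either Incident or Request for Information."
)

def build_classification_request(details: str, model: str = "gpt-4o-mini") -> dict:
    """Build a chat-completion payload; the Details context attribute maps to the user message."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": CLASSIFICATION_PROMPT},
            {"role": "user", "content": f"Details: {details}"},
        ],
        # Deterministic output suits single-word classification.
        "temperature": 0,
    }

payload = build_classification_request("The system is down and I can't log in.")
```

Iterating on the prompt this way (varying the Details text and checking that the answer stays a single word) makes it easier to spot prompts that leak explanations before wiring the action into the workflow.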
Ticket - Semantic classification: Service
Unique name (name of the Action for the admin to recognize it): Ticket - Semantic classification: Service
User title (title of the Actions shown for the user): Suggest classification (Service)
Description (description of the Action to instruct the user): Based on content of the ticket, let AI suggest classification.
Prompt instruction: Analyze the ticket content and classify it into ONE of these services: Access rights, Application Deployment, Application Development & Update, Application Monitoring, Capacity Management, Data Backup and Recovery, Desktop & End User Support, Device as a Service, Email, Facilities, Finance, HR, Legal, License Management, Marketing, Network Connectivity, Network Security, Single Sign-On, Software Installation and Updates, Virtualization Services, VPN Access, Wireless Network Management Instructions: Read the ticket description carefully Identify key technical terms, user requests, and problem context Match to the most relevant service category If multiple categories seem relevant, choose the PRIMARY issue Return ONLY the exact service name from the list above If uncertain, choose the closest match Response format: Service Name Only
Context attribute suggestions: Subject, Details
Ticket - Summarize e-mail messages
Unique name (name of the Action for the admin to recognize it): Ticket - Summarize e-mail messages
User title (title of the Actions shown for the user): Summarize e-mail messages
Description (description of the Action to instruct the user): Summarizes all e-mail messages.
Prompt instruction: A Service desk agent might get a ticket with a long e-mail thread, where the real issue might disappear inside the long messaging thread. Make a short summary so the Service desk agent easily gets the idea of what is going on and whether some troubleshooting has already been done by the customer. The summarization must always have the prefix "Short summarization of original issue according to conversation in e-mails:" The prefix must not include quotation marks.
Context attribute suggestions: File attachments, E-mail messages
Ticket - Assign Ticket to a Team
Unique name (name of the Action for the admin to recognize it): Ticket - Assign Ticket to a Team
User title (title of the Actions shown for the user): Assign Team
Description (description of the Action to instruct the user): Based on the topic of the issue, let AI assign the Ticket to the proper Team for handling the issue.
Prompt instruction: Analyze the service management ticket data and assign it to the appropriate team based on these guidelines: Team Responsibilities: Business Services: Handles business-related issues such as: Business process questions Business application support Business workflow issues Business documentation Department-specific business requests Facility Team: Manages facility-related matters including: Building maintenance Office equipment (non-IT) Physical security access Climate control Cleaning services Office supplies Workspace arrangements HR Support Team: Handles all HR-related inquiries such as: Employment questions Benefits and compensation Training and development Employee relations Recruitment Workplace policies Time and attendance Service Desk Level 1: Manages all IT-related issues including: Computer hardware/software problems Network connectivity Account access Password resets Email issues Printer problems IT equipment requests Application support Print only the name of the suggested team
Context attribute suggestions: Subject, Details
Ticket - Resolution to customer
Unique name (name of the Action for the admin to recognize it): Ticket - Resolution to customer
User title (title of the Actions shown for the user): Resolution to customer
Description (description of the Action to instruct the user): Generate a precise resolution which is visible to the customer.
Prompt instruction: You are an AI Service Desk Assistant. Analyze the ticket details, including text and any screenshots (e.g., bluescreens, error messages). Write a clear, polite resolution that: • Uses simple language suitable for any employee. • Acknowledges the screenshot explicitly (e.g., “Based on the screenshot…”). • Gives practical next steps or advice. • Explains technical terms in plain language. End with this disclaimer: “This suggestion is based on general best practices and may not reflect your company-specific systems or configurations. For issues that persist, please contact your IT support team.” Keep the response concise (3–6 sentences) and ready to send as-is.
Context attribute suggestions: Self-Service attachments, File attachments, Subject, Details
Knowledge article - Knowledge article creation
Unique name (name of the Action for the admin to recognize it): Knowledge article - Knowledge article creation
User title (title of the Actions shown for the user): Generate content for a Knowledge article
Description (description of the Action to instruct the user): Generate content for a Knowledge article
Prompt instruction: As a Knowledge Manager, use provided service management data to create a knowledge base article for Service Desk Agents. Include: Overview: Issue intro from data. Symptoms: Key indicators from data. Troubleshooting: Steps and tools from data. Resolution: Recommended fix. Prevention: Best practices. References: Related links. Ensure clarity and actionability.
Context attribute suggestions: Ticket details, Ticket subject, Ticket resolution
Knowledge article - Generate title for Knowledge article
Unique name (name of the Action for the admin to recognize it): Knowledge article - Generate title for Knowledge article
User title (title of the Actions shown for the user): Generate title for Knowledge article
Description (description of the Action to instruct the user): Based on a solution description, generate a title for the Knowledge article.
Prompt instruction: As a Knowledge Manager, I want to create a descriptive, user-friendly, understandable title for the Knowledge article. The title should be short but describe the solution well, so that a Service desk agent can easily select the correct knowledge article by its title. Print only the actual title, e.g. no quotation marks needed around the title.
Context attribute suggestions: Article details
Useful platform settings
- ai.max.prompt.length - Defines the maximum prompt length that the admin can set
- ai.system.prompt - Defines the default behavior for all features
- ai.actions.prompt - Adjusts the default behavior of Actions
- ai.writingAssistant.prompt - Adjusts the default behavior of Writing assistance
- ai.actions.monthly.usage.limit - Limits how many transactions can be used monthly (cost management)
- ai.request.timeout.seconds - Defines how long ESM waits for AI responses (useful in complex scenarios)
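To illustrate how two of these limits interact, the sketch below shows one way a request could be validated against ai.max.prompt.length and ai.actions.monthly.usage.limit. The function name, data shapes, and example values are assumptions for illustration only; this is not ESM's actual implementation.

```python
# Hypothetical sketch of applying two platform limits before an Action runs.
# Setting names mirror the platform settings above; the enforcement logic
# and example values are illustrative assumptions, not ESM internals.

SETTINGS = {
    "ai.max.prompt.length": 4000,           # example value
    "ai.actions.monthly.usage.limit": 1000  # example value
}

def can_trigger_action(prompt: str, monthly_usage: int) -> bool:
    """Return True if the request passes both configured limits."""
    if len(prompt) > SETTINGS["ai.max.prompt.length"]:
        return False  # prompt exceeds the configured maximum length
    if monthly_usage >= SETTINGS["ai.actions.monthly.usage.limit"]:
        return False  # monthly transaction budget is exhausted
    return True

print(can_trigger_action("Summarize this ticket.", monthly_usage=10))  # True
print(can_trigger_action("x" * 5000, monthly_usage=10))                # False
```

Lowering ai.actions.monthly.usage.limit is the main lever for cost control; raising ai.max.prompt.length mostly matters when Actions include large context attributes.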
Troubleshooting
If you run into any issues during use, make sure to check the following:
Features are not triggered / there are errors:
Error messages should pinpoint issues in the configuration or connection, but if you are unsure, check that the following have been set:
- Make sure the API URL and keys are set correctly
- Make sure the feature is not disabled with the platform setting
- Make sure the monthly usage limit has not been reached (adjust it in the platform settings, if feasible from a cost perspective)
- Check the m42_intelligence logs for issues
- With the AI Workflow node: consider using exceptions to make sure data cards are handled properly regardless of error situations (e.g. roll back to the previous stage)

M42 Intelligence logs are useful for troubleshooting
Responses are not good enough:
- Make sure the context attributes have (relevant) values
- Adjust the prompts for the use cases
- Configuration prompt
- Adjust Actions / Writing assistance system prompt only if necessary
- We recommend not adjusting the general system prompt
Responses are not consistent:
- Make sure expected data in context attributes is found
- Make sure the prompts do not conflict
- General system prompt
- Actions / Writing assistance system prompts
- Action / Writing assistance configuration prompt
AI Agent for Ticket Preparation
Step 4 - Test and adjust
Make sure you test the output of each use case with real-life data cards on the configured templates to confirm that the generated content makes sense for your use cases. Add more specific instructions to the prompts as you see fit.
Image Recognition Support
Image recognition is supported with M42 Intelligence Actions starting from ESM version 2025.2.
It is possible to upload PNG and JPEG images (from attributes with the FileUpload handler) and use prompts to produce output, such as an image description, which can then be used in further conclusions and any other output.
It can be used with the following use cases:
- Translations
- Summary
- Creating a resolution
- Creating content for a KB article
Note
Image recognition is ONLY supported with OpenAI and Azure OpenAI at the moment.
If more data is sent than the context window can handle, not all data is considered in the response (max 5 MB).
On Azure OpenAI, only gpt-4o supports files.
Two new platform settings are available: "ai.images.max.size.mb" controls the total maximum size of all images embedded in the request, and "ai.images.max.count" controls the total number of images in the request.
OpenAI's documentation: https://platform.openai.com/docs/guides/images?api-mode=chat
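The two image settings can be thought of as a pre-flight filter on the request. The sketch below illustrates such a filter under assumed example values; the function name and the exact selection order are illustrative assumptions, not documented ESM behavior.

```python
# Illustrative sketch: select images that fit within the configured limits
# before the request is built. The constant names mirror the platform
# settings above; the values and filtering logic are assumptions.

MAX_TOTAL_MB = 5.0  # example value for ai.images.max.size.mb
MAX_COUNT = 3       # example value for ai.images.max.count

def select_images(image_sizes_mb: list) -> list:
    """Keep images in order until either the size or count limit is hit."""
    selected, total = [], 0.0
    for size in image_sizes_mb:
        if len(selected) >= MAX_COUNT or total + size > MAX_TOTAL_MB:
            break  # adding this image would exceed a configured limit
        selected.append(size)
        total += size
    return selected

print(select_images([2.0, 2.0, 2.0]))  # [2.0, 2.0] - third image would exceed 5 MB
```

In practice this means that when agents attach many large screenshots, only the first ones within the limits contribute to the response.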
Localization of AI Configurations
The Localization feature allows administrators to translate the user-facing texts of M42 Intelligence configurations for Writing Assistance and Actions. This makes it possible for agents working in different languages to see the AI features in their preferred language.
Localization applies to the following configuration fields:
- User title
- Description
The Prompt instruction field is not localized. Prompts remain in a single language to ensure consistent behavior of the generative AI.
Localization Tab
A Localization tab is available in the M42 Intelligence administration interface.

The tab displays all Writing Assistance and Action configurations in a table format. Each configuration includes separate rows for:
- User title
- Description
The table includes:
- Unique name - Identifier of the configuration
- Text type - Indicates whether the row represents a User title or Description
- Feature - Shows whether the configuration belongs to Writing Assistance or Actions
- Language columns - One column for each supported language
- Manage - Options for editing translations
The list of available languages is determined by the platform setting languages.translation.support.
Language names are displayed in their native form (for example Deutsch for German or Suomi for Finnish).
Default Language
The system’s default language is defined by the platform setting language.
Existing configurations are automatically treated as default language configurations. If a translation is not available for a specific language, the system automatically falls back to the default language text.
Administrators can also modify the default language text directly from the localization interface.
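The fallback rule described above can be sketched as a simple lookup. The data shapes, the Finnish example translation, and the function name below are illustrative assumptions; only the fallback behavior itself comes from the text.

```python
# Illustrative sketch of the default-language fallback described above.
# Data shapes and example translations are assumptions; only the rule
# "missing translation falls back to the default language" is from the text.

DEFAULT_LANGUAGE = "en"  # value of the platform setting "language"

translations = {
    "User title": {"en": "Generate content for a Knowledge article",
                   "fi": "Luo sisältöä tietämysartikkeliin"},   # assumed example
    "Description": {"en": "Generate content for a Knowledge article"},
}

def localized_text(field: str, user_language: str) -> str:
    """Return the translation for the user's language, or the default text."""
    texts = translations[field]
    return texts.get(user_language, texts[DEFAULT_LANGUAGE])

print(localized_text("User title", "fi"))   # Finnish translation exists
print(localized_text("Description", "fi"))  # falls back to the English default
```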
Managing Translations
Translations can be edited by selecting the edit (pen) icon in the Manage column.
This opens the edit window for that specific translation:

When editing a configuration, administrators can:
- Provide translations for all enabled languages.
- Update the User title and Description.
- Modify the default language text if necessary.
If a translation is missing, the system displays the default language text until a translation is added.
The Filter field in the top right corner allows administrators to quickly locate configurations by searching for identifiers or translated text.
End-User Experience
When agents use the Agent UI, the titles and descriptions of M42 Intelligence Actions and Writing Assistance features are displayed in the language selected in the user profile.
If a translation is not available for the user’s selected language, the system automatically displays the default language text.
Localization support is currently available only in the Agent UI.
Troubleshooting
Problem: Responses cut short
If responses generated by the generative AI are cut short, a workaround is adding a limitation to the response size. This can be done by prompting, for example: “Limit the response to 1000 characters”.
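The workaround above amounts to appending a size instruction to the prompt text. The helper below is a purely illustrative sketch of that wording pattern; it is not an ESM API, and the function name is an assumption.

```python
# Illustrative sketch: append a size limit to an Action's prompt so the
# generated response is less likely to be cut short. This is an example
# of prompt wording only, not an ESM feature or API.

def with_length_limit(prompt: str, max_chars: int = 1000) -> str:
    """Append a character-limit instruction to the given prompt text."""
    return f"{prompt} Limit the response to {max_chars} characters."

print(with_length_limit("Produce a summary of the ticket."))
```

Admins would paste the resulting sentence into the Action's Prompt instruction field rather than run code; the sketch only shows the wording pattern.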