Configure: EPE Jira Cloud Connector
Learn how to configure the EPE Jira Connector to synchronize your Jira issues.
How do you connect Jira Cloud to EPE?
Since 2023, the Efecte Provisioning Engine (EPE) includes a new Jira connector. An event-based Jira connector task is used when data is written from the Efecte platform to Jira. EPE administrators can configure the connection to the target Jira Cloud using the EPE administration UI. In Pro, it can be run event-based and triggered by Visual Workflow Automation.

Common use cases are:
- Creating Jira issues in Jira based on incidents on the ESM platform
- Importing Jira issues from Jira into the ESM platform
- Transferring comments between the ESM platform and Jira
- Transferring resolutions between the ESM platform and Jira
- Creating and removing Jira users
- Creating and removing Jira groups
- Adding users to and removing them from Jira groups
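One of the use cases above, transferring a comment from the ESM platform to Jira, boils down to a single Jira Cloud REST call. The sketch below (hypothetical helper name; `JIRA_BASE` is the example host format used later in this article) only builds the URL and request body for `POST /rest/api/3/issue/{key}/comment`:

```python
# Example host, matching the URL format this article uses for Jira Cloud.
JIRA_BASE = "https://instancename.atlassian.net"

def build_comment_request(issue_key, text):
    """URL and body for POST /rest/api/3/issue/{key}/comment on Jira Cloud.

    The v3 API expects the comment body in Atlassian Document Format (ADF),
    so plain text is wrapped in a one-paragraph document.
    """
    url = f"{JIRA_BASE}/rest/api/3/issue/{issue_key}/comment"
    body = {
        "body": {
            "type": "doc",
            "version": 1,
            "content": [
                {"type": "paragraph",
                 "content": [{"type": "text", "text": text}]},
            ],
        }
    }
    return url, body
```

Sending it would be an authenticated POST using the connector's Web API user and token; EPE performs this internally, so the sketch is illustration only.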
General guidance for scheduled tasks
How to Create a New Scheduled Task to Import Data
To configure a schedule-based provisioning task, you need access to the Administration / Connectors tab.
1. Open the Administration area (the cogwheel symbol).
2. Open the Connectors view.
3. Choose the connector for the schedule-based task and select New Task.
Note! If the connector does not exist yet, you have to choose New connector first and then New task.

4. Continue with the connector-specific instructions: Native Connectors
Should I use Incremental, Full, or Both?
A scheduled task can be either of Incremental or Full type.
Do not import permissions with AD and LDAP incremental tasks
Incremental tasks have an issue with importing permissions. At the moment it is recommended not to import group memberships with an incremental scheduled task.
On the Microsoft Active Directory and OpenLDAP connectors, remove this mapping on the incremental task:

Setting on Scheduled tasks:

The Incremental type is supported only for the Microsoft Active Directory, LDAP, and Microsoft Graph API (formerly known as Entra ID) connectors.
Incremental means that Native Connectors (EPE) fetches data from the source system using changed-timestamp information, so it fetches only data that has changed or been added since the previous incremental task run.
When an Incremental task runs for the very first time, it does a full fetch (and records the current timestamp in the EPE database). From then on, the task uses that timestamp to ask the data source for data changed since then, after which EPE updates the timestamp in the EPE database for the next run. Clearing the task cache does not affect this timestamp, so an Incremental task is always incremental after its first run.
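The first-run and timestamp bookkeeping described above can be sketched roughly like this (all names are hypothetical; `store` stands in for the EPE database table that keeps the per-task timestamp, and the two callables stand in for the connector queries):

```python
from datetime import datetime, timezone

class IncrementalTask:
    """Rough sketch of the incremental-fetch bookkeeping described above."""

    def __init__(self, store, fetch_all, fetch_changed_since):
        self.store = store
        self.fetch_all = fetch_all
        self.fetch_changed_since = fetch_changed_since

    def run(self, task_id):
        last_run = self.store.get(task_id)
        now = datetime.now(timezone.utc)
        if last_run is None:
            data = self.fetch_all()                    # very first run: full fetch
        else:
            data = self.fetch_changed_since(last_run)  # later runs: delta only
        self.store[task_id] = now  # saved for the next run; clearing the task
        return data                # cache does not touch this value
```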
The Full type is supported for all connectors.
A Full import fetches all data (that it is configured to fetch) from the source system on every run.
Both Full and Incremental tasks also use the task cache in EPE, which makes certain imports faster and lighter for the M42 system.
By default, the task cache is cleared at midnight UTC. When the cache has just been cleared, the next import does not use the cache to decide whether fetched data should be pushed to ESM: all fetched data is pushed. After that, subsequent runs (until the cache is cleared again) use the EPE cache to determine whether fetched data needs to be pushed to ESM or not.
You can configure the time of day at which the task cache is emptied by changing a global setting in the EPE datapump configuration:
/opt/epe/datapump-itsm/config/custom.properties
which is by default set to: clearCacheHours24HourFormat=0
You can also clear the cache several times a day, but this needs to be considered carefully, as it affects overall performance: EPE will push changes to ESM that are probably already there. Example (do not add spaces to the attribute value): clearCacheHours24HourFormat=6,12
After changing this value, restart the EPE datapump container for the change to take effect.
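Conceptually, the task cache acts like a fingerprint store: a record is pushed to ESM only if it is unseen or changed, and clearing the cache forces everything to be pushed on the next run. A minimal sketch under those assumptions (hypothetical names, not EPE's actual implementation):

```python
import hashlib
import json

class TaskCache:
    """Sketch of the task cache behaviour described above."""

    def __init__(self):
        self._fingerprints = {}

    def clear(self):
        """What the clearCacheHours24HourFormat schedule triggers: after this,
        every record looks new again and gets pushed to ESM once more."""
        self._fingerprints.clear()

    def needs_push(self, record_id, record):
        """True if the record is unseen or changed since the last run."""
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode("utf-8")).hexdigest()
        if self._fingerprints.get(record_id) == digest:
            return False          # unchanged: skip the push to ESM
        self._fingerprints[record_id] = digest
        return True
```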
Recommendations:
Always have a Full type scheduled task by default.
If you want to fetch changes to already-imported data more frequently than you can run the full task, add an incremental task as well. Usually an incremental task is not needed.
Recommended Scheduling Sequence
The recommended scheduling sequence depends on how much data is read from the customer's system/directory into the Matrix42 Core, Pro, or IGA solution, and on whether the import is Incremental or Full.
Examples for scheduling:
| Total amount of users | Total amount of groups | Full load sequence | Incremental load sequence |
| --- | --- | --- | --- |
| < 500 | < 1000 | Every 30 minutes if partial load is not used; four (4) times a day if partial load is used | Every 10 minutes |
| < 2000 | < 2000 | Every 60 minutes if partial load is not used; four (4) times a day if partial load is used | Every 15 minutes |
| < 5000 | < 3000 | Every four (4) hours if partial load is not used; twice a day if partial load is used | Every 15 minutes |
| < 10 000 | < 5000 | At most twice a day, whether or not partial load is used | Every 30 minutes |
| < 50 000 | < 7000 | At most once a day, whether or not partial load is used | Every 60 minutes |
| Over 50 000 | Over 7000 | There might be a need for another EPE worker; please contact the Product Owner | Evaluated separately |
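The table above can be read as a simple lookup. A sketch encoding those recommendations (how sizes between the listed bands are handled is an assumption of this sketch):

```python
def recommended_sequence(users, groups, partial_load):
    """Return the (full, incremental) scheduling recommendation per the
    table above, given total user/group counts and partial-load usage."""
    if users < 500 and groups < 1000:
        full = "four times a day" if partial_load else "every 30 minutes"
        return full, "every 10 minutes"
    if users < 2000 and groups < 2000:
        full = "four times a day" if partial_load else "every 60 minutes"
        return full, "every 15 minutes"
    if users < 5000 and groups < 3000:
        full = "twice a day" if partial_load else "every 4 hours"
        return full, "every 15 minutes"
    if users < 10_000 and groups < 5000:
        return "at most twice a day", "every 30 minutes"
    if users < 50_000 and groups < 7000:
        return "at most once a day", "every 60 minutes"
    # Largest band: sizing needs a separate evaluation.
    return ("contact the Product Owner (another EPE worker may be needed)",
            "evaluated separately")
```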
Please note that if several tasks run at the same time, you may need more EPE workers. Tasks should be scheduled at different times and can be scheduled according to the table above. However, if more than 6 tasks run at the same time, the number of EPE workers should be increased. As a best practice, do not schedule tasks to run at the same time if possible.
Recommendations related to performance
If the amount of data to be imported is over 10 000, consider the following:
Adjust the log level of ESM and DATAPUMP to ERROR level, to lower the amount of logging during the task run.
Have as few automations as possible (listeners, handlers, workflows) starting immediately for imported datacards, as those make ESM take longer to handle new datacards.
Set removed accounts and entitlements status removed/disabled
With this functionality, you can set an account's or entitlement's status to e.g. Deleted or Disabled when the account or entitlement is removed from the source system. Starting from version 2025.3, you can also set the status of generic objects (not only accounts/identities and entitlements/groups).
For version 2025.3 and newer
In version 2025.3, these settings were moved from the properties files to the Task UI. You can now also apply these settings to generic objects, which was not possible before this version.
There is a separate configuration for each scheduled task and for each mapping type. Here is an example of this configuration on a task:

For version 2025.2 and older
This functionality is available for Full type scheduled tasks.
The settings live in the datapump container's configuration file. To change these parameter values, set them in the /opt/epe/datapump-itsm/config/custom.properties file.
Configuration
To enable the disabling functionality, the datapump configuration should have these parameters set to true:
disable.unknown.esm.users=true
disable.unknown.esm.groups=true
These two parameters are false by default in versions 2024.2 and 2025.1. In 2025.2 and newer they are true by default.
Next, there are these parameters:
personTemplateStatusCodeAttributeKey=accountStatus
personTemplateStatusAttributeDisabledValueKey=Deleted
groupTemplateStatusCodeAttributeKey=status
groupTemplateStatusAttributeDisabledValueKey=5 - Removed
The first two parameters should point to the DatacardHiddenState attribute in the User template and specify which value should be sent there when the user is deleted.
By default this is accountStatus with the value 5 - Removed on the IGA Account template.
All of these need to match the attribute configuration:

The same applies to the next two parameters, but for groups.
If you need to change these parameters in the properties file, make the changes in the datapump container in the file /opt/epe/datapump-itsm/config/custom.properties. Those changes will then survive a container reboot and be copied on reboot to /opt/epe/datapump-itsm/config/application.properties.
Description
Tasks save their task id, shown as the Task Id mapping in the UI, to the datacards; it is then used to determine whether a datacard was added by this task, in case there are multiple tasks with different sets of users.
This field previously held the datasourceid, but since the move to a model where one connector can have multiple tasks, the connector's identifier can no longer be used; that is why the field was repurposed as the task id.
Taking users as an example: when the task runs, ESM is asked for the list of users that have this task's id in the Task Id mapping field and do not have the personTemplateStatusAttributeDisabledValueKey value in the personTemplateStatusCodeAttributeKey attribute.
This result is then compared to what the task fetched, and the datacards of users that were not fetched have their personTemplateStatus attribute set to the value specified in the configuration (5 - Removed by default).
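The comparison described above can be sketched as follows (helper and field names are hypothetical; `fetched_ids` is the set of identifiers the task fetched from the source system):

```python
def mark_removed(esm_datacards, fetched_ids, task_id,
                 status_key="accountStatus", removed_value="5 - Removed"):
    """Sketch of the removed-datacard comparison described above.

    ESM is asked for datacards carrying this task's id that are not already
    marked removed; any of them the task did not fetch gets the removed status.
    """
    changed = []
    for card in esm_datacards:
        if card.get("taskId") != task_id:
            continue                          # added by some other task
        if card.get(status_key) == removed_value:
            continue                          # already marked removed
        if card["id"] not in fetched_ids:
            card[status_key] = removed_value  # gone from the source system
            changed.append(card)
    return changed
```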
The example log below shows the described process and reports that one user was removed.

The same applies to groups, except that the groupTemplateStatus attributes are used instead.
Notes
- The feature works only with Full fetch scheduled tasks.
- There is no support for generic templates yet, only identity and access.
- When migrating from previous versions where the datasourceid was still in use, the task needs to run at least once to first set its task id in the datacards.
- EPE identifies disabled users or groups as ones that were removed from the AD; at present we do not support statuses related to the entity being active or not.
- EPE does not re-enable users on its own.
- If more than one task fetches the same users or groups, the task id in the datacard may be overwritten depending on which task ran last. It is recommended that multiple Full type tasks do not fetch the same user or group.
- Always make configuration file changes in custom.properties; do not change only application.properties, as those changes are lost on container reboot unless the same changes are also made in custom.properties.
Step-by-step instructions: scheduled
1. Open the Efecte administration area (the cogwheel symbol) in the top right corner.
2. Open the Connectors view.

3. Create a new Jira connector (if you do not already have one)
- The Jira connection's host URL should have the format: https://instancename.atlassian.net
- Remember to set all attributes before clicking "Test connection"; this includes the Web API user and its password. If you do not see a popup message in the UI saying that the test connection succeeded (or if you see a popup error), check the main EPE logs for errors.
- The username is the user who created the access token.
- Access tokens are valid for at most one year (a limitation on the Jira side), so document when the token expires, so that you remember to create a new token and configure it in the Jira connection before the old one expires. For more information, see: https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/

4. Select the Atlassian Jira type connector from the overview; under Scheduled tasks, click "+ New task".
5. Choose a scheduling sequence, which depends on how much data is read from Jira. Fill in a unique name for the provisioning task and the Jira query.
- It is best to test your Jira query (JQL) in an actual Jira system before using it in the task configuration.
See also the Jira documentation:
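One convenient way to sanity-check a JQL query outside of Jira's own UI is the Jira Cloud search endpoint. The sketch below only builds the request URL for `GET /rest/api/3/search` (the host is the example instance format shown in step 3; sending it would require authentication):

```python
from urllib.parse import urlencode

# Example host, matching the URL format shown in step 3 of this article.
JIRA_BASE = "https://instancename.atlassian.net"

def build_jql_search_url(jql, max_results=50):
    """URL for Jira Cloud's GET /rest/api/3/search, handy for validating a
    JQL query with an authenticated GET before using it in the task."""
    return f"{JIRA_BASE}/rest/api/3/search?" + urlencode(
        {"jql": jql, "maxResults": max_results})
```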
6. Fill in the attribute mapping section. Note! The content varies depending on the value you have chosen in the Mapping type field.

7. Save the provisioning task with the Save button.
8. You have now configured a scheduled provisioning task. You can click "Run task" to run it manually right away if needed, or wait for it to run automatically according to your schedule.

9. Whether the task is run manually (by clicking "Run task") or according to its schedule, the task's status can be reviewed under View history:

Step-by-step instructions: event-based
1. Open the Efecte administration area (the cogwheel symbol) in the top right corner.

2. Open the Connectors view.
3. Select the Atlassian Jira connector from the Overview, then Event-based tasks and the + New task button.
4. Fill in a unique name for the provisioning task.
- The mapping type must be a generic template when handling object types other than users and groups (for example, issues).

5. Fill in the attribute mapping section. Note! The content varies depending on the value you have chosen in the Mapping type field.
- Additional attributes can be set by selecting "New attribute", typing the attribute name, and accepting it by clicking "Add item". This is very useful if you have added your own custom attributes to Jira.

- It is possible to define which attribute information is written to Jira.
- There can be several provisioning tasks for different purposes against Jira.
- To create a Jira ticket, you must specify at least issueType, project, and summary; depending on your Jira configuration, other attributes may also be required. Confirm the required attributes from the Jira system.
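A minimal issue payload, per the bullet above, needs at least issueType, project, and summary. A sketch of the corresponding `fields` body for Jira Cloud's `POST /rest/api/3/issue` (any extra attributes beyond the baseline depend on the target Jira's configuration):

```python
def build_issue_fields(issue_type, project_key, summary, **extra_fields):
    """Minimal "fields" body for Jira Cloud's POST /rest/api/3/issue:
    issueType, project and summary are the baseline; anything else
    depends on the target Jira's configuration."""
    fields = {
        "issuetype": {"name": issue_type},
        "project": {"key": project_key},
        "summary": summary,
    }
    fields.update(extra_fields)   # e.g. priority, labels, custom fields
    return {"fields": fields}
```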

7. Save the provisioning task with the Save button.
8. The next step is to configure a workflow to use this event-based task. From the workflow engine of the Efecte Service Management platform, it is possible to run provisioning activities against Jira. This means that any of the available activities can be run at any point in the workflow using the "Orchestration" node.
Example settings for Orchestration nodes for Jira:

