Atlassian Jira Cloud Connector
Learn how to configure the Atlassian Jira Connector to synchronize your Jira tickets, users, groups and permissions.
Archived
How to connect to Atlassian Jira Cloud with Atlassian Jira Cloud connector
The Atlassian Jira Cloud connector is one of the available Native Connectors and is used to integrate Matrix42 Pro/IGA with an Atlassian Jira Cloud instance. An event-based Jira connector task is used when writing data from the Matrix42 Pro/IGA platform towards Jira. Solution administrators can configure the connection to the target Jira Cloud instance using the Native Connectors admin UI. Processes can be run event-based, triggered by Visual Workflow Automation.

Common use cases are:
- Create Jira issues based on incident from ESM platform to Jira
- Import Jira issues from Jira to ESM platform
- Transfer comments between ESM platform and Jira
- Transfer resolution between ESM platform and Jira
- Create and delete Jira users
- Create and delete Jira groups
- Add and remove users from Jira groups
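For orientation, the issue-creation use case corresponds to a single call against the Jira Cloud REST API (`POST /rest/api/3/issue` with Basic authentication using the account email and an API token). The connector performs the equivalent call for you; the sketch below is only an illustration, and the instance URL, project key, and credentials are placeholders:

```python
import json
from base64 import b64encode
from urllib.request import Request

def build_issue_payload(project_key: str, issue_type: str, summary: str) -> dict:
    """Minimal fields Jira Cloud typically requires to create an issue."""
    return {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": issue_type},
            "summary": summary,
        }
    }

def build_request(base_url: str, email: str, api_token: str, payload: dict) -> Request:
    """Build a POST /rest/api/3/issue request with Basic auth (email + API token)."""
    token = b64encode(f"{email}:{api_token}".encode()).decode()
    return Request(
        url=f"{base_url}/rest/api/3/issue",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example with placeholder values; sending is left to urllib.request.urlopen:
req = build_request(
    "https://instancename.atlassian.net", "user@example.com", "API_TOKEN",
    build_issue_payload("ITSM", "Task", "Incident escalated from ESM"),
)
```

Depending on your Jira project configuration, additional fields beyond project, issue type and summary may be mandatory.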
General guidance for scheduled tasks
How to Create New Scheduled Task to import data
To configure a scheduled provisioning task, you need access to the Administration / Connectors tab.
1. Open the Administration area (a cogwheel symbol).
2. Open the Connectors view.
3. Choose the connector for the scheduled task and select New Task.
Note! If the connector does not exist yet, first choose New connector and only then New task.

4. Continue with connector specific instructions: Native Connectors
Should I use Incremental, Full or Both?
A scheduled task is either of Incremental or Full type.
Do not import permissions with AD and LDAP incremental tasks
Incremental tasks have a known issue with permission importing. At the moment it is recommended not to import group memberships with an incremental scheduled task.
On the Microsoft Active Directory and OpenLDAP connectors, remove this mapping on the incremental task:

Setting on Scheduled tasks:

The Incremental type is supported only for the Microsoft Active Directory, LDAP and Microsoft Graph API (formerly known as Entra ID) connectors.
Incremental means that Native Connectors (EPE) fetches data from the source system using changed-timestamp information, so it fetches only data that was changed or added after the previous incremental task run.
When an Incremental task runs for the very first time, it does a full fetch (and stores the current timestamp in the EPE database). On subsequent runs, the task asks the data source for data changed since that timestamp, and EPE then updates the timestamp in the EPE database for the next run. Clearing the task cache does not affect this timestamp, so an Incremental task is always incremental after its first run.
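The timestamp bookkeeping described above can be sketched as follows. This is a simplified model for illustration only, not the actual EPE implementation; records are assumed to be (id, changed_at) pairs:

```python
def run_incremental(store: dict, records: list, now: int) -> list:
    """One incremental task run: the first run fetches everything, later runs
    fetch only records changed after the stored timestamp, then advance it."""
    last = store.get("last_run")            # None on the very first run
    if last is None:
        fetched = list(records)             # first run: full fetch
    else:
        fetched = [r for r in records if r[1] > last]
    store["last_run"] = now                 # persisted; clearing the task cache does not touch it
    return fetched
```

Under this model, the first call returns all records and every later call returns only those with a change timestamp newer than the previous run.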
The Full type is supported by all connectors.
A Full import always fetches all data (that it is configured to fetch) from the source system, on every run.
Both Full and Incremental tasks also use the task cache in EPE, which makes certain imports faster and lighter for the Matrix42 system.
By default the task cache is cleared at midnight UTC. When the cache is cleared, the next import runs without the cache: all fetched data is pushed to ESM. After that, until the cache is cleared again, subsequent task runs use the EPE cache to determine whether fetched data needs to be pushed to ESM or not.
You can configure at what time of day the task cache is emptied by changing a global setting in the EPE datapump configuration file:
/opt/epe/datapump-itsm/config/custom.properties
which by default is set to: clearCacheHours24HourFormat=0
You can also clear the cache several times a day, but this needs careful consideration, as it affects overall performance: EPE will push changes to ESM that are probably already there. Example (do not add spaces to the attribute value): clearCacheHours24HourFormat=6,12
After changing this value, reboot the EPE datapump container to take the change into use.
Recommendations:
Always have a Full type scheduled task by default.
If you want to fetch changes to already imported data more frequently than you can run the full task, add an incremental task as well. Usually an incremental task is not needed.
Recommended Scheduling Sequence
The recommended scheduling sequence depends on how much data is read from the customer's system/directory into the Matrix42 Core, Pro or IGA solution, and on whether the import is Incremental or Full.
Scheduling examples:
| Total amount of users | Total amount of groups | Full load sequence | Incremental load sequence |
| --- | --- | --- | --- |
| < 500 | < 1000 | Every 30 minutes if partial load is not used; four (4) times a day if partial load is used | Every 10 minutes |
| < 2000 | < 2000 | Every 60 minutes if partial load is not used; four (4) times a day if partial load is used | Every 15 minutes |
| < 5000 | < 3000 | Every four (4) hours if partial load is not used; twice a day if partial load is used | Every 15 minutes |
| < 10 000 | < 5000 | Maximum twice a day, whether or not partial load is used | Every 30 minutes |
| < 50 000 | < 7000 | Maximum once a day, whether or not partial load is used | Every 60 minutes |
| Over 50 000 | Over 7000 | There might be a need for another EPE worker; please contact the Product Owner | Separately evaluated |
Please note that if several tasks run at the same time, you may need more EPE workers. Tasks should be scheduled at different times and according to the table above. However, if more than 6 tasks run at the same time, the number of EPE workers should be increased. As a best practice, avoid scheduling tasks to run at the same time whenever possible.
Recommendations related to performance
If the amount of data to be imported exceeds 10 000 records, consider the following:
- Adjust the log level of ESM and DATAPUMP to ERROR level, to lower the amount of logging during the task run.
- Have as few automations as possible starting immediately for imported datacards (listeners, handlers, workflows), as these make ESM take longer to handle new datacards.
Set removed accounts' and entitlements' status to removed/disabled
With this functionality, you can set the account or entitlement status to e.g. Deleted or Disabled when the account or entitlement is removed from the source system. Starting from version 2025.3 you can also set a status on generic objects (not only on accounts/identities and entitlements/groups).
For version 2025.3 and newer
In version 2025.3 these settings were moved from the properties files to the Task UI. You can now also set them for Generic objects, which was not possible before this version.
There is a separate configuration for each scheduled task and for all mapping types. Here is an example of this configuration on a task:

For version 2025.2 and older
This functionality is available for Full type scheduled tasks.
The settings are in the datapump container's configuration file. To change the parameter values, set them in the /opt/epe/datapump-itsm/config/custom.properties file.
Configuration
To enable the disabling functionality, the datapump configuration should have these parameters set to true:
disable.unknown.esm.users=true
disable.unknown.esm.groups=true
These two parameters are false by default in versions 2024.2 and 2025.1. In 2025.2 and newer versions they are true by default.
Next are these parameters:
personTemplateStatusCodeAttributeKey=accountStatus
personTemplateStatusAttributeDisabledValueKey=Deleted
groupTemplateStatusCodeAttributeKey=status
groupTemplateStatusAttributeDisabledValueKey=5 - Removed
The first two parameters should point to the DatacardHiddenState attribute in the User template and tell which value should be sent there when the user is deleted.
By default these are accountStatus and the value 5 - Removed on the IGA Account template.
All of these need to match the attribute configuration:

The same applies to the next two parameters, but for groups.
If you need to change these parameters in the properties file, make the changes in the datapump container in /opt/epe/datapump-itsm/config/custom.properties. The changes will then survive a container reboot and will be copied on reboot to /opt/epe/datapump-itsm/config/application.properties.
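Putting the settings together, a complete custom.properties fragment for this feature might look like the following (the values simply repeat the defaults described above; adjust them to your own attribute configuration):

```
disable.unknown.esm.users=true
disable.unknown.esm.groups=true
personTemplateStatusCodeAttributeKey=accountStatus
personTemplateStatusAttributeDisabledValueKey=Deleted
groupTemplateStatusCodeAttributeKey=status
groupTemplateStatusAttributeDisabledValueKey=5 - Removed
```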
Description
Tasks save their task id (shown as the Task Id mapping in the UI) to the datacards; it is then used to determine whether a datacard was added by this task, in case there are multiple tasks with different sets of users.
This field was previously used as the datasourceid, but since the move to a model where a connector can have multiple tasks, the connector identifier can no longer be used; that is why the field was repurposed as the task id.
Taking users as an example: when the task runs, ESM is asked for the list of users that have this task's id in the Task Id mapping field and do not have the personTemplateStatusAttributeDisabledValueKey value in the personTemplateStatusCodeAttributeKey attribute.
This result is then compared to what the task fetched, and the datacards of users that were not fetched have their personTemplateStatus attribute set to the value specified in the configuration (5 - Removed by default).
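The comparison step can be sketched like this. This is a simplified model of the behaviour described above, not the actual EPE code; inputs are assumed to be sets of datacard identifiers:

```python
def mark_removed(esm_active_ids: set, fetched_ids: set,
                 disabled_value: str = "5 - Removed") -> dict:
    """Return the status updates to push to ESM: every datacard this task
    owns in ESM that was not present in the latest full fetch gets the
    configured 'disabled' status value."""
    return {uid: disabled_value for uid in esm_active_ids - fetched_ids}
```

For example, if ESM lists users a, b and c for this task but the fetch returned only a and c, user b would be marked with the configured removed status.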
The example log below shows the described process and indicates that one user was removed.

The same applies to groups, but the groupTemplateStatus attributes are used instead.
Notes
- The feature works only with Full type scheduled tasks.
- No support for generic templates yet; only identity and access.
- When migrating from previous versions where the datasourceid was still used, the task needs to run at least once to set its task id in the datacards first.
- EPE identifies disabled users or groups as ones that were removed from the AD; at present, statuses related to the entity being active or not are not supported.
- EPE does not enable users back on its own.
- If more than one task fetches the same users or groups, the task id in the datacard may be overwritten depending on which task ran last. It is recommended that multiple Full type tasks do not fetch the same user or group.
- Always make configuration file changes in custom.properties; do not change only application.properties, as those changes are lost on container reboot if the same changes are not in custom.properties.
Create Jira connector
1. Open the Efecte Administration area (a cogwheel symbol) in the upper right corner.
2. Open the Connectors view.

3. Create a new Jira connector (if you don't already have one)
- The Jira connector host URL should be in the format: https://instancename.atlassian.net
- Remember to set all attributes before clicking "Test connection", including the WebAPI user and its password. If you don't see a test connection success message popup in the UI (or you see an error popup), check the EPE master logs for the error.
- The user name is the user who created the access token (Jira Service Accounts are not supported).
- Access tokens are valid for at most one year (a Jira-side limitation), so document when the token will expire, so that you remember to create a new token and configure it in the Jira connection before it expires. For more information see: https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/
- The access token value is the Jira user's API token.

4. Save the connector.
5. Open the connector again and click Test connection.
6. When the connection is successful, you can continue to creating a Scheduled task and/or an Event-based task.
Step-by-Step Instructions to create Scheduled task
1. Choose the Atlassian Jira type connector from the overview; under Scheduled-based tasks, click "+ New Task".
2. Choose the scheduling sequence, which depends on how much data is read from Jira. Fill in a unique name for the provisioning task and the Jira query.
- It is best practice to test your Jira query (JQL) in the actual Jira system before using it in the task configuration.
See also the Jira documentation:
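As an illustration, a JQL query of the following shape limits the import to a single project and to recently updated issues (the project key ITSM and the 7-day window are placeholders):

```
project = ITSM AND updated >= -7d ORDER BY updated ASC
```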
3. Fill in the attribute mapping section. Note! The content differs depending on the value selected in the Mappings type field.

4. Save the provisioning task using the Save button.
5. You have now configured a scheduled provisioning task. You can click "Run task" to run it immediately if needed, or wait for it to run automatically according to your schedule.

6. Whether the task is executed manually ("Run task" clicked) or runs according to its schedule, the task status can be reviewed under View history:

Step-by-Step Instructions to create Event-based task
1. Open the Efecte Administration area (a cogwheel symbol) in the upper right corner.

2. Open the Connectors view.

3. Choose the Atlassian Jira connector from the overview, then under Event-based tasks click the + New Task button.
4. Fill in a unique name for the provisioning task.
- The mapping type needs to be Generic Template when handling object types other than users and groups (for example tickets).

5. Fill in the attribute mapping section. Note! The content differs depending on the value selected in the Mappings type field.
- Additional attributes can be set by choosing "New Attribute", writing the attribute name, and accepting it by clicking "Add item". This is very useful if you have added your own custom attributes to Jira.

- It is possible to define which attribute information is written to Jira.
- There can be several provisioning tasks for different purposes towards Jira.
- To add a Jira ticket, you must set at least issueType, project and summary; depending on your Jira configuration, possibly other attributes as well. Confirm the needed attributes from the Jira system.
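For reference, the minimal attribute set above corresponds to a Jira issue payload of roughly this shape (the project key and issue type are placeholders; your Jira configuration may require additional fields):

```json
{
  "fields": {
    "project": { "key": "ITSM" },
    "issuetype": { "name": "Task" },
    "summary": "Example summary from the ESM ticket"
  }
}
```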

6. Save the provisioning task using the Save button.
7. The next step is to configure a workflow to use this event-based task. From the workflow engine in the Efecte Service Management platform, it is possible to execute provisioning activities towards Jira. This means that any of the available activities can be run at any point of the workflow using an "Orchestration" node.
Example of Orchestration node settings for Jira: