Scheduled Export Workflow
Each scheduled export exists as a process instance. When a schedule is created, an instance starts and then pauses at an event gateway. Execution proceeds from the gateway when one of three message events is received (see the sketch after this list):
- startExport - sent by the goss.DataExporter.schedule.processSchedules and goss.DataExporter.schedule.processNextSchedule End Points (see below)
- editExport - sent by the scheduling form when an export schedule is edited. This updates the relevant process variables
- cancelExport - sent by the scheduling form when an export schedule is deleted. This terminates the process instance
Instances can be started by users in the DATAEXPORTER group. Instances are processed one at a time, managed via the End Points described below.
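To make the gateway behaviour concrete, here is a minimal sketch of signalling a paused instance. The `sendWorkflowMessage` helper, its signature, and the business key value are assumptions for illustration only; they are not part of the Data Exporter or iCM API.

```typescript
// Hypothetical sketch only: correlating a named message event to a paused
// process instance by business key. `sendWorkflowMessage` is an assumed
// helper, not a documented Data Exporter or iCM function.
type ExportMessage = "startExport" | "editExport" | "cancelExport";

async function sendWorkflowMessage(
  name: ExportMessage,
  businessKey: string, // identifies the paused process instance
  variables?: Record<string, unknown>
): Promise<void> {
  // Stub: a real implementation would correlate the message via the
  // workflow engine. Logged here so the sketch runs standalone.
  console.log(`message ${name} -> instance ${businessKey}`, variables ?? {});
}

// Editing a schedule updates the relevant process variables...
await sendWorkflowMessage("editExport", "example-schedule-key", {
  scheduleFrequency: "DAILY",
});

// ...while deleting a schedule terminates the process instance.
await sendWorkflowMessage("cancelExport", "example-schedule-key");
```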
Scheduled Task
In the scheduled tasks area of iCM Management, create a schedule to run the goss.DataExporter.schedule.processSchedules End Point. This schedule should run once a day, at around 02:00. Make sure that only a single schedule exists and that it does not run more frequently than once a day.
Your scheduled task need not pass any parameters to the End Point.
End Point and Workflow Interaction
As described above, a single End Point, goss.DataExporter.schedule.processSchedules, runs every day in the early hours of the morning. This End Point builds an array of the business keys of all of the scheduled export workflows that need to be processed and passes it to the first process instance in the list.
The process instance calls the goss.DataExporter.main End Point, which manages the export, file creation and processing. Once this End Point has returned (i.e. the configured export is complete), the goss.DataExporter.schedule.processNextSchedule End Point is called. This removes the current instance's business key from the array of business keys and calls the next instance in the list, starting the loop again.
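The loop can be pictured as below. This is an illustrative sketch only: the helper names and the way the pending keys travel with the message are assumptions, not the product implementation.

```typescript
// Illustrative sketch of the processSchedules / processNextSchedule loop.
// Both helpers below are assumptions for the example, not the product API.
async function findDueScheduleKeys(): Promise<string[]> {
  return ["schedule-a", "schedule-b"]; // stub: would query due schedules
}

async function sendWorkflowMessage(
  name: string,
  businessKey: string,
  variables?: Record<string, unknown>
): Promise<void> {
  console.log(`message ${name} -> instance ${businessKey}`, variables ?? {});
}

// processSchedules: run once a day by the scheduled task.
async function processSchedules(): Promise<void> {
  const keys = await findDueScheduleKeys();
  if (keys.length > 0) {
    // startExport goes to the first instance; the full list travels with
    // the message so the chain can continue when this export finishes.
    await sendWorkflowMessage("startExport", keys[0], { pending: keys });
  }
}

// processNextSchedule: called once goss.DataExporter.main has returned.
// It drops the key just processed and starts the next instance, if any.
async function processNextSchedule(pending: string[]): Promise<void> {
  const remaining = pending.slice(1);
  if (remaining.length > 0) {
    await sendWorkflowMessage("startExport", remaining[0], { pending: remaining });
  }
}

await processSchedules();
```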
Constants
The goss.DataExporter.helpers.constants End Point defines values used throughout the solution.
| Property | Type | Description |
|---|---|---|
| operations | Object | The available exporters (the ways a generated file can be sent) |
| fileTypes | Object | The available file generators |
| dateFormats | Object | The date formats used throughout the solution |
| frequencies | Array of objects | Each object in this array sets an option in the scheduling form |
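As an illustration only, the returned object might take a shape like the following. Every value shown is an assumption inferred from the operations, file types and frequency field mentioned elsewhere on this page; check the End Point itself for the real definitions.

```typescript
// Hypothetical shape of the goss.DataExporter.helpers.constants values.
// Everything here is an assumption for illustration: EMAIL appears in the
// audit example below, and CSV/JSON are the supported file types.
const constants = {
  operations: {
    EMAIL: "EMAIL", // send the generated file by email
    SFTP: "SFTP",   // assumed: the audit event includes an sftpDetailsKey
  },
  fileTypes: {
    CSV: "CSV",
    JSON: "JSON",
  },
  dateFormats: {
    timestamp: "YYYY-MM-DDTHH:mm:ss.SSSZ", // assumed example format string
  },
  frequencies: [
    // each object becomes an option in the scheduling form (assumed shape)
    { value: "DAILY", label: "Every day" },
    { value: "WEEKLY", label: "Every week" },
  ],
};
```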
Error Logging
Errors are logged to "DataExporterErrors" in the API server log directory. If an error occurs during an export, the scheduler will move on to the next export in the list.
History Logging
The data exporter maintains two histories for each schedule.
Config
This history holds the configuration of the export schedule. The business key matches that of the schedule. Each time the configuration is updated a new event is added to the history; the most recent event holds the current configuration.
```json
{
  "labela": "_DataExporter",
  "labelb": "businesskey",
  "labelc": "configs"
}
```
Audit
An audit history is written for each schedule. The business key matches that of the schedule. An event is added to it each time the schedule runs.
```json
{
  "labela": "_DataExporter",
  "labelb": "businesskey",
  "labelc": "reporting_audit"
}
```
Each event in this history records when the schedule ran and where the data was sent. This example shows data being sent by email:
```json
{
  "scheduleFrequency": "",
  "sftpDetailsKey": null,
  "operations": ["EMAIL"],
  "recipients": [{
    "address": "support@gossinteractive.com",
    "name": "Support"
  }],
  "description": "File Export Processed",
  "from": "Run",
  "dataSource": "goss.DataExporter.dataSources.mockedDataForCSV",
  "user": "SUPPORT",
  "timestamp": "2024-07-10T13:31:02.651Z"
}
```
File Generators
CSV Files
There is a hard limit of 100,000 rows for CSV data. If the datasource returns a data set larger than this, the exporter will generate an error when it tries to create the CSV file.
When a CSV is created the file generator must be passed two arrays. The first array, headerRow, lists the column names in the order they should appear; the second, data, holds one object per row, keyed by those column names (see the datasource example below).
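As a sketch of that two-array contract (not the actual goss.DataExporter.fileGenerators implementation), a CSV generator might look like this:

```typescript
// Illustrative sketch of the headerRow/data contract described above,
// not the goss.DataExporter.fileGenerators implementation.
const MAX_ROWS = 100_000; // the documented hard limit

function toCsv(headerRow: string[], data: Record<string, unknown>[]): string {
  if (data.length > MAX_ROWS) {
    throw new Error(`CSV export limited to ${MAX_ROWS} rows, got ${data.length}`);
  }
  // Quote every value and escape embedded quotes (RFC 4180 style).
  const quote = (v: unknown) => `"${String(v ?? "").replace(/"/g, '""')}"`;
  const lines = [
    headerRow.map(quote).join(","),
    ...data.map((row) => headerRow.map((col) => quote(row[col])).join(",")),
  ];
  return lines.join("\r\n");
}
```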
JSON Files
The JSON file will be an array of objects. The array is limited to 100,000 elements and an error will be generated if a larger data set is exported.
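The equivalent behaviour for JSON output can be sketched in a few lines; again, this is illustrative rather than the shipped generator:

```typescript
// Sketch only: serialise an array of row objects, enforcing the
// documented 100,000-element limit.
function toJsonFile(data: Record<string, unknown>[]): string {
  if (data.length > 100_000) {
    throw new Error(`JSON export limited to 100,000 elements, got ${data.length}`);
  }
  return JSON.stringify(data, null, 2);
}
```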
Datasource End Point
The datasource End Points are responsible for extracting data from the iCM database in a format expected by the file generators. In this release only JSON and CSV files are supported.
Here's the result from a datasource End Point that retrieves data from the Registrars product. It has the required CSV header row and a single row of data representing one appointment.
"result": {
"headerRow": ["instancestatus", "enddatetime", "locationbookableid", "name", "bookableid", "appointment", "ticketstatus", "type", "instancebk", "startdatetime", "assignedTo", "status"],
"data": [{
"instancestatus": "Active",
"enddatetime": "2019-06-20T15:00:00Z",
"locationbookableid": 1444,
"name": "Ross",
"bookableid": 1449,
"appointment": "Notice of Marriage",
"ticketstatus": "Reserved",
"type": "SB",
"instancebk": "8376-7234-6416-2568",
"startdatetime": "2019-06-20T14:00:00Z",
"assignedTo": "",
"status": "CREATED"
}]
}
If you are writing a new datasource End Point, it must be created in the goss.DataExporter.dataSources namespace for it to appear in the scheduling form.
Your End Point should return data in the format expected by the file type you intend to create. Take a look at the schemas of the End Points in goss.DataExporter.fileGenerators to see what each expects.
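As a minimal sketch, a new CSV-oriented datasource might be shaped like this. Only the headerRow/data return shape and the namespace requirement come from this page; the query helper and the table it reads are invented for illustration.

```typescript
// Hypothetical datasource sketch. Only the headerRow/data return shape and
// the goss.DataExporter.dataSources namespace requirement come from this
// page; the query helper and table are invented for illustration.
async function queryIcmDatabase(sql: string): Promise<Record<string, unknown>[]> {
  // Stub standing in for a real database call.
  return [{ name: "Ross", startdatetime: "2019-06-20T14:00:00Z", status: "CREATED" }];
}

// Intended to be created in the goss.DataExporter.dataSources namespace.
export async function exampleAppointments(): Promise<{
  headerRow: string[];
  data: Record<string, unknown>[];
}> {
  const rows = await queryIcmDatabase(
    "SELECT name, startdatetime, status FROM appointments"
  );
  return {
    // column order for the generated CSV file
    headerRow: ["name", "startdatetime", "status"],
    // one object per row, keyed by the header names
    data: rows,
  };
}
```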