
Technical Notes

Scheduled Export Workflow

Each scheduled export exists as a process instance. When a schedule is created, an instance starts and then pauses at an event gateway. Execution proceeds from the gateway when one of three message events is received:

  • startExport - sent by the goss.DataExporter.schedule.processSchedules and goss.DataExporter.schedule.processNextSchedule End Points (see below)
  • editExport - sent by the scheduling form when an export schedule is edited. This updates the relevant process variables
  • cancelExport - sent by the scheduling form when an export schedule is deleted. This terminates the process instance

Instances can be started by users in the DATAEXPORTER group. They are processed one at a time, managed via the End Points described below.
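
As a rough sketch, the three messages might be correlated to a paused instance like this. The sendMessage helper and its arguments are assumptions for illustration; only the message names come from this workflow:

// Hypothetical helper that correlates a message event to a schedule's
// process instance, identified by its business key
sendMessage("startExport", businessKey, { businessKeys: keys }); // resume the instance and run the export
sendMessage("editExport", businessKey, updatedScheduleConfig);   // update the instance's process variables
sendMessage("cancelExport", businessKey);                        // terminate the process instance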

Scheduled Task

In the scheduled tasks area of iCM Management, create a schedule to run the goss.DataExporter.schedule.processSchedules End Point. This schedule should run once a day, at around 02:00. Make sure that only a single schedule exists and that it does not run more frequently than once a day.

Your scheduled task need not pass any parameters to the End Point.

End Point and Workflow Interaction

As described above, a single End Point, goss.DataExporter.schedule.processSchedules, runs every day in the early hours of the morning. It builds an array of the business keys of all scheduled export workflows due to be processed and passes it to the first process instance in the list.

The process instance calls the goss.DataExporter.main End Point, which manages the export, file creation and processing. Once this End Point has returned (i.e. the configured export is complete), the goss.DataExporter.schedule.processNextSchedule End Point is called. It removes the current instance's business key from the array of business keys and calls the next instance in the list, starting the loop again.
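
A minimal sketch of this loop, assuming hypothetical helper names (only the End Point names and the business key array come from this page):

// Hypothetical outline of goss.DataExporter.schedule.processNextSchedule
function processNextSchedule(businessKeys) {
    businessKeys.shift(); // remove the schedule that has just completed
    if (businessKeys.length > 0) {
        // signal the next paused instance, which will call goss.DataExporter.main
        sendMessage("startExport", businessKeys[0], { businessKeys: businessKeys });
    }
}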

Constants

The goss.DataExporter.helpers.constants End Point defines values used throughout the solution.

  • operations (Object) - the available exporters:

operations: {
    email: "EMAIL",
    sftp: "SFTP"
}

  • fileTypes (Object) - the available file generators:

fileTypes: {
    csv: "CSV",
    json: "JSON"
}

  • dateFormats (Object):

dateFormats: {
    iso: "YYYY-MM-DD",
    tz: "Europe/London"
}

  • frequencies (Array of objects) - each object in this array sets an option in the scheduling form. addUnit and addPhrase correspond to the number and the duration key of the moment.js add method:

{
    name: "Daily",
    addUnit: 1,
    addPhrase: "days"
}
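
For example, the scheduler could derive a schedule's next run date from one of the frequency options using moment.js. This is a sketch; how moment is made available to an End Point is assumed here:

var moment = require("moment");

// One of the frequency options defined in goss.DataExporter.helpers.constants
var frequency = { name: "Daily", addUnit: 1, addPhrase: "days" };

// moment's add method takes a number and a duration key, e.g. add(1, "days")
var nextRun = moment().add(frequency.addUnit, frequency.addPhrase);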

Error Logging

Errors are logged to "DataExporterErrors" in the API server log directory. If an error occurs during an export, the scheduler will move on to the next export in the list.
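
In outline, each export is attempted independently, so a failure is logged and the run continues. The invocation and logging helpers below are assumptions; the log name comes from this page:

// Hypothetical sketch: a failed export is logged, then the next schedule runs
try {
    callEndPoint("goss.DataExporter.main", { businessKey: businessKeys[0] });
} catch (error) {
    logError("DataExporterErrors", error.message);
}
callEndPoint("goss.DataExporter.schedule.processNextSchedule", { businessKeys: businessKeys });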

History Logging

The data exporter maintains two histories for each schedule.

Config

{
    "labela": "_DataExporter",
    "labelb": "businesskey",
    "labelc": "configs"
}
This history holds the configuration of the export schedule. Each time the configuration is updated a new event is added to the history, with the most recent being used.

Audit

{
    "labela": "_DataExporter",
    "labelb": "businesskey",
    "labelc": "audit"
}

Each event in this history records the following:

{
    "startingUser": "anonymous",
    "operations": ["\\\"EMAIL\\\""],
    "description": "File Export Processed",
    "dataSource": "goss.DataExporter.dataSources.mockedDataForCSV",
    "timestamp": "2019-06-20T10:52:08.026Z"
}
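
As an illustration, the two label sets could be used to fetch a schedule's histories and pick out its current configuration. The getHistoryEvents helper is an assumption; the labels come from this page:

// Label sets for the two histories kept against each schedule's business key
var configLabels = { labela: "_DataExporter", labelb: businessKey, labelc: "configs" };
var auditLabels  = { labela: "_DataExporter", labelb: businessKey, labelc: "audit" };

// The most recent config event holds the schedule's current configuration
var configEvents = getHistoryEvents(configLabels); // assumed helper
var currentConfig = configEvents[configEvents.length - 1];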

File Generators

CSV Files

There is a hard limit of 100,000 rows for CSV data. If the datasource returns a data set larger than this, the exporter will generate an error when it tries to create the CSV file.

When a CSV is created the file generator must be passed two arrays. The first, headerRow, is an array of strings that forms the header row of the CSV file. The second, data, is an array of objects. Each object in the data array must contain a key for each element in the header row; if not, an error will be thrown.
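
The checks amount to something like the following sketch. The function is illustrative, not the generator's actual code:

// Illustrative validation matching the rules above: a 100,000 row limit,
// and every data object must contain a key for each header row entry
function validateCsvInput(headerRow, data) {
    if (data.length > 100000) {
        throw new Error("CSV exports are limited to 100,000 rows");
    }
    data.forEach(function (row) {
        headerRow.forEach(function (column) {
            if (!(column in row)) {
                throw new Error("Data row is missing the '" + column + "' key");
            }
        });
    });
}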

JSON Files

The JSON file will be an array of objects. The array is limited to 100,000 elements and an error will be generated if larger data sets are exported.

Datasource End Point

The datasource End Points are responsible for extracting data from the iCM database in a format expected by the file generators. In this release only JSON and CSV files are supported.

Here's the result from a datasource End Point that retrieves data from the Registrars product. It has the required CSV header row and a single row of data representing one appointment.

"result": {
    "headerRow": ["instancestatus", "enddatetime", "locationbookableid", "name", "bookableid", "appointment", "ticketstatus", "type", "instancebk", "startdatetime", "assignedTo", "status"],
    "data": [{
        "instancestatus": "Active",
        "enddatetime": "2019-06-20T15:00:00Z",
        "locationbookableid": 1444,
        "name": "Ross",
        "bookableid": 1449,
        "appointment": "Notice of Marriage",
        "ticketstatus": "Reserved",
        "type": "SB",
        "instancebk": "8376-7234-6416-2568",
        "startdatetime": "2019-06-20T14:00:00Z",
        "assignedTo": "",
        "status": "CREATED"
    }]
}

If you are writing a new datasource End Point, it must be created in the goss.DataExporter.dataSources namespace for it to appear in the scheduling form.

Your End Point should return data in a format relevant to the file type you are hoping to create. Take a look at the schemas of the End Points in goss.DataExporter.fileGenerators to see what each expects.
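
As a sketch, a minimal datasource End Point producing CSV-ready data might look like this. The function wrapper and names are assumptions; only the namespace and the headerRow/data return shape come from this page:

// Hypothetical goss.DataExporter.dataSources.myNewDataSource
// Returns the headerRow/data shape expected by the CSV file generator
function myNewDataSource() {
    return {
        headerRow: ["id", "name"],
        data: [
            { id: 1, name: "Example row" }
        ]
    };
}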

Last modified on 10 March 2023
