This blog discusses options for keeping a local copy of Calem data up to date via the Calem REST API. The use case is as follows:

  • A 3rd party app fetches asset, location, and work order records from Calem.
  • The fetched data is stored in a local database at the 3rd party app.
  • The app needs to fetch data from Calem periodically to keep its local copy up to date with Calem.

Three options are discussed below. It is recommended that you implement Option 2 or 3 if the amount of data you fetch from Calem is significant (tens of thousands of records). You may get by with Option 1 if the number of records fetched is small (less than a few thousand).


Option 1. Fetch All Data from Calem

The first option is to use a task scheduler, such as crontab in Linux, to fetch all data from Calem periodically. For instance, a cron task is launched every 6 minutes to fetch asset, location, and work order records from Calem.
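For example, a crontab entry along the following lines could launch a fetch script every 6 minutes; the PHP binary and script paths here are placeholders for your own environment.

# Run the fetch script every 6 minutes (paths are placeholders)
*/6 * * * * /usr/bin/php /path/to/fetch_calem_data.php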

Calem enforces a maximum limit on the number of rows returned by an API fetch call on any table. The limit allows Calem to perform consistently and prevents a client from extracting tens of thousands of records in one API call, which could put excessive stress on network and server resources.

  • "fetch_max_rows" is the maximum number of rows to return by a search API. This value must be greater than 0. The default is 100 rows. Caution should be used to bump up this value.
    • The value may be bumped up to 500 rows by contacting Calem Support Team.
  • Calem has an internal max limit of 3000 rows for one search API invocation.

You need to code a loop that fetches all data in pages based on the configuration above, using the limit clause of the API search (parameter "l" in the API). Here is a demonstration of the loop in pseudo PHP code.



//The configuration value of "fetch_max_rows"
$pduSize = 100;
//Start from the first row matching the query criteria
$offset = 0;
while (true) {
    //Fetch data starting at $offset, requesting up to $pduSize rows
    Call API search with parameter "l" = "$offset, $pduSize"
    //Store the fetched rows in the local database
    //If fewer than $pduSize rows were returned, all rows have been fetched
    if (number of rows fetched < $pduSize) break;
    //Advance $offset by $pduSize to fetch the next page
    $offset += $pduSize;
}
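For a more concrete picture, here is a minimal runnable sketch of the same loop in PHP, wrapped in a reusable function. The calemSearch() helper and the $storeRows callback are hypothetical stand-ins for your own HTTP client code and local database writes; they are not part of the Calem API.

//Minimal sketch of the fetch-all loop. calemSearch($table, $where, $limit)
//is a hypothetical helper that calls the Calem search API and returns an
//array of rows; $storeRows is a hypothetical callback that writes a page
//of rows to the local database.
function fetchAllPages(string $table, string $where, callable $storeRows, int $pduSize = 100): void {
    //Start from the first row matching the query criteria
    $offset = 0;
    while (true) {
        //"l" is the limit clause of the search API: "offset, count"
        $rows = calemSearch($table, $where, "$offset, $pduSize");
        //Write this page of rows to the local database
        $storeRows($rows);
        //A short page means all matching rows have been fetched
        if (count($rows) < $pduSize) {
            break;
        }
        //Advance to the next page
        $offset += $pduSize;
    }
}

For example, fetchAllPages('cm_wo', $w, $storeRows) would page through all work orders matching the where clause $w. The page size is a parameter so it can match your configured "fetch_max_rows" value.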
 

Option 2. Fetch Data by Changes Periodically

Option 1 above fetches all data needed periodically (such as every 6 minutes). It has disadvantages:

  • The number of records fetched from Calem could be in the tens of thousands.
  • It places a recurring load on network resources and on the Calem databases.

There are ways to improve on Option 1. One is to take advantage of the bookkeeping fields kept for each record in Calem (Calem Offline is built on those fields).

  • Each object record in Calem keeps track of who changed it and when. Each object record includes the timestamps "Last updated" (field name: modified_time) and "Time created" (field name: created_time), as illustrated by the example queries after this list.
    • When an object record is created, both "Last updated" and "Time created" are set to the current timestamp.
    • When an object record is changed, "Last updated" is set to the current timestamp.
  • Objects deleted in Calem are stored in the system "Recycle-bin" table.
    • When an object is deleted (if there is no object dependency), it is inserted into the recycle-bin with "Time created" set to the timestamp of the deletion.
    • You may utilize the status fields in Calem to manage object lifecycles instead of deleting objects physically. In that case, you may skip computing data deletions from the Calem recycle-bin.
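For instance, using the where-clause format of the search API (the same format as the SyncAnchor query examples below), the two bookkeeping timestamps let you distinguish records created in a time window from records that were merely updated in it. The cm_wo table and time range here simply reuse the values from the examples below.

//Work orders created in the window (new records)
$w = "cm_wo.created_time between '2020-08-07 16:42:30' and '2020-08-07 16:50:30'";
//Work orders created or updated in the window (all changes)
$w = "cm_wo.modified_time between '2020-08-07 16:42:30' and '2020-08-07 16:50:30'";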


We will use "SyncAnchor" to indicate the timestamp of the local data. It is object specific. For instance, you have one SyncAnchor each for asset, location, and work order.

  • A SyncAnchor is persisted per object: for each object (asset, location, work order, etc.) you store a SyncAnchor in your local database to keep track of how current your local data is.
  • Use a GMT timestamp in the format 'yyyy-mm-dd H:i:s' for a SyncAnchor (see the PHP sketch after this list). For instance, the SyncAnchor for the current GMT time of 16:50:30 on August 7, 2020 is the string '2020-08-07 16:50:30', which can be used in a search query of the Calem REST API.
  • If a SyncAnchor is not found, use the beginning timestamp of MySQL: '1970-01-01 00:00:01'. Note that single quotes are required around timestamps in MySQL queries.
  • The expression to fetch the data changes since the last API fetch is a between clause from the last SyncAnchor to the current timestamp.
    • For instance, if work orders are fetched for the first time, the query may be "cm_wo.modified_time between '1970-01-01 00:00:01' and '2020-08-07 16:50:30'".
    • If a SyncAnchor exists, such as '2020-08-07 16:42:30', the query includes only the changes made in the last 6 minutes: "cm_wo.modified_time between '2020-08-07 16:42:30' and '2020-08-07 16:50:30'".
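As a small PHP sketch: PHP's built-in gmdate() formats the current time in GMT, and the $savedAnchor variable below is a hypothetical value read from your local database.

//Current GMT time in the 'yyyy-mm-dd H:i:s' format, e.g. '2020-08-07 16:50:30'
$SyncAnchor2 = gmdate('Y-m-d H:i:s');
//Fall back to the MySQL beginning timestamp if no anchor has been saved yet
//($savedAnchor is a hypothetical value loaded from your local database)
$SyncAnchor1 = $savedAnchor ?? '1970-01-01 00:00:01';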


Here is the flow to fetch data by changes based on the SyncAnchor. This process handles both the initial fetch and subsequent incremental fetches.



//Check the SyncAnchor for an object
Get the SyncAnchor from the database for the object as $SyncAnchor1
If $SyncAnchor1 does not exist, set it to the beginning of time ('1970-01-01 00:00:01')
Set $SyncAnchor2 to the current timestamp (such as '2020-08-07 16:50:30')
Set the query string to fetch the changes that happened in the time range from $SyncAnchor1 to $SyncAnchor2.
For instance, API search parameter:
$w="cm_wo.modified_time between '2020-08-07 16:42:30' and '2020-08-07 16:50:30'"

//The configuration value of "fetch_max_rows"
$pduSize = 100;
//Start from the first row matching the query criteria
$offset = 0;
while (true) {
    //Fetch data starting at $offset, requesting up to $pduSize rows
    Call API search with parameter "l" = "$offset, $pduSize"
    //Store the fetched rows in the local database
    //If fewer than $pduSize rows were returned, all rows have been fetched
    if (number of rows fetched < $pduSize) break;
    //Advance $offset by $pduSize to fetch the next page
    $offset += $pduSize;
}
//Important: advance the SyncAnchor only after all pages have been fetched
Set $SyncAnchor2 as the new SyncAnchor and save it to the database for the object (such as "cm_wo")
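Put together, a minimal PHP sketch of this flow for work orders might look like the following. It reuses the hypothetical fetchAllPages() helper sketched in Option 1; loadSyncAnchor(), saveSyncAnchor(), and $storeRows are likewise hypothetical helpers backed by your own client and local database code.

//Minimal sketch of an incremental fetch for work orders ("cm_wo").
//loadSyncAnchor(), saveSyncAnchor(), fetchAllPages(), and $storeRows are
//hypothetical helpers; adapt them to your own client and database code.
$SyncAnchor1 = loadSyncAnchor('cm_wo') ?? '1970-01-01 00:00:01'; //beginning of time if none saved
$SyncAnchor2 = gmdate('Y-m-d H:i:s');                            //current GMT timestamp

//Fetch only the rows changed between the two anchors
$w = "cm_wo.modified_time between '$SyncAnchor1' and '$SyncAnchor2'";
fetchAllPages('cm_wo', $w, $storeRows);

//Advance the anchor only after all pages have been fetched successfully
saveSyncAnchor('cm_wo', $SyncAnchor2);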

 

Option 3. Fetch Data by Changes Periodically with a Fetch-All

It is possible to modify Option 2 to fetch data by changes with a fetch-all once in a while. This is done by adding logic to periodically reset the SyncAnchor. We present an option below that performs a daily fetch-all at midnight. You may add your own logic for when to fetch all data, such as after every 100 fetches by changes.

  • Assume you have a scheduled task (such as a cron task) that fetches data by changes periodically, such as every 6 minutes.
  • In each fetch task, get $SyncAnchor1 and $SyncAnchor2 and compare the day values of the two timestamps. If they are different, $SyncAnchor1 is before midnight and $SyncAnchor2 is after midnight; set $SyncAnchor1 to the beginning timestamp so that all data is fetched (see the day-comparison sketch after this list).
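A simple way to compare the day values in PHP is to compare the date portion (the first 10 characters) of the two anchor strings; here is a minimal sketch of that check, using the same anchor names as the flow below.

//Daily fetch-all: if the two anchors fall on different GMT days
//(i.e., midnight passed between them), reset the first anchor so the
//between clause covers all data.
if (substr($SyncAnchor1, 0, 10) !== substr($SyncAnchor2, 0, 10)) {
    $SyncAnchor1 = '1970-01-01 00:00:01'; //beginning timestamp: fetch all
}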


//Check a SyncAnchor for an object
Get SyncAnchor from the database for the object as $SyncAnchor1
If $SyncAnchor1 does not exist, set it to be the beginning of time ('1970-01-01 00:00:01')
Set $SyncAnchor2 to be the current timestamp (such as '2020-08-07 16:50:30');

//Logic to do daily fetch-all
If $SyncAnchor1 and $SyncAnchor2 have different day values,
set $SyncAnchor1 to the beginning timestamp ('1970-01-01 00:00:01').
A fetch-all will then be performed.

Set the query string to fetch the changes that happened in the time range from $SyncAnchor1 to $SyncAnchor2.
For instance, API search parameter:
$w="cm_wo.modified_time between '2020-08-07 16:42:30' and '2020-08-07 16:50:30'"

//The configuration value of "fetch_max_rows"
$pduSize = 100;
//Start from the first row matching the query criteria
$offset = 0;
while (true) {
    //Fetch data starting at $offset, requesting up to $pduSize rows
    Call API search with parameter "l" = "$offset, $pduSize"
    //Store the fetched rows in the local database
    //If fewer than $pduSize rows were returned, all rows have been fetched
    if (number of rows fetched < $pduSize) break;
    //Advance $offset by $pduSize to fetch the next page
    $offset += $pduSize;
}
//Important: advance the SyncAnchor only after all pages have been fetched
Set $SyncAnchor2 as the new SyncAnchor and save it to the database for the object (such as "cm_wo")