Oracle SaaS APIs share a common structure for all the “business entities” across the different modules (Finance, Loyalty, Supply Chain, …), such as invoices, expenses, loyalty transactions and so on. Let’s analyze the pattern. In this episode we focus on the “Get all xxx” methods, because they follow a pattern that we’ll use shortly in a construct we are working on: a tool that extracts and publishes data changes in near real time.
The HTTP verb for all of them is GET.
The URL is <fqdn>/<endpoint>, where
<fqdn> is the server URL, such as https://iiii-eeee.fa.em2.oraclecloud.com
<endpoint> is the API endpoint given in the documentation, such as /fscmRestApi/resources/11.13.18.05/invoices
If you execute the API call, data exists, the conditions are valid, and the connecting user has read privileges, a JSON structure is returned with up to the number of records set by the limit parameter (default 25). If more records exist, this is flagged in the hasMore field of the response.
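A response to such a call might look like the following sketch. The field values are illustrative, but items, hasMore, limit, offset, count and links are the standard envelope fields in these responses:

```json
{
  "items": [
    { "InvoiceId": 300100123456789, "InvoiceNumber": "INV-1001", "InvoiceAmount": 1500 }
  ],
  "hasMore": true,
  "limit": 25,
  "offset": 0,
  "count": 25,
  "links": [
    { "rel": "self", "href": "https://iiii-eeee.fa.em2.oraclecloud.com/fscmRestApi/resources/11.13.18.05/invoices" }
  ]
}
```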
The most relevant query parameters are:
expand: includes child records of the entity; you can specify which children to include, or all, for example ?expand=invoiceLines (tip: if onlyData is false, the default, the links give you the names of the child entities)
fields: If you don't want all the fields in the response, include the names of the fields expected
limit: maximum number of records per call execution
offset: the zero-based index of the first record to retrieve, relative to the full result set. For example, if limit is 5 and you want records 6 to 10, offset must be set to 5
onlyData: by default the response JSON includes links to child objects; setting this value to true leaves them out
orderBy: fields and order criteria for retrieving data such as ?orderBy=InvoiceId:asc,SupplierSite:desc
q: the filter condition to get data such as ?q=InvoiceAmount>1000
totalResults: false by default, so the total number of matching records is not evaluated; if true, they are counted, at the cost of extra computation and probably a slower response
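Putting the parameters together, the query string can be built and percent-encoded in plain Python. This is a sketch under the assumptions of the examples above (the host and endpoint are the sample values); note that characters such as > in the q filter must be percent-encoded on the wire:

```python
from urllib.parse import urlencode

# Assumed base URL and endpoint, taken from the examples above.
BASE = "https://iiii-eeee.fa.em2.oraclecloud.com"
ENDPOINT = "/fscmRestApi/resources/11.13.18.05/invoices"

def build_url(params: dict) -> str:
    """Build a 'Get all' URL, percent-encoding the parameter values."""
    return f"{BASE}{ENDPOINT}?{urlencode(params)}"

url = build_url({
    "limit": 5,
    "offset": 0,
    "totalResults": "true",
    "q": "InvoiceAmount>1000",  # the '>' travels on the wire as %3E
    "expand": "invoiceLines",
    "onlyData": "true",
})
print(url)
```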
In the following example we request invoices and their lines with an amount greater than 1000, indicating that we also want to know how many invoices meet the criteria:
curl -X GET -k -H 'Content-Type: application/vnd.oracle.adf.resourceitem+json' -u firstname.lastname@example.org:password "https://iiii-eeee.fa.em2.oraclecloud.com/fscmRestApi/resources/11.13.18.05/invoices?limit=5&totalResults=true&q=InvoiceAmount%3E1000&expand=invoiceLines&onlyData=true"
As we mentioned in a previous post, the idea here is to develop a construct that looks for changes in the SaaS data and publishes those changes to a stream for later consumption by other systems.
Indeed, as we see in the diagram, there is a block in which we execute a program that runs loops of “get all” requests against a list of API endpoints configured through the administration user interface. The logic is:
wake up every x seconds
for each endpoint registered
get parameters such as conditions, last successful execution time, …
execute GET requests in a loop, in chunks of N records per call, until no more data is retrieved
put the data in the topic with the streaming API
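The steps above can be sketched as follows. This is a hedged outline, not the final program: fetch_page and publish are hypothetical stand-ins for the real REST call and the streaming-API producer, and fetch_page is simulated here so the pagination logic (offset advancing by limit until hasMore is false) can be seen in isolation:

```python
LIMIT = 5  # records per call ("chunks of N records")

def fetch_page(endpoint: str, offset: int, limit: int) -> dict:
    """Stand-in for the real GET call; simulates a source with 12 records."""
    total = 12
    items = [{"id": i} for i in range(offset, min(offset + limit, total))]
    return {"items": items, "hasMore": offset + limit < total}

def publish(records: list) -> None:
    """Stand-in for the streaming-API producer that puts data in the topic."""
    print(f"published {len(records)} records")

def drain_endpoint(endpoint: str) -> int:
    """Loop 'Get all' calls, advancing offset by limit, until hasMore is false."""
    offset = 0
    fetched = 0
    while True:
        page = fetch_page(endpoint, offset, LIMIT)
        publish(page["items"])
        fetched += len(page["items"])
        if not page["hasMore"]:
            return fetched
        offset += LIMIT

total = drain_endpoint("/fscmRestApi/resources/11.13.18.05/invoices")
# In the real construct this runs inside the scheduler:
# wake up every x seconds and call drain_endpoint for each registered endpoint.
```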
As we mentioned in this post, the SaaS REST APIs share a common pattern, so it’s easy to create a program that executes the logic above, package it in a container image, and deploy it on K8s; we’ll show an example in an upcoming post.
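As a rough sketch of the packaging step, a minimal container image for such a Python extractor could look like this (the base image and file names are assumptions, not the final build):

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY extractor.py .
CMD ["python", "extractor.py"]
```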
Robotic Process Automation (RPA) is a technology that automates the human-machine dialog.
How many of you have seen end users working with several User Interface apps “at the same time”?
The lack of integration between our systems and apps stems from several causes: failed IT implementations; niche, legacy or obsolete solutions; no APIs or data-interchange mechanisms available; and so on.
RPA can help us solve the problem, because what it basically does is interact with the applications’ UIs, giving us the ability to capture the data and put it in a central, “API-fied” repository that the rest of the ecosystem can then use.