Cerbo provides an open API, which allows for outside (non-Cerbo) developers to build custom functionality or integrations that interact with Cerbo.
As with any development project that involves sensitive data, you'll want to make sure that your team is aware of best security practices and of the most efficient ways to retrieve specific data. The list below is not comprehensive, but it's worth reviewing before you start an API project.
- The API should only be enabled if you have an active application that needs to be connected. Do not request credentials until you're ready to start working with the API.
- API credentials are NOT the same as user credentials. You cannot use API credentials to log into the user interface of the EHR and vice versa.
- Requested API users should be granted only the minimum permissions necessary for the actions you expect them to perform. Common permission restrictions include:
  - Read-only access
  - Forcing an "anonymize data" flag for applications that are doing general data analysis (this will remove most identifying properties about patients when responding to requests)
  - Restricting access to only specific endpoints
- API credentials should be stored and transmitted securely. At no point should API credentials be stored outside of a secure environment and they should be shared only with parties that require them.
- Technical support requests should never include credential data in the ticket
- If your application server has a static IP address, you may request that the API connection be accessible only from that IP address
- The API uses HTTP Basic authentication, in which credentials are sent as a base64-encoded username:password string. Base64 is purely a data encoding and can be reversed at any time - do not assume that encoded credentials are any safer than plaintext credentials.
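A quick sketch of why base64 offers no secrecy (the username and password here are placeholders, not real credentials):

```python
import base64

# Encode placeholder credentials the way a Basic auth header does
credentials = "api_user:s3cret-password"
encoded = base64.b64encode(credentials.encode("utf-8")).decode("ascii")
print(encoded)   # YXBpX3VzZXI6czNjcmV0LXBhc3N3b3Jk

# Anyone who obtains the encoded string can reverse it instantly
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)   # api_user:s3cret-password
```

Treat an encoded credential string exactly as you would the plaintext password.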
- If you are using username:password@api-subdomain syntax to write API curl commands (rather than header-based authentication), ensure that you are not logging the raw commands in any way. In either case, ensure that commands are always processed server-side before results are rendered to the client application.
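As an illustration of the header-based alternative, the Authorization header can be built server-side so credentials never appear in a URL, shell history, or process list (the hostname and credentials below are placeholders, not real Cerbo values):

```python
import base64
import urllib.request

def basic_auth_header(username: str, password: str) -> dict:
    """Build a Basic auth Authorization header rather than embedding
    credentials in the request URL (placeholder credentials only)."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

headers = basic_auth_header("api_user", "s3cret-password")
# The URL itself now carries no credentials (hypothetical hostname):
request = urllib.request.Request("https://emr.example.invalid/api/v1/patients",
                                 headers=headers)
```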
- For web-based applications, ensure that credentials are stored in a way that cannot be compromised in the event of a breach (for instance, store them outside the public document root and include them only where necessary, or encrypt the credentials at rest and decrypt them only when making API calls).
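One common way to keep credentials out of the document root (and out of source control) is to read them from the server's environment at call time; the variable names here are hypothetical:

```python
import os

def load_api_credentials() -> tuple:
    """Read API credentials from environment variables so they never
    live in the web root or the repository (hypothetical variable names)."""
    user = os.environ.get("CERBO_API_USER")
    secret = os.environ.get("CERBO_API_SECRET")
    if not user or not secret:
        raise RuntimeError("API credentials are not configured")
    return user, secret
```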
- Ensure that error reporting on the application does not potentially output debugging information that might expose credentials.
- Ensure that only authorized users have access to the source-code of your application and ideally that only key personnel can access the directory/files where the API credentials are kept.
- If you suspect that API credentials might have been exposed, immediately disable the API user through the Cerbo interface and request that new credentials be issued.
- Data passing through the API is generally sensitive - ensure that your environment is secure and meets regulatory requirements.
- If using the "anonymize data" flag in your application, do not assume that the consuming application will not be able to identify the patient. API data responses contain a large number of data-points that could be used in conjunction to identify a patient, and questionnaire data included in responses are not scrubbed in any way (these questions are custom for each client, so Cerbo does not know which questions might be identifying).
- Disable any API users as soon as they are no longer in use.
- When developing an application - especially one that involves POST/PATCH commands or long series of GET loops - please request a sandbox environment to test your application before deploying it to production.
- When synchronizing large amounts of data, use the API's "delta" endpoint to determine which resources have been added/removed/modified within a time period, rather than re-syncing large numbers of documents/data that you've already cached.
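The caching side of a delta sync might look like the sketch below; the response shape (added/modified/removed lists) is illustrative, not Cerbo's actual schema:

```python
def apply_delta(cache: dict, delta: dict) -> dict:
    """Fold a hypothetical delta response into a local cache keyed by
    resource id, so only changed resources are transferred."""
    for resource in delta.get("added", []) + delta.get("modified", []):
        cache[resource["id"]] = resource      # insert or overwrite
    for resource_id in delta.get("removed", []):
        cache.pop(resource_id, None)          # drop deleted resources
    return cache

cache = {"doc-1": {"id": "doc-1", "title": "Intake"}}
delta = {"added": [{"id": "doc-2", "title": "Labs"}],
         "modified": [{"id": "doc-1", "title": "Intake v2"}],
         "removed": []}
cache = apply_delta(cache, delta)
```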
- Where possible, avoid PATCH commands that do not actually change any data (evaluate whether a change is actually being requested or whether the request already matches the live state).
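One way to skip no-op PATCHes is to diff the desired state against the live record first and send only the fields that differ (a sketch; the field names are made up):

```python
def build_patch_body(live: dict, desired: dict) -> dict:
    """Return only the fields that differ from the live record; an empty
    result means the PATCH request can be skipped entirely."""
    return {key: value for key, value in desired.items() if live.get(key) != value}

changes = build_patch_body(
    {"phone": "555-0100", "city": "Portland"},   # live state from a GET
    {"phone": "555-0100", "city": "Salem"},      # desired state
)
# changes == {"city": "Salem"}; an empty dict means no PATCH is needed
```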
- Avoid multi-threaded requests where possible. You'll risk putting too much load on your database and you may also trigger a rate-limit lock (specific rate limits are different by endpoint, and single-threaded requests will almost never trigger a lock other than for credential validation endpoints or endpoints that trigger emails to be sent).
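A single-threaded loop with a small pause between calls is usually enough to stay under the limits; the delay value here is an illustrative guess, not a documented Cerbo threshold:

```python
import time

def run_throttled(request_fns, delay_seconds=0.5):
    """Issue requests strictly one at a time, pausing between calls to
    reduce database load and avoid rate-limit locks (delay is an assumption)."""
    results = []
    for make_request in request_fns:
        results.append(make_request())
        time.sleep(delay_seconds)
    return results
```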
- Log requests locally (without credential data) for debugging purposes
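A sketch of one way to log requests with credentials masked; it assumes credentials travel in an Authorization header:

```python
def redact_for_log(method: str, url: str, headers: dict) -> str:
    """Build a log line with the Authorization header masked so credential
    data never reaches the log file."""
    safe_headers = {key: ("***" if key.lower() == "authorization" else value)
                    for key, value in headers.items()}
    return f"{method} {url} headers={safe_headers}"
```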
- When possible, schedule high-volume data transfers (downloading large amounts of data) for off-peak hours (after 5 PM Pacific, before 8 AM Eastern).
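The suggested window can be checked directly against both time zones; a sketch assuming the off-peak period runs from 5 PM Pacific to 8 AM Eastern the next morning:

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

def is_off_peak(now: datetime) -> bool:
    """Return True when a timezone-aware timestamp falls inside the
    suggested window (after 5 PM Pacific or before 8 AM Eastern)."""
    pacific = now.astimezone(ZoneInfo("America/Los_Angeles")).time()
    eastern = now.astimezone(ZoneInfo("America/New_York")).time()
    return pacific >= time(17, 0) or eastern < time(8, 0)
```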
- Only use the extended_details endpoint (which returns a huge amount of data per patient) if you need most of the included data - almost all the data returned by that endpoint is available via more targeted endpoints which will generally be much faster.