
To take this integration further, we ideally need to be able to ingest the backup detail into Backup Radar, similar to what is already available via email ingestion. Receiving a ticket that only says there was a failure or warning, and then still having to dig around in VSPC or some other Veeam management tool to gather the information, adds time to the task.

This seems to be a relatively simple addition, as the data is already returned when the job status is queried: the JSON includes the failure detail when a VBR job fails, and the warning/failure detail for VB365 jobs.

Some examples are below; I have truncated the output for clarity. A rough sketch of pulling these fields out of the responses follows the examples.

  • VBR:
    Querying /infrastructure/backupServers/jobs/backupVmJobs, the failureMessages[] object is returned:
{
"data": :
{
"instanceUid": "944c9568-9423-4893-9163-59a8242cadc1",
"subtype": "VSphere",
"targetRepositoryUid": "88788f9e-d8f5-4eb4-bc4f-9b3f5403bcec",
"protectedVmCount": 1,
"_embedded": {
"backupServerJob": {
"instanceUid": "944c9568-9423-4893-9163-59a8242cadc1",
"name": "VMBackupJob1",
"backupServerUid": "062a3c26-e7d0-480c-928f-36a353938b85",
"locationUid": "895f4104-741c-4972-941b-20fec9126c8f",
"siteUid": "de0ffddd-d046-4363-8ea6-44859bc5b2e4",
"organizationUid": "899d01ed-5972-4ce4-ac6c-717ff0780309",
"mappedOrganizationUid": "899d01ed-5972-4ce4-ac6c-717ff0780309",
"status": "Success",
"type": "BackupVm",
"lastRun": "2023-10-20T12:07:31.317+11:00",
"lastEndTime": "2023-10-20T12:08:30.517+11:00",
"lastDuration": 59,
"processingRate": 0,
"avgDuration": 59,
"transferredData": 0,
"bottleneck": "None",
"isEnabled": true,
"scheduleType": "NotScheduled",
"failureMessage": null,
"targetType": "Local",
"destination": "Default Backup Repository",
"retentionLimit": 7,
"retentionLimitType": "Days",
"backupChainSize": "2810183",
"isGfsOptionEnabled": false,
"lastSessionTasks": :
{
"instanceUid": "d15ac8e3-912c-423e-9b7d-e1849139807c",
"objectUid": "e484d769-b4bb-4f24-8c4d-ad0f2b54acf2",
"objectName": "restv3empty",
"totalObjects": 5,
"processedObjects": 5,
"readDataSize": null,
"transferredDataSize": null,
"startTime": "2023-10-20T12:07:46.223+11:00",
"endTime": "2023-10-20T12:08:24.223+11:00",
"duration": 38,
"failureMessages": :],
"status": "Success"
}
]
}
}
}
]
  • M365:
    Querying /infrastructure/vb365Servers/organizations/jobs/backup, the lastErrorLogRecords[] object is returned:
 {
"instanceUid": "cde7c9c1-734b-43b5-8bac-1f8a73de91d5",
"name": "<snip>",
"description": "Sharepoint and Teams backup for <snip>",
"repositoryUid": "e12bf3c9-3a29-4d40-881b-b34aa61cb108",
"repositoryName": "<snip>",
"vb365OrganizationUid": "<snip>",
"vspcOrganizationUid": "94d55903-df13-496f-b0c7-fbbe9c92c15f",
"vspcOrganizationName": "<snip>",
"vb365ServerUid": "ae5dca0b-23b7-4ef7-ad34-565796209cfc",
"vb365ServerName": "<snip>",
"lastRun": "2024-08-02T01:03:52.6865243+00:00",
"nextRun": "2024-08-02T08:00:00.0000000+00:00",
"isEnabled": true,
"isCopyJobAvailable": true,
"backupType": "SelectedItems",
"lastStatus": "Warning",
"lastStatusDetails": "Team site not found: Contractors",
"lastErrorLogRecords": s
{
"message": "Team site not found: <snip>",
"logType": "Warning"
},
{
"message": "Team site not found: <snip>",
"logType": "Warning"
},
{
"message": "Team site not found: <snip>",
"logType": "Warning"
},
{
"message": "Processing team <snip> finished with warning: Team files processing finished with warnings",
"logType": "Warning"
},
{
"message": "Processing team <snip> finished with warning: Team files processing finished with warnings",
"logType": "Warning"
},
{
"message": "Processing team Contractors finished with warning: Team files processing finished with warnings",
"logType": "Warning"
},
{
"message": "Job finished at 2/08/2024 11:03:52 AM with warnings",
"logType": "Warning"
}
],
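
To illustrate what I mean, below is a rough Python sketch of pulling those fields out of the two endpoints shown above. The base URL, bearer token, and lack of paging handling are assumptions, as is whether the VB365 response uses the same "data" envelope as the VBR example; the endpoints and field names come straight from the output above.

# Minimal sketch only; adjust the base URL, token, and paging for your environment.
import requests

VSPC_URL = "https://vspc.example.com/api/v3"       # hypothetical base URL
HEADERS = {"Authorization": "Bearer <api-token>"}  # hypothetical API token

def vbr_job_issues():
    # /infrastructure/backupServers/jobs/backupVmJobs: each item in "data" embeds a
    # backupServerJob carrying failureMessage plus per-task failureMessages[].
    resp = requests.get(
        f"{VSPC_URL}/infrastructure/backupServers/jobs/backupVmJobs",
        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    issues = []
    for item in resp.json().get("data", []):
        job = item.get("_embedded", {}).get("backupServerJob", {})
        if job.get("status") == "Success":
            continue
        messages = [job["failureMessage"]] if job.get("failureMessage") else []
        for task in job.get("lastSessionTasks") or []:
            messages.extend(task.get("failureMessages") or [])
        issues.append((job.get("name"), messages))
    return issues

def vb365_job_issues():
    # /infrastructure/vb365Servers/organizations/jobs/backup: jobs carry lastStatus,
    # lastStatusDetails and lastErrorLogRecords[]. Whether this response uses the
    # same "data" envelope as the VBR call is an assumption based on that example.
    resp = requests.get(
        f"{VSPC_URL}/infrastructure/vb365Servers/organizations/jobs/backup",
        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    issues = []
    for job in resp.json().get("data", []):
        if job.get("lastStatus") == "Success":
            continue
        records = [r.get("message") for r in job.get("lastErrorLogRecords") or []]
        issues.append((job.get("name"), records))
    return issues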

 

Thanks for the feedback, Lee. I’ll review this with our development team this week and circle back to confirm our next course of action. The ask makes sense.

 

Beyond your input above, are there any other areas for improvement you see in relation to this beta release? We’re looking to drop the ‘beta’ tag associated with this release in the near future, but wanted to confirm whether it meets your needs.


Thanks Jamie - the biggest thing missing for us is the detail in the results. I understand we likely won’t get parity with email notifications, as we’re hamstrung by what is exposed by the API, but getting warning and failure reasons into the data is critical to any time savings Backup Radar provides.

I have only touched on M365 above, but server backups are similarly critical to us.

The only other thing, which isn’t an issue with Backup Radar, is that using the VSPC integration we can no longer see the status of Veeam configuration backups, which are critical for the recoverability of the Veeam server itself. We have built a check via our RMM, but it is something that needs to be identified if you’re a new player.

Other than that, it all seems pretty usable at this point.

I think it would be beneficial if there was some alerting should the integration break or start receiving bad data/errors. Otherwise this may stay silent until the backups trip with No Results, and it would help to get in front of those kinds of things before tickets land.

 


Hi Lee - I wanted to update you on the issue regarding the surfacing of failure messages. There was an update this past week that may have addressed this, so please let us know if you see any improvements on your end.

Regarding your feedback on alerting for integration breaks, I suggest adding your vote to the existing idea in our community here: Backup Radar PSA Integration - Notify of Disconnection. This idea could be expanded to include alerts for backup vendor integrations as well as PSAs, helping proactively manage any disconnections.

Let us know if you have further insights or questions!


I just submitted a similar request to have the Details section of Veeam alerts parsed and made searchable through Backup Radar and/or via API commands. This would allow us to search for and identify common warnings/failures.

 

One that comes to mind for my team is low space alerts on datastores and local/cloud repositories. They currently come in as warnings that evolve into failures. If we had the ability to filter which clients/jobs were low on space, it would be easier to remediate. Currently we are forced either to open the result for each individual VM or to log into the client's server to investigate. Oftentimes the remediation is cleaning up the client's datastore, local repository, or cloud repository.
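
For illustration only, here is a rough sketch of that kind of filtering in Python, assuming the warning/failure text has already been ingested (for example via the sketch earlier in this thread). The "low space" phrases are assumptions; the exact wording Veeam uses varies by version and repository type.

# Minimal sketch: filter ingested (job name, messages) tuples down to jobs whose
# warnings mention space pressure. LOW_SPACE_HINTS is an assumption about the
# wording Veeam uses; adjust to match the actual warning text in your environment.
LOW_SPACE_HINTS = ("low on space", "disk space", "free space")

def low_space_jobs(issues):
    flagged = {}
    for name, messages in issues:
        hits = [m for m in messages
                if m and any(hint in m.lower() for hint in LOW_SPACE_HINTS)]
        if hits:
            flagged[name] = hits
    return flagged

# Example usage: low_space_jobs(vbr_job_issues() + vb365_job_issues())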

