Peregrine turns video streams from cameras into relevant data points,
enabling customers to build new products and services and to accelerate development.
Remove data that yields no insights from collection. Focus on hard-to-find events.
Only relevant data needs to reach cloud hot storage, or to be stored at all.
Minimize engineers' time to data.
Finding the right data is becoming critical for computer vision applications to create customer value. Peregrine's Vision SDK sees the full picture, going beyond the scope of dashcams, ADAS, or autonomous driving, enabling new data-driven business models.
Handling visual data is hard. It is key to prioritize data transfer, storage, and post-processing by relevance and impact for each use case. Peregrine's approach learns through federated teaching from human domain experts (e.g. drivers).
The difference between detection and scene understanding is context. Contextual awareness allows the critical portion of visual data (less than 1%) to be found faster, shortening the time to relevant data and saving time and money on cloud data handling.
Peregrine's core, and its unfair advantage, is a unique software stack and machine learning pipeline that processes visual sensor data right at the edge, compresses the relevant information down to roughly 1% of the analyzed data, and sends it back to the cloud, where insights and analytics are made available to stakeholders. To extract this relevant information, Peregrine built the perception abilities of the human cognitive system into a breakthrough technology for mobile cameras.
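The edge-side flow described above can be sketched in a few lines. This is an illustrative mock, not Peregrine's actual stack: all names (`Frame`, `RelevanceFilter`, the 0.99 threshold) are hypothetical, and the relevance score stands in for whatever an on-device perception model would produce. The point is the shape of the pipeline: score every frame at the edge, discard the irrelevant bulk locally, and uplink only compact metadata for the rare relevant frames.

```python
# Illustrative sketch of edge-side relevance filtering (hypothetical API,
# not Peregrine's real implementation).
from dataclasses import dataclass, field

@dataclass
class Frame:
    timestamp: float
    relevance: float  # 0.0-1.0, e.g. produced by an on-device model

@dataclass
class RelevanceFilter:
    """Keep only frames above a relevance threshold, so that roughly 1%
    of the analyzed stream is uplinked to the cloud."""
    threshold: float = 0.99
    uplinked: list = field(default_factory=list)

    def process(self, frame: Frame) -> bool:
        if frame.relevance >= self.threshold:
            # A real system would uplink compressed, extracted metadata
            # rather than raw video.
            self.uplinked.append({"t": frame.timestamp, "score": frame.relevance})
            return True
        return False  # discarded at the edge; never reaches cloud storage

# Simulate a stream in which ~1% of frames are relevant.
flt = RelevanceFilter(threshold=0.99)
stream = [Frame(t, 0.995 if t % 100 == 0 else 0.5) for t in range(1000)]
sent = sum(flt.process(fr) for fr in stream)
print(f"uplinked {sent} of {len(stream)} frames")  # prints: uplinked 10 of 1000 frames
```

The design choice mirrored here is that filtering happens before any transfer: the decision of what is worth storing is made on the device, so cloud hot storage only ever sees the ~1% of data that carries insight.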