Peregrine.ai’s Vision SDK turns camera video streams into relevant data points, enabling you to build new products and services and accelerate development.
Our Vision SDK sees the full picture – beyond ADAS- and autonomy-related object classes. An urban traffic scene has a high-dimensional state space; to generate meaning, we assess the causalities of all present inputs.
When handling video data, it is key to transfer, store, and post-process only data that carries relevant information. Our system has learned to distinguish traffic behavior from the professionals: drivers.
Based on our federated teaching approach, we can identify traffic events across a broad range of scenarios, derive actionable information, and gain superior data-driven insights.
Peregrine’s core, and its unfair advantage, is a unique software stack and machine learning pipeline. It processes visual sensor data right at the edge, compresses the relevant information to only about 1% of the analyzed data volume, and sends it back to the cloud, where insights and analytics are made available to stakeholders. To extract this relevant information, Peregrine implemented the perception ability of the human cognitive system as a breakthrough technology for mobile cameras.
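The edge-to-cloud flow described above can be sketched as follows. This is an illustrative outline only, not Peregrine's actual SDK: the function names (`detect_objects`, `process_frame`) and the use of JSON plus zlib compression are assumptions standing in for the real on-device pipeline.

```python
import json
import zlib

def detect_objects(frame: bytes) -> list:
    """Placeholder for on-device inference; returns lightweight event metadata."""
    return [{"cls": "pedestrian", "bbox": [120, 80, 40, 90], "conf": 0.91}]

def process_frame(frame: bytes) -> bytes:
    # Run perception at the edge and keep only the relevant metadata,
    # then compress it before sending it to the cloud.
    events = detect_objects(frame)
    return zlib.compress(json.dumps(events).encode("utf-8"))

raw_frame = bytes(640 * 480 * 3)   # dummy uncompressed 640x480 RGB frame
payload = process_frame(raw_frame)
ratio = len(payload) / len(raw_frame)
print(f"metadata payload is {ratio:.3%} of the raw frame size")
```

The point of the sketch is the size asymmetry: only compact event metadata leaves the device, which is how the transferred data can shrink to a small fraction of the analyzed video.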
Please get in touch. We do require certain specifications regarding chipsets, memory, and operating-system configurations.
Yes, it can, and there might be reasons to do exactly that. Peregrine’s Vision SDK is a powerful, service-enabling system; it is not designed to actively control actuators (steering or pedals) in safety-critical situations.
Developers using our Vision SDK can choose elements of the tech stack. Our technology consists of object detection and tracking, localization and data fusion (spatial and temporal) components.
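Selecting elements of the tech stack might look like the configuration sketch below. The component names mirror those listed above, but the `PipelineConfig` class and its fields are hypothetical, not the documented SDK interface.

```python
from dataclasses import dataclass

# Hypothetical configuration object; field names follow the components
# named in the text (detection, tracking, localization, fusion).
@dataclass
class PipelineConfig:
    object_detection: bool = True
    tracking: bool = True
    localization: bool = False
    spatial_fusion: bool = False
    temporal_fusion: bool = True

    def enabled_components(self) -> list:
        # Return the names of all components switched on in this config.
        return [name for name, on in vars(self).items() if on]

cfg = PipelineConfig(localization=True)
print(cfg.enabled_components())
```

A flat on/off configuration like this keeps the choice of components explicit at integration time.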
Yes, please. Peregrine’s strength is a very fast turnaround time for network-model or algorithm improvements.
Our SDK can share visual insights (anonymous metadata) directly within the application.
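Consuming those anonymous insights inside an application could follow a simple publish/subscribe pattern like the sketch below. The `InsightStream` class and its `subscribe`/`publish` methods are assumptions for illustration, not the SDK's actual API.

```python
from typing import Callable, Dict, List

class InsightStream:
    """Hypothetical in-app channel for anonymized insight metadata."""

    def __init__(self) -> None:
        self._handlers: List[Callable[[Dict], None]] = []

    def subscribe(self, handler: Callable[[Dict], None]) -> None:
        # Register an application callback for incoming insights.
        self._handlers.append(handler)

    def publish(self, insight: Dict) -> None:
        # Only anonymous metadata, never raw video, reaches subscribers.
        for handler in self._handlers:
            handler(insight)

stream = InsightStream()
received: List[Dict] = []
stream.subscribe(received.append)
stream.publish({"event": "hard_braking", "confidence": 0.87})
print(received[0])
```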