StradVision raises Series B funding for autonomous vehicle vision
StradVision Inc. last week said it had raised $27 million in Series B funding. The Seoul, South Korea-based company makes vision processing technology for autonomous vehicles and advanced driver assistance systems, or ADAS.
Posco Capital led the round, which included investment from IDG Capital, Industrial Bank of Korea, Lighthouse Combined Investment, LSS Private Equity, Mirae Asset Venture Investment, Neoplux, and Timefolio Asset Management. StradVision has raised a total of $40 million to date.
“StradVision’s software solutions for autonomous vehicles and ADAS are proving successful and attractive to leading automakers and suppliers, as our latest round of funding strongly confirms,” stated Junhwan Kim, CEO of StradVision. “We appreciate all of our new investors coming on board, and StradVision will use this funding to take our groundbreaking products to the next level as we lead the advancement of camera technology in autonomous vehicles.”
SVNet provides high-level perception for ADAS
StradVision said it is an industry leader in camera perception software, which plays a critical role in ADAS capabilities such as automatic emergency braking and blind-spot detection. The company has registered 75 patents relating to its core technologies, and 79 more patent applications are pending.
The SVNet deep learning-based software enables high-level perception abilities including lane detection, traffic light and sign detection and recognition, object detection, and free space detection, said the company, which also has offices in Tokyo and San Jose, Calif.
The software, which includes SVNet External, SVNet Internal (for monitoring the driver and in-cabin experience), and SVNet Tools, provides real-time feedback, detects obstacles in blind spots, and alerts drivers to potential accidents. SVNet also prevents collisions by detecting lanes, abrupt lane changes, and vehicle speeds, even in poor lighting and weather conditions, according to StradVision.
SVNet’s Auto Labeling System (ALS) produces training data with minimal human input, and a semi-supervised learning-based SVNet training tool enables customers to enhance SVNet themselves during mass-production projects. This semi-supervised approach, which the company said functions similarly to how the human brain organizes data, gives machines a nearly limitless understanding of the visual information they see, according to StradVision.
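The announcement does not describe how ALS works internally. One common way to produce training data with minimal human input is confidence-thresholded pseudo-labeling, sketched below in Python purely as an illustration; the `Detection` class, `auto_label` function, 0.9 threshold, and stub detector are all assumptions, not StradVision’s actual API.

```python
# Illustrative sketch of confidence-thresholded pseudo-labeling (auto-labeling).
# Not StradVision's implementation; all names and values are assumptions.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Detection:
    label: str          # e.g. "vehicle", "lane_marking", "traffic_sign"
    confidence: float   # model confidence in [0, 1]


def auto_label(
    frames: list[str],
    detector: Callable[[str], list[Detection]],
    threshold: float = 0.9,
) -> tuple[dict[str, list[Detection]], list[str]]:
    """Split frames into auto-labeled training data and frames needing human review."""
    auto_labeled: dict[str, list[Detection]] = {}
    needs_review: list[str] = []
    for frame in frames:
        detections = detector(frame)
        if detections and all(d.confidence >= threshold for d in detections):
            # High-confidence predictions become pseudo-labels with no human input.
            auto_labeled[frame] = detections
        else:
            # Uncertain frames are escalated to a human annotator.
            needs_review.append(frame)
    return auto_labeled, needs_review


# Toy usage with a stub detector standing in for the perception network.
def stub_detector(frame: str) -> list[Detection]:
    return [Detection("vehicle", 0.95)] if "day" in frame else [Detection("vehicle", 0.55)]


labeled, review = auto_label(["day_0001.png", "night_0002.png"], stub_detector)
print(f"auto-labeled: {list(labeled)}; sent to review: {review}")
```

In this kind of pipeline, only low-confidence frames reach a human, which is what keeps labeling effort minimal as the dataset grows.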
When paired with commercial, automotive-grade systems on chips (SoCs), SVNet’s deep neural network is fully optimized and interacts in real time with the world it is viewing. Offering minimal latency and power consumption, the system uses artificial intelligence for real-world detection, tracking, segmentation, and classification, the company said.
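To make the latency constraint concrete, below is a minimal sketch of a per-frame perception loop that flags any frame exceeding a fixed time budget. The 33 ms budget (roughly 30 fps) and every function name here are assumptions for illustration only, not details of SVNet or any particular automotive SoC.

```python
# Illustrative real-time perception loop with a fixed per-frame latency budget.
# Assumed values and names; not tied to SVNet or a specific SoC.
import time
from typing import Callable, List, Optional

FRAME_BUDGET_S = 0.033  # ~30 fps budget, assumed for illustration


def perception_loop(
    next_frame: Callable[[], Optional[List[float]]],
    detect: Callable[[List[float]], List[str]],
) -> None:
    """Pull frames, run detection, and report whether each frame met the time budget."""
    while (frame := next_frame()) is not None:
        start = time.perf_counter()
        objects = detect(frame)  # stands in for detection/tracking/segmentation/classification
        elapsed = time.perf_counter() - start
        status = "ok" if elapsed <= FRAME_BUDGET_S else "over budget"
        print(f"{len(objects)} objects in {elapsed * 1000:.2f} ms ({status})")


# Toy usage: a two-frame "camera" and a stub detector.
frames = iter([[0.1, 0.2], [0.3, 0.4, 0.5]])
perception_loop(lambda: next(frames, None), lambda f: ["vehicle"] * len(f))
```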
StradVision working on Level 2 through 4 self-driving cars
The company added that it has partnerships with leading automotive OEMs and Tier 1 suppliers for multiple mass-production projects in China and Europe. Millions of vehicles — including SUVs, sedans, and buses — will be using the SVNet software by 2021, claimed StradVision.
The vision provider recently earned the Automotive SPICE CL2 certification, as well as China’s Guobiao (GB) certificate, and its ADAS software is already deployed in vehicles on Chinese roads.
StradVision currently has more than 105 employees in the U.S., Germany, South Korea, and Japan. It recently partnered with a leading global Tier 1 supplier and a number of commercial vehicle manufacturers on a side-camera project and custom camera technology for autonomous buses. Current StradVision projects range from Autonomy Levels 2 through 4.
Source: The Robot Report