Advances in deep learning and neural networks have delivered huge breakthroughs in natural language processing and computer vision, and they have the potential to solve big problems in manufacturing, retail, supply chain, agriculture, and countless other business domains. Naturally, technology start-ups are behind some of the most important innovations.
In recent articles, we looked at start-ups revolutionising natural language processing and start-ups leading the way in MLops. Here we’ll take a look at “applied AI” start-ups.
These are companies that are applying different techniques—whether it be processing images, text, audio, video, categorical or tabular data, or combinations of the above—to address various industry challenges, from fulfilling the promise of self-driving cars to pushing the boundaries of agricultural production.
Are we there yet? We've been waiting on the promise of self-driving technology for years now, but the work continues. Argo AI aims to become the complete platform for self-driving vehicles, covering all the software, hardware, maps, and remote infrastructure required to bring us to the glorious future where we don't have to be on a bus or a train to read a book on the commute to work.
Working with partners such as Ford and Volkswagen, Argo AI is pushing forward the boundaries of research, having recently announced Argo Lidar (light detection and ranging), a new approach that can measure the distance of objects up to 400 meters away. It also works well at night and in low-light conditions, and it can handle transitions, such as coming out of tunnels, that cause issues for other lidar arrays (and, let's face it, us poor humans too).
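The core principle behind any lidar range measurement is simple time-of-flight arithmetic: a light pulse goes out, bounces back, and the round-trip time gives you the distance. Here's a minimal sketch of that computation (illustrative only; it says nothing about how Argo Lidar actually works internally):

```python
# Illustrative only: the basic time-of-flight computation behind any
# lidar range measurement -- not Argo AI's actual implementation.
SPEED_OF_LIGHT = 299_792_458  # metres per second

def range_from_time_of_flight(round_trip_seconds: float) -> float:
    """Distance to a target given the round-trip time of a light pulse.

    The pulse travels out and back, so the one-way distance is half
    the total path length.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse returning after roughly 2.67 microseconds corresponds to a
# target about 400 metres away -- the range Argo Lidar claims.
print(round(range_from_time_of_flight(2.668e-6)))  # 400
```

The hard engineering is everything around this formula: detecting the faint return of a pulse off a dark object at 400 metres, and doing it reliably in glare, darkness, and tunnel transitions.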
Argo AI is not making wild promises about its current tech, but appears to be doing the long, hard slog of building all the blocks for a safe assisted-driving experience, testing in six US cities, with European trials slated for later this year.
Maybe it isn’t quite as gee-whiz as self-driving, but the technology that Ceres Imaging is bringing to bear on growing crops may well help to lower your grocery bill long before you can get into a self-driving car and have it take you to the supermarket.
Ceres Imaging offers a wonderful mix of old-school and cutting-edge technology, eschewing satellite or drone imagery for high-resolution cameras mounted on fixed-wing aircraft. Those images feed an array of models that provide critical information to farmers: discovering irrigation problems two to three weeks before they'd be visible in the field, correcting over-watering or under-watering, and calculating how fixing these issues will affect yields.
In addition, Ceres Imaging can relieve farmers of the burden of simple, labour-intensive tasks like tree counting, instead generating tree counts from aerial imagery.
Ceres will deliver a report that tallies the number of trees by varietal, and pinpoints the locations of missing and damaged trees, even going so far as generating the nursery order for replacements. It’s just one tiny example of how AI techniques are unlocking advances even in areas that might not immediately come to mind when somebody says the words “neural network.”
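Counting trees from aerial imagery is, at its simplest, a connected-components problem: threshold the image into canopy versus ground, then count the distinct blobs. Here's a toy sketch of that idea using a hypothetical binary mask (this is not Ceres Imaging's actual pipeline, which would also need to classify varietals and spot damage):

```python
# A toy sketch of tree counting from aerial imagery via connected
# components -- hypothetical, not Ceres Imaging's actual pipeline.
import numpy as np
from scipy import ndimage

# Pretend this binary mask came from thresholding a vegetation index
# on an aerial photo: 1 = canopy pixel, 0 = bare ground.
canopy_mask = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 1, 1],
    [1, 0, 0, 1, 1],
])

# Each connected blob of canopy pixels is treated as one tree.
labels, tree_count = ndimage.label(canopy_mask)
print(tree_count)  # 4 distinct canopies in this toy mask
```

In practice the interesting work is upstream of the count: segmenting overlapping canopies, distinguishing varietals, and flagging trees whose canopy signature suggests damage.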
Founded by Andrew Ng, co-founder of Google Brain and former head of data science at Baidu, Landing AI is an attempt to bring the power of AI to domains that have not yet seen the advances it can bring.
The company’s first product, LandingLens, is an integrated platform that allows manufacturers to pair their expertise with Landing AI’s to produce a continuously improving visual inspection platform. In addition to manufacturing, Landing AI is also working on visual inspection systems for the agriculture and automotive industries.
One interesting aspect of Landing AI’s approach is how it puts users’ data at the centre of the solution.
Dealing with input data is often the least exciting part of a data scientist’s job, but despite great strides being made in self-supervised solutions in the past few years, input data is where you can make the biggest impact on your application. It doesn’t matter how fancy your model is; if you feed it garbage, you’re going to get garbage out.
So Landing AI focuses on efficient, easy-to-use labelling systems; continuous data collection; easy retraining and validation of models; and, of course, quick alerting if inferences suddenly skew (e.g., if a camera loses a colour channel).
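That last point, catching broken inputs before they poison your predictions, can be surprisingly cheap to approximate. Here's a minimal sketch of the kind of input check the camera example suggests: flag any colour channel whose variation has collapsed. (This is a hypothetical illustration, not LandingLens code; the function name and threshold are made up for the example.)

```python
# A minimal sketch of an input-drift check: flag frames whose colour
# statistics collapse, e.g. when a camera loses a channel.
# Hypothetical illustration -- not LandingLens code.
import numpy as np

def channel_dropout_alert(frame: np.ndarray, min_std: float = 1.0) -> list:
    """Return indices of colour channels whose pixel variation has
    collapsed below min_std (a dead channel has near-zero spread)."""
    return [c for c in range(frame.shape[-1])
            if frame[..., c].std() < min_std]

rng = np.random.default_rng(0)
healthy = rng.integers(0, 256, size=(64, 64, 3)).astype(float)
broken = healthy.copy()
broken[..., 2] = 0  # simulate a dead blue channel

print(channel_dropout_alert(healthy))  # []
print(channel_dropout_alert(broken))   # [2]
```

A production system would track richer statistics over time, but even a check this simple catches the failure mode in the example: a dead channel drags that channel's standard deviation to zero immediately.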
Sooner rather than later, we're going to need a way of detecting deepfakes. While deepfaking—using AI techniques to generate fake audio and video of real people—still hasn't quite made it to the mainstream, the expense and knowledge needed to generate such media is decreasing on a weekly basis.
You may have seen recent news stories about the remarkably convincing Tom Cruise deepfake TikTok. Even more convincing fake Tom Cruises are in our future.
Headquartered in Estonia, Sentinel is striving to be one of the leaders in that arena. With impressive credentials from NATO cyber security and backing from the former president of Estonia, Sentinel offers an API that draws on various deep learning approaches, as well as a massive database of existing fakes for comparison purposes, to determine whether uploaded media is fake or not. The Sentinel system even produces a report on what was done to generate the fake in the case of a positive result.
Like the Amazon Go stores that dot a few major US cities, Standard offers the promise of brick-and-mortar shopping without lines. You check in with a mobile app when you enter a store, wander around and grab what you want, and then you just leave.
Standard’s computer vision technology keeps track of everything you leave the building with and charges your account. The experience is even more friction-free than Amazon Go, with no turnstiles or gates.
Standard would very much like to be the company that makes this technology ubiquitous among retailers, hooking into their supply chains to provide detailed analytics as well as the smoothest of checkout experiences.
Currently, Standard has a flagship store in San Francisco (but of course!) and has inked a deal with Circle K on some pilot experiments in Arizona, retrofitting four stores with autonomous checkout technology. If all goes well, we could see Standard’s shopping AI spreading across the country fast.
What we can see in this short tour of start-ups is that the range of verticals where the cutting-edge techniques of computer vision, natural language processing, and other deep learning approaches can be applied is vast and most likely underestimated.
The neural networks are learning more and more all the time. They’re already in our phones, and they’re coming to our stores, cars, supply chains, manufacturing plants, and farms. Who knows where else they’ll be by the time 2030 rolls around?