Shopping in the future may feel a lot like shoplifting does today — without the risk of getting nabbed — if two artificial intelligence startups have their way.

New Zealand’s IMAGR and Silicon Valley’s Mashgin aim to make checking out of grocery stores and company cafeterias a walk in the park. Almost literally.

Many supermarkets offer self-checkout to save shoppers time. IMAGR founder William Chomley wants to skip the checkout altogether, so you can just walk right out the door. It’s similar to the idea behind Amazon Go, being tested in a grocery store in downtown Seattle, which lets customers shop without ever stopping at a cashier on the way out.

IMAGR makes SmartCart, an ordinary grocery cart fitted with a video camera and onboard AI computing. The device tracks what goes into the cart, tallies the total along the way and syncs that with payment information on the shopper’s mobile phone.
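IMAGR hasn’t published its software, but the running tally can be sketched in a few lines of Python. The item names, prices and `CartTally` class below are purely illustrative — the real SmartCart identifies products visually rather than from a lookup table:

```python
from collections import Counter

# Hypothetical price catalog; the real SmartCart identifies items by vision.
PRICES = {"milk": 2.50, "kale": 1.75, "ice cream": 4.00}

class CartTally:
    """Tracks items added to or removed from the cart and keeps a running total."""
    def __init__(self, prices):
        self.prices = prices
        self.items = Counter()

    def add(self, item):
        self.items[item] += 1

    def remove(self, item):
        if self.items[item] > 0:
            self.items[item] -= 1

    def total(self):
        return round(sum(self.prices[i] * n for i, n in self.items.items()), 2)

cart = CartTally(PRICES)
cart.add("milk")
cart.add("ice cream")
cart.remove("ice cream")   # the shopper reconsiders
cart.add("kale")
print(cart.total())  # 4.25
```

By the time the shopper reaches the door, the total is already known, so checkout reduces to charging the payment method on file.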

“We want to give people the ability to shop as they normally would, and then just walk past the cashier and out of the store,” Chomley said.

High Noon at the Checkout Counter

Mashgin was born out of frustration over lunch breaks spent waiting in lines rather than chatting with friends. It’s installed its automated checkout system, also called Mashgin, in several Silicon Valley company cafeterias, including NVIDIA’s. Using GPU deep learning and computer vision, it recognizes your soup, salad or soda faster than you can gulp.

The elegant Mashgin self-checkout station features a very simple user interface. Customers simply place their lunch on the device, where five 3D cameras examine it from different angles to identify and price each item. To pay, customers swipe a credit card.
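Mashgin hasn’t detailed how the five camera views are combined. One simple possibility is a majority vote across per-camera predictions, so that a single occluded or misleading angle doesn’t decide the item; the `fuse_views` helper and labels below are hypothetical:

```python
from collections import Counter

def fuse_views(predictions):
    """Combine per-camera item predictions by majority vote.
    `predictions` is a list of labels, one from each camera view."""
    label, _ = Counter(predictions).most_common(1)[0]
    return label

# Five camera views look at the same tray item; one view is occluded and wrong.
views = ["soda", "soda", "soup", "soda", "soda"]
print(fuse_views(views))  # soda
```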

This animation depicts a future version of the Mashgin AI cafeteria checkout. Currently the device detects packaged goods, soups, salads and takeout containers, but is still being trained to identify foods on a plate. Animation courtesy of Mashgin.

The startup trained its system on a dataset of common items found in cafeterias, using the CUDA parallel computing platform, NVIDIA GeForce GTX 1080 GPUs and cuDNN with the Caffe deep learning framework. Mashgin customizes its system for each company’s cafeteria, and its deep learning algorithm learns new items as more people use it.
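Mashgin’s model itself isn’t public. As a rough sketch of how a recognizer can “learn new items as more people use it,” consider a nearest-prototype classifier over feature vectors: adding an item means storing its embedding, and recognition picks the most similar stored prototype. All labels and vectors below are illustrative, and a real system would use learned deep features rather than three-element lists:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class ItemRecognizer:
    """Nearest-prototype classifier: label = most similar stored item."""
    def __init__(self):
        self.prototypes = {}  # label -> feature vector

    def learn(self, label, features):
        self.prototypes[label] = features

    def recognize(self, features):
        return max(self.prototypes,
                   key=lambda label: cosine(self.prototypes[label], features))

rec = ItemRecognizer()
rec.learn("soup", [0.9, 0.1, 0.0])
rec.learn("salad", [0.1, 0.9, 0.1])
rec.learn("soda", [0.0, 0.2, 0.9])
print(rec.recognize([0.8, 0.2, 0.1]))  # soup
```

The appeal of this pattern for a per-cafeteria deployment is that adding a new menu item is just one more `learn` call, with no full retraining required.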

“It’s a huge market and there’s this big problem,” said Abhinai Srivastava, who founded the company with Mukul Dhankhar. “Everyone wants to eat at 12 o’clock.”

Catching Rays, Not Delays

IMAGR’s Chomley created SmartCart because he wasn’t getting enough sunshine. Stuck behind his computer screen at an investment fund most days, he yearned to spend a few minutes soaking up rays during lunch. Instead, the line for food at a small grocery near his office ate up his entire break.

Chomley quit his job and began work on what is now SmartCart. After several false starts — at one point, he had to take a job moving furniture to keep the company afloat — he and the IMAGR team set their course on deep learning and computer vision to enable SmartCart.

Using an NVIDIA TITAN X GPU and the TensorFlow deep learning framework, IMAGR initially trained its algorithms on images of grocery store products. Next, it used the SmartCart video camera to learn to recognize products put into or removed from the cart — say you reconsidered that half-gallon of chocolate chocolate-chip ice cream over a second bunch of kale. Finally, the team trained the algorithm on barcodes to learn prices.
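IMAGR hasn’t released its code, but as a toy illustration, “put into or removed from the cart” can be framed as a diff between the items detected before and after a shopper’s action. The `cart_events` helper below is hypothetical:

```python
from collections import Counter

def cart_events(before, after):
    """Diff two lists of per-frame item detections into add/remove events."""
    b, a = Counter(before), Counter(after)
    added = list((a - b).elements())
    removed = list((b - a).elements())
    return added, removed

# The shopper swaps the ice cream for a second bunch of kale.
added, removed = cart_events(
    before=["kale", "ice cream"],
    after=["kale", "kale"],
)
print(added, removed)  # ['kale'] ['ice cream']
```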

IMAGR is planning a small SmartCart trial at a New Zealand grocery chain within the next couple of months. Chomley said several of the world’s largest supermarket chains have expressed interest in SmartCart.

“People just don’t want to be standing in huge lines,” he said. “They want to get in and get out.”

Source: NVIDIA Blog

You ask, AI delivers.

At least, that’s the concept that Kevin Peterson is trying to achieve with his robotics company, Marble. It recently made news for deploying food delivery robots onto the streets of San Francisco.

Peterson, Marble’s co-founder and software lead, joined this week’s AI Podcast to talk about their efforts to integrate AI into the delivery process.

Marble’s robots, all named “Happy,” look like white boxcars about the size of a mobility scooter. Each comes complete with a trunk where packages are stored; users get a code with their delivery confirmation to access their packages.

“We want everyone’s first interaction with the robot to be delightful, actually,” explained Peterson in a conversation with Michael Copeland, the host of NVIDIA’s AI Podcast. “So we spend a lot of time designing that interaction and making sure the vehicle is operating in a way that looks good and is good.”

To provide efficient delivery, the Marble team uses a 3D map system to plan out the best routes for its delivery bots. According to Peterson, the robot runs software that detects last-minute obstacles along its route and then requests a re-route.
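Marble’s planner is proprietary; a minimal sketch of the detect-obstacle-then-re-route loop uses breadth-first search on a small occupancy grid (a stand-in for Marble’s 3D maps — the `shortest_path` function and grid are illustrative):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a grid map; 0 = free cell, 1 = blocked."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no route exists

grid = [[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]]
plan = shortest_path(grid, (0, 0), (2, 2))       # original route

grid[1][1] = 1                                   # a last-minute obstacle appears
replanned = shortest_path(grid, (0, 0), (2, 2))  # request a re-route
print(len(plan), len(replanned))  # 5 5
```

On this tiny grid the detour costs nothing extra; in a real city the re-planner would trade off distance, sidewalk conditions and safety.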

For Peterson, automating delivery systems is only the beginning.

“There’s a huge amount of impact in the world that comes from having these kinds of autonomous vehicles out there,” he says.

Source: NVIDIA Blog

Today, demand for on-demand image processing is increasing. Every program, from Microsoft Office, which may run slowly on Windows 10, to advanced imaging software such as AutoCAD and other Autodesk tools used in the construction and design industries, can benefit from better image processing.

An NVIDIA GRID license allows GPU resources to be shared among many users from a single GPU server. Combined with live migration, this enables superior image and video processing, and NVIDIA GRID can even run other complex AI deep learning calculations while maintaining smooth operation. Learn more on YouTube.


GPU stands for Graphics Processing Unit. GPUs have many more cores than comparable CPUs: the most powerful graphics cards have more than 5,000 cores, while most CPUs have at most about 10. This makes GPUs faster and more efficient than CPUs for repetitive, highly parallel tasks such as AI development. Learn more on YouTube.
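As a loose illustration of the data-parallel model GPUs excel at (not actual GPU code), the same simple “kernel” can be applied to independent slices of the data by a pool of workers; with thousands of cores instead of four threads, the effect is dramatic:

```python
from concurrent.futures import ThreadPoolExecutor

def scale_chunk(chunk):
    # The "kernel": one simple operation applied to every element.
    return [x * 2 for x in chunk]

data = list(range(1000))
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

# Four workers stand in for a GPU's many cores: each runs the same
# kernel on its own slice of the data, and map() preserves order.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = [x for chunk in pool.map(scale_chunk, chunks) for x in chunk]

print(parallel[:5])  # [0, 2, 4, 6, 8]
```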


Is NVIDIA’s DGX Station Better Than DIY Machines for AI/Deep Learning?

If you’re interested in AI and deep learning, you can either buy a supercomputer that can handle AI model training or build a machine yourself. But is DIY a good idea? We don’t think so.

Save Time And IT Resources

Building your own AI computer is usually cheaper than an off-the-shelf unit like the NVIDIA DGX Station, but only if you ignore the time and IT resources required to run the unit, download and update software, and maintain the computer. When you factor these costs in, the NVIDIA DGX Station is a much better option.

The NVIDIA DGX Station – Ready To Go For AI Projects

The NVIDIA DGX Station is ideal for AI scientists. This compact supercomputer is small enough to place on your desk, and has low power consumption.

It also offers enhanced security, as your sensitive data does not have to be in the cloud. And because it comes with pre-installed AI software, you can start AI model training in just 2-4 hours. There’s no need to spend extra time and money on more IT equipment or on support services.

Learn more about renting the NVIDIA DGX Station and about the NVIDIA Deep Learning Institute.


On Aug. 30, 2018, the Department of Biomedical Engineering of City University launched its first workshop on the fundamentals of deep learning for computer vision. In this workshop, students learn how to:

> Implement common deep learning workflows, such as image classification and object detection

> Experiment with data, training parameters, network structure, and other strategies to increase performance and capability

> Deploy their neural networks to start solving real-world problems