Teaching machines to see like humans
Farshid Harandi has been working with industrial machine vision for over 10 years and has developed machine vision-based solutions for automatic sorting. He believes the way to the future is to build robots with better hand-eye coordination. Just like humans.
As a child, I was the one opening up my toys before playing with them, because I wondered how the bits and pieces worked together. Usually they stayed in pieces, broken, but occasionally they would be put back together properly.
Growing up as a DIYer, I played around a lot with mechanics, electronics, and programming computers and microcontrollers, all the way up to a Master’s program in Intelligent System Design at Chalmers University in Gothenburg. The program focused on Artificial Intelligence (AI) and entrepreneurship at the onset of the AI revolution in industry. Thus, many of the graduates, including me, ended up being part of disruptive innovation at various start-ups.
Living the life of an entrepreneur
I have been part of four start-ups since 2010, and the journey has been fun, with long working days. The slim margins and tight deadlines of product development in that world have been great sources of learning. As part of a small, agile team at Refind Technologies AB, we delivered the OBS500, the world’s only automatic battery sorter.
It sorts waste consumer batteries into different chemistries with a single glance at their labels. It can recognise up to 15 batteries per second against a database of 6 million images. Custom-developed, fine-tuned deep learning models and artificial neural networks power the recognition. We later applied the same technology to sorting electronic waste and fish, and won the Google Impact Challenge Australia 2016 award.
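The article does not detail how the recognition models work, but the core idea of matching what the camera sees against a reference database can be loosely illustrated. The sketch below is a deliberate simplification with made-up names and toy numbers: a nearest-neighbour match over small feature vectors stands in for the actual deep learning pipeline.

```python
import math

# Hypothetical, simplified sketch: the real system uses fine-tuned deep
# learning models; here a nearest-neighbour match over tiny feature
# vectors stands in for recognising a label against a reference database.

def classify(database, query):
    """Return the chemistry whose reference feature vector is closest."""
    best_label, best_dist = None, math.inf
    for features, chemistry in database:
        dist = math.dist(features, query)  # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = chemistry, dist
    return best_label

# Toy 2-D "embeddings" for three battery chemistries (made-up numbers)
db = [((0.0, 0.0), "alkaline"),
      ((1.0, 1.0), "NiMH"),
      ((0.0, 1.0), "Li-ion")]

print(classify(db, (0.9, 1.1)))  # closest reference is the NiMH vector
```

In the real system the feature vectors would come from a neural network and the database would hold millions of entries, but the matching intuition is the same.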
Semcon and machine vision
The opportunities for adopting AI in industry are expanding by the day. We no longer produce with our hands; we build machines that do. Still, a plethora of monotonous tasks such as quality control, logistics, shelving, and packaging are carried out by humans, either because automated solutions have been too expensive to be viable or because the tasks are too complex. Advancements in deep learning and machine vision are addressing these problems by making such environments as graspable for robots as they are for humans.
Robots of the future
As for the robots, we are already in an era where industry is transitioning from blind, heavy, and expensive robots to affordable, light, and collaborative ones. RGB and depth cameras, such as those from Photoneo, aid robots like never before. They enable hand-eye coordination that simplifies the mechanics and programming of the robots, reduces errors, and makes it possible to handle the unexpected. Robots can be taught to see, find, locate, pick, and place objects using a graphical programming pendant in a matter of hours. They can be relocated and work different jobs during the same day at the factory. Just like a human worker.
I started working at Semcon in April 2019, and my aim is to give robots a pair of eyes so they can do all the monotonous tasks we do every day. Just like we use a copy machine to copy a text instead of doing it by hand, we can have a robot cook our meals, take out the trash, and produce flawless products for us.
Title: Project Manager/Machine vision specialist
Education: Master of Science in Mechatronics and AI
Worked at Semcon since: April 2019
I have most fun at work when: I visit customers and learn how they produce things, like candy. It is like a real-life version of the documentary television series “How It’s Made”.