
How Drawing Became The Gateway To AI Communication


If we can teach artificial intelligence to draw, will that help us communicate with it? As someone who's involved in both the app and entertainment industries, I’ve been tracking the evolution of AI to see how it’ll change my business and my clients’ expectations of my domain. But as someone with a lifelong interest in art and computing, I’ve been fascinated by recent advances that link AI to sketching.

Drawing has long been fundamental to the human thought process. When asked to draw a pig, for instance, people draw a generalized concept of a pig, reducing its form to only its most recognizable attributes. Pigs have a nearly unlimited number of attributes, including hair, four legs and a body shape shared with many other animals. But most sketches of a pig focus instead on a short curly tail, large nostrils and triangular ears -- the attributes we find distinctive to that animal.

According to a June article in The Atlantic, by training AI to recognize the “pigness” of human sketches, we can train it to recognize what humans feel is unique about an entity like a pig and understand human thinking in a deep and widely applicable way. And all of this learning starts with clunky doodles hastily drawn with a mouse.

AI Gets In The Sketching Game

Take, for example, AI-enhanced art projects like Pix2Pix, in which a neural network creates a photorealistic -- though often grotesquely distorted -- image of a face based on a doodle of one. The website featuring these images received so many visitors that it became overloaded, though the images can still be seen in this video (warning: video contains NSFW language). While certainly captivating and even slightly nightmare-inducing, the Pix2Pix images show that AI can already produce fairly realistic faces when fed more accurate line drawings.
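For the technically curious, Pix2Pix belongs to a family of conditional image-to-image translation networks: a generator learns to turn a sketch into a photo-like image, while a discriminator learns to tell generated pairs from real ones. The PyTorch sketch below is only a minimal illustration of that idea -- the layer sizes, loss weighting and training details are my own assumptions, not the project's actual configuration.

```python
# Minimal, illustrative sketch of a Pix2Pix-style setup in PyTorch:
# a generator maps a line drawing to a photo-like image, and training
# combines an adversarial term with an L1 reconstruction term.
# All sizes and weights here are assumptions for illustration only.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Downsample the sketch, then upsample it into a 3-channel image."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, sketch):
        return self.decode(self.encode(sketch))

class TinyDiscriminator(nn.Module):
    """Judge whether a (sketch, photo) pair looks real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-wise real/fake scores
        )

    def forward(self, sketch, photo):
        return self.net(torch.cat([sketch, photo], dim=1))

# One illustrative generator step on a dummy 256x256 example.
G, D = TinyGenerator(), TinyDiscriminator()
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
sketch = torch.randn(1, 1, 256, 256)   # stand-in for a doodle
photo = torch.randn(1, 3, 256, 256)    # stand-in for the matching photograph

fake = G(sketch)
pred_fake = D(sketch, fake)
# The generator wants the discriminator fooled AND the output close to the target.
g_loss = adv_loss(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1_loss(fake, photo)
```

The key design choice is the combined objective: the adversarial term pushes the output to look plausible, while the L1 term keeps it faithful to the input doodle.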

This process is all about extracting the underlying components of what makes a face look like a face, or what makes any single object look like that object. AI can already identify and categorize those components, but to extract them reliably enough for more comprehensive recognition, AI systems need many examples to learn from.

Back in 2015, Google already knew this when it launched its Android Experiments site, which later grew into AI Experiments, featuring programs like Quick, Draw! -- a website that gives users 20 seconds to sketch a common object such as a cannon or a lion. By collecting thousands of these sketches, Google builds a database of line drawings that it then feeds to its AI systems to study.
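Google has since published the collected drawings as the open Quick, Draw! dataset, where each sketch is stored as a list of pen strokes. As a rough, illustrative sketch -- assuming a locally downloaded simplified file such as pig.ndjson, a hypothetical filename -- loading and rendering one of those drawings might look like this in Python:

```python
# A rough sketch of inspecting Google's public Quick, Draw! data.
# Assumes a simplified-format file ("pig.ndjson") has been downloaded locally;
# the filename and plotting choices are illustrative assumptions.
import json

import matplotlib.pyplot as plt

drawings = []
with open("pig.ndjson") as f:             # one JSON object per line
    for line in f:
        record = json.loads(line)
        if record.get("recognized"):      # keep sketches the game recognized
            # "drawing" is a list of strokes; each stroke is [x_coords, y_coords]
            drawings.append(record["drawing"])

print(f"Loaded {len(drawings)} recognized pig sketches")

# Render the first sketch stroke by stroke to see what the AI learns from.
for x_coords, y_coords in drawings[0]:
    plt.plot(x_coords, y_coords)
plt.gca().invert_yaxis()                  # drawing coordinates grow downward
plt.axis("equal")
plt.show()
```

Each stroke is just a pair of coordinate lists -- exactly the kind of compact, abstracted representation of "pigness" described above.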

The result? Google is taking computers beyond mere replication, smartly investing in AI software infrastructure that will be able to recognize more complex objects for better image search, whether on the web or as part of robot vision. One program, announced in April, is sketch-rnn, an online application that takes the barest beginning of a user-provided drawing of a named object, like an angel, and then attempts to complete that drawing using its knowledge of how angels ought to look. Another application, AutoDraw, analyzes users’ sketches and suggests clip art that matches them, turning a person who fails at Pictionary into a professional cartoonist.
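Under the hood, sketch-rnn does not work with pixels at all; it models a drawing as a sequence of pen movements, which is what lets it pick up a half-finished doodle and continue it. The snippet below is a minimal, assumed illustration of converting Quick, Draw!-style strokes into that kind of offset-based sequence (the stroke-3 format described in Google's sketch-rnn work); the helper name and example drawing are hypothetical.

```python
# Illustrative conversion from Quick, Draw!-style strokes (absolute x/y
# coordinate lists) into the offset-based "stroke-3" sequence that
# sketch-rnn-style models consume: (dx, dy, pen_lifted).
# Function name and sample data are assumptions, not project code.

def to_stroke3(drawing):
    """Turn [[x_coords, y_coords], ...] into a list of (dx, dy, pen_lifted)."""
    sequence = []
    prev_x, prev_y = 0, 0
    for x_coords, y_coords in drawing:
        for i, (x, y) in enumerate(zip(x_coords, y_coords)):
            pen_lifted = 1 if i == len(x_coords) - 1 else 0  # 1 = end of this stroke
            sequence.append((x - prev_x, y - prev_y, pen_lifted))
            prev_x, prev_y = x, y
    return sequence

# A tiny two-stroke example drawing in the same [[x], [y]] format as above.
example_drawing = [[[0, 10, 20], [0, 5, 0]], [[20, 30], [0, 10]]]
print(to_stroke3(example_drawing))
```

Trained on a large set of such sequences, a recurrent model learns which pen movement is likely to come next -- the same intuition a person uses when guessing where an unfinished angel’s wing should go.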

">

Shutterstock

If we can teach artificial intelligence to draw, will that help us communicate with it? As someone who's involved in both the app and entertainment industries, I’ve been tracking the evolution of AI to see how it’ll change my business and my clients’ expectations of my domain. But as someone with a lifelong interest in art and computing, I’ve been fascinated by recent advances that link AI to sketching.

Drawing has long been fundamental to the human thought process. When asked to draw a pig, for instance, people draw a generalized concept of a pig that minimizes its form into only the most unique and recognizable attributes. Pigs have a nearly unlimited number of attributes including hair, four legs and a body shape shared with many other animals. But most sketches of a pig focus instead on a short curly tail, large nostrils and triangular ears -- those attributes that we find distinct to that animal.

According to a June article in The Atlantic, by training AI to recognize the “pigness” of human sketches, we can train it to recognize what humans feel is unique about an entity like a pig and understand human thinking in a deep and widely applicable way. And all of this learning starts with clunky doodles hastily drawn with a mouse.

AI Gets In The Sketching Game

Take, for example, AI-enhanced art projects like Pix2Pix, in which a neural network system creates a photorealistic -- though often grotesquely distorted --  image of a face based on doodles of a face. The website featuring these images received thousands of visitors, so many that it became overloaded, though the images can still be seen in this video (warning: Video contains NSFW language). While certainly captivating and even slightly nightmare-inducing, the Pix2Pix images reveal how AI can already make fairly realistic faces when fed more accurate line drawings.

This process is all about extracting the underlying components of what makes a face look like a face or what makes any singular object look like that object. AI is already able to identify and categorize those components, but in order to complete the extraction that translates into more comprehensive recognition, AI systems need many examples to learn from.

Back in 2015, Google already knew this when launched its Android Experiment site, which later turned into AI Experiments featuring programs like Quick, Draw! -- a website that gives users 20 seconds to sketch a common object such as a cannon or a lion. By collecting thousands of these sketch examples, Google then feeds this database of line drawings to its AI systems and lets them study it.

The result? By smartly investing in the AI software infrastructure that will be able to recognize more complex objects for the purposes of better image search, either for web searches or as part of robot vision, Google is taking computers beyond replication. One program, announced in April, is sketch-rnn, an online application that takes the most basic start of a user-provided drawing for a certain named object, like an angel and then attempts to complete that drawing using its knowledge of how angels ought to look. Another application called AutoDraw analyzes users’ sketches and suggests clip art that matches them, turning a person who fails at Pictionary into a professional cartoonist.


Originally published at https://www.forbes.com/sites/forbestechcouncil/2017/11/01/how-drawing-became-the-gateway-to-ai-communication/
