Apple is usually in the news over some product launch, future iPhone
speculation, or patent filing. Not this week. Apple made the news over,
wait for it, a research paper.
It turns out the paper describes a technique for improving artificial intelligence, with a focus on computer vision and pattern recognition.
How well do machines see images? How well do they interpret them? That is the researchers' area of inquiry.
The research paper is a big deal, said Paul Lilly, senior editor at Hot Hardware. Why? Apple, he said, "joins the fray after having published its first AI paper this month." The paper was submitted in November.
By fray, Lilly was referring to the world's biggest technology
companies, including Microsoft, IBM, Facebook, Google, paying attention
to the growing fields of machine learning and artificial intelligence.
There is another reason that this move has drawn the attention of
tech watchers. "Apple has kept its research tight lipped and out of the
public eye. Publishing this paper can be seen as an indication that
Apple wants a more visible presence in the field of AI," said Lilly.
Don Reisinger in Fortune similarly noted that "Scientists around the world have long criticized Apple for not publishing research about artificial intelligence."
(Apple's competitors do generally publish their own papers on a
number of topics, but they, too, he added, keep some advancements
secret.)
Hot Hardware pointed out that Apple's researchers looked at a
method that involves "a simulator generating synthetic images that are
put through a refiner. The result is then sent to a discriminator that
must figure out which are real and which are synthetic."
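The simulator → refiner → discriminator flow Hot Hardware describes can be sketched roughly as follows. The functions below are crude numeric stand-ins, not Apple's actual networks; they exist only to show which stage feeds which.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(n):
    # Stage 1: emit crude synthetic samples (stand-ins for rendered images).
    return rng.normal(0.0, 1.0, size=(n, 8))

def refiner(batch):
    # Stage 2: nudge synthetic samples toward real-data statistics.
    # (A fixed affine map here; the paper trains a neural network for this.)
    return batch * 1.2 + 0.5

def discriminator(batch):
    # Stage 3: score each sample; higher means "looks more real".
    return 1.0 / (1.0 + np.exp(-batch.mean(axis=1)))

refined = refiner(simulator(4))
scores = discriminator(refined)
print(scores.shape)  # one realism score per refined sample → (4,)
```

In the real system each stage is learned, and the refiner is trained precisely so that the discriminator can no longer tell its output from genuine photos.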
AppleInsider talked about the use of synthetic, or computer generated, images.
"Compared to training models based solely on real-world images, those
leveraging synthetic data are often more efficient because computer
generated images are usually labelled. For example, a synthetic image of
an eye or hand is annotated as such, while real-world images depicting
similar material are unknown to the algorithm and thus need to be
described by a human operator."
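A hypothetical sketch of that labeling difference: a renderer can emit every synthetic image together with its annotation for free, while real images arrive unlabeled. The toy "renderer" and all names below are illustrative, not from Apple's paper.

```python
# Toy stand-in for a graphics renderer: given a ground-truth gaze angle,
# it returns a fake "image" (just a list here) plus the label for free.
def render_eye(gaze_angle_deg):
    pixels = [gaze_angle_deg * 0.01] * 4    # placeholder pixel data
    return pixels, gaze_angle_deg           # image arrives pre-labeled

# Synthetic set: every sample is an (image, label) pair, zero human effort.
synthetic = [render_eye(angle) for angle in (0, 15, 30)]

# Real set: images only; labels would require a human annotator.
real = [[0.02, 0.05, 0.01, 0.03]]

print(len(synthetic), synthetic[0][1])  # 3 labeled samples; first label is 0
```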
The thing is, relying on simulated images may not prove successful. AppleInsider said, "computer
generated content is sometimes not realistic enough to provide an
accurate learning set. To help bridge the gap, Apple proposes a system
of refining a simulator's output through 'Simulated+Unsupervised
learning.'"
In the paper, the authors propose S+U learning to refine a
simulator's output using unlabeled real data; their method relies on
an adversarial network.
Avaneesh Pandey of the International Business Times explained what
was going on. He said, "the researchers used a technique known as
adversarial learning, wherein two competing neural networks basically
try to outsmart each other. In this case, the two neural networks
are the generator, which, as the name suggests, generates realistic
images, and the discriminator, whose function is to distinguish between
generated and real images."
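As a minimal sketch of that adversarial setup, here is a toy one-dimensional example with hand-derived gradients (not the architecture from Apple's paper): the generator learns to map noise toward the real distribution while the discriminator learns to tell real from generated, each update working against the other.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-np.clip(t, -30, 30)))

# Real data: samples from N(3, 0.5). Generator: g(z) = a*z + b.
# Discriminator: d(x) = sigmoid(w*x + c). All scalars, for clarity.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr = 0.05

for step in range(2000):
    x_real = rng.normal(3.0, 0.5)
    z = rng.normal()
    x_fake = a * z + b

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    p_real, p_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * ((1 - p_real) * x_real - p_fake * x_fake)
    c += lr * ((1 - p_real) - p_fake)

    # Generator update: push d(fake) toward 1 (non-saturating loss).
    p_fake = sigmoid(w * x_fake + c)
    grad_x = (1 - p_fake) * w   # gradient of log d(x) at the fake sample
    a += lr * grad_x * z
    b += lr * grad_x

fakes = a * rng.normal(size=1000) + b
# The mean of generated samples has drifted toward the real mean (3.0).
print(round(float(fakes.mean()), 2))
```

Even this toy version shows the tug-of-war the quote describes: neither side's loss can be driven to zero without the other adapting, which is what pushes the generated samples toward the real distribution.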
SOURCE:
TechXplore