

Yet that assumes you can get hold of that training data, Kautz said. He and his colleagues at Nvidia have come up with a different way to expose private data, including images of faces and other objects, medical data, and more, that does not require access to the training data at all.

Instead, they developed an algorithm that can re-create the data a trained model has been exposed to by reversing the steps the model goes through when processing that data. Take a trained image-recognition network: to identify what is in an image, the network passes it through a series of layers of artificial neurons, with each layer extracting a different level of information, from abstract edges, to shapes, to more recognizable features.
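To get a feel for what those internal layers hold, one can read off a trained classifier's intermediate activations directly. Below is a minimal sketch using standard PyTorch forward hooks on torchvision's ResNet-18; the model, the choice of layers, and the random stand-in image are illustrative assumptions, not the setup used in the Nvidia work.

```python
# Sketch: inspecting the intermediate activations a trained image
# classifier produces at different depths, via PyTorch forward hooks.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations = {}

def save(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Hook an early, a middle, and a late stage of the network.
model.layer1.register_forward_hook(save("early"))
model.layer3.register_forward_hook(save("middle"))
model.layer4.register_forward_hook(save("late"))

image = torch.rand(1, 3, 224, 224)  # stand-in for a real photo
with torch.no_grad():
    model(image)

for name, feat in activations.items():
    print(name, tuple(feat.shape))  # deeper stages: smaller, more abstract feature maps
```

Printing the shapes simply shows that each stage keeps a progressively more compact, more abstract summary of the same input.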

Kautz’s team realized that they could intercept a model partway through these steps and reverse it, re-creating the input image from the model’s internal data. They tested the technique on a variety of common image-recognition models and GANs. In one test, they showed that they could accurately re-create images from ImageNet, one of the best-known image-recognition data sets.
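The article does not spell out Nvidia's exact reconstruction algorithm. One common way to "run a network in reverse" from intercepted internal data is gradient-based feature inversion: start from noise and optimize it until its own mid-network activations match the intercepted ones. The sketch below illustrates only that general idea; the model, the layer, the loss weights, and the step count are assumptions.

```python
# Sketch of gradient-based feature inversion: reconstructing an input
# image from activations intercepted partway through a trained network.
# Generic illustration only -- not Nvidia's published method.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

feats = {}
# Keep the graph (no detach) so gradients can flow back to the input.
model.layer3.register_forward_hook(lambda m, i, o: feats.__setitem__("mid", o))

# Pretend these are the intercepted activations of someone's private photo.
private_photo = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    model(private_photo)
target = feats["mid"].clone()

# Start from noise and optimize until its mid-network features match the target.
recon = torch.rand(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([recon], lr=0.05)
for step in range(300):
    opt.zero_grad()
    model(recon)                              # hook refreshes feats["mid"]
    match = F.mse_loss(feats["mid"], target)
    # Total-variation term: a simple smoothness prior that keeps the result image-like.
    tv = (recon[:, :, 1:, :] - recon[:, :, :-1, :]).abs().mean() + \
         (recon[:, :, :, 1:] - recon[:, :, :, :-1]).abs().mean()
    (match + 1e-4 * tv).backward()
    opt.step()

# `recon` is now an approximation of the private photo the features came from.
```

In practice, stronger image priors or a learned decoder would likely be needed to reach the quality the researchers describe; this sketch only conveys the direction of the attack.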

Images from ImageNet alongside re-creations of those images made by rewinding a trained model

NVIDIA

As in Webster’s work, the re-created images closely resemble the real ones. “We were amazed by the final quality,” Kautz said.

The researchers argue that such attacks are not merely speculative. Smartphones and other small devices are starting to use more AI. Because of battery and memory limitations, models are sometimes only half-processed on the device itself and then sent to the cloud for the final number crunching, an approach known as split computing. Most researchers assume that split computing will not reveal any private data from a person’s phone because only the model is shared, Kautz said. But his attack shows that this is not the case.
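Split computing itself is easy to picture: the first part of the model runs on the phone, and only the intermediate activations, not the raw photo, are sent to the cloud. Here is a rough sketch, again using torchvision's ResNet-18 as a stand-in, with an arbitrary split point chosen for illustration.

```python
# Sketch of split computing: run the first half of a model on the device,
# upload only the intermediate tensor, and finish the computation in the cloud.
# The model and the split point are illustrative assumptions.
import torch
from torch import nn
from torchvision import models

full = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Device side: the stem plus the first two residual stages.
device_part = nn.Sequential(
    full.conv1, full.bn1, full.relu, full.maxpool,
    full.layer1, full.layer2,
)

# Cloud side: the remaining stages plus pooling and the classifier head.
cloud_part = nn.Sequential(
    full.layer3, full.layer4, full.avgpool,
    nn.Flatten(1), full.fc,
)

photo = torch.rand(1, 3, 224, 224)            # never leaves the phone
with torch.no_grad():
    intermediate = device_part(photo)         # this tensor is what gets uploaded
    logits = cloud_part(intermediate)

print(intermediate.shape)   # e.g. torch.Size([1, 128, 28, 28])
# The attack described above targets exactly this uploaded tensor:
# it can be inverted back toward the original photo.
```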

Kautz and his colleagues are now working on ways to prevent models from leaking private data. “We wanted to understand the risks so that we can minimize vulnerabilities,” he said.

Although they use very different techniques, he thinks his work and Webster’s complement each other. Webster’s team showed that private data can be found in the output of a model; Kautz’s team showed that private data can be revealed by working in reverse and re-creating the input. “It’s important to explore both aspects to better understand how to prevent attacks,” Kautz said.


