A New Danger To Our Online Photos
By Anonymous | March 29, 2019

This is the age of photo sharing.

We as humans have replaced some of our socialization needs by posting our captured moments online. Those treasured pictures on Instagram and Facebook fulfill many psychological and emotional needs: keeping in touch with our families, reinforcing our egos, collecting our memories, and even keeping up with the Joneses.

You knew what you were doing when you posted your Lamborghini to your FB group. Photo credit to @Alessia Cross

We do this even when the dangers of posting photos at times appear to outweigh the benefits. Our pictures can be held for ransom by digital kidnappers, used in catfishing scams, used to power fake GoFundMe campaigns, or gathered up by registered sex offenders. Our photos can expose us to real-world perils such as higher insurance premiums, real-life stalking (via location metadata), and blackmail. This doesn't even include activities that aren't criminal but still expose us to harm, like our photos being used against us in job interviews, taken out of context, or dredged up to embarrass us years later. As they say, the internet never forgets.
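The location-metadata risk is worth making concrete. Here is a minimal sketch, using the Pillow imaging library, of how little effort it takes to pull GPS coordinates out of a photo whose metadata wasn't stripped; the file name vacation.jpg is a placeholder.

```python
# Minimal sketch: reading GPS coordinates from a photo's EXIF metadata.
# Requires Pillow (pip install Pillow). "vacation.jpg" is a placeholder
# for any photo that still carries its original metadata.
from PIL import Image
from PIL.ExifTags import GPSTAGS

img = Image.open("vacation.jpg")
exif = img.getexif()

# 0x8825 (34853) is the standard EXIF tag ID for the GPS info block.
gps = exif.get_ifd(0x8825)

if gps:
    # Prints human-readable tags such as GPSLatitude and GPSLongitude.
    for tag_id, value in gps.items():
        print(GPSTAGS.get(tag_id, tag_id), value)
else:
    print("No GPS metadata found.")
```

The big social networks strip EXIF data from the images they serve, but photos shared by email, direct file transfer, or smaller sites often keep it intact.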

As if all this weren't enough, our private photos are now being used by companies to train their algorithms. According to this article in Fortune, IBM released a collection of nearly a million photos that were scraped from Flickr and then annotated to describe the subjects' appearance. IBM touted the collection as a way to help eliminate bias in facial recognition. The pictures were used without the consent of the photographers or the subjects; IBM relied on their Creative Commons licenses to use them without paying licensing fees.
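IBM hasn't published its collection pipeline, but gathering Creative-Commons-licensed photos at scale takes remarkably little code. As a purely hypothetical illustration of the general technique (not a reconstruction of IBM's actual method), here is a sketch against Flickr's public search API; FLICKR_API_KEY is a placeholder for a key you would register for yourself.

```python
# Hypothetical sketch: harvesting Creative-Commons-licensed photos via
# Flickr's public search API. An illustration of the general technique,
# not IBM's pipeline. FLICKR_API_KEY is a placeholder.
import requests

API_URL = "https://api.flickr.com/services/rest/"
params = {
    "method": "flickr.photos.search",
    "api_key": "FLICKR_API_KEY",
    "text": "portrait",          # any subject query
    "license": "4,5,6",          # Flickr IDs for CC BY / BY-SA / BY-ND
    "extras": "url_m,owner_name,license",
    "per_page": 100,
    "format": "json",
    "nojsoncallback": 1,
}

resp = requests.get(API_URL, params=params, timeout=30)
for photo in resp.json()["photos"]["photo"]:
    # Each result carries a direct image URL, ready for bulk download.
    print(photo.get("ownername"), photo.get("url_m"))
```

The friction here is permission versus consent: the licenses allow reuse, but neither the photographers nor the people in the pictures ever agreed to this particular use.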

IBM has issued the following statement:

IBM has been committed to building responsible, fair and trusted technologies for more than a century and believes it is critical to strive for fairness and accuracy in facial recognition. We take the privacy of individuals very seriously and have taken great care to comply with privacy principles, including limiting the Diversity in Faces dataset to publicly available image annotations and limiting the access of the dataset to verified researchers. Individuals can opt-out of this dataset.

Opting out, however, is easier said than done. Removing any images requires photographers to email IBM links to the specific photos they want removed, which is difficult given that IBM has not revealed the usernames of the users whose photos it pulled.

Given all the dangers our photos are already exposed to, it might be easy to dismiss this. Is a company training models on your pictures really more concerning than, say, what your creepy uncle is doing with downloaded pictures of your kids?

Well, it depends.

The scary part of our pictures being used to train machines is how much we don't know. We don't know which companies are doing it, and we don't know what they are doing it for. They could be doing it for a whole spectrum of purposes, from the beneficial (making camera autofocus algorithms smarter) to the innocuous (detecting whether someone is smiling) to the possibly iffy (detecting whether someone is intoxicated) to the ethically dubious (detecting someone's race or sexual orientation) to the downright dangerous (teaching Terminators to hunt humans).

It’s all fun and games until your computer tries to kill you. Photo by @bwise

Not knowing means we don't get to choose. Our online photos are currently treated as a public good, usable for any conceivable purpose, including purposes we may not support and that may even harm us. Could your Pride Parade photos be used to train detection of sexual orientation? Could insurance companies use your photos to train detection of participation in risky activities? Could T-1000s use John Connor's photos to figure out what Sarah Connor would look like? Maybe these are extreme examples, but it is not much of a leap to think there are companies developing models you would find objectionable. And now your photos could be helping them.

All of this is completely legal, of course, though it goes against the principles laid out in the Belmont Report: there is no consent from the people involved (Respect for Persons), no real advantage to the photographers or subjects (Beneficence), and the benefits go entirely to the companies exploiting our photos while we absorb all of the costs (Justice).

With online photo sharing, a Pandora's box has been opened and there is no going back. As much as your local Walgreens Photo Center might wish otherwise, wallet-sized photos and printed 5×7 glossies are things of the past. Online photos are here to stay, so we have to do better.

Maybe we can start with not helping Skynet.

Hasta la vista, baby.

Sources:

Emily Price, "Millions of Flickr Photos Were Scraped to Train Facial Recognition Software," Fortune, March 12, 2019. http://fortune.com/2019/03/12/millions-of-flickr-photos-were-scraped-to-train-facial-recognition-software/
