Apple's iCloud Photos and Messages child safety initiatives - AppleInsider

From: jollyroger@pobox.com (Jolly Roger)
Newsgroups: comp.sys.mac.misc,comp.mobile.ipad,misc.phone.mobile.iphone
Subject: Apple's iCloud Photos and Messages child safety initiatives - AppleInsider
Followup-To: misc.phone.mobile.iphone
Date: 6 Aug 2021 21:10:57 GMT
Organization: People for the Ethical Treatment of Pirates
Lines: 306
Message-ID: <in5mr1F589bU1@mid.individual.net>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
User-Agent: slrn/1.0.3 (Darwin)

What you need to know: Apple's iCloud Photos and Messages child safety
initiatives

<https://appleinsider.com/articles/21/08/06/what-you-need-to-know-apples-icloud-photos-and-messages-child-safety-initiatives>

Apple's recent child safety announcement about iCloud Photos image
assessment and Messages notifications has generated a lot of hot takes,
but many of the arguments are missing context, historical information —
and the fact that Apple's privacy policies aren't changing.

Apple announced a new suite of tools on Thursday, meant to help protect
children online and curb the spread of child sexual abuse material
(CSAM). It included features in iMessage, Siri and Search, and a
mechanism that scans iCloud Photos for known CSAM images.

Cybersecurity, online safety, and privacy experts had mixed reactions to
the announcement. Users did, too. However, many of the arguments are
clearly ignoring how widespread the practice of scanning image databases
for CSAM actually is. They also ignore the fact that Apple is not
abandoning its privacy-protecting technologies.

Here's what you should know.

Apple's privacy features

The company's slate of child protection features includes the
aforementioned iCloud Photos scanning, as well as updated tools and
resources in Siri and Search. It also includes a feature meant to flag
inappropriate images sent via iMessage to or from minors.

As Apple noted in its announcement, all of the features were built with
privacy in mind. Both the iMessage feature and the iCloud Photo
scanning, for example, use on-device intelligence.

Additionally, the iCloud Photos "scanner" isn't actually scanning or
analyzing the images on a user's iPhone. Instead, it compares the
mathematical hashes of images stored in iCloud against hashes of known
CSAM. If a collection of known CSAM images is stored in iCloud, the
account is flagged and a report is sent to the National Center for
Missing & Exploited Children (NCMEC).
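
To make that distinction concrete, here is a minimal sketch of
hash-set matching in Python. It is not Apple's implementation: the
real system uses a perceptual "NeuralHash" plus cryptographic blinding
and private set intersection, while this sketch substitutes an
ordinary SHA-256 over the file bytes and a plain set, purely to
illustrate checking hashes against a database without ever inspecting
the image content. All names and the sample hash value are
hypothetical.

    import hashlib

    # Hypothetical stand-in for the known-CSAM hash database (in reality
    # supplied by NCMEC and shipped inside the OS in blinded form).
    KNOWN_HASHES = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def image_hash(image_bytes):
        # Stand-in for a perceptual hash; here it just hashes the raw bytes.
        return hashlib.sha256(image_bytes).hexdigest()

    def matches_known_database(image_bytes):
        # True only if the hash appears in the database. The image content
        # itself is never classified or "looked at".
        return image_hash(image_bytes) in KNOWN_HASHES

    print(matches_known_database(b"an ordinary family photo"))  # False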

There are elements to the system to ensure that false positives are
ridiculously rare. Apple says that the chances of a false positive are
one in a trillion. That's largely because of a collection "threshold"
(a minimum number of matching images required before an account is
flagged at all), which Apple has declined to detail.

Additionally, the scanning only works on iCloud Photos. Images stored
strictly on-device are not scanned, and they can't be examined if iCloud
Photos is turned off.
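
Those two properties, the match threshold and the iCloud-only scope,
can be sketched together. The threshold value and every name below are
hypothetical, since Apple has not published them; the predicate
argument stands in for a hash check like the one sketched above.

    MATCH_THRESHOLD = 30  # hypothetical; Apple declined to state the real number

    def should_flag_account(photos, is_known_match):
        # photos: iterable of (image_bytes, uploads_to_icloud) pairs.
        # is_known_match: predicate such as matches_known_database() above.
        matches = 0
        for image_bytes, uploads_to_icloud in photos:
            if not uploads_to_icloud:
                continue  # iCloud Photos off for this image: never checked
            if is_known_match(image_bytes):
                matches += 1
        # The account is flagged only once a collection of matches crosses
        # the (undisclosed) threshold; isolated matches do nothing.
        return matches >= MATCH_THRESHOLD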

The Messages system is even more privacy-preserving. It only applies to
accounts belonging to children, and is opt-in, not opt-out. Furthermore,
it doesn't generate any reports to external entities — only the parents
of the children are notified that an inappropriate message has been
received or sent.
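
As a rough sketch of that flow (the account model, classifier, and
notification helpers below are invented placeholders, since Apple has
published none of these interfaces), the important property is that
the only possible outputs are a local warning and, for opted-in child
accounts, a notice to the Family Sharing parent:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Account:
        is_child: bool
        parental_opt_in: bool            # the feature is opt-in, not opt-out
        family_organizer: Optional[str]  # parent/guardian in Family Sharing

    def looks_sensitive(image_bytes):
        # Placeholder for the on-device ML classifier; nothing leaves the device.
        return False

    def warn_user_locally():
        print("This image may be sensitive.")

    def notify_parent(parent):
        if parent:
            print("Notice sent to " + parent + " via Family Sharing.")

    def handle_image(account, image_bytes):
        # Adult accounts, and child accounts without parental opt-in: no-op.
        if not (account.is_child and account.parental_opt_in):
            return
        if looks_sensitive(image_bytes):
            warn_user_locally()
            notify_parent(account.family_organizer)
        # No report is ever generated to Apple, NCMEC, or any other entity.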

There are critical distinctions to be made between the iCloud Photos
scanning and the iMessage feature. Basically, both are completely
unrelated beyond the fact that they're meant to protect children.

* iCloud Photo assessment hashes images without examining them for
context, compares those hashes against known CSAM collections, and
generates a report to the hash collection maintainer, NCMEC.

* The child-account iMessage feature uses on-device machine learning,
does not compare images to CSAM databases, and doesn't send any
reports to Apple, only to the parental Family Sharing manager account.

With this in mind, let's run through some potential scenarios that users
are concerned about.

iCloud library false positives - or "What if I'm falsely mathematically
matched?"

Apple's system is designed to ensure that false positives are
ridiculously rare. But, for whatever reason, humans are bad at
risk-reward assessment. As we've already noted, Apple says the odds of a
false match are one-in-a-trillion. One-in-1,000,000,000,000.

In a human's lifetime, there is a 1-in-15,000 chance of getting hit by
lightning, and that's not a general fear. The odds of getting struck by
a meteor over a lifetime are about 1-in-250,000, and that is also not a
daily fear. The odds of winning any given mega-jackpot lottery are
worse than 1-in-150,000,000, and millions of dollars get spent on it
every day across the US.
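
Running the article's own numbers side by side makes the scale of that
gap clearer; the snippet below is nothing more than arithmetic on the
odds quoted above.

    # Odds quoted above, expressed as probabilities.
    odds = {
        "Claimed false-positive rate": 1 / 1_000_000_000_000,
        "Struck by lightning (lifetime)": 1 / 15_000,
        "Hit by a meteor (lifetime)": 1 / 250_000,
        "Winning a mega-jackpot lottery": 1 / 150_000_000,
    }

    baseline = odds.pop("Claimed false-positive rate")
    for name, p in odds.items():
        print(f"{name}: about {p / baseline:,.0f}x more likely")
    # Lightning comes out roughly 67 million times more likely, a meteor
    # strike about 4 million times, and a lottery win about 6,667 times.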

Apple's one-in-a-trillion is not what you'd call a big chance of
somebody's life getting ruined by a false positive. And, Apple is not
the judge, jury, and executioner, as some believe.

Should matching material be spotted, it will be reviewed by a human
before anything is passed along. Then, and only then, will the
potential match be handed over to law enforcement.

It is then up to law enforcement to obtain a warrant, as courts do not
accept a hash match alone as sufficient evidence.

A false unlock of your iPhone with Face ID, at about 1-in-1,000,000, is
far more probable than a SHA-256 collision. A bad Touch ID match is
even more probable than that, at about 1-in-50,000, and again, neither
of these is a big concern for users.

In short, pictures of your kids in the bath, or your own nudes in
iCloud, are profoundly unlikely to generate a false match.

Messages monitoring - or "If I send a nude to my partner, will I get
flagged?"

If you're an adult, you won't be flagged if you're sending nude images
to a partner. Apple's iMessage mechanism only applies to accounts
belonging to underage children, and is opt-in.

Even if it did flag an image sent between two consenting adults, it
wouldn't mean much. The system, as mentioned earlier, doesn't notify or
generate an external report. Instead, it uses on-device machine learning
to detect a potentially sensitive photo and warn the user about it.

And, again, it's meant to only warn parents that underage users are
sending the pictures.

What about pictures of my kids in the bath?

This concern stems from a misunderstanding of how Apple's iCloud Photos
scanning works. It's not looking at individual images contextually to
determine if a picture is CSAM. It's only comparing the image hashes of
photos stored in iCloud against a database of known CSAM maintained by
NCMEC.

In other words, unless the images of your kids are somehow hashed in
NCMEC's database, then they won't be affected by the iCloud Photo
scanning system. More than that, the system is only designed to catch
collections of known CSAM. So even if a single image somehow managed to
generate a false positive (an unlikely event, as already discussed),
the system still wouldn't flag your account.

What if somebody sends me illegal material?

If somebody were really out to get you, it's more likely that they
would break into your iCloud account because the password is "1234" or
some other nonsense, and dump the material in there. And even then, if
the upload didn't come from your own network, you have a defense.

Again, the iCloud Photos system only applies to collections of images
stored on Apple's cloud photo system. If someone sent you known CSAM via
iMessage, then the system wouldn't apply unless it was somehow saved to
your iCloud.

Could someone hack into your account and upload a bunch of known CSAM to
your iCloud Photos? This is technically possible, but also highly
unlikely if you're using good security practices. With a strong password
and two-factor authentication, you'll be fine. If you've somehow managed
to make enemies of a person or organization able to get past those
security mechanisms, you likely have far greater concerns to worry
about.

Also, Apple's system allows people to appeal if their accounts are
flagged and closed for CSAM. If a nefarious entity did manage to get
CSAM onto your iCloud account, there's always the possibility that you
can set things straight.

And, it would still have to be moved to your Photo library for it to get
scanned.

Valid privacy concerns

Even with those scenarios out of the way, many tech libertarians have
taken issue with the Messages and iCloud Photo scanning mechanisms
despite Apple's assurances of privacy. A lot of cryptography and
security experts also have concerns.

For example, some experts worry that having this type of mechanism in
place could open the door to abuse by authoritarian governments.
Although Apple's system is only designed to detect CSAM, experts worry
that oppressive governments could compel Apple to rework it to detect
dissent or anti-government messages.

There are valid arguments about "mission creep" with technologies such
as this. Prominent digital rights nonprofit the Electronic Frontier
Foundation, for example, notes that a similar system originally intended
to detect CSAM has been repurposed to create a database of "terrorist"
content.

There are also valid concerns raised about the Messages scanning system.
Although messages are end-to-end encrypted, the on-device machine
learning is technically scanning messages. In its current form, it can't
be used to view the content of messages, and as mentioned earlier, no
external reports are generated by the feature. However, the feature
could generate a classifier that could theoretically be used for
oppressive purposes. The EFF, for example, questioned whether the
feature could be adapted to flag LGBTQ+ content in countries where
homosexuality is illegal.

