Major privacy alert for Android users.

mastodon.sdf.org/@jack/1139522…

@jack #Privacy #Android #cybersecurity


You remember #Apple scanning all images on your #mobile device?

If you have an #Android #phone, a new app that doesn't appear in your app menu has been (or soon will be) automatically and silently installed by #Google. It is called #AndroidSystemSafetyCore and does exactly the same thing: it scans all images on your device, as well as all incoming ones (via messaging). The new spin is that it does so "to protect your #privacy".

You can uninstall this app safely via System -> Apps.
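For those comfortable with a terminal, the removal can also be done over adb. This is a hedged sketch: `com.google.android.safetycore` is the package name commonly reported for this app, so verify it on your own device before uninstalling.

```shell
# Hypothetical adb-based removal of SafetyCore for the current user.
# First confirm the package name on your device:
#   adb shell pm list packages | grep safetycore
PKG="com.google.android.safetycore"
if command -v adb >/dev/null 2>&1; then
  # Uninstall only for user 0; the package can be reinstalled later if needed.
  adb shell pm uninstall --user 0 "$PKG" || echo "uninstall failed; is a device connected?"
else
  echo "adb not found; install Android platform-tools first"
fi
```

Note that Google may push the app again with a later Play services update, so the Settings route (or this command) may need repeating.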

developers.google.com/android/…


nullagent

The system definitely scans photos for nudity already. Today they claim the feature only runs in certain apps, but as we've seen with Apple and various world governments, there's a major tendency for these sorts of features to creep into all of your content, whether or not that's what Google intended in its first release.

security.googleblog.com/2024/1…

@TheMNWolf @jack

in reply to GrapheneOS

@Ra See grapheneos.social/@GrapheneOS/….


The functionality provided by Google's new Android System SafetyCore app, available through the Play Store, is covered here:

security.googleblog.com/2024/1…

Neither this app nor the Google Messages app using it is part of GrapheneOS, and neither will be, but GrapheneOS users can choose to install and use both. Google Messages still works without the new app.

@Ra
in reply to GrapheneOS

@noxypaws @Ra See grapheneos.social/@GrapheneOS/….



in reply to nullagent

For folks looking for exactly how the Android client-side image scanning works, or whether it's present on your device, see below. 👇🏿

partyon.xyz/@nullagent/1139663…




in reply to nullagent

A few folks are questioning whether AI scanning like what Android is doing can be misused. The last time a similar feature was coming to Apple's iOS, the media rightly described it as an extremely dangerous warrantless surveillance tool.

Regardless of what Android's developers intended this client-side scanner to do, it will be enlisted by governments around the world to spy on you and break strong encryption.

9to5mac.com/2023/09/01/csam-sc…

#privacy #cybersecurity #apple #android #ai #clientsidescanning

in reply to nullagent

And if you look at the current reporting on Apple and government requests for your private data...

"The encrypted data of millions of Apple users worldwide could reportedly be handed over to the government.

The Home Office has ordered Apple to let it access encrypted data stored in its cloud service, The Washington Post reported."

Demanding access to every last bit you have in any cloud is normal government stuff these days.

metro.co.uk/2025/02/08/privacy…

#UKPol #EU #UK #Apple #Privacy #HomeOffice

kim_harding ✅ reshared this.

in reply to nullagent

This post is misinformation; it's the "sensitive content warnings" feature described in this blog post: security.googleblog.com/2024/1…
in reply to nullagent

To me it's not clear what this app does, in particular whether it sends data back somewhere. That is the problem. That an OS regularly installs new components seems normal.

So once again, people are complaining about the wrong issues, and I feel this doesn't help, even if it is popular. It doesn't help because Google can now say that all these complaints have nothing to do with reality, which is not wrong. Instead, we should be asking for more transparent and easily accessible information.

And I'm not saying this app is harmless. I just seem to have difficulties finding info about it.

in reply to d@nny disc@ mc²

@hipsterelectron @jonny
Here's a thread on what it is:

grapheneos.social/@GrapheneOS/…

It's tiring going through endless news cycles of fake privacy and security threats, and we don't really have the energy to deal with it beyond that.

We're dealing with ongoing attacks on GrapheneOS on X by several different charlatans/scammers and we've been focused on dealing with that rather than writing about something like this. Threw together a quick thread about what it is though.



The Doctor reshared this.

in reply to GrapheneOS

"The app doesn't provide client-side scanning used to report things to Google or anyone else. It provides on-device machine learning models usable by applications to classify content as being spam, scams, malware, etc."

Forgive me if I'm not understanding correctly, but to clarify:

That statement could be misconstrued to suggest that "on-device machine learning models usable by applications to classify content" is different and distinct from "client-side scanning". To clarify: those are two ways of saying the same thing, with one being more specific. Do you really just mean that it doesn't report things to Google or anyone else by default, and/or that the "client-side scanning" is scan-on-request rather than scan-the-whole-device-by-default?

What's stopping any app from using the output of the "on-device machine learning models" to report to third parties?

in reply to Bitslingers-R-Us

@AnachronistJohn @hipsterelectron @jonny We're pointing out that neither this app nor Google Messages is using it to report anything. It's also not scanning for illegal content. Apps also don't need this app to use local ML models; it only provides certain ready-made models. Apps have always been able to run local classifiers, and hardware acceleration for that has been available for many years. It's not something that just showed up with the recent AI craze.
in reply to GrapheneOS

@AnachronistJohn @hipsterelectron @jonny People are using the term "client-side scanning" to refer to doing content scanning for a service on the client side and reporting back to the service. That's not what this is doing. It also doesn't somehow enable that in a way that wasn't already doable by any app wanting to do it. It's a specific implementation of detecting certain kinds of content, used by Google Messages for local warnings and for blurring with a dialog to bypass it.
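The distinction being drawn here (a local classifier whose output only drives local UI, versus scanning that reports back to a service) can be sketched roughly. Everything below is hypothetical: the names, the threshold, and the toy "classifier" are invented for illustration, and SafetyCore's real model and API are not shown.

```python
# Hypothetical sketch of a purely on-device "sensitive content" flow:
# a local model scores an image, and the only effect of the score is a
# local blur decision. Nothing in this path is reported off-device.

from dataclasses import dataclass

BLUR_THRESHOLD = 0.8  # assumed threshold; the real value is not public


@dataclass
class ScanResult:
    nudity_score: float  # 0.0-1.0, produced by a local ML model


def classify_locally(image_bytes: bytes) -> ScanResult:
    """Stand-in for the on-device model; real inference would run here."""
    # Toy heuristic so the sketch is runnable; a real implementation
    # would run a local neural network over the decoded image.
    return ScanResult(nudity_score=(len(image_bytes) % 100) / 100)


def should_blur(result: ScanResult) -> bool:
    # The classification result is consumed locally: it only decides
    # whether the UI blurs the image behind a tap-to-view dialog.
    return result.nudity_score >= BLUR_THRESHOLD


def receive_image(image_bytes: bytes) -> str:
    result = classify_locally(image_bytes)
    # No network call anywhere in this path: the score never leaves
    # the device, matching the "local warnings and blurring" design.
    return "blurred (tap to view)" if should_blur(result) else "shown"
```

The point of the sketch is structural: "client-side scanning" in the CSAM-reporting sense would add a network call after classification, and nothing in a local-only design requires one.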
in reply to nullagent

I'm not sure whether that app specifically scans photos; AFAIK it only scans for malicious apps, although if they broadened the scope, that wouldn't surprise me either.

I do know, however, that the EU and USA both mandate running CSAM scanners on any cloud platform, so if you upload anything to Google Drive, OneDrive, Dropbox, iCloud, etc., you will definitely get screened for CSAM and the like.
As much as I hate that, the only way around it is to run your own infrastructure from scratch.

in reply to Paperpad

Exactly. Right at a time when SMS/RCS end-to-end security is improving, isn't it odd that suddenly there's so much helpful client-side AI that wants to read your messages? 🤔

Is *accidentally* sending a nude really such a big problem that ALL Android users need this feature turned on by default, without warning, overnight?

in reply to nullagent

See grapheneos.social/@GrapheneOS/….




GrapheneOS

@jinx See grapheneos.social/@GrapheneOS/….



in reply to Karpour

@karpour
Because it isn't happening:

grapheneos.social/@GrapheneOS/…



in reply to Infrapink (he/his/him)

@Infrapink @karpour
The way GrapheneOS handled releasing this feature (opt-in, not installed by default, thorough risk analysis) is exactly the opposite of how Android released it (opt-out, installed OTA by default, limited explanation).

While I agree with the GrapheneOS team's analysis (client-side AI can work in the user's best interest, and some people might want and like this one), I think the style of the Android rollout alone is enough reason for many privacy-minded folks not to trust this new feature.

in reply to nullagent

Or it's this, scanning for #CSAM

actionnetwork.org/petitions/go…

You should get rid of it if you have something to hide.

#csam
in reply to Hyperbolix Prudens 🎹🖌️⌨️

@hyperbolix @leo @nazokiyoubinbou it does also say:
> All of this happens on-device to protect your privacy and keep end-to-end encrypted message content private to only sender and recipient.
which I think would imply it doesn't get sent to anyone else either? It sounds like they're using an on-device machine learning model to classify images and then only using that result locally.
in reply to moved to @b@mrrp.place

@m04 @leo @nazokiyoubinbou
I'm sorry, but this refers only to the recognition process.
It is absolutely unclear how the recognition actually works, where the data or procedure used for it comes from, and what happens with data that is not explicitly mentioned in this text.