• Septimaeus@infosec.pub
    9 months ago

    I usually wear the tin foil hat in these debates, but I must concede in this case: the eavesdropping phone theory in particular is difficult to substantiate, from a technical standpoint.

    For one, a user can check this themselves today with basic local network traffic monitors or packet sniffing tools. Even heavily compressed audio data will stand out in the log, no matter how it’s encrypted, streamed, batched or what have you.

    To get a sense of what I mean, run Wireshark and give a wake-phrase command to see what that looks like. Now imagine trying to obfuscate that type of transmission for audio longer than 2 seconds, repeatedly, throughout the day.

    Even assuming local audio inference and processing on a completely compromised device (rooted/jailbroken, disabled sandboxing/SIP, unrestricted platform access, the works), most phones would struggle to record and process audio indefinitely without a noticeable impact on energy and data use.
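    A quick back-of-envelope sketch of why it would stand out (the bitrates and duty cycles here are illustrative assumptions, not measured figures): even heavily compressed speech, captured around the clock, adds up to a conspicuous daily upload compared to a single wake-phrase query.

```python
# Hypothetical figures: daily uplink volume of covert audio capture
# versus one short wake-phrase query, at an Opus-like speech bitrate.

SECONDS_PER_DAY = 24 * 60 * 60

def daily_upload_mb(bitrate_kbps: float, duty_cycle: float = 1.0) -> float:
    """Uplink volume in MB/day for audio streamed at bitrate_kbps."""
    bits = bitrate_kbps * 1000 * SECONDS_PER_DAY * duty_cycle
    return bits / 8 / 1_000_000

# Heavily compressed speech (16 kbps, assumed) around the clock:
always_on = daily_upload_mb(16)                 # ~173 MB/day
# Even a 10% duty cycle (only "interesting" moments):
sampled = daily_upload_mb(16, duty_cycle=0.10)  # ~17 MB/day
# A single 2-second wake-phrase clip at the same bitrate:
one_query_kb = 16 * 1000 * 2 / 8 / 1000         # 4 KB

print(f"{always_on:.0f} MB/day vs {one_query_kb:.0f} KB per query")
```

    Numbers like these are why a traffic log or a data cap would betray the scheme quickly.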

    I’m sure advertising companies would love to collect that much raw candid data. It would be quite a challenge to do so quietly, however, and given the apparent lack of evidence, it is unlikely to have been implemented at any kind of scale.

    • admiralteal@kbin.social

      There’s also a totally plausible and far more insidious answer to what’s going on with the experiences people have of the ads matching their conversations.

      That explanation is that advertising works. And worse, it works subconsciously: you see the ads without even noticing them, they worm their way into your conversations, and at that point you become more aware of them and start noticing the ads.

      Which does comport with the billions of dollars spent on advertising every year. It would be very weird if an entire ad industry that’s at least a century old was all a complete nonsense waste of money this whole time.

      To me, this whole narrative is just another parable about why we need to do everything possible to limit our own exposure to ads to avoid being manipulated.

      • Septimaeus@infosec.pub

        Damn, I hadn’t thought of that. The chicken-and-egg question of spooky ad relevance. Insidious indeed.

        I feel like the idea of some person or group having enough info to psychologically manipulate or predict you should be way scarier than the black-helicopter stuff, especially given that it’s one of the few conspiracy theories we actually have a bunch of high-quality evidence for, in marketing and statistics textbooks alone.

        But here we are. Government surveillance is the hot button, not the fact that marketers would happily sock puppet you given the chance.

    • library_napper@monyet.cc

      What if the processing is done locally and the only thing they send back home is keywords for marketable products?

      • Septimaeus@infosec.pub

        Yeah, they’d have to, it seems, but real-time transcription isn’t free. Even late-model devices with better inference hardware have limited battery, and energy monitoring would make the drain visible. I imagine it’d be hard to conceal that behavior, especially for an app recording in the background.

        WetBeardHairs@lemmy.ml mentioned that mobile devices use the same hardware coprocessing used for wake word behavior to target specific key phrases. I don’t know anything about that, but it’s one way they could work around the technical limitations.

        Of course, that’s a relatively bespoke hardware solution that might also be difficult to fully conceal, and it would come with its own limitations. Like in that case, there’s a preset list of high value key words that you can tally, in order to send company servers a small “score card” rather than a heavy audio clip. But the data would be far less rich than what people usually think of with these flashy headlines (your private conversations, your bowel movements, your penchant for musical theater, whatever).
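        A minimal sketch of that "score card" idea (the keyword list, function names, and payload format here are all hypothetical): tally hits against a preset list and ship only the counts, never the audio.

```python
# Hypothetical score-card payload: counts for a preset keyword list,
# serialized compactly. The point is the size, not the exact format.

import json

KEYWORDS = ["vacation", "mortgage", "sneakers", "pizza"]  # preset, assumed

def make_scorecard(hits: list[str]) -> bytes:
    """Tally detected keywords into a tiny uploadable payload."""
    counts = {kw: 0 for kw in KEYWORDS}
    for kw in hits:
        if kw in counts:
            counts[kw] += 1
    return json.dumps(counts, separators=(",", ":")).encode()

payload = make_scorecard(["pizza", "pizza", "mortgage"])
print(len(payload), "bytes")  # tens of bytes, vs. megabytes of audio
```

        A payload like this rides along with ordinary telemetry, which is exactly why it would be far less rich, and far harder to spot, than streamed audio.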

    • Zerush@lemmy.mlOP

      Smartphones are by definition spyware, at least if you use the OS as-is, because every aspect is controlled and logged, either by Google on Android or by Apple on iOS. Add to that the default apps that cannot be uninstalled on a phone that is not rooted. As COX alleges, they also use third-party logs, and can therefore track and profile the user very well, even without the technology they claim to have.

      Although they feel authorized by the user’s consent to the ToS and PP, the legality depends directly on the legislation of each country. To be a legal contract, the ToS and PP must comply in every point with local legislation in order to apply to the user. For this reason, I think these practices differ greatly between the EU and the US, where privacy legislation is conspicuous by its absence; that is, US users should take these COX statements about their devices very seriously. EU users should also be clear that Google and Apple know exactly what they do and where they live, although they are restricted from selling this data to third parties.

      Basics:

      – ALWAYS READ THE ToS AND PP

      • Review the permissions of each app, leaving only the most essential ones
      • Deactivate GPS when not in use
      • On Android, review every app with Exodus Privacy; on iOS, maybe Lookout or MyCyberHome (freemium apps!)
      • Install as few apps from the store as possible
      • Be wary of discount apps from supermarkets or malls
      • Don’t store important data (banking, medical…) on the phone
      • Septimaeus@infosec.pub

        Agreed, though I think it’s possible to use smart devices safely. For Android it can be difficult outside custom roms. The OEM flavors tend to have spyware baked in that takes time and root to fully undo, and even then I’m never sure I got it all. These are the most common phones, however, especially in economy price brackets, which is why I’d agree that for the average user most phones are spyware.

        Flashing is not useful advice to most. “Just root it bro” doesn’t help your nontechnical relatives who can’t stop downloading toolbars and VPN installers. But with OEM variants undermining privacy at the system level, it feels like a losing battle.

        I’d give credit to Apple for their privacy enablement, especially with E2EE, device lockdown, granular access permission control and audits. Unfortunately their devices are not as affordable and I’m not sure how to advise the average Android user beyond general opt-out vigilance.

    • Goun@lemmy.ml

      I agree.

      What could be possible would be to send tiny bits of data. For example, a device could categorize certain places or times, detect out-of-pattern behaviours, and record just a couple of seconds here and there, then send it to the server alongside some other request to avoid suspicion. Or just pretend it was a “false positive” and say, “sorry, I didn’t get that.”

      I don’t think they’re listening to everything, but they could technically get something if they wanted to target you.

      • Septimaeus@infosec.pub

        Right, I suppose cybersecurity isn’t so different from physical security in that way. Someone who really wants to get to you always can (read: why there are so many burner phones at DEF CON).

        But for the average person, who uses consumer grade deadbolts in their home and doesn’t hire a private detail when they travel, does an iPhone fit within their acceptable risk threshold? Probably.

    • WetBeardHairs@lemmy.ml

      That is glossing over how they process the data and transmit it to the cloud. The assistant wake word for “Hey Google” opens an audio stream to an off-site audio processor that handles the query. That is easy to identify in traffic because it is immediate and large.

      The advertising wake words do not get processed that way. They are limited in scope and are handled by the same low-power hardware audio processor used to listen for the assistant wake word. The wake-word processor is an FPGA or ASIC, chosen specifically because it allows customizable words to be listened for in an extremely low-power raw form. When an advertising wake word is identified, it sends an interrupt to the CPU along with an enumerated value indicating which word was heard. The OS then stores that value and transmits a batch of them to a server at a later time. An entire day’s worth of advertising wake-word data may be less than 1 KB, and it is sent along with other information.

      Good luck finding that on Wireshark.
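      To put that sub-kilobyte figure in perspective, here is a rough sketch (the field layout and event counts are assumed for illustration, not documented anywhere): packing each detection as an enumerated keyword ID plus a coarse timestamp keeps even a chatty day's batch tiny.

```python
# Hypothetical batch format: one byte for the keyword ID, two bytes
# for the minute of day, per detection event.

import struct

def pack_batch(events: list[tuple[int, int]]) -> bytes:
    """events: (keyword_id 0-255, minute_of_day 0-1439) pairs -> 3 bytes each."""
    return b"".join(struct.pack("<BH", kid, minute) for kid, minute in events)

# Even a very chatty day -- say 300 detections -- stays under 1 KB:
day = [(i % 32, (i * 4) % 1440) for i in range(300)]
batch = pack_batch(day)
print(len(batch), "bytes")  # 900 bytes
```

      Three hundred events in 900 bytes would be trivially lost inside any ordinary telemetry upload.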

    • Cheradenine@sh.itjust.works

      Fucking thank you. As I said in another reply, if this were true my firewall logs would be full, or my data cap blown in a week.