Sells devices that use facial recognition via on-device sensors in “real time/real world” (Face ID), plus machine learning on photos stored in iCloud
No one reads privacy policies; Apple’s is remarkable: (underlines are mine)
“Face ID data—including mathematical representations of your face—is encrypted and protected by the Secure Enclave.
… Face ID data doesn’t leave your device and is never backed up to iCloud or anywhere else.
… Apps are notified only as to whether the authentication is successful.”
“If you choose to enroll in Face ID, you can control how it's used or disable it at any time.”
“Your data is private property... your Home app data is stored in a way that Apple can’t read it. Your accessories are controlled by your Apple devices instead of the cloud, and communication is encrypted end-to-end.”
So only you and the people you choose can access your data.
For facial recognition, Apple doesn’t upload data from your network to its servers. Instead, it relies on faces you’ve already identified in the Photos app.
Sells products that are more directly competitive to proposed Durin offerings
Facial recognition disabled in Illinois
Directly competitive devices; could be partners on services
“Ring does not use facial recognition technology in any of its devices or services, and will neither sell nor offer facial recognition technology to law enforcement.”
“Beginning next week, public safety agencies will only be able to request information or video from their communities through a new, publicly viewable post category on Neighbors called Request for Assistance. ... All Request for Assistance posts will be publicly viewable in the Neighbors feed, and logged on the agency’s public profile. This way, anyone interested in knowing more about how their police agency is using Request for Assistance posts can simply visit the agency’s profile and see the post history.”
In response to a letter from Senator Ed Markey (D-Mass.), Amazon admitted that it has shared video with police without user consent.
Directly competitive, de-emphasizing privacy... or really any "policy" issues
Wyze discloses information about users, with “disclose” defined loosely:
“In certain circumstances, we disclose (or permit others to directly collect) information about you.”
We're in the business of trust, and working in a zero-trust environment.
Illustrating the policy implications of having different form factors and what is possible with each, e.g. ignoring bystanders, processing multiple factors
Particularly worth comparing Active and Passive ID. E.g., actual face representations are turned into a UUID that is not linked (or linkable) to faces, which we use to understand regular usage patterns; or look only at aggregated trends, etc.
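The Passive ID idea above could be sketched as follows. This is a minimal illustration, not a proposed implementation: the embedding format, the device-local salt, and the aggregation step are all assumptions. The point is that hashing a face representation with a secret that never leaves the device yields a stable UUID for usage-pattern analysis while making the ID unlinkable back to a face (or across devices) without that secret.

```python
import hashlib
import uuid

def usage_id(face_embedding: bytes, device_salt: bytes) -> uuid.UUID:
    """Derive a non-linkable usage ID from a face representation.

    The same embedding + salt always yields the same UUID, so regular
    usage patterns can be counted. Without the device-local salt, the
    UUID cannot be tied back to a face or matched across devices.
    """
    digest = hashlib.sha256(device_salt + face_embedding).digest()
    return uuid.UUID(bytes=digest[:16])  # first 16 bytes -> UUID

def aggregate(events: list[tuple[bytes, str]], device_salt: bytes) -> dict:
    """Count events per anonymous ID; raw face data is discarded."""
    counts: dict[uuid.UUID, int] = {}
    for embedding, _event in events:
        uid = usage_id(embedding, device_salt)
        counts[uid] = counts.get(uid, 0) + 1
    return counts
```

Only the aggregated counts (or trends derived from them) would ever need to leave the device; the salt and the face representations would not.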
What values do we want our products to espouse?
What can we NOT build because it would violate our values?
What data must we collect in order to succeed?
What is counter-intuitive about collecting data in order to protect privacy?
How do we balance analyzing data for better security with reasonably protecting privacy?
Where are privacy vs. security tradeoffs NOT needed, i.e., where can we get both?
What is the difference between active and passive ID?