The issue is that, until recently, authorities have been willing to let tech companies figure this out for themselves. But with growing concern about the abuse of personal information, and about social media companies' politically driven, laissez-faire treatment of disinformation, there is pressure to impose stronger standards. JL
Thomas Brewster reports in Forbes:
Apple is well known for the secretive ways it sometimes operates, whether that’s in protecting its iOS platform or in how it deals with egregious criminals on its servers, especially when it comes to prying into their emails. But thanks to a search warrant uncovered by Forbes, for the first time we now know how the iPhone maker intercepts and checks messages when illegal material - namely, child abuse imagery - is found within. The warrant, filed in Seattle, Washington, this week, shows that despite reports of Apple being unhelpful in serious law enforcement cases, it is being helpful in investigations like this one.

How does Apple 'intercept' emails?

To be clear: Apple isn't manually checking all of your emails. It uses what most other major tech companies, like Facebook or Google, use to detect child abuse imagery: hashes. Think of these hashes as signatures attached to previously-identified child abuse photos and videos. When Apple systems - not staff - see one of those hashes passing through the company's servers, a flag will go up. The email or file containing the potentially illegal images will be quarantined for further inspection.

What happens next?

Once that threshold has been met, it's enough for a tech company to contact the relevant authority, typically the National Center for Missing and Exploited Children (NCMEC). NCMEC is a non-profit that acts as the nation’s law enforcement "clearing house" for information regarding online child sexual exploitation. It will typically call law enforcement after being tipped off about illegal content, often launching criminal investigations.

But in Apple's case, its staff are clearly being more helpful, first stopping emails containing abuse material from being sent. A staff member then looks at the content of the files and analyzes the emails. That's according to a search warrant in which the investigating officer published an Apple employee's comments on how they first detected “several images of suspected child pornography” being uploaded by an iCloud user and then looked at their emails. (As no charges have been filed against that user, Forbes has chosen to publish neither his name nor the warrant.)

"When we intercept the email with suspected images they do not go to the intended recipient. This individual ... sent 8 emails that we intercepted. [Seven] of those emails contained 12 images. All 7 emails and images were the same, as was the recipient email address. The other email contained 4 images which were different than the 12 previously mentioned. The intended recipient was the same," the Apple employee's comments read. "I suspect what happened was he was sending these images to himself and when they didn’t deliver he sent them again repeatedly. Either that or he got word from the recipient that they did not get delivered.”

The Apple employee then examined each of these images of suspected child pornography, according to the special agent at the Homeland Security Investigations unit.

In this case, Apple proved even more useful, providing data on the iCloud user, including the name, address and mobile phone number he submitted when he signed up. The government also asked for the contents of the user's emails, texts, instant messages and "all files and other records stored on iCloud."
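The hash check described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Apple's implementation: the hash set, function names, and use of SHA-256 are all assumptions, and production systems rely on perceptual hashes (such as Microsoft's PhotoDNA) that still match resized or re-encoded copies, which a cryptographic hash does not.

```python
import hashlib

# Hypothetical database of signatures for previously-identified illegal
# images. Real systems use perceptual hashes (e.g., PhotoDNA); SHA-256 is
# used here only to keep the sketch self-contained.
KNOWN_BAD_HASHES: set[str] = {
    "e3b0c44298fc1c149afbf4c8996fb924",  # placeholder entry
}

def signature(data: bytes) -> str:
    """Compute a signature for an attachment's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def should_quarantine(attachments: list[bytes]) -> bool:
    """Flag an outgoing email for quarantine if any attachment matches.

    Per the article, the automated match only raises a flag; a human
    reviewer makes the final call before anything is reported to NCMEC.
    """
    return any(signature(a) in KNOWN_BAD_HASHES for a in attachments)
```

The key design point the article describes survives even in this toy version: the machine compares signatures, and only a positive match exposes the content to a human reviewer.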
A document outlining what further information was retrieved from Apple simply reveals that a file on the iCloud account was sent in early February, without revealing what specific data was provided.

Has Apple’s approach changed?

Previously, Apple has been reticent about how it deals with such issues. Speaking at the Consumer Electronics Show in Las Vegas in January, chief privacy officer Jane Horvath said images were scanned, but wouldn’t say whether or not the company looked at images uploaded to its iCloud, or disclose what techniques Apple used to check for illegal material. It’s taken a government court filing, one that Forbes had to dig out, to reveal more about Apple’s practices here.

Is there a privacy problem here?

As long as Apple employees are only looking into emails when abusive images are detected by its computing systems, there shouldn't be much of a privacy issue here. Apple, like all tech companies, has to balance privacy with safety. "I think the balance that Apple have drawn is a good one," says professor Alan Woodward, a cybersecurity expert at the University of Surrey. "It allows for the search for known extreme imagery but also has safeguards to prevent abuse of the ability to search emails. No matter how much automation there is in initially flagging illegal imagery, a human has to do the final check."

Woodward hopes that Apple isn't being asked to search for other kinds of content, though. "It immediately makes you wonder if the system could be abused by issuing warrants to ask them to search for content that is different in nature," he tells Forbes. (Apple hadn't provided comment at the time of publication.)

What about encrypted messages?

The real battleground, however, is where content is encrypted. It's impossible for Apple systems to flag illegal content in messages that have been sealed with a key that only customers have.

The so-called "cryptowar" - where tech companies like Apple are being told by the government to help break their own security to allow access to user data - was sparked back into life recently, when the FBI sent a letter to Apple demanding it help unlock two iPhones belonging to the alleged terrorist shooter at a Pensacola naval base in December. The FBI wanted to get at the encrypted data within to search for possible leads, but didn't explain why forensics tools like the iPhone-cracking GrayKey couldn't unlock the devices. Such tools have long been able to get at data on older iPhones like the 5 and 7 models in the Pensacola case. And, as with the case above, Apple had already handed over relevant iCloud information to the feds.

One senior FBI employee recently confirmed to Forbes that the agency had "been able to get into those for years." And, the source added, they didn't believe the DOJ and attorney general William Barr had chosen the right hill to die on in that case, raising concerns about how damaging the letter might have been to increasingly positive relations with Apple. "I'm not sure that case was the one they needed," he said, adding a final warning: "There will be others."
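To see why end-to-end encryption stops this kind of server-side scanning, consider the short sketch below. It is illustrative only, using the Python `cryptography` package's Fernet cipher as a stand-in for whatever scheme a real messaging service might use: the server only ever handles ciphertext, so the signature it computes can never match a database entry for the underlying image.

```python
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

image = b"...raw image bytes..."
plaintext_hash = hashlib.sha256(image).hexdigest()

# The sender encrypts with a key that only the customers hold.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(image)

# The server can hash only what passes through it: the ciphertext.
server_side_hash = hashlib.sha256(ciphertext).hexdigest()

# The known-image check cannot fire, because the two signatures differ.
assert server_side_hash != plaintext_hash
```

This is the crux of the "cryptowar" the article describes: without the key, no amount of hash matching on the server can reveal what the message contains.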