Gmail just got an awesome new security feature, one we should have had for years, even though most users will probably never see it at work. It may not have been possible before now, as the feature relies on deep-learning algorithms to protect users against a particularly fruitful attack vector for hackers.
You may have been advised over the years not to open email attachments that originate from unknown parties, and you may have heard how seemingly innocuous Office files turned out to contain malicious payloads capable of hacking the target's machine. Going forward, Google aims to reduce the instances where such emails actually reach their destinations.
That's where deep-learning algorithms come into play: they scan attachments arriving in your inbox in search of hidden attacks. Google has been using the new tool since late 2019, the company explained in a blog post, and since then it has increased its daily detection of Office documents with malicious scripts by 10%:
Our technology is especially helpful at detecting adversarial, bursty attacks. In these cases, our new scanner has improved our detection rate by 150%. Under the hood, our new scanner uses a distinct TensorFlow deep-learning model trained with TFX (TensorFlow Extended) and a custom document analyzer for each file type. The document analyzers are responsible for parsing the document, identifying common attack patterns, extracting macros, deobfuscating content, and performing feature extraction.
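To make that architecture a little more concrete, here is a minimal, hypothetical Python sketch of the flow Google describes: a per-file-type document analyzer turns raw bytes into a feature vector (macro presence, suspicious keywords, obfuscation hints), and a small TensorFlow model scores it. The OfficeDocAnalyzer class, the chosen features, and the tiny network are illustrative assumptions only, not Google's actual scanner.

```python
# Hypothetical sketch of a document analyzer + deep-learning classifier.
# Everything here (class names, features, model size) is an assumption
# for illustration; it is not Google's implementation.

import re
import numpy as np
import tensorflow as tf


class OfficeDocAnalyzer:
    """Toy analyzer: parses raw Office-document bytes into a feature vector."""

    SUSPICIOUS_KEYWORDS = [b"AutoOpen", b"Shell", b"CreateObject", b"powershell"]

    def extract_features(self, raw_bytes: bytes) -> np.ndarray:
        has_macro = 1.0 if b"vbaProject" in raw_bytes else 0.0
        keyword_hits = sum(raw_bytes.count(k) for k in self.SUSPICIOUS_KEYWORDS)
        # Crude obfuscation hint: long base64/hex-looking runs in the file.
        obfuscation_runs = len(re.findall(rb"[A-Za-z0-9+/]{80,}", raw_bytes))
        size_kb = len(raw_bytes) / 1024.0
        return np.array([has_macro, keyword_hits, obfuscation_runs, size_kb],
                        dtype=np.float32)


def build_classifier(num_features: int = 4) -> tf.keras.Model:
    """Small dense network standing in for the deep-learning model."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(num_features,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(malicious)
    ])
    # In a real pipeline this model would be trained (e.g. with TFX) on
    # labeled benign/malicious samples before being used to score files.
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model


if __name__ == "__main__":
    analyzer = OfficeDocAnalyzer()
    model = build_classifier()
    features = analyzer.extract_features(b"...raw .docm bytes would go here...")
    score = float(model(features[np.newaxis, :]).numpy()[0, 0])  # untrained, illustrative
    print(f"malicious-probability score: {score:.3f}")
```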
The scanner runs alongside the existing scanning technologies, and both deliver their verdicts to a decision engine that blocks malicious documents.
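That combination step can be pictured as a simple verdict aggregator. The sketch below is an assumption about how such a decision engine might work, with made-up scanner names and a made-up confidence threshold; Google has not published the actual logic.

```python
# Hypothetical decision engine: several scanners each return a verdict,
# and the attachment is blocked if any scanner flags it confidently enough.
# Scanner names and the threshold are illustrative assumptions.

from dataclasses import dataclass
from typing import List


@dataclass
class Verdict:
    scanner: str
    malicious: bool
    confidence: float  # 0.0 .. 1.0


def decision_engine(verdicts: List[Verdict], threshold: float = 0.5) -> bool:
    """Return True (block the document) if any scanner is confident it's malicious."""
    return any(v.malicious and v.confidence >= threshold for v in verdicts)


if __name__ == "__main__":
    verdicts = [
        Verdict("signature_scanner", malicious=False, confidence=0.2),
        Verdict("deep_learning_scanner", malicious=True, confidence=0.93),
    ]
    print("block attachment:", decision_engine(verdicts))
```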
Google notes that malicious documents represent 58% of the malware targeting Gmail users, which is why the technology is so important. The scanner works only on Office files, however, so you'll still have to rely on your wits when it comes to other kinds of attachments from unknown senders.
Google presented its findings at RSA 2020, where it detailed the new scanner in more depth.