Research papers on Microsoft Windows security
Top 6 email security best practices to protect against phishing attacks and business email compromise. What should IT and security teams be looking for in an email security solution to protect all their users, from frontline workers to the C-suite?
Security, privacy, and cryptography – Microsoft Research
Here are six tips to ensure your organization has a strong email security posture.

Guarding against supply chain attacks—Part 1: The big picture. Paying attention to every link in your supply chain is vital to protect your assets from supply chain attacks.

Patching as a social responsibility.

Deep learning rises: New methods for detecting malicious PowerShell. We adopted a deep learning technique initially developed for natural language processing and applied it to expand Microsoft Defender ATP's coverage of malicious PowerShell scripts, which remain a critical attack vector.

From unstructured data to actionable intelligence: Using machine learning for threat intelligence. Machine learning and natural language processing can automate the processing of unstructured text into insightful, actionable threat intelligence.
A case study in industry collaboration: Poisoned RDP vulnerability disclosure and response. Through a cross-company, cross-continent collaboration, we discovered a vulnerability, secured customers, and developed a fix, all while learning important lessons that we can share with the industry.
How Windows Defender Antivirus integrates hardware-based system integrity for informed, extensive endpoint protection. The deep integration of Windows Defender Antivirus with hardware-based isolation capabilities allows the detection of artifacts of attacks that tamper with kernel-mode agents at the hypervisor level.

New machine learning model sifts through the good to unearth the bad in evasive malware.
Most machine learning models are trained on a mix of malicious and clean features. Attackers routinely try to throw these models off balance by stuffing clean features into malware. Monotonic models are resistant against adversarial attacks because they are trained differently: they only look for malicious features.
To evade a monotonic model, an attacker would have to remove malicious features.
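The property described above can be illustrated with a minimal sketch (this is not Microsoft's actual model; the feature names and weights are hypothetical). A monotonic scorer assigns non-negative weights only to malicious features, so stuffing clean features into malware can never lower its score:

```python
# Minimal sketch of a monotonic maliciousness scorer.
# Feature names and weights below are hypothetical, for illustration only.
MALICIOUS_WEIGHTS = {
    "obfuscated_strings": 0.6,
    "writes_to_autorun_key": 0.8,
    "disables_defender": 0.9,
}

def monotonic_score(features):
    """Score only ever rises (or stays flat) as features are added,
    because clean features contribute exactly zero weight."""
    return sum(MALICIOUS_WEIGHTS.get(f, 0.0) for f in set(features))

malware = {"obfuscated_strings", "disables_defender"}
# Attacker stuffs in benign-looking features:
padded = malware | {"valid_signature", "popular_installer_name"}

# Padding with clean features cannot reduce the score...
assert monotonic_score(padded) >= monotonic_score(malware)
# ...only removing malicious features lowers it.
assert monotonic_score(malware - {"disables_defender"}) < monotonic_score(malware)
```

In practice, monotonicity is usually enforced during training rather than hard-coded, for example via monotone constraints in gradient-boosted tree libraries such as XGBoost; the dictionary-of-weights form here is just the simplest way to show the evasion-resistance argument.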