Microsoft established an AI ‘red team’ in 2018 to proactively address emerging risks in artificial intelligence. Operating like real-world threat actors, the team probes AI systems for vulnerabilities before attackers can exploit them. Microsoft recently released a whitepaper outlining the team’s key findings and the lessons learned from its work.
The whitepaper highlights the team’s focus on novel, AI-specific risks, including the challenges that generative AI introduces in modern applications, and it emphasizes that human expertise must be combined with automation to detect and mitigate these risks effectively. At the same time, traditional vulnerabilities such as outdated software dependencies and weak security engineering remain significant concerns, and judging their real-world impact still requires human intervention.
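To make the point about traditional vulnerabilities concrete, here is a minimal sketch of what an automated dependency audit might look like. It is illustrative rather than drawn from the whitepaper: the KNOWN_VULNERABLE table below is hypothetical, and a production scan would instead pull advisories from a maintained source (for example, via a tool like pip-audit).

```python
# A minimal sketch of an automated dependency audit.
# The advisory data here is hypothetical and for illustration only;
# a real scan would query a maintained vulnerability database.
from importlib.metadata import distributions

# Hypothetical advisories: package name -> versions with known issues.
KNOWN_VULNERABLE = {
    "requests": {"2.5.0", "2.5.1"},
    "pyyaml": {"5.3"},
}

def audit_installed_packages() -> list[str]:
    """Flag installed distributions whose version appears in the advisory table."""
    findings = []
    for dist in distributions():
        name = dist.metadata["Name"].lower()
        if dist.version in KNOWN_VULNERABLE.get(name, set()):
            findings.append(f"{name}=={dist.version} has a known advisory")
    return findings

if __name__ == "__main__":
    for finding in audit_installed_packages():
        print(finding)
```

A check like this is easy to automate, which is precisely why the whitepaper pairs automation with human review: a scanner can flag an outdated package, but deciding whether the flaw is exploitable in a given AI system still takes an expert.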
Microsoft stresses the need for continuous testing, updated practices, and a ‘break-fix’ cycle, in which newly discovered issues are repaired and the system is re-tested, to keep security measures effective over time. The red team also found that subject matter experts in fields like medicine and cybersecurity are essential for assessing risks that automated tooling alone can miss, and it identified cultural competence and emotional intelligence as crucial skills for this work.
Overall, the work of Microsoft’s AI red team demonstrates a proactive approach to securing AI systems and underscores how cybersecurity must evolve alongside advancing technology.