Tuesday, May 7, 2024
No menu items!
HomeCloud ComputingCloud CISO Perspectives: Early July 2023

Cloud CISO Perspectives: Early July 2023

Welcome to the first Cloud CISO Perspectives for July 2023. Today, I’ll be asking my colleague Royal Hansen, vice president of Privacy, Safety, and Security Engineering at Google, about AI, security, and risk topics.

This year has been a banner year for artificial intelligence (AI), and especially for AI and security. There’s been a huge surge of interest in AI and how it can be applied to many fields, including security. However, in the pursuit of progress within these new frontiers of innovation, there need to be clear industry security standards for building and deploying this technology in a responsible manner. 

I dive into Google’s progress in this area with Royal, who is exploring these topics at the Aspen Security Forum this week. I hope you all will find Royal’s answers to my questions informative.

As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.

aside_block[StructValue([(u’title’, u’Board of Directors Insights Hub’), (u’body’, <wagtail.wagtailcore.rich_text.RichText object at 0x3ec8119f6490>), (u’btn_text’, u’Visit the Hub’), (u’href’, u’https://cloud.google.com/solutions/security/board-of-directors’), (u’image’, <GAEImage: gcat small.jpg>)])]

The promise and perils of AI and security

Phil Venables: Let’s start by talking about our AI Red Team, which we feature in our new paper released yesterday at the Aspen Security Forum. Why does AI need a red team?  

Royal Hansen: I’m really excited about this. At Google, we believe that red teaming — friendly hackers tasked with looking for security weaknesses in technology — will play a decisive role in preparing every organization for attacks on AI systems. Google has been an AI-first company for many years now, and this paper shows how red teaming is a core component of securing AI technologies. 

It focuses on three important areas: 1) what red teaming in the context of AI systems is and why it is important; 2) what types of attacks AI red teams simulate; and 3) lessons we have learned that we can share with others.

aside_block[StructValue([(u’title’, u’Read the AI Red Teamu2019s new report’), (u’body’, <wagtail.wagtailcore.rich_text.RichText object at 0x3ec8119f64d0>), (u’btn_text’, u’Download the report’), (u’href’, u’https://services.google.com/fh/files/blogs/google_ai_red_team_digital_final.pdf’), (u’image’, <GAEImage: gcat small.jpg>)])]

PV: Our team has a singular mission, to simulate threat actors targeting AI deployments. What kinds of attacks is the red team simulating? 

RH: The AI Red Team is focusing squarely on attacks on AI systems. We detail in the report six tactics, techniques, and procedures (TTP) that attackers are likely to use against AI: prompt attacks, extraction of training data, backdooring the AI model, adversarial examples to trick the model, data poisoning, and exfiltration. 

Since AI systems often exist as part of a larger whole, we do stress that AI Red Team TTPs should be used along with traditional red team exercises. A good example of this is how our AI Red Team has worked with our Trust and Safety team to help prevent content abuse.

PV: Can you talk about what we learned from the report? 

RH: Sure. I’ll start with some tactical lessons.

We know that to protect against many kinds of attacks, traditional security controls such as ensuring the systems and models are properly locked down can significantly mitigate risk. This is true in particular for protecting the integrity of AI models throughout their lifecycle, which can help prevent data poisoning and backdoor attacks. 

It was helpful to learn that many attacks on AI systems can be detected in the same way as traditional attacks. But others — including prompt attacks and content issues — may require layering multiple safety models. Traditional security philosophies, such as validating and sanitizing both input and output to the models, still apply in the AI space.

From a higher-level point of view, addressing red team findings can be challenging, and some attacks may not have simple fixes. We encourage red teams to partner with security and AI subject matter experts for realistic end-to-end adversarial simulations.

PV: Let’s pull back the lens a bit and look at how we got here with AI, which has really dominated the technology conversation this year. How would you describe its evolution — particularly during your time at Google? 

RH: AI isn’t new — we’ve been incorporating AI into our products for more than a decade. If you’ve been using Google Search, or Translate, or Maps, or Gmail, or the Play Store for apps, you’ve been using and benefiting from AI for years. 

One of our AI milestones includes using machine learning to help detect anomalies on our internal networks back in 2011. Today, those capabilities have evolved and regularly help our red teams discover and test sophisticated hacking techniques against Google’s own systems. 

In 2014, we started a Machine Learning Fairness team. In 2018, we adopted our AI principles, which led to spearheading the movement to adopt responsible AI, based on mitigating complexities and risks, and also improving people’s lives while addressing social challenges. 

This year, we built on our collaborative approach to cybersecurity by launching our Secure AI Framework (SAIF). SAIF is inspired by best practices for security that we’ve applied to software development, while incorporating our understanding of security megatrends and risks specific to AI systems.

Technology can create new threats, but it can also help us fight them. AI can often help counter the issues created by AI. It could even give security defenders the upper hand over attackers for the first time since the creation of the internet.

SAIF is designed to help mitigate risks specific to AI systems like stealing the model, poisoning the training data, injecting malicious inputs through prompt injection, and extracting confidential information in the training data

So, while there’s a lot of discussion about generative AI in cybersecurity – and beyond – right now, we’ve been using and learning from AI more broadly in our day-to-day work for years. 

PV: How can we ensure a higher quality of online information, particularly in critical situations such as moments of crisis and war, or elections? How are you thinking about security and protections in these moments that matter, particularly in the age of AI? 

RH: Technology can create new threats, but it can also help us fight them. AI can often help counter the issues created by AI. It could even give security defenders the upper hand over attackers for the first time since the creation of the internet. 

For example, Gmail uses AI right now to automatically block more than 99.9% of malware, phishing, and spam, and protects more than 1.5 billion inboxes. AI can help identify and track misinformation, disinformation, and manipulated media. One notable example of that happened last year, when Mandiant discovered and sounded the alarm about the AI-generated “deepfake” video impersonating Ukrainian President Volodymyr Zelensky surrendering to Russia.

We already use machine learning to identify toxic comments and problematic videos. More technical AI innovations we’re working on include watermarking AI-generated images, and creating tools to evaluate online information — like the upcoming “About this Image” feature in Google Search. We’ve also joined the Partnership on AI’s Responsible Practices for Synthetic Media, which promotes responsible practices in the development, creation, and sharing of media created with generative AI.

While frontier AI models offer tremendous promise to improve the world… their development and deployment will require significant care — including potential new regulatory requirements.

Looking ahead, our challenge is to put appropriate controls in place to prevent malicious use of AI and to work collectively to address bad actors, while maximizing the potential benefits of AI to stay at the front of the global competitiveness race. 

PV: What work is Google doing to manage risks we might face from AI?

RH: We think about AI and security primarily through two lenses. First, using AI to enhance safety and security, and second, securing AI from attack.

While frontier AI models offer tremendous promise to improve the world, governments and industry agree that appropriate guardrails are required on the policy level, on the business level, and on the technology level. 

Their development and deployment will require significant care — including potential new regulatory requirements. We’ve already seen important contributions to these efforts by the U.S. and U.K. governments, the European Union, the G7 through the Hiroshima AI Process, and others. To build on these efforts, further work is needed on safety standards and evaluations to ensure advanced AI systems are developed and deployed responsibly.

The lines are blurring between safety and security in ways that require us to collaborate across cyber and trust and safety, across consumer and enterprise, across public and private sector, national and international.

With the stakes so high, we’re calling on governments, the private sector, academia, and civil society to work together on a responsible AI policy agenda. And to enable progress in AI, we must focus on three key areas: opportunity, responsibility, and security.

PV: How have you seen Google’s approach to monitoring and responding to cyberattacks change? Where do we stand now? How has user protection evolved?

RH: Keeping users safe online is more complex and urgent than ever before. We’re seeing an increasing number of new malware families, financially motivated attacks such as ransomware, supply chain attacks as well as rising cyber attacks from nation-state-backed actors against critical infrastructure. This has brought a decades-long problem into focus for policymakers and enterprise leadership as it has disrupted our way of life and made the stakes higher than ever.

We need to expand our thinking about the threat landscape to secure users, governments, and enterprises holistically from ever-changing future attacks.

PV: How has your experience been at the Aspen Security Forum this week? What were some of the key takeaways? 

RH: Events like this are a great way to hear from some of the best minds in security. I came eager to listen, learn, and return to Google with lessons that will make us a better partner in privacy, safety, and security. It was a jam-packed week connecting with new and old colleagues across the private and public sector. 

I’m struck by the many different dimensions of AI and security here. The lines are blurring between safety and security in ways that require us to collaborate across cyber and trust and safety, across consumer and enterprise, across public and private sector, national and international. The SAIF framework has been a great way to help organizations begin building this approach into their AI plans.

aside_block[StructValue([(u’title’, u’Hear monthly from our Cloud CISO in your inbox’), (u’body’, <wagtail.wagtailcore.rich_text.RichText object at 0x3ec8283053d0>), (u’btn_text’, u’Subscribe today’), (u’href’, u’https://go.chronicle.security/cloudciso-newsletter-signup?utm_source=cgc-blog&utm_medium=blog&utm_campaign=FY23-Cloud-CISO-Perspectives-newsletter-blog-embed-CTA&utm_content=-&utm_term=-‘), (u’image’, <GAEImage: gcat small.jpg>)])]

In case you missed it

Here are the latest updates, products, services, and resources from our security teams so far this month: 

Get ready for Google Cloud Next: Discounted early-bird registration for Google Cloud Next ‘23 has sold out, but you can still register for the conference. This year’s Next comes at an exciting time, with the emergence of generative AI, breakthroughs in cybersecurity, and more. It’s clear that there has never been a better time to work in the cloud industry. Check out our scheduled security sessions, and register now.

Boards should bring on experts to help raise their cybersecurity IQ: In our second Perspectives on Security for the Board report, learn more on how boards of directors who give a seat to security can better influence their organizations’ migration to the cloud, respond to the latest threats, and use AI responsibly. Read more.

How Safe Browsing helped pave the way to our passwordless future: Launched in 2005 as an anti-phishing plugin for Firefox, today Google Safe Browsing protects more than 5 billion devices across the world.  It’s also a quintessential demonstration of how tech companies can use their insight-at-scale to improve security. Read more.

How Google Cloud NAT helped strengthen Macy’s security: Macy’s is well known for its high-end fashion worldwide. It’s less known for the strong measures it takes to ensure its customers’ data security. When Macy’s decided to move its infrastructure from on-premises to Google Cloud, it required the move be done without sacrificing security or degrading the user experience. Read more.

Google Workspace earns Dutch government approval: The Dutch Ministry of Education affirmed to the Dutch Parliament that Google has delivered on the commitments it made as part of the data protection impact assessment, conducted by the Dutch government and education sector representatives. Public sector entities and educational institutions in the Netherlands can continue to use Google Workspace and Google Workspace for Education with renewed confidence. Read more.

How to configure Workload Identity Federation for GitHub and Terraform Cloud: Workload Identity Federation can be integrated with external providers, such as Gitlab, GitHub actions, and Terraform Cloud. We show how tokens issued by external providers can be mapped to various attributes, and how they can be used to evaluate conditions to restrict which identities can authenticate. Read more.

Google Cloud, CyberGRX partner to help scale, accelerate assessments: CyberGRX provides a comprehensive and objective view of Google Cloud’s security posture based on a number of local compliance regime requirements and the MITRE ATT&CK framework. Our collaboration can help scale and accelerate risk assessments and due diligence services. Read more.

News from Mandiant

The GRU’s disruptive playbook: Mandiant has been tracking how Russian military intelligence (GRU) uses a standard five-phase playbook in its disruptive operations against Ukraine, with the likely goal of deliberately increasing the speed, scale, and intensity at which the GRU can conduct offensive cyber operations while also minimizing the odds of detection. Read more.

Threat actors nurse their nostalgia for USB drives in new attacks: In the first half of 2023, Mandiant Managed Defense observed a threefold increase in the number of attacks using infected USB drives to steal secrets. In this blog post, we detail two USB-based cyber espionage campaigns from this year. Read more.

Defend against latest Active Directory Certificate Services threats: In this hardening guide, Mandiant explains how organizations can better defend against cyberattacks that target their Active Directory Certificate Services. Read more.

Escalating privileges via third-party Windows installers: Learn how Mandiant’s Red Team researches and exploits zero-day vulnerabilities in third-party Windows installers and what software developers should do to reduce the risk of exploitation. We also introduce a new tool to simplify enumeration of cached Microsoft Software Installer files. Read more.

Google Cloud Security podcasts

We launched a weekly podcast focusing on Cloud Security in February 2021. Hosts Anton Chuvakin and Timothy Peacock chat with cybersecurity experts about the most important and challenging topics facing the industry today. Earlier this month, they discussed:

Are you really using the cloud securely? “The cloud is secure, you’re just not using it securely” is a common aphorism among the cloud security set. How much truth is there behind the words? From the practical meaning of using the cloud securely to the growing interest in SaaS, we deflate myths and debate cloud security realities with Steve Riley, field chief technology officer at Netskope. Listen here.

How CISO cloud dreams and realities collide: What are the realistic cloud risks today for an organization using the public cloud? And does the cloud really make security “easier”? We discuss the chasm between the cloud realities and cloud myths with Rick Doten, vice president of Information Security at Centene Corporation and CISO of Carolina Complete Health. Listen here.

Just the facts on building enterprise threat intelligence capability: If threat intelligence was easy, more organizations would be doing it — yet the fact is that many organizations struggle to operationalize threat intelligence. So we tracked down John Doyle, principle intelligence enablement consultant on our Mandiant team, to explain how businesses can better use threat intel, and explore the new intelligence class he created that’s focused on building enterprise threat intelligence capabilities. Listen here.

Mandiant podcasts

Threat Trends: A requirements-driven approach to cyber threat intelligence: Dr. Jamie Collier, senior threat intelligence advisor at Mandiant, joins host Luke McNamara to discuss the recent white paper from Mandiant on developing a requirements-driven approach to intelligence, challenges that organizations face in this area, and the importance of recurring stakeholder feedback to a well-functioning cyber threat intelligence team. Listen here.

To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in two weeks with more security-related updates from Google Cloud.

Cloud BlogRead More

RELATED ARTICLES

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Most Popular

Recent Comments