We recommend that you upgrade your Amazon RDS Custom for SQL Server instances to the latest GDRs using the Amazon RDS Management Console, the AWS SDK, or the AWS CLI. Learn more about upgrading your database instances in the Amazon RDS Custom User Guide.
Welcome to the second Cloud CISO Perspectives for August 2025. Today, David Stone and Marina Kaganovich, from our Office of the CISO, talk about the serious risk of cyber-enabled fraud — and how CISOs and boards can help stop it.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
How CISOs and boards can help fight cyber-enabled fraud
By David Stone, director, Office of the CISO, and Marina Kaganovich, executive trust lead, Office of the CISO
Cybercriminals are using IT to rapidly scale fraudulent activity — and directly challenge an organization’s health and reputation. Known as cyber-enabled fraud (CEF), it’s a major revenue stream for organized crime, making it a top concern for board members, CISOs, and other executive leaders.
The financial toll of cyber-enabled fraud on businesses is staggering. The FBI noted that cyber-enabled fraud cost $13.7 billion in 2024, a nearly 10% increase from 2023, and represented 83% of all financial losses reported to the FBI in 2024.
“Regions that are highly cashless and digital-based are more vulnerable to the money-laundering risks of cyber-enabled fraud,” said the international Financial Action Task Force in 2023. “CEF can have [a] significant and crippling financial impact on victims. But the impact is not limited to monetary losses; it can have devastating social and economic implications.”
Tactics used in cyber-enabled fraud, including “ransomware, phishing, online scams, computer intrusion, and business email compromise,” are frequently perceived as posing “high” or “very high” threats, according to Interpol’s 2022 Global Crime Trend Report.
Cyber-enabled fraud drives a complex and dangerous ecosystem, where illicit activities intersect and fuel each other in a vicious cycle. For example, the link between cybercrime and human trafficking is becoming more pronounced, with criminal networks often using the funds obtained through cyber-enabled fraud to fuel operations where trafficked workers are forced to perpetrate “romance baiting” cryptocurrency scams.
Disrupting this ecosystem is a top reason for combating cyber-enabled fraud, yet most efforts to do so are currently fragmented because data, systems, and organizational structures have been siloed. We often see organizations use a myriad of tools and platforms across divisions and departments, which results in inconsistent rule application.
Those weaknesses can limit visibility and hinder comprehensive detection and prevention efforts. Fraud programs in their current state are time-consuming and resource-intensive, and can feel like an endless game of whack-a-mole for the folks on the ground.
At Google Cloud’s Office of the CISO, we believe that a strategic shift toward a proactive, preventive mindset is crucial to helping organizations take stronger action to address cyber-enabled fraud. That starts with a better understanding of the common fraudulent activities that can threaten your business, such as impersonation, phishing, and account takeovers.
From there, it’s essential to build a scalable risk assessment using a consistent approach. We recommend using the Financial Services Information Sharing and Analysis Center’s Cyber Fraud Prevention Framework, which ensures a common lexicon and a unified approach across your entire enterprise. The final piece involves meticulously mapping out the specific workflows where fraudulent activity is most likely to occur.
By categorizing these activities into distinct phases, you can identify the exact points where controls can be implemented, breaking the chain before a threat can escalate into a breach.
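As a toy illustration of that phase-to-control mapping (the phase names and controls below are invented examples for the sketch, not a standard taxonomy or a Google Cloud product feature):

```python
# Illustrative sketch: mapping common fraud activities to candidate controls.
# Activity names and controls are hypothetical examples only.

FRAUD_PHASES = {
    "impersonation": ["brand monitoring", "DMARC enforcement"],
    "phishing": ["link scanning", "security-key MFA"],
    "account_takeover": ["anomaly detection", "step-up authentication"],
    "payment_fraud": ["velocity limits", "out-of-band confirmation"],
}

def controls_for(observed_activities):
    """Return the distinct controls that apply to the observed activities,
    preserving first-seen order."""
    controls = []
    for activity in observed_activities:
        for control in FRAUD_PHASES.get(activity, []):
            if control not in controls:
                controls.append(control)
    return controls

print(controls_for(["phishing", "account_takeover"]))
```

A mapping like this makes the "exact points where controls can be implemented" explicit and reviewable, rather than implicit in tooling spread across departments.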
In parallel, consider the types of fraud-prevention capabilities that may already be available to you. Our recent paper on tackling scams and fraud together describes Google Cloud’s efforts in this space, some of which are highlighted below.
Remove scams and fraudulent links, including phishing and executive impersonation, from Google Ads and Google Workspace services through the Financial Services Priority Flagger Program.
Combating cyber-enabled fraud is a key task that CISOs and boards of directors can collaborate on to ensure alignment with executive leadership, especially given the financial and reputational risks. Regular dialogue between boards and CISOs can help build a unified, enterprise-wide strategy that moves from siloed departments and disparate tools to a proactive defense model.
Boardrooms should hear regularly from CISOs and other security experts who understand the intersection of fraud and cybersecurity, and the issues at stake for security practitioners and risk managers. We also recommend that boards regularly ask CISOs about the threat landscape, the fraud risks the business faces, and how best to mitigate those risks.
In case you missed it
Here are the latest updates, products, services, and resources from our security teams so far this month:
Security Summit 2025: Enabling defenders, securing AI innovation: At Security Summit 2025, we’re sharing new capabilities to help secure your AI initiatives, and to help you use AI to make your organization more secure. Read more.
Introducing Cloud HSM as an encryption key service for Workspace CSE: To help highly-regulated organizations meet their encryption key service obligation, we are now offering Cloud HSM for Google Workspace CSE customers. Read more.
From silos to synergy: New Compliance Manager, now in preview: Google Cloud Compliance Manager, now in preview, can help simplify and enhance how organizations manage security, privacy, and compliance in the cloud. Read more.
Going beyond DSPM to protect your data in the cloud, now in preview: Our new DSPM offering, now in preview, provides end-to-end governance for data security, privacy, and compliance. Here’s how it can help you. Read more.
Google named a Leader in IDC MarketScape: Worldwide Incident Response 2025 Vendor Assessment: Mandiant, a core part of Google Cloud Security, can empower organizations to navigate critical moments, prepare for future threats, build confidence, and advance their cyber defense programs. Read more.
A fuzzy escape: Vulnerability research on hypervisors: Follow the Cloud Vulnerability Research (CVR) team on their journey to find a virtual machine escape bug. Read more.
Please visit the Google Cloud blog for more security stories published this month.
Threat Intelligence news
PRC-nexus espionage hijacks web traffic to target diplomats: Google Threat Intelligence Group (GTIG) has identified a complex, multifaceted espionage campaign targeting diplomats in Southeast Asia and other entities globally, which we attribute to the People’s Republic of China (PRC)-nexus threat actor UNC6384. Read more.
Analyzing the CORNFLAKE.V3 backdoor: Mandiant Threat Defense has detailed a financially motivated operation in which threat actors work together. One threat actor, UNC5518, has been using the ClickFix technique to gain initial access, and another, UNC5774, has used the CORNFLAKE.V3 backdoor to deliver additional payloads. Read more.
Please visit the Google Cloud blog for more threat intelligence stories published this month.
Now hear this: Podcasts from Google Cloud
Cyber-resiliency for the rest of us: Errol Weiss, chief security officer, Health-ISAC, joins hosts Anton Chuvakin and Tim Peacock to chat about making organizations more digitally resilient, shifting from a cybersecurity perspective to one that’s broader, and how to increase resilience given tough budget constraints. Listen here.
Linux security, and the detection and response disconnect: Craig Rowland, founder and CEO, Sandfly Security, joins Anton and Tim to discuss the most significant security blind spots on Linux, and the biggest operational hurdles teams face when trying to conduct incident response across distributed Linux environments. Listen here.
Defender’s Advantage: How cybercriminals view AI tools: Michelle Cantos, GTIG senior analyst, joins host Luke McNamara to discuss the latest trends and use cases for illicit AI tools being sold by threat actors in underground marketplaces. Listen here.
Behind the Binary: Scaling bug bounty programs: Host Josh Stroschein is joined by Jared DeMott to discuss managing bug bounty programs at scale and what goes into a good bug report. Listen here.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in a few weeks with more security-related updates from Google Cloud.
The promise of Google Kubernetes Engine (GKE) is the power of Kubernetes with ease of management, including planning and creating clusters, deploying and managing applications, configuring networking, ensuring security, and scaling workloads. However, when it comes to autoscaling workloads, customers tell us the fully managed mode of operation, GKE Autopilot, hasn’t always delivered the speed and efficiency they need. That’s because autoscaling a Kubernetes cluster involves creating and adding new nodes, which can sometimes take several minutes. That’s just not good enough for high-volume, fast-scale applications.
Enter the container-optimized compute platform for GKE Autopilot, a completely reimagined autoscaling stack for GKE that we introduced earlier this year. In this blog, we take a deeper look at autoscaling in GKE Autopilot, and how to start using the new container-optimized compute platform for your workloads today.
Understanding GKE Autopilot and its scaling challenges
With GKE Autopilot, the fully managed mode of Kubernetes, users are primarily responsible for their applications, while GKE takes on the heavy lifting of managing nodes and node pools, creating new nodes, and scaling applications. With traditional Autopilot, if an application needed to scale quickly, GKE first needed to provision new nodes onto which the application could scale, which sometimes took several minutes.
To work around this, users often employed techniques like “balloon pods” — creating dummy pods with low priority to hold onto nodes — to help ensure immediate capacity for demanding scaling use cases. However, this approach is costly, since it means paying for idle resources, and it is also difficult to maintain.
Introducing the container-optimized compute platform
We developed the container-optimized compute platform with a clear mission: to provide you with near-real-time, vertically and horizontally scalable compute capacity precisely when you need it, at optimal price and performance. We achieved this through a fundamental redesign of GKE’s underlying compute stack.
The container-optimized compute platform runs GKE Autopilot nodes on a new family of virtual machines that can be dynamically resized while they are running, starting from fractions of a CPU, all without disrupting workloads. To improve the speed of scaling and resizing, GKE clusters now also maintain a pool of dedicated pre-provisioned compute capacity that can be automatically allocated to workloads in response to increased resource demands. Importantly, because GKE Autopilot charges only for the compute capacity you request, this pre-provisioned capacity does not impact your bill.
The result is a flexible compute platform that provides capacity where and when it’s required. Key improvements include:
Up to 7x faster pod scheduling time compared to clusters without container-optimized compute
Significantly improved application response times for applications with autoscaling enabled
Introduction of in-place pod resize in Kubernetes 1.33, allowing for pod resizing without disruption
The container-optimized compute platform also includes a pre-enabled high-performance Horizontal Pod Autoscaler (HPA) profile, which delivers:
Highly consistent horizontal scaling reaction times
Up to 3x faster HPA calculations
Higher resolution metrics, leading to improved scheduling decisions
Accelerated performance for up to 1000 HPA objects
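For context on what the HPA is computing faster, its core scaling rule is simple. The sketch below shows the desired-replica calculation; the 10% tolerance matches the Kubernetes default, but the real controller additionally applies stabilization windows and per-pod metric aggregation:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     tolerance=0.1):
    """Kubernetes HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    with no change while the ratio stays inside the tolerance band."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # close enough to target: don't churn
    return math.ceil(current_replicas * ratio)

# 4 replicas averaging 900m CPU against a 500m target scale to 8.
print(desired_replicas(4, 900, 500))  # → 8
```

Faster HPA calculations and higher-resolution metrics shorten the loop between a metric crossing its target and this formula producing a new replica count.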
All these features are now available out of the box in GKE Autopilot 1.32 or later.
The power of the new platform is evident in demonstrations where replica counts are rapidly scaled, showcasing how quickly new pods get scheduled.
How to leverage container-optimized compute
To benefit from these improvements in GKE Autopilot, simply create a new GKE Autopilot cluster based on GKE Autopilot 1.32 or later.
If your existing cluster is on an older version, upgrade it to 1.32 or newer to benefit from the container-optimized compute platform’s new features.
To optimize performance, we recommend that you utilize the general purpose compute class for your workload. While the container-optimized compute platform supports various types of workloads, it works best with services that require gradual scaling and small (2 CPU or less) resource requests like web applications.
While the container-optimized compute platform is versatile, it is not currently suitable for specific deployment types:
One-pod-per-node deployments, such as anti-affinity situations
Batch workloads
The container-optimized compute platform marks a significant leap forward in improving application autoscaling within GKE and will unlock more capabilities in the future. We encourage you to try it out today in GKE Autopilot.
Editor’s note: Target set out to modernize its digital search experience to better match guest expectations and support more intuitive discovery across millions of products. To meet that challenge, they rebuilt their platform with hybrid search powered by filtered vector queries and AlloyDB AI. The result: a faster, smarter, more resilient search experience that’s already improved product discovery relevance by 20% and delivered measurable gains in performance and guest satisfaction.
The search bar on Target.com is often the first step in a guest’s shopping journey. It’s where curiosity meets convenience and where Target has the opportunity to turn a simple query into a personalized, relevant, and seamless shopping experience.
Our Search Engineering team takes that responsibility seriously. We wanted to make it easier for every guest to find exactly what they’re looking for — and maybe even something they didn’t know they needed.
That meant rethinking search from the ground up.
We set out to improve result relevance, support long-tail discovery, reduce dead ends, and deliver more intuitive, personalized results.
As we pushed the boundaries of personalization and scale, we began reevaluating the systems that power our digital experience. That journey led us to reimagine search using hybrid techniques that bring together traditional and semantic methods and are backed by a powerful new foundation built with AlloyDB AI.
Hybrid search is where carts meet context
Retail search is hard. You’re matching guest expectations, which can sometimes be expressed in vague language, against an ever-changing catalog of millions of products. Now that generative AI is reshaping how customers engage with brands, we know traditional keyword search isn’t enough.
That’s why we built a hybrid search platform combining classic keyword matching with semantic search powered by vector embeddings. It’s the best of both worlds: exact lexical matches for precision and contextual meaning for relevance. But hybrid search also introduces technical challenges, especially when it comes to performance at scale.
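As a rough illustration of the "best of both worlds" idea (the scoring functions, documents, and 50/50 weighting below are invented for this sketch and are not Target's implementation), a hybrid score can blend a lexical match with vector similarity:

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def keyword_score(query_terms, doc_terms):
    """Toy lexical score: fraction of query terms present in the document."""
    return sum(t in doc_terms for t in query_terms) / len(query_terms)

def hybrid_score(query_terms, query_vec, doc, alpha=0.5):
    """Blend lexical and semantic signals; alpha weights the lexical side."""
    return (alpha * keyword_score(query_terms, doc["terms"])
            + (1 - alpha) * cosine(query_vec, doc["vec"]))

doc_a = {"terms": {"water", "bottle"}, "vec": [1.0, 0.0]}
doc_b = {"terms": {"jacket"}, "vec": [0.0, 1.0]}
print(hybrid_score(["water", "bottle"], [1.0, 0.0], doc_a))  # → 1.0
```

An exact keyword hit with a matching embedding scores highest; a document that matches only semantically, or only lexically, still surfaces but ranks lower.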
Fig. 1: Hybrid Search blends two powerful approaches to help guests find the most relevant results
Choosing the right database for AI-powered retrieval
Our goals were to surface semantically relevant results for natural language queries, apply structured filters like price, brand, or availability, and deliver fast, personalized search results even during peak usage times. So we needed a database that could power our next-generation hybrid search platform by supporting real-time, filtered vector search across a massive product catalog, while maintaining millisecond-level latency even during peak demand.
We did this by using a multi-index design that yields highly relevant results by fusing the flexibility of semantic search with the precision of keyword-based retrieval. In addition to retrieval, we developed a multi-channel relevance framework that dynamically modifies ranking tactics in response to contextual cues like product novelty, seasonality, personalization and other relevance signals.
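One common way to fuse results from multiple indexes into a single ranking (the post does not specify Target's exact fusion method, so this is an illustrative stand-in) is reciprocal rank fusion, where each list contributes a score that decays with rank:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists: each list contributes 1/(k + rank) per
    item. k=60 is the damping constant commonly used with RRF."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["A", "B", "C"]   # ranked output of a lexical index
vector_hits = ["B", "C", "D"]    # ranked output of a semantic index
print(reciprocal_rank_fusion([keyword_hits, vector_hits]))  # → ['B', 'C', 'A', 'D']
```

Items that rank well in both lists (here "B") rise to the top, which is the behavior a multi-index design wants before contextual re-ranking signals are applied.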
Fig. 2: High-level architecture of the services being built within Target
We had been using a different database for similar workloads, but it required significant tuning to handle filtered approximate nearest neighbor (ANN) search at scale. As our ambitions grew, it became clear we needed a more flexible, scalable backend that also provided the highest quality results with the lowest latency. We took this problem to Google to explore the latest advancements in this area, and of course, Google is no stranger to search!
AlloyDB for PostgreSQL stood out, as Google Cloud had infused the underlying techniques from Google.com search into the product to enable any organization to build high quality experiences at scale. It also offered PostgreSQL compatibility with integrated vector search, the ScaNN index, and native SQL filtering in a fully managed service. That combination allowed us to consolidate our stack, simplify our architecture, and accelerate development. AlloyDB now sits at the core of our search system to power low-latency hybrid retrieval that scales smoothly across seasonal surges and for millions of guest search sessions every day while ensuring we serve more relevant results.
Filtered vector search at scale
Guests often search for things like “eco-friendly water bottles under $20” or “winter jackets for toddlers.” These queries blend semantic nuance with structured constraints like price, category, brand, size, or store availability. With AlloyDB, we can easily run hybrid queries that combine vector similarity and SQL filters without sacrificing speed or relevance.
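In spirit, a filtered vector query applies the structured predicate first and ranks the survivors by similarity, much like a SQL WHERE clause combined with a vector-distance ORDER BY. A toy, self-contained sketch (the catalog, vectors, and filter are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

CATALOG = [
    {"name": "steel water bottle",    "price": 14.99, "eco": True,  "vec": [0.9, 0.1]},
    {"name": "plastic water bottle",  "price": 24.99, "eco": False, "vec": [0.8, 0.2]},
    {"name": "toddler winter jacket", "price": 34.99, "eco": True,  "vec": [0.1, 0.9]},
]

def filtered_vector_search(query_vec, predicate, k=5):
    """Apply the structured filter, then rank survivors by similarity."""
    candidates = [p for p in CATALOG if predicate(p)]
    return sorted(candidates, key=lambda p: -cosine(query_vec, p["vec"]))[:k]

# "eco-friendly water bottles under $20"
results = filtered_vector_search([1.0, 0.0],
                                 lambda p: p["eco"] and p["price"] < 20)
print([p["name"] for p in results])  # → ['steel water bottle']
```

At Target's scale the hard part is doing this without scanning the whole catalog, which is where an index that evaluates filters during the ANN search (rather than post-filtering) earns its keep.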
The results:
Up to 10x faster query execution compared to our previous stack
Product discovery relevance improved by 20%
Half as many “no results” queries
These improvements have extended deeper into our operations. We’ve reduced vector query response times by 60%, which resulted in a significant improvement in the guest experience. During high-traffic events, AlloyDB has consistently delivered more than 99.99% uptime, giving us confidence that our digital storefront can keep pace with demand when it matters most. Since search is an external-facing, mission-critical service, we deploy multiple AlloyDB clusters across multiple regions, allowing us to achieve even higher effective reliability. These reliability gains have also led to fewer operational incidents, so our engineering teams can devote more time to experimentation and feature delivery.
Fig. 3: AlloyDB AI helps Target combine structured and unstructured data with SQL and vector search. For example, the improved search experience now delivers more seasonally relevant styles (e.g., long sleeves) on page one!
AlloyDB’s cloud-first architecture and features give us the flexibility to handle millions of filtered vector queries per day and support thousands of concurrent users – no need to overprovision or compromise performance.
Building smarter search with AlloyDB AI
What’s exciting is how quickly we can iterate. AlloyDB’s managed infrastructure and PostgreSQL compatibility let us move fast and experiment with new ranking models, seasonal logic, and even AI-native features like:
Semantic ranking in SQL: We can prioritize search results based on relevance to the query intent.
Natural language support: Our future interfaces will let guests search the way they speak – no more rigid filters or dropdowns.
In addition to the state-of-the-art ScaNN vector index, AlloyDB offers state-of-the-art models and natural language support. Google’s commitment to and leadership in AI, infused into AlloyDB, has given us the confidence to evolve our service with the pace of the overall AI and data landscape.
The next aisle over: What’s ahead for Target
Search at Target is evolving into something far more dynamic – an intelligent, multimodal layer that helps guests connect with what they need, when and how they need it. As our guests engage across devices, languages, and formats, we want their experience to feel seamless and smart.
With AlloyDB AI and Google Cloud’s rapidly evolving data and AI stack, we’re confident in our ability to stay ahead of guest expectations and deliver more personalized, delightful shopping moments every day.
Note from Amit Ganesh, VP of Engineering at Google Cloud:
Target’s journey is a powerful example of how enterprises are already transforming search experiences using AlloyDB AI. As Vishal described, filtered vector search is unlocking new levels of relevance and scale. At Google Cloud, we’re continuing to expand the capabilities of AlloyDB AI to support even more intelligent, agent-driven, multimodal applications. Here’s what’s new:
Agentspace integration: Developers can now build AI agents that query AlloyDB in real time, combining structured data with natural language reasoning.
AlloyDB natural language: Applications can securely query structured data using plain English (or French, or 250+ other languages) backed by interactive disambiguation and strong privacy controls.
Enhanced vector support: With AlloyDB’s ScaNN index and adaptive query filtering, vector search with filters now performs up to 10x faster.
AI query engine: SQL developers can use natural language expressions to embed Gemini model reasoning directly into queries.
Three new models: AlloyDB AI now supports Gemini’s text embedding model, a cross-attention reranker, and a multimodal model that brings vision and text into a shared vector space.
These capabilities are designed to accelerate innovation – whether you’re improving product discovery like Target or building new agent-based interfaces from the ground up.
Amazon Connect now offers new generative text-to-speech voices, enabling you to deliver natural, human-like, and expressive conversations with your customers. With this launch, you have access to 20 generative-enhanced voices across languages such as English, French, Spanish, German, and Italian. These voices can be used to deliver text-to-speech experiences like welcome messages and policy information, or to power your dynamic conversational AI experiences. These capabilities can be configured directly in the drag-and-drop flow designer using the “Set Voice” flow block, or through public APIs.
Starting today, Amazon EC2 High Memory U7i instances with 12TiB of memory (u7i-12tb.224xlarge) are available in the Asia Pacific (Seoul) Region. U7i-12tb instances are part of the AWS 7th generation of instances and are powered by custom 4th Generation Intel Xeon Scalable processors (Sapphire Rapids). U7i-12tb instances offer 12TiB of DDR5 memory, enabling customers to scale transaction-processing throughput in fast-growing data environments.
U7i-12tb instances offer 896 vCPUs, support up to 100Gbps Elastic Block Storage (EBS) for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.
Amazon OpenSearch Serverless has added support for attribute-based access control (ABAC) for data plane APIs, making it easier to manage access control for data read and write operations. This feature is part of an AWS campaign to drive consistent adoption of AWS Identity and Access Management (IAM) features across all AWS services. Customers can use identity policies in IAM to define permissions and control who has access to the data in Amazon OpenSearch Serverless collections.
Amazon OpenSearch Serverless now also supports resource control policies (RCPs). An RCP is a new type of authorization policy managed in AWS Organizations that allows OpenSearch Serverless customers to centrally enforce organization-wide preventative controls across the resources in their organization, without needing to update individual resource-based policies. Refer to the documentation for examples.
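For illustration, an identity-based policy of roughly this shape grants data-plane access to a single collection, with a tag condition in the ABAC style. The account ID, collection ID, and tag key are placeholders; verify the exact actions and condition keys against the OpenSearch Serverless documentation before use:

```python
import json

# Sketch of an IAM identity-based policy for OpenSearch Serverless data
# access. aoss:APIAccessAll is the data-plane API permission; the ARN and
# the tag used in the Condition are hypothetical placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "aoss:APIAccessAll",
        "Resource": ("arn:aws:aoss:us-east-1:111122223333:"
                     "collection/example-collection-id"),
        "Condition": {
            "StringEquals": {"aws:ResourceTag/environment": "production"}
        },
    }],
}

print(json.dumps(policy, indent=2))
```

With ABAC, granting a new team access to all collections tagged `environment=production` is a matter of tagging, not of editing per-collection resource policies.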
AWS is announcing the general availability of new general-purpose Amazon EC2 M8i and M8i-flex instances. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The M8i and M8i-flex instances offer up to 15% better price-performance, and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver up to 20% better performance than M7i and M7i-flex instances, with even higher gains for specific workloads. The M8i and M8i-flex instances are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to M7i and M7i-flex instances.
M8i-flex instances are the easiest way to get price-performance benefits for a majority of general-purpose workloads like web and application servers, microservices, small and medium data stores, virtual desktops, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don’t fully utilize all compute resources.
M8i instances are a great choice for all general purpose workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. The SAP-certified M8i instances offer 13 sizes including 2 bare metal sizes and the new 96xlarge size for the largest applications.
M8i and M8i-flex instances are available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Spain).
To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information about the new M8i and M8i-flex instances visit the AWS News blog.
The backbone of U.S. national defense is a resilient, intelligent, and secure supply chain. The Defense Logistics Agency (DLA) manages this critical mission, overseeing the end-to-end global supply chain for all five military services, military commands, and a host of federal and international partners.
Today, Google Public Sector is proud to announce a new $48 million contract with the DLA to support its vital mission. Through a DLA Enterprise Platform agreement, Google Public Sector will provide a modern, secure, and AI-ready cloud foundation to enhance DLA’s operational capabilities and provide meaningful cost savings. This marks a pivotal moment for the DoD: a move away from legacy government clouds to a modern, born-in-the-cloud provider that is also a DoD-accredited commercial cloud environment.
The Need for a Modern Foundation
To effectively manage a supply chain of global scale and complexity, DLA requires access to the most advanced digital tools available. Previously, DLA, like many other agencies and organizations across the federal government, was restricted to a “GovCloud” environment: an isolated and often less-reliable version of a commercial cloud. These limitations created challenges in data visualization, interoperability between systems, and network resiliency, while also contributing to high infrastructure and support costs.
The driver for change was clear: a need for a modern, scalable, and secure platform to ensure mission success into the future. By migrating to Google Cloud, DLA will be able to harness modern cloud best practices combined with Google’s highly performant and resilient cloud infrastructure.
A Modern, Secure, and Intelligent Platform
DLA leadership embraced a forward-thinking approach to modernization, partnering with Google Public Sector to deploy the DLA Enterprise Platform. This multi-phased approach provides a secure, intelligent foundation for transformation, delivering both immediate value and a long-term modernization roadmap.
The initial phase involved migrating DLA’s key infrastructure and data onto Google Cloud, which provides DLA with an integrated suite of services to unlock powerful data analytics and AI capabilities—turning vast logistics data into actionable intelligence with tools like BigQuery, Looker, and Vertex AI Platform. Critically, the platform is protected end-to-end by Google’s secure-by-design infrastructure and leading threat intelligence, ensuring DLA’s mission is defended against sophisticated cyber threats.
By leveraging Google Cloud, DLA will be empowered to:
Optimize logistics and reduce costs through the migration of business planning resources to a more efficient, born-in-the-cloud infrastructure.
Enhance decision-making with advanced AI/ML for warehouse modernization and transportation management.
Improve collaboration through a more connected and interoperable technology ecosystem.
Strengthen security by defending against advanced cyber threats with Mandiant’s expertise and Google Threat Intelligence.
Google Public Sector’s partnership with DLA builds on the momentum of its recent $200 million-ceiling contract award by the DoD’s Chief Digital and Artificial Intelligence Office (CDAO) to accelerate AI and cloud capabilities across the agency. We are honored to support DLA’s mission as it takes this bold step into the future of defense logistics.
Register to attend our Google Public Sector Summit taking place on Oct. 29, 2025, in Washington, D.C. Designed for government leaders and IT professionals, 1,000+ attendees will delve into the intersection of AI and security with agency and industry experts, and get hands-on with Google’s latest AI technologies.
In addition to table buckets, you can now create tables and namespaces using AWS CloudFormation and AWS Cloud Development Kit (AWS CDK). This simplifies the way developers create, update, and manage S3 Tables resources using infrastructure as code. With improved CloudFormation and CDK support, teams can now consistently deploy any S3 Tables resource across multiple AWS accounts while maintaining version control of their configurations.
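As a rough sketch of what this looks like in infrastructure-as-code terms, the following Python snippet assembles a minimal CloudFormation template body covering a table bucket, a namespace, and a table. The resource type names follow the announcement (`AWS::S3Tables::TableBucket`, `AWS::S3Tables::Namespace`, `AWS::S3Tables::Table`), but the property names are illustrative assumptions, so check the CloudFormation resource reference before deploying:

```python
import json

def s3_tables_template(bucket_name: str, namespace: str, table_name: str) -> str:
    """Build a minimal CloudFormation template for S3 Tables resources.

    Property names below are assumptions for illustration; consult the
    AWS::S3Tables resource reference for the authoritative schema.
    """
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "TableBucket": {
                "Type": "AWS::S3Tables::TableBucket",
                "Properties": {"TableBucketName": bucket_name},
            },
            "Namespace": {
                "Type": "AWS::S3Tables::Namespace",
                "Properties": {
                    "TableBucketARN": {"Fn::GetAtt": ["TableBucket", "TableBucketARN"]},
                    "Namespace": namespace,
                },
            },
            "Table": {
                "Type": "AWS::S3Tables::Table",
                "Properties": {
                    "TableBucketARN": {"Fn::GetAtt": ["TableBucket", "TableBucketARN"]},
                    "Namespace": namespace,
                    "TableName": table_name,
                    "OpenTableFormat": "ICEBERG",
                },
            },
        },
    }
    return json.dumps(template, indent=2)

print(s3_tables_template("analytics-bucket", "sales", "orders"))
```

Because the template is plain JSON, the same structure can be kept under version control and deployed identically across accounts, which is the main benefit the announcement highlights.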
Amazon Virtual Private Cloud (Amazon VPC) Traffic Mirroring is now supported on additional instance types. Amazon VPC Traffic Mirroring allows you to replicate the network traffic from EC2 instances within your VPC to security and monitoring appliances for use cases such as content inspection, threat monitoring, and troubleshooting.
With this release, VPC Traffic Mirroring can be enabled on all Nitro v4 instances. You can see the complete list of instances that support VPC Traffic Mirroring in our documentation. You can see the complete list of instances built on different Nitro system versions in our AWS Nitro Systems documentation.
VPC Traffic Mirroring is supported on these additional instance types in all regions. To learn more about VPC Traffic Mirroring, please visit the VPC Traffic Mirroring documentation.
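A mirror session ties together a source ENI, a mirror target, and a filter. The hedged sketch below builds the parameter set for boto3's `ec2.create_traffic_mirror_session` call (the IDs are placeholders; the target and filter must already exist):

```python
def mirror_session_params(source_eni: str, target_id: str,
                          filter_id: str, session_number: int) -> dict:
    """Parameters for ec2.create_traffic_mirror_session (boto3).

    The source ENI must belong to an instance type that supports
    VPC Traffic Mirroring, e.g. a Nitro v4 instance per this release.
    """
    return {
        "NetworkInterfaceId": source_eni,    # ENI of the instance to mirror
        "TrafficMirrorTargetId": target_id,  # monitoring appliance target
        "TrafficMirrorFilterId": filter_id,  # rules for which traffic to copy
        "SessionNumber": session_number,     # priority among sessions on the ENI
        "Description": "content inspection",
    }

params = mirror_session_params("eni-0123456789abcdef0",
                               "tms-0123456789abcdef0",
                               "tmf-0123456789abcdef0", 1)
# ec2 = boto3.client("ec2"); ec2.create_traffic_mirror_session(**params)
print(params["SessionNumber"])
```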
A guest post by Dr. Alexander Alldridge, Managing Director of EuroDaT
Combating money laundering is a team effort. Banks, governments, and technology partners must work closely together to effectively uncover criminal networks. This challenge is especially complex in the heavily regulated financial sector: how does data matching work when the data in question is highly sensitive? In this blog post, Dr. Alexander Alldridge, Managing Director of EuroDaT, explains the role a data trustee can play, and how EuroDaT has used Google Cloud solutions to build a scalable, GDPR-compliant infrastructure for exactly this purpose.
When a bank notices a suspicious transaction, a delicate coordination process begins. To trace possible money flows, the bank asks other banks for information about specific transactions or accounts. Today this mostly happens by phone, not because digital alternatives are lacking, but because passing on sensitive financial data such as IBANs or account activity is only permitted under very narrow legal conditions.
This back and forth by phone is not only tedious but also error-prone. A digital data exchange that gives only authorized parties access to exactly the information they need in a specific case of suspicion would be much faster and safer.
Here at EuroDaT, a subsidiary of the German state of Hesse, we offer exactly that: as Europe's first transaction-based data trustee, we enable a controlled, case-by-case exchange of sensitive financial data that protects confidential information and meets all legal requirements.
safeAML: A new way to exchange data in the financial sector
With safeAML, we have worked with Commerzbank, Deutsche Bank, and N26 to develop a system that digitizes the exchange of information between financial institutions. Instead of laboriously phoning other institutions, each bank will in future be able to pull in the relevant data from other banks itself to better assess suspicious transactions.
The data exchange is controlled and privacy-compliant: data is processed pseudonymously and passed on in such a way that only the requesting bank can re-identify it at the end. As the data trustee, EuroDaT never has access to personal data at any point.
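The pseudonymization pattern described above can be illustrated with a small, self-contained Python sketch. This is not EuroDaT's actual implementation; it only shows the general idea of keyed pseudonyms that just the requesting party can map back to the originals:

```python
import hmac
import hashlib

def pseudonymize(iban: str, secret_key: bytes) -> str:
    """Derive a stable pseudonym from an IBAN with a keyed hash (HMAC-SHA256).

    Parties that only see the pseudonym cannot recover the IBAN;
    the key holder can recompute the same pseudonym to re-link results.
    """
    return hmac.new(secret_key, iban.encode(), hashlib.sha256).hexdigest()

bank_key = b"requesting-bank-secret"  # never leaves the requesting bank
mapping = {}                          # pseudonym -> original IBAN, kept locally
for iban in ["DE89370400440532013000", "DE02120300000000202051"]:
    mapping[pseudonymize(iban, bank_key)] = iban

# Other institutions and the trustee work only with the pseudonyms;
# the requesting bank resolves a match back to the original account.
match = pseudonymize("DE89370400440532013000", bank_key)
print(mapping[match])
```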
The safeAML application
The highest security and compliance standards with Google Cloud
safeAML is a cloud-native application, meaning it is developed and operated entirely in the cloud. This requires an infrastructure that is not only technically capable but also meets the strict requirements of the financial sector, from the GDPR to industry-specific security and cyber-resilience standards. Google Cloud provides a strong foundation for this because the Google Cloud team laid the right technical and contractual groundwork for such sensitive use cases early on. For us, that was a decisive advantage over other providers.
Our entire infrastructure is built on Google Kubernetes Engine (GKE). On top of it, we set up secure, isolated environments in which every request can be processed traceably and separately from all others. All technical resources, including our Virtual Private Clouds (VPCs), are defined in the Google Cloud environment via infrastructure as code. This means EuroDaT's entire infrastructure is built in an automated, repeatable way, including the rules governing which data may flow where.
This transparent, easily reproducible architecture also helps us meet the strict compliance requirements of the financial sector: we can demonstrate at any time that security-relevant requirements are implemented and verified automatically.
Banks use safeAML for faster investigation of suspicious activity
safeAML is now being piloted at the first German banks to help them assess suspicious transactions faster and more accurately. Instead of reaching for the phone as before, investigators can now request targeted supplementary information from other institutions without exposing sensitive data.
This not only speeds up reviews but also reduces false positives, which previously tied up considerable time and capacity. The decision on whether to report suspected money laundering remains a human, case-by-case judgment, as German law requires.
That banks can exchange data in a controlled way via safeAML for the first time is already a major step for anti-money-laundering efforts in Germany. But we are still at the beginning: the next step is to bring more banks on board, expand the network nationally and internationally, and make the process as simple as possible. The more institutions participate, the more complete a picture of suspicious money flows we can draw. In the future, this new data foundation can also help classify suspected cases more accurately and assess them on a sounder basis.
Sustainable data protection: secure exchange of ESG data
Our solution is not limited to finance, however. As a data trustee, we can apply the basic principle, making sensitive data accessible only in a targeted, controlled way between authorized parties, to many other areas as well. We always work with partners who implement their application ideas on EuroDaT, while we ourselves remain neutral as the data trustee.
EuroDaT's service offering
A current example is ESG data: not only large companies but also small and medium-sized enterprises are under growing pressure to disclose sustainability metrics, whether because of new legal requirements or because business partners such as banks and insurers demand them.
Meeting these requirements is particularly difficult for smaller companies. They often lack the structures or resources to provide ESG data in a standardized form, and they understandably do not want to simply make sensitive information such as consumption data public.
This is where EuroDaT comes in: as a trusted intermediary, we ensure that sustainability data is shared securely without companies losing control over it. We are currently in talks with the German Sustainability Code (Deutscher Nachhaltigkeitskodex, DNK) about a solution that can make it easier for small companies to submit ESG data to banks, insurers, and investors via EuroDaT as the data trustee.
Research in the healthcare sector: sensitive data, secure insights
We also see great potential for our technology in the healthcare sector. Here, of course, the data is especially sensitive and may only be processed under strict conditions. Nevertheless, there are many cases in which health data must be combined, for example for basic research, the design of clinical trials, and policy decisions.
On behalf of the German federal government, the consulting firm d-fine has now shown how health data can be put to use with the help of EuroDaT, for example to analyze the impact of post-COVID on employment. This requires combining the health data with equally sensitive employment data, which EuroDaT makes possible: as the data trustee, we ensure that the data remains confidential and can still be used meaningfully.
Data sovereignty as the key to digital collaboration
When data cannot simply be shared, there are usually good reasons. In finance and healthcare in particular, data protection and confidentiality are non-negotiable. That makes it all the more important that, when an exchange of this data does become necessary, it can take place in a legally sound and controlled manner.
As a data trustee, we therefore not only enable secure data exchange in sensitive industries but also strengthen the data sovereignty of everyone involved. Together with Google Cloud, we anchor data protection firmly at the core of digital collaboration between companies, public authorities, and research institutions.
Amazon CloudWatch RUM, which enables customers to monitor their web applications by collecting client-side performance and error data in real time, is now also available in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions.
CloudWatch RUM provides curated dashboards for web application performance experienced by real end users including anomalies in page load steps, core web vitals, and JavaScript and HTTP errors across different geolocations, browsers, and devices. Custom events and metrics sent to CloudWatch RUM can be easily configured to monitor specific parts of the application for real user interactions, troubleshoot issues, and get alerted for anomalies. CloudWatch RUM comes integrated with the application performance monitoring (APM) capability, CloudWatch Application Signals. As a result, client-side data from your application can easily be correlated with performance metrics such as errors, faults, and latency observed in your APIs (service operations) and dependencies to address the root cause.
To get started, see the RUM User Guide. Usage of CloudWatch RUM is charged on the number of collected RUM events, which refers to each data item collected by the RUM web client, as detailed here.
Starting today, customers can enable two new capabilities for their EC2 Mac Dedicated Hosts: Host Recovery and Reboot-based Host Maintenance. Host Recovery automatically detects potential hardware issues on Mac Dedicated Hosts and seamlessly migrates Mac instances to a new replacement host, minimizing disruption to workloads. Reboot-based Host Maintenance automatically stops and restarts instances on replacement hosts when scheduled maintenance events occur, eliminating the need for manual intervention during planned maintenance windows. Together, these features significantly enhance the reliability and manageability of EC2 Mac instances, delivering improved uptime for macOS workloads in the cloud while reducing operational overhead and the need for continuous monitoring.
These features are available across all EC2 Mac instance families, including both Intel (x86) and Apple silicon platforms. Customers can access this feature in all AWS regions where EC2 Mac instances are currently supported. To learn more about Host recovery, please visit the documentation here. To learn more about Host maintenance, please visit the documentation here. To learn more about EC2 Mac instances, click here.
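As a minimal sketch, both features map to settings on the `ec2.allocate_hosts` call in boto3 (`HostRecovery` and `HostMaintenance`); the Availability Zone and instance type below are placeholders:

```python
def mac_host_allocation(az: str, instance_type: str = "mac2.metal") -> dict:
    """Parameters for ec2.allocate_hosts (boto3) with both features enabled.

    Values here are illustrative; pick the Mac instance family and AZ
    that match your workload and Region.
    """
    return {
        "AvailabilityZone": az,
        "InstanceType": instance_type,
        "Quantity": 1,
        "HostRecovery": "on",     # migrate instances off degraded hardware
        "HostMaintenance": "on",  # reboot-based maintenance on replacement hosts
    }

params = mac_host_allocation("us-east-1a")
# ec2 = boto3.client("ec2"); ec2.allocate_hosts(**params)
print(params["HostRecovery"], params["HostMaintenance"])
```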
For years, enterprises and governments with the strictest data security and sovereignty requirements have faced a difficult choice: adopt modern AI or protect their data. Today, that compromise ends. We are announcing the general availability of Gemini on GDC air-gapped and preview of Gemini on GDC connected, bringing Google’s most advanced models directly into your data center.
We are inspired by initial feedback from customers, including Singapore’s Centre for Strategic Infocomm Technologies (CSIT), Government Technology Agency of Singapore (GovTech Singapore), Home Team Science and Technology Agency (HTX), KDDI, and Liquid C2, who are excited to gain the advantages of generative AI with Gemini on GDC.
Transformative AI capabilities, on-premises
Gemini models offer groundbreaking capabilities, from processing extensive context to native multimodal understanding of text, images, audio, and video. This unlocks a wide array of high-impact use cases on secure infrastructure:
Unlock new markets and global collaboration: Instantly break down language barriers across your international operations, creating a more connected and efficient global workforce.
Accelerate decision-making: Make faster, data-driven decisions by using AI to automatically summarize documents, analyze sentiment, and extract insights from your proprietary datasets.
Improve employee efficiency and customer satisfaction: Deliver instant, 24/7 support and enhance user satisfaction by developing intelligent chatbots and virtual assistants for customers and employees.
Increase development velocity: Ship higher-quality software faster by using Gemini for automated code generation, intelligent code completion, and proactive bug detection.
Strengthen safety & compliance: Protect your users with AI-powered safety tools that automatically filter harmful content and ensure adherence to industry policies.
aside_block
<ListValue: [StructValue([(‘title’, ‘Try Google Cloud for free’), (‘body’, <wagtail.rich_text.RichText object at 0x3e5bfd50fd00>), (‘btn_text’, ”), (‘href’, ”), (‘image’, None)])]>
Secure AI infrastructure where you need it
It takes more than just a model to drive business value with generative AI; you need a complete platform that includes scalable AI infrastructure, a library with the latest foundational models, high-performance inferencing services, and pre-built AI agents like Agentspace search. GDC provides all that and more with an end-to-end AI stack combining our latest-generation AI infrastructure with the power of Gemini models to accelerate and enhance all your AI workloads.
Delivering these transformative capabilities securely requires a complete, end-to-end platform that only Google provides today:
Performance at scale: GDC utilizes the latest NVIDIA GPU accelerators, including the NVIDIA Hopper and Blackwell GPUs. A fully managed Gemini endpoint is available within a customer or partner data center, featuring a seamless, zero-touch update experience. High performance and availability are maintained through automatic load balancing and auto-scaling of the Gemini endpoint, which is handled by our L7 load balancer and advanced fleet management capabilities.
Foundation of security and control: Security is a core component of our solution, with audit logging and access control capabilities that provide full transparency for customers. This allows them to monitor all data traffic in and out of their on-premises AI environment and meet strict compliance requirements. The platform also features Confidential Computing support for both CPUs (with Intel TDX) and GPUs (with NVIDIA’s confidential computing) to secure sensitive data and prevent tampering or exfiltration.
Flexibility and speed for your AI strategy: the platform supports a variety of industry-leading models including Gemini 2.5 Flash and Pro, Vertex AI task-specific models (translation, optical character recognition, speech-to-text, and embeddings generation), and Google’s open-source Gemma models. GDC also provides managed VM shapes (A3 & A4 VMs) and Kubernetes clusters giving customers the ability to deploy any open-source or custom AI model, and custom AI workloads of their choice. This is complemented by Vertex AI services that provide an end-to-end AI platform including a managed serving engine, data connectors, and pre-built agents like Agentspace search (in preview) for a unified search experience across on-premises data.
What our customers are saying
“As a key GDC collaboration partner in shaping the GDC air-gapped product roadmap and validating the deployment solutions, we’re delighted that this pioneering role has helped us grow our cutting-edge capabilities and establish a proven deployment blueprint that will benefit other agencies with similar requirements. This is only possible with the deep, strategic collaboration between CSIT and Google Cloud. We’re also excited about the availability of Gemini on GDC, and we look forward to building on our partnership to develop and deploy agentic AI applications for our national security mission.” – Loh Chee Kin, Deputy Chief Executive, Centre for Strategic Infocomm Technologies (CSIT)
“One of our priorities is to harness the potential of AI while ensuring that our systems and the services citizens and businesses rely on remain secure. Google Cloud has demonstrated a strong commitment to supporting the public sector with initiatives that enable the agile and responsible adoption of AI. We look forward to working more closely with Google Cloud to deliver technology for the public good.” – Goh Wei Boon, Chief Executive, Government Technology Agency of Singapore
“The ability to deploy Gemini on Google Distributed Cloud will allow us to bridge the gap between our on-premises data and the latest advancements in AI. Google Distributed Cloud gives us a secure, managed platform to innovate with AI, without compromising our strict data residency and compliance requirements.” – Ang Chee Wee, Chief AI Officer, Home Team Science & Technology Agency (HTX)
“The partnership with Google Cloud and the integration of Google’s leading Gemini models will bring cutting-edge AI capabilities, meet specific performance requirements, address data locality and regulatory needs of Japanese businesses and consumers.” – Toru Maruta, Executive Officer, Head of Advancing Business Platform Division, KDDI
“Data security and sovereignty are paramount for our customers. With Gemini on Google Distributed Cloud, our Liquid Cloud and Cyber Security solution would deliver strategic value to ensure our customers in highly regulated industries can harness the power of AI while keeping their most valuable data under their control.” – Oswald Jumira, CEO Liquid C2
Today, Amazon Elastic Kubernetes Service (EKS) introduced support for on-demand refresh of cluster insights, enabling customers to more efficiently test and validate if applied recommendations have successfully taken effect.
Every Amazon EKS cluster undergoes automatic, periodic checks against a curated list of insights, which provide detection of issues, such as warnings about changes required before Kubernetes version upgrades, as well as recommendations for how to address each insight. With on-demand cluster insights refresh functionality, customers can fetch the latest insights immediately after making changes, accelerating the testing and verification process when performing upgrades or making configuration changes to your cluster.
EKS upgrade insights and the new refresh capability are available in all commercial AWS Regions. To learn more, visit the EKS documentation.
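A typical verification loop is to re-fetch insights after a change and check which ones still flag issues. The sketch below assumes the response shape of `eks.list_insights` in boto3 (field names such as `insightStatus` are taken from the API shape as we understand it; verify against the EKS documentation):

```python
def needs_attention(insights: list[dict]) -> list[str]:
    """Return names of insights whose status is not PASSING.

    `insights` mirrors the `insights` list returned by eks.list_insights;
    field names are assumptions based on the documented API shape.
    """
    return [i["name"] for i in insights
            if i.get("insightStatus", {}).get("status") != "PASSING"]

# eks = boto3.client("eks")
# page = eks.list_insights(clusterName="prod")  # refresh and re-check after fixes
sample = [
    {"name": "Deprecated APIs removed in v1.30",
     "insightStatus": {"status": "WARNING"}},
    {"name": "Cluster health", "insightStatus": {"status": "PASSING"}},
]
print(needs_attention(sample))
```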
AWS Client VPN now supports Windows Arm64 client with version 5.3.0. You can now run the AWS supplied VPN client on the latest Windows Arm64 OS versions. AWS Client VPN desktop clients are available free of charge, and can be downloaded here.
AWS Client VPN is a managed service that securely connects your remote workforce to AWS or on-premises networks. It supports desktop clients for macOS, Windows x64, Windows Arm64, and Ubuntu Linux. With this release, Client VPN supports Windows Arm64 with client version 5.3.0. It already supports macOS versions 13.0, 14.0, and 15.0, Windows 10 (x64) and Windows 11 (Arm64 and x64), and Ubuntu Linux 22.04 and 24.04 LTS.
This client version is available in all regions where AWS Client VPN is generally available with no additional cost.
AWS Transfer Family Terraform module now supports deployment of SFTP connectors to transfer files between Amazon S3 and remote SFTP servers. This adds to the existing support for deploying SFTP server endpoints using Terraform, enabling you to automate and streamline centralized provisioning of file transfer resources using Infrastructure as Code (IaC).
SFTP connectors provide a fully managed and low-code capability to copy files between Amazon S3 and remote SFTP servers. You can now use Terraform to programmatically provision your SFTP connectors, associated dependencies and customizations in a single deployment. The module also provides end-to-end examples to automate file transfer workflows based on a schedule or event triggers. Using Terraform for deployment eliminates the need for time-consuming and error-prone manual configurations, and provides you a fast, repeatable and secure deployment option that can scale.
Customers can get started by downloading the Terraform module source code on GitHub. To learn more about Transfer Family, visit the product page and user guide. To see all the regions where Transfer Family is available, visit the AWS Region table.
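Once a connector is deployed, outbound transfers are started through the Transfer Family API. The hedged sketch below builds the parameters for boto3's `transfer.start_file_transfer` call to push S3 objects to a remote SFTP server (the connector ID and paths are placeholders):

```python
def outbound_transfer(connector_id: str, s3_paths: list[str],
                      remote_dir: str) -> dict:
    """Parameters for transfer.start_file_transfer (boto3), sending
    S3 objects to a remote SFTP server through an SFTP connector.
    """
    return {
        "ConnectorId": connector_id,
        "SendFilePaths": s3_paths,       # e.g. "/my-bucket/outbox/report.csv"
        "RemoteDirectoryPath": remote_dir,
    }

params = outbound_transfer("c-0123456789abcdef0",
                           ["/my-bucket/outbox/report.csv"], "/uploads")
# transfer = boto3.client("transfer"); transfer.start_file_transfer(**params)
print(len(params["SendFilePaths"]))
```

The Terraform module described above provisions the connector itself; a scheduled or event-driven caller of this API is what the module's end-to-end examples automate.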
Amazon SageMaker HyperPod now supports customer managed AWS KMS keys (CMK) for encrypting EBS volumes, enabling enterprise customers to deploy machine learning clusters that meet their specific organizational security and compliance requirements. Customers training foundation models need full control over their encryption keys while maintaining high-performance computing capabilities, but previously could only rely on SageMaker HyperPod owned keys for cluster storage encryption.
This capability allows customers to encrypt both root and secondary EBS volumes using their own KMS keys, delivering enhanced security controls, regulatory compliance capabilities, and integration with existing key management workflows. The feature uses a grants-based approach for secure cross-account access and supports independent key selection for root and secondary volumes. You can specify customer managed KMS keys when creating or updating clusters using the CreateCluster and UpdateCluster APIs for clusters in continuous provisioning mode.
Customer managed KMS key support is available in all AWS Regions where SageMaker HyperPod is available. To learn more about customer managed key encryption for SageMaker HyperPod, see the user guide.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Asia Pacific (Osaka) Region. C7i instances are supported by custom Intel processors, available only on AWS, and offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.
C7i instances deliver up to 15% better price-performance versus C6i instances and are a great choice for all compute-intensive workloads, such as batch processing, distributed analytics, ad-serving, and video encoding. C7i instances offer larger instance sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology that are used to facilitate efficient offload and acceleration of data operations and optimize performance for workloads.
C7i instances support new Intel Advanced Matrix Extensions (AMX) that accelerate matrix multiplication operations for applications such as CPU-based ML. Customers can attach up to 128 EBS volumes to a C7i instance, versus up to 28 EBS volumes for a C6i instance, allowing them to process larger amounts of data, scale workloads, and achieve better performance than on C6i instances.