Amazon EventBridge announces support for AWS Key Management Service (KMS) customer managed keys (CMKs) in API destinations connections. This enhancement enables you to encrypt the HTTPS endpoint authentication credentials managed by API destinations with your own keys instead of an AWS owned key (the default). With CMK support, you now have more granular security control over the authentication credentials used in API destinations, helping you meet your organization’s security requirements and governance policies.
Customer managed keys (CMKs) are KMS keys that you create and manage yourself. You can also audit and track usage of your keys via AWS CloudTrail. EventBridge API destinations are private and public HTTPS endpoints that you can invoke as the target of an event bus rule or pipe, similar to how you invoke an AWS service or resource as a target. API destinations provide flexible authentication options for HTTPS endpoints, such as API key and OAuth, storing and managing credentials securely in AWS Secrets Manager on your behalf.
CMK support for EventBridge API destinations connections is now available across all AWS Regions where EventBridge API destinations is available. Please refer to the EventBridge user guide and KMS documentation for details.
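As a minimal sketch, a connection that encrypts its credentials with a customer managed key might look like the following. The `KmsKeyIdentifier` parameter name and all ARNs and values below are illustrative assumptions; check the EventBridge API reference for the exact request shape in your SDK version.

```python
# Sketch: an EventBridge API destinations connection whose stored
# credentials are encrypted with a customer managed KMS key.
# Parameter names and ARNs are placeholders for illustration.

connection_params = {
    "Name": "example-api-connection",
    "AuthorizationType": "API_KEY",
    "AuthParameters": {
        "ApiKeyAuthParameters": {
            "ApiKeyName": "x-api-key",
            "ApiKeyValue": "example-secret-value",
        }
    },
    # Customer managed key used instead of the default AWS owned key.
    "KmsKeyIdentifier": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
}

# The actual call would be made with boto3, for example:
# import boto3
# events = boto3.client("events")
# response = events.create_connection(**connection_params)
```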
Welcome to the first Cloud CISO Perspectives for April 2025. Today, Google Cloud Security’s Peter Bailey reviews our top 27 security announcements from Next ‘25.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
–Phil Venables, strategic security advisor, Google Cloud
27 top security announcements at Next ‘25
By Peter Bailey, VP/GM SecOps, Google Cloud Security
We just wrapped our annual Google Cloud Next conference in Las Vegas, where we introduced innovations across AI, app development, infrastructure, data cloud, partners, and more — including security.
Peter Bailey, VP/GM SecOps, Google Cloud Security
From the moment the curtain went up at our opening keynote, we showcased 229 new products, capabilities, and enhancements that highlight Google Cloud’s commitment to helping companies transform the way they work with our AI-optimized platform, as well as our skyrocketing customer momentum.
(Be sure to check out the reimagining of the Wizard of Oz at The Sphere, a collaboration between Sphere Entertainment, Google DeepMind, Google Cloud, Hollywood production company Magnopus, and five others.)
For the first time this year, we also hosted CISO Connect at Next, a unique opportunity for security and business leaders to delve into the ever-evolving cybersecurity landscape with experts from Google on the current threat landscape, breach mitigation strategies, and the transformative potential of AI in fortifying your organization’s security posture.
“We are all solving for the same security challenges; CISO Connect offers a safe environment to collaborate and share, unlike any other conference,” said Mike Orosz, CISO, Vertiv.
We also focused heavily on innovations across our security portfolio, designed to deliver stronger security outcomes and enable every organization to make Google a part of their security team. Fresh from Next ‘25, here’s our top 27 security announcements.
Google Unified Security brings together our visibility, threat detection, AI-powered security operations, continuous virtual red-teaming, the most trusted enterprise browser, and Mandiant expertise — in one converged security solution running on a planet-scale data fabric.
The alert triage agent in Google Security Operations will perform dynamic investigations on behalf of users. Expected to preview for select customers in Q2 2025, it analyzes the context of each alert, gathers relevant information, and renders a verdict on the alert, along with a history of the agent’s evidence and decision making.
The malware analysis agent in Google Threat Intelligence will investigate whether code is safe or harmful. Expected to preview for select customers in Q2 2025, it builds on Code Insight to analyze potentially malicious code, including the ability to create and execute scripts for deobfuscation.
Google Security Operations
New data pipeline management capabilities, now generally available, can help customers better manage scale, reduce costs, and satisfy compliance mandates.
The new Mandiant Threat Defense service, now generally available, provides comprehensive active threat detection, hunting, and response. Mandiant experts work alongside customer security teams, using AI-assisted threat hunting techniques to identify and respond to threats, conduct investigations, and scale response through security operations SOAR playbooks, effectively extending customer security teams.
Security Command Center
Model Armor is now integrated directly with Vertex AI. As part of our recently-announced AI Protection capabilities that can help manage risk across the AI lifecycle, developers can automatically route prompts and responses for protection without any changes to applications.
New Data Security Posture Management (DSPM) capabilities, coming to preview in June, can enable discovery, security, governance, and monitoring of sensitive data including AI training data. DSPM can help discover and classify sensitive data, apply data security and compliance controls, monitor for violations, and enforce access, flow, retention, and protection directly in Google Cloud data analytics and AI products.
A new Compliance Manager, launching in preview at the end of June, will combine policy definition, control configuration, enforcement, monitoring, and audit into a unified workflow. It builds on the configuration of infrastructure controls delivered using Assured Workloads, providing Google Cloud customers with an end-to-end view of their compliance state, making it easier to monitor, report, and prove compliance to auditors with Audit Manager.
Integration with Snyk’s developer security platform, in preview, to help teams find and fix software vulnerabilities faster.
New Security Risk dashboards for Google Compute Engine and Google Kubernetes Engine. Now generally available, they can deliver insights into top security findings, vulnerabilities, and open issues directly in the product consoles.
An expanded Risk Protection Program, with new program partners Beazley and Chubb, two of the world’s largest cyber-insurers. They will provide discounted cyber-insurance coverage based on cloud security posture.
Chrome Enterprise Premium
New employee phishing protections use Google Safe Browsing data to help protect employees against lookalike sites and portals attempting to capture credentials.
Data masking in Chrome Enterprise Premium is now generally available.
We are also extending key enterprise browsing protections to Android, including copy and paste controls, and URL filtering.
Mandiant Cybersecurity Consulting
The Mandiant Retainer provides on-demand access to Mandiant experts. Customers now can redeem prepaid funds for investigations, education, and intelligence to boost their expertise and resilience.
Mandiant Consulting is partnering with Rubrik and Cohesity to create a solution to minimize downtime and recovery costs after a cyberattack. As part of the program, our partners provide affirmative AI insurance coverage, exclusively for Google Cloud customers and workloads. Chubb will also offer coverage for risks resulting from quantum exploits, proactively helping to address the risk of quantum computing attacks.
Sovereign Cloud
We’ve partnered with Thales to launch the S3NS Trusted Cloud, now in preview, designed to meet France’s highest level of cloud certification. As part of our broad portfolio of sovereign cloud solutions, it is the first sovereign cloud offering based on the Google Cloud platform that is operated, majority-owned, and fully controlled by a European organization.
Identity and Access Management
Unified access policies, coming to preview in Q2, create a single definition for IAM allow and IAM deny policies, enabling you to more consistently apply fine grained access controls.
We’re also expanding our Confidential Computing offerings. Confidential GKE Nodes with AMD SEV-SNP and Intel TDX will be generally available in Q2, requiring no code changes to secure your standard GKE workloads. Confidential GKE Nodes with NVIDIA H100 GPUs on the A3 machine series will be in preview in Q2, offering confidential GPU computing without code modifications.
Single-tenant Cloud Hardware Security Module (HSM), now in preview, provides dedicated, isolated HSM clusters managed by Google Cloud, while granting customers full administrative control.
Network security
Network Security Integration allows enterprises to easily insert third-party network appliances and service deployments to protect Google Cloud workloads without altering routing policies or network architecture. Out-of-band integrations with ecosystem partners are generally available now, while in-band integrations are available in preview.
DNS Armor, powered by Infoblox Threat Defense, coming to preview later this year, uses multi-sourced threat intelligence and powerful AI/ML capabilities to detect DNS-based threats.
Cloud Armor Enterprise now includes hierarchical policies for centralized control and automatic protection of new projects, available in preview.
Cloud NGFW Enterprise supports L7 domain filtering capabilities to monitor and restrict egress web traffic to only approved destinations, coming to preview later this year.
Secure Web Proxy (SWP) now includes inline network data loss protection capabilities through integrations with Google’s Sensitive Data Protection and Symantec DLP using service extensions, available in preview.
To learn more about how your organization can benefit from our announcements at Next ‘25, check out our CISO Insights Hub, and stay tuned for our announcements later this month at the RSA Conference in San Francisco.
In case you missed it
Here are the latest updates, products, services, and resources from our security teams so far this month:
Demystifying AI security: How to use SAIF in the real world: Our new paper, “SAIF in the real world,” takes a deep look at how to apply Google’s Secure AI Framework (SAIF) throughout the AI development lifecycle. Read more.
Shadow AI strikes back: Following our previous spotlight on shadow AI, we look at a new, more insidious form of shadow AI — emerging from within organizations themselves. Read more.
Google announces Sec-Gemini v1, a new experimental cybersecurity model: Sec-Gemini v1 is our new experimental AI model focused on advancing cybersecurity AI frontiers. It can power security operations workflows with state-of-the-art reasoning capabilities and extensive, current cybersecurity knowledge. Read more.
Building sovereign AI solutions with Google Cloud: The world has changed a lot since we started to speak about the options for data residency, operational transparency, and privacy controls in Google Cloud. Organizations are increasingly seeking AI solutions that drive innovation and enforce regional regulations. Here’s how Cloud Run can help. Read more.
Detecting IngressNightmare without the nightmare: To help detect the IngressNightmare vulnerability chain affecting Kubernetes Ingress Nginx Controllers, discovered by Wiz, we’ve developed a novel non-intrusive technique. Read more.
Please visit the Google Cloud blog for more security stories published this month.
Threat Intelligence news
DPRK IT workers expanding in scope and scale: Google Threat Intelligence Group (GTIG) has identified an increase of active North Korean IT insider worker operations in Europe, confirming the threat’s expansion beyond the United States. This growth is coupled with evolving tactics, such as intensified extortion campaigns and the move to conduct operations in corporate virtualized infrastructure. Read more.
Suspected China-nexus threat actor actively exploiting critical Ivanti Connect Secure vulnerability: Ivanti disclosed a critical security vulnerability impacting many Ivanti Connect Secure VPN appliances on April 3. GTIG has linked UNC5221, a suspected China-nexus espionage actor, to some of the exploits of the vulnerability. Read more.
Windows RDP, going from remote to rogue: GTIG observed a novel phishing campaign in October 2024 that targeted European government and military organizations. Unlike typical remote desktop protocol (RDP) attacks focused on interactive sessions, this campaign creatively used resource redirection and malicious remote apps including a RDP proxy tool to automate malicious activities. The campaign likely enabled attackers to read victim drives, steal files, capture clipboard data (including passwords), and obtain victim environment variables. Read more.
Please visit the Google Cloud blog for more threat intelligence stories published this month.
Now hear this: Podcasts from Google Cloud
Decoding cyber-risk and threat actors in Asia-Pacific: From big-picture views to nuanced details only an expert could know, Steve Ledzian, APAC CTO, Mandiant at Google Cloud, shares his insight and knowledge with hosts Anton Chuvakin and Tim Peacock. Listen here.
The state of IAM, from cloud to AI: Henrique Teixeira, senior vice-president of strategy, Saviynt, explores with hosts Anton and Tim how identity and access management has evolved from the beginning of the cloud era through to today’s AI sea change. Listen here.
What not to do when red teaming AI: From uncovering surprises to facing new threats and exposing the same old mistakes, Alex Polyakov, CEO, Adversa AI, discusses how and why his company focuses on red teaming AI systems. Listen here.
Behind the Binary: Inside the mind of a binary ninja: Jordan Wiens, developer of the widely-used Binary Ninja and cofounder of Vector 35, brings his expertise as an avid CTF player to a discussion about the complexities of building a commercial reverse engineering platform. Listen here.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in a few weeks with more security-related updates from Google Cloud.
GitLab Duo with Amazon Q is generally available for Self-Managed Ultimate customers, embedding advanced agent capabilities for software development, Java modernization, enhanced quality assurance, and code review optimization directly in GitLab’s enterprise DevSecOps platform. GitLab Duo with Amazon Q delivers a seamless development experience that accelerates the execution of complex, multistep tasks and collaborative workflows in the GitLab platform your developers already know.
Using GitLab Duo with Amazon Q, developers and teams can collaborate with Amazon Q agents to accelerate feature development, maximize code quality and security, detect and resolve vulnerabilities, automate testing coverage, troubleshoot failed pipeline jobs, and upgrade legacy Java code bases. GitLab’s unified data store across the software development lifecycle gives Amazon Q project context to accelerate software development and deployment, simplifying the complex toolchains historically required for collaboration across teams.
Streamline software development: Delegate feature development to the Amazon Q agent from any issue. Detailed summaries, implementation plans, and commit messages keep developers informed on every change. Using feedback in comments, Amazon Q iterates to apply changes on the merge request.
Maximize code quality and security with review and testing agents: Standardize code review best practices with agent-assisted security, quality, and deployment risk scanning on every merge request. Amazon Q can generate new tests to add complete coverage on code changes and apply fixes to merge requests, making QA seamless.
Faster debugging, troubleshooting, and vulnerability resolution: During deployment, platform teams can quickly troubleshoot and resolve failed CI/CD jobs from context-aware web chat using analysis and suggested fixes powered by Amazon Q.
Transform enterprise workloads: Upgrade Java 8 or 11 code bases to Java 17 directly from a GitLab project to improve application security and performance and remove technical debt.
Amazon S3 Tables now support server-side encryption using AWS Key Management Service (SSE-KMS) with customer-managed keys. You can use your own KMS keys to encrypt the tables stored in table buckets to meet regulatory and governance requirements.
By default, S3 Tables encrypt all objects with server-side encryption using S3-managed keys (SSE-S3). With support for customer-managed keys, you have the option to set a default customer-managed key for all new tables in the table bucket, set a dedicated key per table, or implement a combination of both approaches. With SSE-KMS support, S3 Tables use S3 Bucket Keys by default for cost optimization, and provide AWS CloudTrail logging for auditing the usage of customer-managed keys.
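A minimal sketch of setting a default customer managed key for all new tables in a table bucket might look like this. The `encryptionConfiguration` shape, field names, and ARNs are illustrative assumptions; consult the S3 Tables API reference for the exact schema.

```python
# Sketch: create an S3 table bucket with a default customer managed key
# (SSE-KMS) applied to all new tables. Field names and ARNs are
# placeholders for illustration.

encryption_configuration = {
    "sseAlgorithm": "aws:kms",  # instead of the default SSE-S3
    "kmsKeyArn": "arn:aws:kms:us-east-1:111122223333:key/example-key-id",
}

create_bucket_params = {
    "name": "example-table-bucket",
    "encryptionConfiguration": encryption_configuration,
}

# The actual call would be made with boto3, for example:
# import boto3
# s3tables = boto3.client("s3tables")
# s3tables.create_table_bucket(**create_bucket_params)
```

A dedicated key per table could be supplied the same way at table-creation time, combining both approaches as the announcement describes.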
Today, we are excited to announce throughput improvements to dynamic run storage for AWS HealthOmics. AWS HealthOmics is a HIPAA-eligible service that helps healthcare and life sciences customers accelerate scientific breakthroughs with fully managed biological data stores and workflows.
Dynamic run storage automatically scales storage capacity based on workflow needs. With this release, dynamic run storage now also scales throughput using Elastic Throughput mode on Amazon Elastic File System. This feature is recommended for runs requiring faster start times, workflows with unpredictable storage requirements, and iterative development cycles, helping research teams reduce time-to-insight for time-sensitive genomic analyses.
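As a sketch, starting a run with dynamic storage (which now also scales throughput) might look like the following; the workflow ID, role ARN, and run name are placeholders for illustration.

```python
# Sketch: start an AWS HealthOmics workflow run using dynamic run storage.
# IDs, names, and the role ARN below are placeholders.

run_params = {
    "workflowId": "1234567",
    "roleArn": "arn:aws:iam::111122223333:role/example-omics-role",
    "name": "example-dynamic-run",
    # DYNAMIC storage scales capacity (and, with this release, throughput)
    # automatically, so no storage capacity is provisioned up front.
    "storageType": "DYNAMIC",
}

# The actual call would be made with boto3, for example:
# import boto3
# omics = boto3.client("omics")
# response = omics.start_run(**run_params)
```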
Dynamic run storage with elastic throughput is now available in all regions where AWS HealthOmics is available: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Asia Pacific (Singapore) and Israel (Tel Aviv). To get started with dynamic run storage, see the documentation.
Amazon CloudWatch agent now supports Security-Enhanced Linux (SELinux) environments through a pre-configured security policy that allows monitoring in systems where security enforcement is required. This feature benefits customers in regulated industries and government sectors who maintain strict security controls across their Linux infrastructure. These security policies, when applied before CloudWatch agent installation, help customers maintain their security posture while collecting essential monitoring data.
This launch enables organizations to deploy the CloudWatch agent in SELinux-enabled environments while maintaining their security posture. It addresses a critical need where enforcing access controls is essential. The pre-configured SELinux policies allow customers to benefit from AWS monitoring and observability features while helping them adhere to their compliance requirements. This feature simplifies the deployment process and reduces the risk of security misconfigurations during agent installation.
To get started with Amazon CloudWatch agent in Security-Enhanced Linux (SELinux) environments, see Installing the CloudWatch agent in the Amazon CloudWatch User Guide.
Customers in AWS Mexico (Central) Region can now use AWS Transfer Family for file transfers over Secure File Transfer Protocol (SFTP), File Transfer Protocol (FTP), FTP over SSL (FTPS) and Applicability Statement 2 (AS2).
AWS Transfer Family provides fully managed file transfers for Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS) over SFTP, FTP, FTPS and AS2 protocols. In addition to file transfers, Transfer Family enables common file processing and event-driven automation for managed file transfer (MFT) workflows, helping customers to modernize and migrate their business-to-business file transfers to AWS.
To learn more about AWS Transfer Family, visit our product page and user-guide. See the AWS Region Table for complete regional availability information.
We are excited to announce that Amazon Athena is now available in Mexico (Central) and Asia Pacific (Thailand).
Athena is a serverless, interactive query service that makes it simple to analyze petabytes of data using SQL, without requiring infrastructure setup or management. Athena is built on open-source Trino and Presto query engines, providing powerful and flexible interactive query capabilities, and supports popular data formats such as Apache Parquet and Apache Iceberg.
For more information about the AWS Regions where Athena is available, see the AWS Region table. To learn more, see Amazon Athena.
Amazon CloudFront announces Anycast Static IPs support for apex domains, enabling customers to easily use their root domain (e.g., example.com) with CloudFront. This new feature simplifies DNS management by providing just 3 static IP addresses instead of the previous 21, making it easier to configure and manage apex domains with CloudFront distributions.
Previously, customers had to create CNAME records to point their domains to CloudFront. However, due to DNS rules, root domains (apex domains) cannot point to CNAME records and must use A records or Route 53’s ALIAS records. With the new Anycast Static IPs support, customers can now easily configure A records for their apex domains. Organizations can maintain their existing DNS infrastructure while using CloudFront’s global content delivery network to deliver apex domains with low latency and high data transfer speeds. Anycast routing automatically directs traffic to the optimal edge location, ensuring high performance content delivery for end users worldwide.
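As a sketch, the resulting DNS change is just a set of A records at the zone apex pointing to the three static addresses. The IP addresses and hosted zone ID below are placeholders; use the Anycast addresses allocated to your distribution.

```python
# Sketch: point an apex domain at CloudFront Anycast static IPs using
# Route 53 A records. The three IPs below are documentation placeholders,
# not real Anycast addresses.

ANYCAST_IPS = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]

change_batch = {
    "Comment": "Apex A records for CloudFront Anycast static IPs",
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com.",
                "Type": "A",  # apex domains cannot use CNAME records
                "TTL": 300,
                "ResourceRecords": [{"Value": ip} for ip in ANYCAST_IPS],
            },
        }
    ],
}

# The actual call would be made with boto3, for example:
# import boto3
# route53 = boto3.client("route53")
# route53.change_resource_record_sets(
#     HostedZoneId="Z0123456789EXAMPLE", ChangeBatch=change_batch
# )
```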
CloudFront supports Anycast Static IPs from all CloudFront edge locations. This excludes Amazon Web Services China (Beijing) region, operated by Sinnet, and the Amazon Web Services China (Ningxia) region, operated by NWCD. Standard CloudFront pricing applies, with additional charges for Anycast Static IP addresses. To learn more, visit the CloudFront Developer Guide for detailed documentation and implementation guidance.
Today, we are announcing the general availability of AWS Wavelength in partnership with Sonatel, an affiliate of Orange, in Dakar, Senegal. With this first Wavelength Zone in Sub-Saharan Africa, Independent Software Vendors (ISVs), enterprises, and developers can now use AWS infrastructure and services to support applications with data residency, low latency, and resiliency requirements.
AWS Wavelength, in partnership with Sonatel, delivers on-demand AWS compute and storage services to customers in West Africa. AWS Wavelength enables customers to build and deploy applications that meet their data residency, low-latency, and resiliency requirements. AWS Wavelength offers the operational consistency, industry leading cloud security practices, and familiar tools for automation that are similar to an AWS Region. With AWS Wavelength in partnership with Sonatel, developers can now build the applications needed for use cases, such as AI/ML inference at the edge, gaming, and fraud detection.
Spring is a great reminder to spring clean – an annual tradition that should extend not only to your household, but also to your virtual cloud infrastructure. Why not start with Google Cloud’s FinOps Hub?
As Google Cloud customers have adopted FinOps Hub to guide their optimization initiatives, we started getting additional feedback from our business community. For example, while DevOps users have access to tools and utilization metrics to identify waste, business teams often lack clear insights into resource consumption, leading to a significant blind spot. The most recent State of FinOps 2025 Report reinforces this need, underscoring workload optimization and waste reduction as the #1 FinOps concern. It’s extremely difficult to optimize workloads or applications if customers cannot fully understand how much is even being used. Why purchase a committed use discount for compute cores that you might not even be fully using?
Sometimes the easiest optimizations our customers can make come from simply using the resources they already pay for more efficiently. That’s why, in 2025, we are focused on a deep clean of your optimization opportunities and have upgraded FinOps Hub to help you find, highlight, and eliminate wasted spend.
1. Find waste: FinOps Hub 2.0 now comes with new utilization insights to zero in on optimization opportunities.
At Google Cloud Next 2025, we introduced FinOps Hub 2.0, focused exclusively on bringing utilization insights on your resources to the forefront so you can see what potential waste may exist and take action immediately. Waste can come in many forms: from a VM that is barely getting used at 5% (overprovisioned), to a GKE cluster that is running hot at 110% utilization and might fail (underprovisioned), to managed resources like Cloud Run instances that may not be optimally configured (suboptimal configuration) or, worse yet, a VM that might never have been used (idle). FinOps users can now quickly view the most expensive waste category in one easy-to-understand heatmap by service or AppHub application. But FinOps Hub doesn’t just show you where there may be waste; it also includes more cost optimizations for Kubernetes Engine (GKE), Compute Engine (GCE), Cloud Run, and Cloud SQL to remedy the waste too.
Waste map showing identified resources with their corresponding utilization metrics
2. Highlight waste: Gemini Cloud Assist supercharges FinOps Hub to summarize optimization insights and send opportunities to engineering.
But perhaps what really makes this a 2.0 release is that we supercharged the most time-consuming tasks on FinOps Hub with Gemini Cloud Assist. Our first launch of Gemini Cloud Assist, which helps create personalized cost reports and synthesize insights, saved our customers more than 100,000 FinOps hours annually (from January 2024 to January 2025). The power of Gemini Cloud Assist to supercharge and automate workflows is a huge benefit, so we applied it to FinOps Hub in two ways. First, FinOps users can now see embedded optimization insights on the hub itself, similar to cost reports, so you don’t need to solve the “needle in the haystack” problem of optimization. Second, you can now use Gemini Cloud Assist to summarize and send top waste insights to your engineering teams to take action and remediate fast.
Gemini summary and draft emails with top optimization opportunities
3. Eliminate waste: introducing a NEW IAM role permission for your tech solution owners to see & directly take action on these optimization opportunities.
Finally, perhaps our most exciting feature, and one long overdue for FinOps, is that we are unlocking access to the Billing console for tech solution owners, so that these owners can get FinOps insights and Gemini Cloud Assist insights across all their projects in a single pane. For example, if you want to give an entire department that only uses a subset of projects access to FinOps Hub or cost reports, without providing broader billing data access but still letting them see all of their data in a single view, now you can, with multi-project views in the Billing console. Multi-project views are enabled using the new Project Billing Costs Manager IAM role (or related granular permissions). These new permissions are currently in private preview, so sign up to get access. Now you can truly extend the power of FinOps tools across your organization with these new access controls.
So take this Spring to try FinOps Hub 2.0 with Gemini Cloud Assist, and do some spring cleaning on your cloud infrastructure, because as the saying goes, “With clouds overgrown, like winter’s old grime, Spring clean your servers, save dollars and time.” – well at least that’s what they say according to Gemini.
On April 15, 2025, Amazon announced quarterly security and critical updates for Amazon Corretto Long-Term Supported (LTS) and Feature Release (FR) versions of OpenJDK. Corretto 24.0.1, 21.0.7, 17.0.15, 11.0.27, and 8u452 are now available for download. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK.
Click on the Corretto home page to download Corretto 8, Corretto 11, Corretto 17, Corretto 21, or Corretto 24. You can also get the updates on your Linux system by configuring a Corretto Apt or Yum repo.
The Amazon EventBridge connector for Apache Kafka Connect is now generally available. This open-source connector streamlines event integration of Kafka environments with dozens of AWS services and partner integrations without writing custom integration code or running multiple connectors for each target. The connector includes built-in support for Kafka schema registries, offloading large event payloads to S3, and IAM role-based authentication, and is available under the Apache 2.0 license in the AWS GitHub organization.
Amazon EventBridge is a serverless event router that enables you to create highly scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. With the EventBridge Connector for Apache Kafka Connect, customers can leverage advanced features such as dynamic event filtering, transformation, and scalable routing through a unified connector in Kafka environments. The connector simplifies event routing from Kafka to AWS targets, custom applications and third-party SaaS services. Organizations can deploy the connector on any Apache Kafka Connect installation, including Amazon Managed Streaming for Kafka (MSK) Connect.
This feature is available in all AWS Regions, including AWS GovCloud (US). To get started, download the latest release from GitHub, configure it in your Kafka Connect environment, and refer to our developer documentation for detailed implementation guidance. Amazon MSK users can find specific instructions in the MSK Connect developer guide.
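A minimal worker configuration for the sink connector might look like the following sketch. The connector class name and property keys shown here are assumptions for illustration; check the connector’s README in the AWS GitHub organization for the exact configuration schema.

```python
# Sketch: a Kafka Connect configuration for the EventBridge sink connector.
# The class name, property keys, and ARNs are illustrative placeholders;
# see the connector's documentation for the real configuration schema.

connector_config = {
    "name": "eventbridge-sink",
    "config": {
        "connector.class": "software.amazon.event.kafkaconnector.EventBridgeSinkConnector",
        "topics": "orders",
        "aws.eventbridge.connector.id": "orders-connector",
        "aws.eventbridge.region": "us-east-1",
        "aws.eventbridge.eventbus.arn": "arn:aws:events:us-east-1:111122223333:event-bus/example-bus",
    },
}

# This dict would typically be POSTed as JSON to the Kafka Connect REST
# API (e.g. http://localhost:8083/connectors) or supplied to MSK Connect.
```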
AWS Batch now supports Amazon Elastic Container Service (ECS) Exec and AWS FireLens log router for AWS Batch on Amazon ECS and AWS Fargate. With ECS Exec you can track the progress of your application and troubleshoot issue by by running interactive commands against the containers in your AWS Batch job. AWS FireLens allows you to stream logs of your AWS Batch jobs to your chosen destinations including Amazon CloudWatch, Amazon S3, Amazon OpenSearch Service, Amazon Redshift, partner services such as Splunk and more.
You can configure ECS Exec and AWS FireLens while registering a new AWS Batch job definition or revising an existing one. For more information, see the RegisterJobDefinition page in the AWS Batch API reference and the Amazon ECS Developer Guide sections on ECS Exec and AWS FireLens.
AWS Batch supports developers, scientists, and engineers in running efficient batch processing for ML model training, simulations, and analysis at any scale. ECS Exec and AWS FireLens are supported in any AWS Region where AWS Batch is available.
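As a rough sketch of the FireLens pattern, a job definition pairs a Fluent Bit log-router sidecar with the application container. The field names below follow the Amazon ECS task-definition conventions that multi-container job definitions reuse, and the application image, log group, and Region are placeholders; consult the RegisterJobDefinition reference for the exact AWS Batch job-definition shape:

```json
{
  "containers": [
    {
      "name": "log_router",
      "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:stable",
      "firelensConfiguration": { "type": "fluentbit" }
    },
    {
      "name": "application",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-batch-app:latest",
      "logConfiguration": {
        "logDriver": "awsfirelens",
        "options": {
          "Name": "cloudwatch_logs",
          "region": "us-east-1",
          "log_group_name": "/batch/my-app",
          "log_stream_prefix": "job-",
          "auto_create_group": "true"
        }
      }
    }
  ]
}
```

With this in place, everything the application container writes to stdout/stderr is routed through Fluent Bit to the configured CloudWatch log group instead of the default log driver.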
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M8g instances are available in the AWS Asia Pacific (Tokyo) and Asia Pacific (Sydney) Regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 M8g instances are built for general-purpose workloads, such as application servers, microservices, gaming servers, midsize data stores, and caching fleets. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.
AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon EC2 M7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. M8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS).
Driven by generative AI innovations, the Business Intelligence (BI) landscape is undergoing significant transformation, as businesses look to bring data insights to their organization in new and intuitive ways, lowering traditional barriers that have often kept discoveries out of the hands of the broader organization.
We’re spearheading this trend with Gemini in Looker, which builds on Looker’s history as a cloud-first BI tool underpinned by a semantic layer that aligns data, and changes how users interact with that data: intelligent BI powered by Google’s latest AI models. The convergence of AI and BI stands to democratize data insights across organizations, moving beyond traditional methods to make data exploration more intuitive and accessible.
Gemini in Looker lowers technical barriers to accessing information, enhances collaboration, and accelerates the process of turning raw data into actionable insights. As we announced at Google Cloud Next 25, we are expanding access to Gemini in Looker, making it available to all Looker platform users. In this post, we discuss its key features, underlying architecture, and its transformative potential for both data analysts and business users.
aside_block
<ListValue: [StructValue([(‘title’, ‘$300 in free credit to try Google Cloud data analytics’), (‘body’, <wagtail.rich_text.RichText object at 0x3e322d85fbe0>), (‘btn_text’, ‘Start building for free’), (‘href’, ‘http://console.cloud.google.com/freetrial?redirectPath=/bigquery/’), (‘image’, None)])]>
Using AI to enhance productivity and efficiency
We designed Gemini in Looker with a clear objective: to improve productivity for analysts and business users with AI. Gemini in Looker makes it easier to prepare data and semantic models for BI, and simplifies building dashboard visualizations and reports. Additionally, Gemini in Looker improves business users’ efficiency by building their data literacy and fluency, enabling them to tell data stories in their presentations and use natural language to go beyond the dashboard for answers to their questions. The result: analysts do their jobs faster, and business users can tell data stories and get the answers they need.
Gemini in Looker does this through a suite of gen-AI-powered capabilities that make analytics tasks and workflows easier:
Looker Conversational Analytics allows users to ask questions about their data in natural language, gaining instant, highly visual answers powered by AI and grounded in Looker’s semantic model. Data exploration is now as simple as chatting with your team’s data expert.
Talk to your data the same way you talk to your data analyst, only faster.
Automatic Slide Generation exports Looker reports to Google Slides, as well as AI-generated summaries of charts and their key insights, to automate creating presentations. With Automatic Slide Generation, presentations stay current and relevant, as the slides are directly connected to the underlying reports, so that the data they present is always up-to-date.
Rapidly transform your reports into live presentations you can share.
Formula Assistant simplifies the creation of calculated fields for ad-hoc analysis by allowing analysts to describe the desired calculation in natural language. The formula is automatically generated using AI, saving time and effort for analysts and report builders.
LookML Assistant simplifies LookML code creation by letting users describe what they are looking to build in natural language and automatically creating the corresponding LookML measures and dimensions. This helps streamline the process of creating and maintaining governed data.
Advanced Visualization Assistant creates customized data visualizations that users describe with natural language, while Gemini in Looker creates the necessary JSON code configurations.
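As a flavor of what LookML Assistant can generate, a prompt like "total revenue and order count by status" might yield a view along these lines (the table and column names here are hypothetical):

```lookml
view: orders {
  sql_table_name: analytics.orders ;;

  dimension: status {
    type: string
    sql: ${TABLE}.status ;;
  }

  measure: order_count {
    type: count
  }

  measure: total_revenue {
    type: sum
    sql: ${TABLE}.amount ;;
    value_format_name: usd
  }
}
```

Because the generated dimensions and measures live in the governed LookML model, every dashboard and conversational query that references them shares the same definition.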
The semantic layer: The foundation of AI accuracy
A critical component of Looker’s AI architecture is the LookML semantic modeling layer. In conjunction with LLMs like Gemini, it provides the context the LLM needs to comprehend the data, and it helps ensure centralized metric definitions, preventing inconsistencies that can derail AI models. Without a semantic layer, AI answers may be inaccurate, leading to unreliable results, lack of adoption, and wasted effort. Looker’s semantic model enables data governance integration, maintaining compliance and trust with existing controls, and evolves with your business, iteratively updating data sets and measures so that AI answers stay accurate. According to our own internal tests, Looker’s semantic layer reduces data errors in gen AI natural language queries by as much as two thirds.
How Google protects your data and privacy
You can use Gemini in Looker knowing that your data is protected. Gemini prioritizes data privacy, and does not store customer prompts and outputs without permission. Critically, customer data, including prompts and generated output, is never used to train Google’s generative AI models.
Looker’s agentic AI architecture powers intelligent BI
Announced at Next 25, the Looker Conversational Analytics API serves as the agentic backend for Looker AI. A reasoning agent orchestrates multiple tools to answer analytical questions, uses conversation history to handle multi-turn questions, and produces more efficient Looker queries, including the ability to open them in the Explore UI.
Looker’s AI architecture is designed for accuracy and quality, taking a multi-pronged approach to gen AI quality:
Agentic reasoning
A semantic layer foundation
A dynamic knowledge graph that provides context for Retrieval Augmented Generation (RAG)
Fine-tuned models for SQL and Python generation
This robust architecture enables Looker to move beyond simply answering “What?” questions to addressing more complex queries like “How does this compare?”, “Why?”, and “What will happen?”, and ultimately, “What should we do?”
Looker’s AI and BI roadmap
With Looker, we’re committed to converging AI and BI, and are working on a number of new offerings including:
Code Interpreter for Conversational Analytics makes advanced analytics easy, enabling business users to perform complex tasks like forecasting and anomaly detection using natural language, without needing in-depth Python expertise. You can learn more about this new capability and sign up here for the Preview.
Centralize and share your Looker agents with Agentspace, which offers centralized access, faster deployment, enhanced team collaboration, and secure governance.
Automated semantic model generation with Gemini helps democratize LookML creation, boost developer productivity, and unlock data insights with multi-modal inputs. Gemini leverages diverse input types like natural language descriptions, SQL queries, and database schemas.
Embracing BI’s AI-powered future
Gemini in Looker is a significant milestone in the AI/BI revolution. By integrating the power of Google’s Gemini models with Looker’s robust data modeling and analytics capabilities, organizations can empower their analysts, enhance the productivity of their business users, and unlock deeper, more actionable insights from their data. Gemini in Looker is transforming how we understand and leverage data to make smarter, more informed decisions. The journey from asking “What?” to confidently determining “What next?” is now within reach, powered by Gemini in Looker. Learn more at https://cloud.google.com/looker, or click here for details on Gemini in Looker and how to enable it for your Looker deployment. You can also enable Trusted Tester features to gain early access to capabilities in development.
Amazon Web Services (AWS) announces the availability of Amazon EC2 I7ie instances in the AWS Europe (Ireland) region. Designed for large storage I/O intensive workloads, these new instances are powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances.
I7ie instances offer up to 120TB local NVMe storage density—the highest available in the cloud for storage optimized instances—and deliver up to twice as many vCPUs and memory compared to prior generation instances. Powered by 3rd generation AWS Nitro SSDs, these instances achieve up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances. Additionally, the 16 KB torn-write prevention feature enables customers to eliminate performance bottlenecks for database workloads.
I7ie instances are high-density storage-optimized instances for workloads that demand rapid local storage with high random read/write performance and consistently low latency for accessing large data sets. These versatile instances are offered in eleven sizes, including two bare metal sizes, providing flexibility to match customers’ computational needs. They deliver up to 100 Gbps of network bandwidth and 60 Gbps of dedicated bandwidth for Amazon Elastic Block Store (EBS), ensuring fast and efficient data transfer for the most demanding applications.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7i-flex instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in AWS Asia Pacific (Melbourne) Region. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.
M7i-flex instances are the easiest way for you to get price-performance benefits for a majority of general-purpose workloads. They deliver up to 19% better price performance compared to M6i instances. M7i-flex instances offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don’t fully utilize all compute resources, such as web and application servers, virtual desktops, batch processing, and microservices. In addition, these instances support the new Intel Advanced Matrix Extensions (AMX), which accelerate matrix multiplication operations for applications such as CPU-based ML. For workloads that need larger instance sizes (up to 192 vCPUs and 768 GiB memory) or continuous high CPU usage, you can leverage M7i instances.
To learn more, visit Amazon EC2 M7i-flex instance page.
We’re at an inflection point right now, where every industry and entire societies are witnessing sweeping change, with AI as the driving force. This isn’t just about incremental improvements, it’s about total transformation. The public sector is already experiencing sweeping change with the introduction of AI, and that pace will only intensify. This is the promise of AI, and it’s here and now. At our recent Google Cloud Next ‘25 we showcased our latest innovations and reinforced our commitment to bringing the latest and best technologies to help public sector agencies meet their missions.
Key public sector announcements at Next
It was an exciting week at Next ‘25 with hundreds of product and customer announcements from Google Cloud. Here are key AI, security, and productivity announcements that can help the public sector deliver improved services, enhance decision-making and operate with greater efficiency.
Advancements in Google Distributed Cloud that let customers bring Gemini models on premises. This complements our GDC air-gapped product, which is now authorized for U.S. Government Secret and Top Secret levels, offers Gemini, and provides the highest levels of security and compliance. This gives public sector agencies greater flexibility in how and where they access the latest Google AI innovations.
Support for a full suite of generative media models and Gemini 2.5 – Our most intelligent model yet, Gemini 2.5 is designed for the agentic era and now available in the Vertex AI platform. This builds on our recent announcement that Vertex AI Search and Generative AI (with Gemini) achieved FedRAMP High authorization, providing agencies with a secure platform and the latest AI innovations and capabilities.
Simplifying security with the launch of Google Unified Security – We are offering customers a security solution powered by AI that brings together our best-in-class security products for threat intelligence, security operations, cloud security, and secure enterprise browsing, along with Mandiant expertise, to provide a unified view and improved threat detection across complex infrastructures.
Transforming agency productivity and unlocking significant savings – We are offering Google Workspace, our FedRAMP High authorized communication and collaboration platform, at a significant discount of 71% off for U.S. federal government agencies. This offering in combination with Gemini in Workspace being authorized at the FedRAMP High level gives unprecedented access to cutting edge AI services for U.S. government workers.
Helping customers meet their mission
All of this incredible technology – and more – came to life on stage and across the show floor at our Google Public Sector Hub, where we showcased our solutions for security, defense, transportation, productivity & automation, education, citizen services, health & human services, and Google Distributed Cloud (GDC). In case you missed our live demos on Medicaid redetermination, unemployment insurance claims, transportation coordination, and research grant sourcing, contact us to schedule a virtual demo or discuss a pilot. To get hands-on with the technology, register for an upcoming Google Cloud Days training for the public sector here.
We are proud to work with customers across the public sector as they apply the latest Google innovations and technologies to achieve real mission-value impact. Ai2 announced a partnership with Google Cloud to make its portfolio of open AI models available in Vertex AI Model Garden. The collaboration will help set a new standard for openness, pairing Google Cloud’s infrastructure and AI development platform with Ai2’s open models to advance AI research and offer enterprise-quality deployment for the public sector. This builds on our announcement that Ai2 and Google Cloud will commit $20M to advance AI-powered research for the Cancer AI Alliance. You can catch the highlights from my conversation at Next with Ali Farhadi, CEO of Ai2, here.
CEO perspectives: A new era of AI-powered research and innovation
All of this incredible innovation with our customers is further enabled by our ecosystem of partners who help us scale our impact across the public sector. At Google Cloud Next, Accenture Federal Services and Google Public Sector announced the launch of a joint Managed Extended Detection and Response (MxDR) solution. The new MxDR for government solution integrates Google Security Operations (SecOps) platform with Accenture Federal’s deep federal cybersecurity expertise. This solution uses security-specific generative artificial intelligence (Gen AI) to significantly enhance threat detection and response, and the overall security posture for federal agencies.
Lastly, Lockheed Martin and Google Public Sector also announced a collaboration to advance generative AI for national security. Integrating Google’s advanced generative artificial intelligence into Lockheed Martin’s AI Factory ecosystem will enhance Lockheed Martin’s ability to train, deploy, and sustain high-performance AI models and accelerate AI-driven capabilities in critical national security, aerospace, and scientific applications.
A new era of innovation and growth
AI presents a unique opportunity to enter a new era of innovation and economic growth, enabling the public sector to get more out of limited resources to improve public services and infrastructure, make public systems more secure, and better meet the needs of their constituents. Harnessing the power of AI can help governments become agile and more secure, and serve citizens better. At Google Public Sector, we’re passionate about applying the latest cloud, AI and security innovations to help you meet your mission.
Subscribe to our Google Public Sector Newsletter to stay informed and stay ahead with the latest updates, announcements, events and more.
Amazon Q Developer in the AWS Management Console and Amazon Q Developer in the IDE are now generally available in the Europe (Frankfurt) Region.
Pro tier customers can now use and configure Amazon Q Developer in the AWS Management Console and Amazon Q Developer in the IDE to store data in the Europe (Frankfurt) Region and perform inference in European Union (EU) Regions, giving them more choice over where their data resides and transits. Amazon Q Developer administrators can configure their user settings so that data is stored in the Europe (Frankfurt) Region and inference is performed in EU geographies using cross-region inference (CRIS) to reduce latency and optimize availability. If you contact AWS Support, your data will be processed in the US East (N. Virginia) Region.
Amazon Q Developer is generally available, and you can use it in the following AWS Regions: US East (N. Virginia) and Europe (Frankfurt).