Azure – Azure CycleCloud 7 will be retired on 30 September 2023
Retirement notice: Transition to CycleCloud 8 by 30 September 2023.
Read More for the details.
Support for PostgreSQL 11 ends on 9 November 2023; plan to upgrade your Hyperscale (Citus) clusters to a higher PostgreSQL version.
Read More for the details.
Upgrade to Application Insights Java 3.x auto-instrumentation by 30 September 2025.
Read More for the details.
Retirement Notice: Transition to Azure Batch Spot VMs before 30 September 2025.
Read More for the details.
In November 2021, Amazon Elastic Block Store (EBS) introduced EBS Snapshots Archive, a new tier for EBS Snapshots that can help you save up to 75% on storage costs for EBS Snapshots that you intend to retain for more than 90 days and rarely access. Today, we are announcing that you can use Amazon Data Lifecycle Manager to automatically move snapshots created by Data Lifecycle Manager to EBS Snapshots Archive, further reducing the need to manage complex custom scripts and the risk of having unattended storage costs.
Read More for the details.
Fully customize the look and feel of your private indoor maps.
Read More for the details.
Amazon Connect now publishes metrics for third-party integrations for Wisdom, Tasks, and Customer Profiles to Amazon CloudWatch, making it easier to monitor operational metrics. These integrations enable Amazon Connect customers to ingest data (e.g., purchase history, knowledge content) and/or automate task creation from third-party SaaS applications such as Zendesk, Salesforce, or ServiceNow. You can now collect, view, and analyze operational and utilization metrics, such as the number and size of documents ingested by Wisdom from a third-party application like Zendesk. This feature is available out of the box, and no coding is required to access the data through CloudWatch.
Read More for the details.
AWS Service Catalog has made usability improvements for customers provisioning AWS resources via the AWS Service Catalog console. These changes help enable better error handling and reduce friction in the provisioning workflow.
Read More for the details.
Amazon Relational Database Service (Amazon RDS) for MySQL now supports MySQL minor version 5.7.39. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MySQL, and to benefit from the numerous fixes, performance improvements, and new functionality added by the MySQL community.
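If you apply minor version upgrades programmatically, a minimal boto3 sketch might look like the following; the instance identifier and region are placeholders, not values from the announcement.

```python
# Minimal sketch: upgrade an existing RDS for MySQL instance to 5.7.39 with boto3.
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # region is a placeholder

rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-instance",  # placeholder identifier
    EngineVersion="5.7.39",
    ApplyImmediately=False,  # apply during the next maintenance window
)
```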
Read More for the details.
We are excited to announce that Amazon SageMaker Model Training now supports SageMaker Training Managed Warm Pools. Users can now opt in to keep their machine learning (ML) model training hardware instances warm for a specified duration of time after the job completes. Using this feature, customers can do iterative experimentation or run consecutive jobs at scale for model training on the same warm instances, with up to an 8x reduction in job startup latency.
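As a rough illustration, opting a training job into a warm pool with the SageMaker Python SDK might look like the sketch below; the image URI, role, S3 path, and the keep-alive value are placeholders, and the parameter name is an assumption to verify against the current SDK documentation.

```python
# Sketch: request a warm pool by setting a keep-alive period on the training job.
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="<training-image-uri>",                     # placeholder
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder
    instance_count=1,
    instance_type="ml.m5.xlarge",
    keep_alive_period_in_seconds=1800,  # keep instances warm for 30 minutes after the job
)

estimator.fit({"training": "s3://my-bucket/train/"})  # placeholder S3 input
```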
Read More for the details.
AWS Support continues to provide a mix of tools, technology, people, and programs to help you optimize performance, lower costs, and innovate faster. Today, the new AWS Support Plans console experience makes it easier to view your current Support plan and the features it includes, and to compare it with other AWS Support plans.
Read More for the details.
Welcome to September’s Cloud CISO Perspectives. This month, we’re focusing on Google Cloud’s acquisition of Mandiant and what it means for us and the broader cybersecurity community. Mandiant has long been recognized as a leader in dynamic cyber defense, threat intelligence, and incident response services. As I explain below, integrating their technology and intelligence with Google Cloud’s will help improve our ability to stop threats and to modernize the overall state of security operations faster than ever before.
As with all Cloud CISO Perspectives, the contents of this newsletter will continue to be posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
Cybersecurity is moving through a tumultuous period of growth, change, and modernization as small organizations, global enterprises, and entire industries move to the cloud. Their digital transformations are an opportunity to do security better and more efficiently than before.
At Google Cloud, we believe that our industry should evolve beyond defense strategies and incident response techniques that, in some cases, predate the wide availability of broadband Internet. Our acquisition of Mandiant only underscores how important this belief is to how we work with our customers, putting their security first.
Mandiant has been a leader in incident response and threat intelligence for well over a decade. In my experience, they’ve been at the forefront in dealing with all major developments of threats, threat actors, and landmark events in the industry. We have no intention of changing this – their expertise and capabilities will be even more amplified within Google Cloud.
In fact, we see this as a terrific opportunity to combine what we’re both good at when it comes to security operations. Google Cloud already has excellent SIEM and SOAR capabilities with Chronicle and Siemplify. With Mandiant, we’re able to provide more threat intelligence and incident response capabilities than ever before. At the end of the day, this is a natural and complementary combination of products and services.
We hope to lead the industry towards a democratization of security operations that focuses on “workflows, personnel, and underlying technologies to achieve an autonomic state of existence,” as Google Cloud CEO Thomas Kurian said. And as Mandiant CEO and founder Kevin Mandia wrote, protecting good people from bad is what this is all about. “We can help organizations find and validate potential security issues before they become an incident,” he said.
Mandiant also embraces our shared fate vision, where we are actively involved in the outcomes of our customers. We want to work with customers where they are, and help them achieve better outcomes at every phase of their security lifecycle. From building secure infrastructure, to understanding and defending against new threats, to reacting to security incidents, we want to be there for our customers – and so does Mandiant.
Mandiant is the largest acquisition ever at Google Cloud, and the second-largest in Google history. As cybercriminals continue to exploit new and old vulnerabilities — see last month’s column for more on that — bringing Mandiant on as part of Google Cloud only underscores how important effective cybersecurity has become.
Our big annual user conference Google Cloud Next ‘22 is just around the corner, and it’s going to be an incredible three days of news, conversations, and hopefully more than a little inspiration. For current cloud customers and those among you who are cloud-curious, security is a foundational element in everything we do at Google Cloud and will be ever-present at Next.
From October 11 – 13, you’ll be able to dive into the latest cloud tech innovations, hear from Google experts and leaders, learn what your peers are up to, and even try new skills out in the lab sessions. You can read more about the sessions for further details, and sign up here.
The following week, Mandiant hosts its inaugural mWISE conference from October 18 – 20. This vendor-neutral conference, a must for SecOps leaders and security analysts, will bring together cybersecurity leaders to turn knowledge into collective action in the fight against persistent and evolving cyber threats. You can read more about the sessions for further details, and sign up here.
Here are the latest updates, products, services and resources from our security teams this month:
Best Kept Security Secrets: Organization Policy Service: Our Organization Policy Service is a highly-configurable set of platform guardrails for security teams to set broad yet unbendable limits for engineers before they start working. Learn more.
Custom Organization Policy comes to GKE: Sometimes, predefined policies aren’t an exact fit for what an organization wants to accomplish. Now in Preview, Custom Organization Policy for GKE lets administrators define and tailor policies to their organization’s unique needs. Read more.
What makes our security special: Our reflections 1 year after joining OCISO: Taylor Lehmann and David Stone of Google Cloud’s Office of the CISO reflect on their first year helping customers be more secure. Read more.
How to use Google Cloud to find and protect PII: Google Professional Services has developed a solution using Google Cloud Data Loss Prevention to inspect and classify sensitive data, and then apply these insights to automatically tag and protect data in BigQuery tables. Read more.
Introducing Workforce Identity Federation, a new way to manage Google Cloud access: This new Google Cloud Identity and Access Management (IAM) feature can rapidly onboard workforce user identities from external identity providers and provide direct secure access to Google Cloud services and resources. Learn more.
Three new features come to Google Cloud Firewall: Firewalls provide one of the basic building blocks for a secure cloud infrastructure, and three new features are now generally available: Global Network Firewall Policies, Regional Network Firewall Policies, and IAM-governed Tags. Here’s what they do.
New ways BeyondCorp Enterprise can protect corporate applications: Following our announcement with Jamf Pro for macOS earlier this year, we are excited to announce a new BeyondCorp Enterprise integration: Microsoft Intune, now available in Preview. Read more.
Connect Gateway and ArgoCD: Integrating your ArgoCD deployment with Connect Gateway and Workload Identity provides a seamless path to deploy to Kubernetes on many platforms. ArgoCD can easily be configured to centrally manage various cluster platforms including GKE clusters, Anthos clusters, and many more. Read more.
Architecting for database encryption on Google Cloud: Learn security design considerations and how to accelerate your decision making when migrating or building databases with the various encryption options supported on Google Cloud. Read more.
Introducing fine-grained access control for Cloud Spanner: As Google Cloud’s fully managed relational database, Cloud Spanner powers applications of all sizes. Now in Preview, Spanner gets fine-grained access control for more nuanced IAM decisions. Read more.
Building a secure CI/CD pipeline using Google Cloud built-in services: In this post, we show how to create a secure software delivery pipeline that builds a sample Node.js application as a container image and deploys it on GKE clusters. Read more.
Introducing deployment verification to Google Cloud Deploy: Deployment verification can help developers and operators orchestrate and execute post-deployment testing without having to undertake a more extensive testing integration, such as with Cloud Deploy notifications or manual testing. Read more.
The 2022 Accelerate State of DevOps Report: Our 8th annual deep dive into the state of DevOps finds broad adoption of emerging security practices, especially among high-trust, low-blame cultures focused on performance. Read the full report.
Evolving our data processing commitments for Google Cloud and Google Workspace: We are pleased to announce that we have updated and merged our data processing terms for Google Cloud, Google Workspace, and Cloud Identity into one combined Cloud Data Processing Addendum. Read more.
Data governance building blocks for financial services: How does data governance for financial services correspond to Google Cloud services and beyond? Here we propose an architecture capable of supporting the entire data lifecycle, based on our experience implementing data governance solutions with world-class financial services organizations. Read more.
Update on regulatory developments and Google Cloud: As part of our commitment to be the most trusted cloud, we continue to pursue global industry standards, frameworks, and codes of conduct that tackle our customers’ foundational need for a documented baseline of addressable requirements. Here’s a summary of our efforts over the past several months. Read more.
We launched a new weekly podcast focusing on Cloud Security in February 2021. Hosts Anton Chuvakin and Timothy Peacock chat with cybersecurity experts about the most important and challenging topics facing the industry today. This month, they discussed:
Everything you wanted to know about securing AI (but were afraid to ask): What threats does artificial intelligence face? What are the best ways to approach those threats? What do we know so far about what works to secure AI? Hear answers to these questions and more with Alex Polyakov, CEO of Adversa.ai. Listen here.
Inside reCAPTCHA’s magic: More than just “click on buses,” here’s how reCAPTCHA actually protects people, with Badr Salmi, product manager for reCAPTCHA. Listen here.
SRE explains how to deploy security at scale: The art of Site Reliability Engineering has a lot to teach security teams about safe and rapid deployment, with our own Steve McGhee, reliability advocate at Google Cloud. Listen here.
An XDR skeptic discusses all things XDR with Dimitri McKay, principal security strategist at Splunk. Listen here.
To have our Cloud CISO Perspectives post delivered every month to your inbox, sign up for our newsletter. We’ll be back next month with more security-related updates.
Read More for the details.
Amazon SageMaker Data Wrangler reduces the time to aggregate and prepare data for machine learning (ML) from weeks to minutes. Amazon SageMaker Autopilot automatically builds, trains, and tunes the best machine learning models based on your data, while allowing you to maintain full control and visibility. Data Wrangler enables a unified data preparation and model training experience with Amazon SageMaker Autopilot in just a few clicks. This integration is now enhanced to include and reuse Data Wrangler feature transforms, such as missing-value imputers and ordinal/one-hot encoders, along with the Autopilot models for ML inference. When you prepare data in Data Wrangler and train a model by invoking Autopilot, you can now deploy the trained model along with all the Data Wrangler feature transforms as a SageMaker Serial Inference Pipeline. This enables automatic preprocessing of the raw data by reusing the Data Wrangler feature transforms at inference time. This feature is currently only supported for Data Wrangler flows that do not use join, group by, concatenate, or time series transformations.
Read More for the details.
Amazon Relational Database Service (Amazon RDS) for MariaDB now supports MariaDB minor version 10.6.10. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MariaDB, and to benefit from the numerous bug fixes, performance improvements, and new functionality added by the MariaDB community.
Read More for the details.
Data is critical for any organization to build and operationalize a comprehensive analytics strategy. For example, each transaction in the BFSI (Banking, Financial Services, and Insurance) sector produces data. In manufacturing, sensor data can be vast and heterogeneous. Most organizations maintain many different systems, and each organization has unique rules and processes for handling the data contained within those systems.
Google Cloud provides end-to-end data cloud solutions to store, manage, process, and activate data, starting with BigQuery. BigQuery is a fully managed data warehouse designed for online analytical processing (OLAP) at any scale. BigQuery has built-in features like machine learning, geospatial analysis, data sharing, log analytics, and business intelligence. MongoDB is a document-based database that handles real-time operational applications with thousands of concurrent sessions and millisecond response times. Often, curated subsets of data from MongoDB are replicated to BigQuery for aggregation and complex analytics, and to further enrich the operational data and end-customer experience. As you can see, MongoDB Atlas and Google Cloud BigQuery are complementary technologies.
Dataflow is a truly unified stream and batch data processing system that’s serverless, fast, and cost-effective. Dataflow allows teams to focus on programming instead of managing server clusters, as its serverless approach removes operational overhead from data engineering workloads. Dataflow is very efficient at implementing streaming transformations, which makes it a great fit for moving data from one platform to another while applying any required changes to the data model. As part of data movement with Dataflow, you can also implement additional use cases such as identifying fraudulent transactions, generating real-time recommendations, and more.
Customers have been using Dataflow widely to move and transform data from Atlas to BigQuery and vice versa. For this, they have been writing custom code using Apache Beam libraries and deploying it on the Dataflow runtime.
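For reference, a minimal sketch of that custom-code approach with the Apache Beam Python SDK might look like the following; the connection URI, database and collection names, BigQuery table, and schema are placeholders, not values from this post.

```python
# Minimal Apache Beam (Python) sketch: read documents from MongoDB Atlas and
# write them to a BigQuery table on the Dataflow runner.
import apache_beam as beam
from apache_beam.io.mongodbio import ReadFromMongoDB
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",                # placeholder
    region="us-central1",                # placeholder
    temp_location="gs://my-bucket/tmp",  # placeholder
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromAtlas" >> ReadFromMongoDB(
            uri="mongodb+srv://user:pass@cluster.example.mongodb.net",  # placeholder
            db="sample_db",      # placeholder
            coll="orders",       # placeholder
        )
        # Flatten each document into a simple row; adapt to your own schema.
        | "ToRow" >> beam.Map(lambda doc: {"id": str(doc["_id"]), "payload": str(doc)})
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:analytics.orders_raw",  # placeholder table
            schema="id:STRING,payload:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```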
To make moving and transforming data between Atlas and BigQuery easier, the MongoDB and Google teams worked together to build templates for these pipelines and make them available on the Dataflow page in the Google Cloud console. Dataflow templates allow you to package a Dataflow pipeline for deployment. Templates have several advantages over directly deploying a pipeline to Dataflow. The Dataflow templates and the Dataflow page make it easier to define the source, target, transformations, and other logic to apply to the data. You can key in all the connection parameters through the Dataflow page, and with a click, the Dataflow job is triggered to move the data.
To start, we have built three templates. Two are batch templates that read and write from MongoDB to BigQuery and vice versa; the third is a streaming template that reads change stream data pushed to Pub/Sub and writes it to BigQuery. Below are the templates for interacting with MongoDB and Google Cloud native services currently available:
1. MongoDB to BigQuery template:
The MongoDB to BigQuery template is a batch pipeline that reads documents from MongoDB and writes them to BigQuery.
2. BigQuery to MongoDB template:
The BigQuery to MongoDB template can be used to read the tables from BigQuery and write to MongoDB.
3. MongoDB to BigQuery CDC template:
The MongoDB to BigQuery CDC (Change Data Capture) template is a streaming pipeline that works together with MongoDB change streams. The pipeline reads the JSON records pushed to Pub/Sub via a MongoDB change stream and writes them to BigQuery.
The Dataflow page in the Google Cloud console can help accelerate job creation. It eliminates the need to set up a Java environment and other dependencies. Users can instantly create a job by passing parameters, including the URI, database name, collection name, and BigQuery table name, through the UI.
Below you can see these new MongoDB templates currently available on the Dataflow page:
Below is the parameter configuration screen for the MongoDB to BigQuery (Batch) template. The required parameters vary based on the template you select.
Refer to the Google-provided Dataflow templates documentation page for more information on these templates. If you have any questions, feel free to contact us or engage with the Google Cloud Community Forum.
Acknowledgement: We thank the many Google Cloud and MongoDB team members who contributed to this collaboration and review, led by Paresh Saraf from MongoDB and Maruti C from Google Cloud.
Read More for the details.
As the use of electronic health records (EHR) continues to rise, government agencies and healthcare providers need digital assurances that health information is protected and complies with key regulations such as HIPAA. They also need to ensure that any sensitive data can be protected–no matter the scale and speed at which they operate. Two cloud capabilities—Healthcare De-identification and Cloud Data Loss Prevention—can be instrumental to meeting these needs and keeping health data secure.
Healthcare De-identification (De-ID)
With Healthcare De-ID, users can automate the de-identification of healthcare data in their native formats–no need to extract text or images. Healthcare De-ID can automatically remove protected health information (PHI) from Digital Imaging and Communications in Medicine (DICOM) and Fast Healthcare Interoperability Resources (FHIR) data. This enables multiple use cases, including:
Sharing health information with non-privileged parties
Creating datasets from multiple sources and analyzing them
Anonymizing data so that it can be used in machine learning models
Cloud Data Loss Prevention (DLP)
With Cloud Data Loss Prevention (DLP), Google Cloud has created a comprehensive set of technologies that automate the discovery, classification, and protection of your most sensitive data. Cloud DLP works on text and images, and has three key features:
Data discovery and classification – Includes over 150 built-in infotype (name, address, SSN, etc.) detectors and the ability to create custom infotypes (see the classification sketch after this list). Native support for scanning, classifying, and profiling sensitive data in Cloud Storage (GCS), BigQuery (BQ), and Datastore. A streaming content API enables support for additional data sources, custom workloads, and applications.
Automated data de-identification, masking, and tokenization – Automatically mask, tokenize, and transform sensitive elements to help better manage your data. This also makes data more easily used for analytics. Preserve the utility of your data for joining, analytics, and AI while protecting raw sensitive identifiers.
Measuring risk of re-identified data – Quasi-identifiers are partially identifying elements or combinations of data that may link to a single person or very small group. Cloud DLP allows you to measure statistical properties, such as k-anonymity and l-diversity. This expands your ability to understand and protect data privacy.
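As a concrete example of the data discovery and classification feature above, a minimal sketch with the Cloud DLP Python client might look like this; the project ID and sample text are placeholders.

```python
# Minimal Cloud DLP classification sketch: inspect a string for a few built-in infotypes.
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/my-project"  # placeholder project ID

response = client.inspect_content(
    request={
        "parent": parent,
        "inspect_config": {
            "info_types": [
                {"name": "PERSON_NAME"},
                {"name": "EMAIL_ADDRESS"},
                {"name": "US_SOCIAL_SECURITY_NUMBER"},
            ],
            "include_quote": True,
        },
        "item": {"value": "Jane Doe can be reached at jane.doe@example.com"},  # placeholder text
    }
)

# Print each finding's infotype, likelihood, and the quoted match.
for finding in response.result.findings:
    print(finding.info_type.name, finding.likelihood, finding.quote)
```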
Healthcare De-ID and Cloud DLP leverage machine learning (ML); the models used to identify sensitive data continue to improve over time as Google continuously trains them.
Using Healthcare De-ID and Cloud DLP to protect healthcare data
Cloud DLP and Healthcare De-ID can be an essential part of a healthcare data security suite. With the Healthcare De-ID and DLP API, agencies are now able to automate the identification and redaction of sensitive information like Personally Identifiable Information (PII) and Protected Health Information (PHI) from medical images. In fact, a large federal healthcare agency is currently using Google Cloud Healthcare De-ID for this purpose. They’ve coupled Healthcare De-ID with Google Cloud Healthcare API to perform medical imaging de-identification on over 400 medical images. By automating de-identification, the agency is saving time while adding stronger layers of protection to their sensitive data.
Cloud DLP can be used to identify and de-identify both streaming and storage data. There are two main ways to do this. Both options offer the same level of healthcare data security.
“Content” methods:
Stream data directly into the API
Payload data is not stored or persisted by the API
Supports full classification and DeID/redaction
Works on data from virtually anywhere (Google Cloud, on-premises, or another cloud provider)
“Storage” methods:
Native support for Google Cloud Storage, BQ, Datastore
Currently supports classification methods
Saves detailed findings to BigQuery
BigQuery supports Risk Analytics (K-anon, etc.)
How Healthcare De-ID and Cloud DLP obscure sensitive data
Cloud DLP provides options for tokenizing sensitive data through techniques such as dynamic masking and bucketing. The sample figure below shows examples of Cloud DLP masking phone numbers with hashes and replacing other sensitive identifiers, like email addresses and social security numbers, with generic categories. Healthcare De-ID offers similar options for transforming sensitive data.
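For the category-replacement transformation described above, a minimal Cloud DLP sketch might look like the following (hash-based masking would use a crypto transformation instead); the project ID and input text are placeholders.

```python
# Minimal Cloud DLP de-identification sketch: replace detected infotypes with
# their category name (e.g. [EMAIL_ADDRESS]).
from google.cloud import dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/my-project"  # placeholder project ID

info_types = [
    {"name": "EMAIL_ADDRESS"},
    {"name": "PHONE_NUMBER"},
    {"name": "US_SOCIAL_SECURITY_NUMBER"},
]

response = client.deidentify_content(
    request={
        "parent": parent,
        "inspect_config": {"info_types": info_types},
        "deidentify_config": {
            "info_type_transformations": {
                "transformations": [
                    {
                        "info_types": info_types,
                        "primitive_transformation": {"replace_with_info_type_config": {}},
                    }
                ]
            }
        },
        "item": {"value": "Call 555-0100 or email jane.doe@example.com"},  # placeholder text
    }
)

print(response.item.value)  # e.g. "Call [PHONE_NUMBER] or email [EMAIL_ADDRESS]"
```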
The same methods can be used on unstructured data such as images. The images below show an example of Cloud DLP de-identifying an x-ray image, automatically removing all identifiable information.
One of the biggest advantages of Google’s data de-identification solutions is their ability to scale an organization’s de-identification capabilities to meet its needs. By automating the process, organizations free up staff resources and lessen the chances of human error.
Keeping data more secure with de-identification
Data de-identification has long been a challenge for healthcare organizations. The nature of healthcare data and patient privacy laws like HIPAA have made identifying and redacting sensitive data a labor-intensive task; Personally Identifiable Information (PII) and Protected Health Information (PHI) often require manual review. Google Cloud’s Healthcare De-ID machine learning capabilities identify, tokenize, and redact sensitive data in both text-based FHIR data and image-based DICOM data, making de-identification usable at scale.
Healthcare De-ID is integrated with Google Cloud’s Healthcare FHIR API and Healthcare DICOM API, and Cloud DLP capabilities are built into Google’s native services such as BigQuery and Cloud Storage, Google Cloud’s managed data warehouse and object storage solutions. With Google Cloud data de-identification solutions, organizations can reduce the risk of leaking sensitive data.
These de-identification solutions are just one example of how Google Cloud is helping government and healthcare organizations solve their biggest data problems with the power of ML. You can learn more about these technologies on the Healthcare De-ID and Cloud DLP webpages, and test them yourself in the live interactive demo.
Google Cloud also provides a series of How-to Guides to help you get started quickly with using Healthcare De-ID.
Read More for the details.
Today, AWS Systems Manager announces support for using Amazon CloudWatch alarms to control tasks. Customers can now choose to stop a task or an action in Systems Manager services when a CloudWatch alarm is activated. This allows customers to use the broad range of monitoring metrics supported by CloudWatch, from resource utilization to application performance, to control tasks. If the specified CloudWatch alarm is activated during a task, Automation stops the task, Run Command cancels the commands, and State Manager and Maintenance Windows skip dispatch. It’s easy to get started: customers can add a CloudWatch alarm while setting up a task in Systems Manager’s Automation, Run Command, State Manager, and Maintenance Windows services. Once set up, users can view the attached alarm on the task’s details page.
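As a rough illustration, attaching an alarm to a Run Command invocation with boto3 might look like the sketch below; the alarm name, instance ID, and the exact AlarmConfiguration field names are assumptions based on the announcement, so verify them against the current SSM API reference.

```python
# Sketch: stop a Run Command invocation if a CloudWatch alarm fires.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # region is a placeholder

ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],        # placeholder instance ID
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["./deploy.sh"]},   # placeholder command
    AlarmConfiguration={                        # assumed shape; check the API reference
        "Alarms": [{"Name": "HighCpuUtilization"}],  # placeholder alarm name
        "IgnorePollAlarmFailure": False,
    },
)
```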
Read More for the details.
Although it is a common use case for Cloud Functions to connect to resources hosted in a Virtual Private Cloud using private IP addresses, it is not always obvious how to debug connection issues. For example, you might see timeouts or other errors indicating that a connection could not be established, and those error messages alone may not give you enough information to determine the cause.
A better framework for debugging is to determine:
What is your network(s) configuration? Do you have just one VPC or more than one? Are you using VPC Peering or Shared VPC? Are you using a single region or multiple regions?
Where does your resource reside, e.g. which network or region?
Are you connecting over a private or public IP?
Where does your Cloud Function reside, e.g. which region?
Do you need to use a Serverless VPC Access connector? The answer is below.
Once you have confirmed what your network looks like, you can use a checklist-style approach to review your setup and verify each step has been configured properly.
In this blog post, we’ll go over three scenarios involving VPC and public/private IPs and how we’ve configured Cloud Functions to connect to a Cloud SQL instance in each, as illustrated in the diagram below.
Whenever you need a serverless compute product to reach a resource within a VPC that uses a private IP address, you will need a Serverless VPC Access connector. Your serverless environment does not automatically have access to your VPC network; a Serverless VPC Access connector provides that access, but you must set it up.
The service account for your Function will need the proper permissions to connect to Cloud SQL. You can follow these steps for configuring a Cloud Function to connect to Cloud SQL.
Also, a Serverless VPC Access connector incurs a monthly cost based on the size of the instances used for the connector. See pricing for more info.
In this first scenario, we will connect a Cloud Function to a Cloud SQL instance that has a public IP address. Since Cloud SQL public IP instances are accessible from anywhere on the public internet, we do not need a Serverless VPC Access Connector.
First, follow the steps in this quickstart to create a MySQL instance with a public IP that contains a “guestbook” database. Since this MySQL instance uses a public IP, you can use Cloud Shell.
Note: if you are using Cloud Functions 2nd gen, you will need to add a Cloud SQL connection to the underlying Cloud Run service. You can follow the directions here: Connect from Cloud Run | Cloud SQL for MySQL; otherwise, you might see Error: connect ENOENT /cloudsql/<project>:<cloud-sql-region>:<cloud-sql-instance-name>
Here is a Cloud Functions 2nd gen nodejs example using the promise-mysql dependency:
In the second scenario, we restrict access to the Cloud SQL instance to a private IP address only. Here a Cloud Function will need to use a VPC connector to route traffic to the private IP of the Cloud SQL instance, since by default serverless resources do not have access to your VPC.
1. Follow the steps in this quickstart to create a MySQL instance with a private IP that contains a “guestbook” database. You’ll need to create a Compute Engine VM in the same VPC as your Cloud SQL instance. For production workloads, we recommend using the Cloud SQL Auth proxy to encrypt the traffic from your VM to your instance. Once you have VM access to your Cloud SQL instance, you can follow the same steps for creating the guestbook database.
2. Create a Serverless VPC Access connector, which will handle traffic between the Function and the VPC network. For our use case, we will use the default network and assign the connector an IP range of 10.8.0.0/28. This ensures that the source IP address for any request sent from the connector will be from this range.
3. Configure your Cloud Function to use the VPC connector, as shown in the image below:
Configuring Cloud Functions to use a VPC connector
Once deployed, you’ll see the VPC Connector information in the Function’s Details tab. You can learn more in the VPC Connector docs.
For a private IP, you can use a TCP connection to connect to the private IP instance, as shown in the following Cloud Functions 2nd gen example:
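A minimal sketch of this pattern, here in Python with the pymysql driver; the private IP, credentials, and database name are placeholders, and any MySQL client that opens a TCP socket to the instance’s private IP works the same way.

```python
# Minimal sketch of a Cloud Functions 2nd gen HTTP function that opens a TCP
# connection to the Cloud SQL instance's private IP through the VPC connector.
import functions_framework
import pymysql


@functions_framework.http
def connect_private_ip(request):
    conn = pymysql.connect(
        host="10.1.2.3",          # private IP of the Cloud SQL instance (placeholder)
        user="app_user",          # placeholder
        password="app_password",  # placeholder; prefer Secret Manager in practice
        database="guestbook",     # placeholder database name
    )
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT NOW()")
            (now,) = cur.fetchone()
    finally:
        conn.close()
    return f"Connected over private IP; server time is {now}"
```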
As long as your Cloud Function and your VPC Connector are in the same region (just like in Scenario 2), your SQL instance can be in a different region than the VPC connector. And you’ll use the same code as before in the second scenario.
You can find more information about connecting your Cloud Function to a VPC in the networking documentation, including how to configure your Function for egress and ingress. Also, please review the documentation for best practices for optimizing your Cloud Function for networking.
To learn more about Cloud Functions 2nd gen, you can try out this codelab that provides an overview of the new capabilities, including min instances and traffic splitting.
Read More for the details.
Amazon File Cache is a fully managed, scalable, and high-speed cache on AWS for processing file data stored in disparate locations—including on premises. Amazon File Cache accelerates and simplifies cloud bursting and hybrid workflows in areas such as media and entertainment, financial services, health and life sciences, microprocessor design, manufacturing, weather forecasting, and energy.
Read More for the details.
AWS CloudShell is now generally available in the South America (São Paulo), Canada (Central), and Europe (London) regions.
Read More for the details.