Azure – Windows Server 2012/R2 reaches end of support
Windows Server 2012/R2 reaches end of support today, October 10, 2023. Microsoft has several options for organizations to remain protected.
Read More for the details.
Upgrade Azure Functions to use Python 3.9 or above.
Read More for the details.
Amazon SageMaker Canvas now supports deploying machine learning (ML) models to real-time inferencing endpoints, allowing you to take your ML models to production and drive action based on ML-powered insights. SageMaker Canvas is a no-code workspace that enables analysts and citizen data scientists to generate accurate ML predictions for their business needs.
Read More for the details.
Amazon Relational Database Service (Amazon RDS) now supports M6in, M6idn, R6in, and R6idn database (DB) instances for RDS for PostgreSQL, MySQL, and MariaDB. These network optimized DB instances deliver up to 200 Gbps of network bandwidth, which is 300% more than similarly sized M6i and R6i database instances. Enhanced network bandwidth makes M6in and R6in DB instances ideal for write-intensive workloads. M6idn and R6idn support local block storage with up to 7.6 TB of NVMe-based solid state disk (SSD) storage.
Read More for the details.
Amazon Rekognition content moderation is a deep learning-based feature that can detect inappropriate, unwanted, or offensive images and videos, making it easier to find and remove such content at scale. Customers across industries, such as social media, gaming, and advertising, use Rekognition’s content moderation capabilities to protect their brand reputation and enable safe user communities. With Custom Moderation, customers can now enhance the accuracy of the moderation deep learning model on their business-specific data by training an adapter with as few as twenty annotated images in less than an hour.
Read More for the details.
We are excited to launch Unified Settings for the AWS Console in the AWS GovCloud (US-West and US-East) Regions. With Unified Settings, settings will persist across devices, browsers, and services. At launch, Unified Settings supports default language, default Region, visual mode, and favorite service display. Default language displays your preferred language across the Management Console, default Region sets the AWS Region each time you sign in or visit a service console, visual mode lets you toggle between the default light mode or dark mode, and favorite service display will show services in the favorites bar with either the service icon and full name or only the service icon.
Read More for the details.
Starting today, customers can disable their unused or obsolete Amazon Machine Images (AMIs; pronounced ah-mee). Disabling an AMI changes its state to disabled, makes the AMI private if it was previously shared, and prevents any new EC2 instance launches from that disabled AMI. Customers creating, managing, and consuming AMIs at scale can now simplify and streamline their workflows with this new capability.
Read More for the details.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7gd, M7gd, and R7gd instances with up to 3.8 TB of local NVMe-based SSD block-level storage are available in Asia Pacific (Singapore) and Asia Pacific (Tokyo) regions. Additionally, C7gd instances are now available in Asia Pacific (Sydney).
Read More for the details.
AWS customers now have the option to enable Systems Manager, and configure permissions for all EC2 instances in an organization that has been configured to use AWS Organizations, with a single action using Default Host Management Configuration (DHMC). This feature provides a method to help customers ensure core Systems Manager capabilities such as Patch Manager, Session Manager, and Inventory are available for all new and existing instances. DHMC is recommended for all EC2 customers, and offers a simple, scalable process to standardize the availability of Systems Manager tools.
Read More for the details.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M6in and M6idn instances are available in the AWS Regions Europe (Stockholm) and Asia Pacific (Sydney). These sixth-generation network optimized instances, powered by 3rd Generation Intel Xeon Scalable processors and built on the AWS Nitro System, deliver up to 200 Gbps of network bandwidth, 2x more network bandwidth and up to 2x higher packet-processing performance than comparable fifth-generation instances. Customers can use M6in and M6idn instances to scale the performance and throughput of network-intensive workloads such as high-performance file systems, distributed web scale in-memory caches, caching fleets, real-time big data analytics, and Telco applications such as 5G User Plane Function (UPF).
Read More for the details.
With the announcement of Memorystore for Redis Cluster at Google Next, we have many Redis administrators and developers asking how they can migrate from their existing Redis cluster environments. We understand that these are business-critical applications and need a zero downtime migration.
Adopting Memorystore helps remove repetitive tasks like scaling, patching, backing up and configuring observability. This frees up Redis developers and administrators to focus on activities that provide direct value to their users like releasing features and applications. It can also reduce costs.
Memorystore for Redis Cluster is a managed service offering that is fully OSS compatible and easy to set up. Memorystore for Redis Cluster serves the most demanding use cases like caching, leaderboards and stream processing. Memorystore for Redis Cluster provides automatic zonal distribution of nodes for high availability, automated replica management and promotion, and zero-downtime scale in and out with automatic key redistribution. You can migrate from a variety of standalone node or clustered Redis sources, including from self-managed Redis on Compute Engine, Google Kubernetes Engine, or from third-party platforms like Redis Enterprise or ElastiCache. You can learn more about Memorystore for Redis Cluster in our documentation.
In this blog, we will describe how to use RIOT, “Redis Input/Output Tool” for an online migration from an existing Redis cluster to a fully managed Memorystore for Redis Cluster. We will provide some guidelines to enable a problem-free migration.
RIOT is an open-source tool developed by Julien Ruaux, Principal Field Engineer at Redis. RIOT supports data migration between various sources and targets, including files, relational databases, and Redis instances. We will focus on how it facilitates a hassle-free migration from one Redis cluster to another. Note that RIOT does not currently work with Redis 7.0 as a source. If you wish to migrate Redis 7.0, please look at type-based replication.
We recommend the following additional efforts take place to ensure a smooth migration:
- Planning – Write a detailed migration project plan including dependencies, time estimates, and task owners.
- Automation – Any actions should be scripted.
- Testing – Test the migration and incorporate lessons learned into the migration plan and automation. Iterate the tests several times.
This methodology will help eliminate downtime and human error. Further information about migration planning can be found in this blog.
Before we get started, let’s review a high level plan for the migration. The following diagrams are a logical overview of using RIOT for a no-downtime migration, though there may be some backfill as replication catches up at cutover.
1. Deploy a Memorystore for Redis Cluster instance sized similarly to your existing cluster.
2. Deploy a Compute Engine VM with Java Virtual Machine (JVM) and RIOT installed to manage the data movement. When you start RIOT, it will take a full snapshot of the current production Redis cluster instance and write the snapshot to your new Memorystore for Redis Cluster instance. This could take some time depending on the size of the cluster and the network connectivity.
3. RIOT propagates new changes from your existing Redis cluster to your new Memorystore for Redis Cluster instance while your application is live. Replication lag can range from milliseconds to seconds depending on the rate of change and network connectivity. A typical migration on the GCP network with a source and target that has adequate resources can have a replication latency measured in milliseconds.
4. When ready for cutover, stop traffic to the existing Redis cluster. Reconfigure the application to point to the new Memorystore for Redis Cluster instance.
Now that we’ve discussed the process at a high level, let’s get into the finer details.
The following step by step instructions can be used as a guide for your near zero downtime migration.
Step 1: Create a VM to run RIOT
You can create the RIOT VM from the console or with a similar gcloud command. Edit the project, zone, network and service account as needed.
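For example, a minimal gcloud command to create the VM might look like the following sketch; the project, zone, machine type, network, and service account values are placeholders that you should replace with your own.

# Placeholder values; adjust project, zone, machine type, network, and service account as needed.
gcloud compute instances create riot-vm \
    --project=my-project \
    --zone=us-central1-a \
    --machine-type=e2-standard-4 \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --network=default \
    --service-account=my-sa@my-project.iam.gserviceaccount.com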
Note: Network connectivity to both the source Redis cluster and the target Memorystore instance is required on the Redis ports. Memorystore uses the default Redis port 6379.
Step 2: Install RIOT and the JVM on a GCP VM.
Run the following command to install a Java Virtual Machine (JVM) on your VM. This works on Debian and may need to be adjusted for other distributions.
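On a Debian-based image, for example:

sudo apt-get update
sudo apt-get install -y default-jre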
Run the following command to download RIOT. Check for the latest versions.
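A sketch using wget; the version number and download URL below are illustrative, so check the RIOT releases page on GitHub for the current release and its exact file name.

# Version and URL are placeholders; use the latest release from the RIOT GitHub releases page.
wget https://github.com/redis/riot/releases/download/v3.1.5/riot-3.1.5.tar.gz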
Extract RIOT:
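For example, assuming the archive downloaded above (adjust the file name, and use unzip instead if you downloaded a .zip archive):

tar -xzf riot-3.1.5.tar.gz
cd riot-3.1.5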
Install the Redis CLI:
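On Debian, the redis-tools package provides redis-cli:

sudo apt-get install -y redis-tools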
Let’s set up the environment. We need to edit host and port variables for the Memorystore target and the Redis source. You can get the Memorystore information from the console.
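A sketch with placeholder addresses; replace them with your source Redis cluster endpoint and the Memorystore for Redis Cluster discovery endpoint from the console.

# Placeholders; replace with your own endpoints.
export SOURCE_HOST=10.0.0.10
export SOURCE_PORT=6379
export TARGET_HOST=10.0.0.20
export TARGET_PORT=6379

# Verify connectivity to the source and the target.
redis-cli -h "$SOURCE_HOST" -p "$SOURCE_PORT" -c ping
redis-cli -h "$TARGET_HOST" -p "$TARGET_PORT" -c ping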
Both commands should return PONG.
Enable Key Space Notifications on the source Redis instance
RIOT uses keyspace notifications to capture any updates to the database for replication.
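On the source instance, enable notifications for all keyspace events; this is the setting RIOT's live replication relies on:

redis-cli -h "$SOURCE_HOST" -p "$SOURCE_PORT" -c config set notify-keyspace-events KEA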
Step 3: Use RIOT to begin the migration
Start RIOT
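A minimal sketch of the replication command; the exact flags vary between RIOT versions, so confirm them with riot replicate --help. --mode live performs the initial scan and then streams live changes.

# Flags are illustrative; confirm against your installed RIOT version.
riot replicate \
  "redis://$SOURCE_HOST:$SOURCE_PORT" \
  "redis://$TARGET_HOST:$TARGET_PORT" \
  --mode live --cluster --target-cluster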
RIOT will provide the status of the initial sync (Scanning) and the changes being streamed in real time (Listening).
Step 4: Validation
There are many ways you can validate the success of your migration, such as dumping each database and comparing, or checking the total number of keys. For the sake of this walkthrough, we will validate by comparing key counts between the source and target to ensure that replication has caught up. Note: On instances with a high rate of change, exact key counts can be difficult to match.
First get all of the Memorystore for Redis Cluster ports:
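For example, you can list the nodes of the target cluster, including their ports, from the discovery endpoint:

redis-cli -h "$TARGET_HOST" -p "$TARGET_PORT" -c cluster nodes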
Loop through each Memorystore for Redis Cluster slot to obtain the key count per slot:
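A minimal sketch of a per-node variant: rather than iterating individual slots, it sums DBSIZE across every primary node reported by CLUSTER NODES. Run it against both the source and the target (swap the host and port variables) and compare the totals; it assumes each node address is reachable from this VM.

# Sum key counts across all primary nodes of the target cluster.
TOTAL=0
for ADDR in $(redis-cli -h "$TARGET_HOST" -p "$TARGET_PORT" -c cluster nodes \
    | awk '/master/ {split($2, a, "@"); print a[1]}'); do
  HOST=${ADDR%:*}
  PORT=${ADDR##*:}
  COUNT=$(redis-cli -h "$HOST" -p "$PORT" dbsize)
  echo "$ADDR: $COUNT keys"
  TOTAL=$((TOTAL + COUNT))
done
echo "Total keys: $TOTAL"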
Step 5: Cut over production traffic and decommission the old instance
You have two options for production cutover:
- If your application does not require strong consistency between the source and destination, simply modify your Redis client to point to the new Memorystore for Redis Cluster instance and go live. This will result in zero downtime, and your databases will be eventually consistent.
- For use cases where strong consistency is required, stop write traffic to your Redis database. Wait for RIOT to complete the replication of the remaining changes to the new Memorystore for Redis Cluster instance. Update your Redis client configuration to point to the new Memorystore for Redis Cluster instance and go live. This will result in a few seconds to a few minutes of downtime based on write frequency and replication lag, but will provide strong consistency.
You are now live on Memorystore for Redis Cluster. You can now review the Monitoring tab in the console to see usage metrics of your production workload.
The launch of Memorystore for Redis Cluster allows you to take your applications to the highest scale while providing microsecond latency. Memorystore for Redis Cluster removes the burden of managing Redis, so that you can focus on shipping new features and applications that provide value to your users.
With this migration guide, you have a framework for an easy, zero-downtime migration, with some backfill if there is replication lag. GCP is here to support your adoption of Memorystore. To learn about the latest releases for Memorystore for Redis Cluster, we suggest following our Release Notes.
Read More for the details.
Innovators in financial services, gaming, retail, and many other industries rely on Cloud Spanner to power demanding relational database workloads that need to scale without downtime. Built on Google’s distributed infrastructure, Spanner provides a fully managed experience with the highest levels of consistency and availability at any scale. Cloud Spanner’s PostgreSQL interface makes the capabilities of Spanner accessible from the open-source PostgreSQL ecosystem.
Cloud Run is a fully managed runtime that automatically scales your code, in a container, from zero to as many instances as needed to handle all incoming requests. Cloud Spanner and Cloud Run together provide end-to-end scaling and availability for operational applications. Sidecar containers run alongside the main container and add functionality without changing the existing container. Cloud Run now supports sidecars, allowing you to start independent sidecar containers alongside the main container that serves web requests.
Cloud Spanner PGAdapter is a lightweight proxy that translates the PostgreSQL network protocol to the Cloud Spanner gRPC protocol. This allows your application to use standard PostgreSQL drivers and frameworks to connect with a Cloud Spanner PostgreSQL database.
This blog shows how you can configure PGAdapter as a Cloud Run sidecar. This setup ensures the lowest possible latency between your application and PGAdapter, and automatically scales the number of PGAdapter instances with the number of application instances.
All containers within a Cloud Run instance share the same network namespace and can communicate with each other over localhost:port. The containers can also share files via shared volumes. Your application container is the container that receives traffic from the Internet. PGAdapter is set up as a sidecar container. A sidecar is an additional container that runs alongside your main container. It is only accessible from within the Cloud Run instance.
Your application connects to PGAdapter using either of the following options:
- A TCP connection on localhost:5432 (or any other port you choose for PGAdapter)
- A Unix domain socket using an in-memory shared volume
This example shows how to use Unix domain sockets, which gives you the lowest possible latency between your application container and PGAdapter.
Cloud Run sidecars can currently only be configured using service.yaml files. This section shows you:
- How to configure PGAdapter as a sidecar
- How to set up a shared in-memory volume that can be used for Unix domain sockets
- How to set up a startup probe for PGAdapter
You create a sidecar container in Cloud Run by adding more than one container to the list of containers. The main container is the container that specifies a container port. Only one container may do this. All other containers are sidecar containers.
The two containers have a shared loopback network interface. That means that the application container can connect to the default PGAdapter port at localhost:5432 using a standard PostgreSQL driver.
Most PostgreSQL drivers support connections using Unix domain sockets. A Unix domain socket is a communication endpoint for processes running on the same host machine, typically using a file as the communication channel. Unix domain sockets offer lower latency than TCP connections.
PGAdapter also supports Unix domain sockets. For this, we need a volume that is accessible for both containers. Cloud Run supports this in the form of shared in-memory volumes. The following example adds a shared volume to our setup:
This configuration adds the following:
- A shared in-memory volume with the name ‘sockets-dir’.
- A mount path ‘/sockets’ for the shared volume to the application container.
- A mount path ‘/sockets’ for the shared volume to the PGAdapter container.
- A command line argument ‘-dir /sockets’ for PGAdapter. This instructs PGAdapter to listen for incoming Unix domain sockets on this path.
Containers can be configured to start up in a fixed order, and you can also add startup probes to ensure that a container has been started before another container starts. We’ll use this to ensure that PGAdapter has started before the main application container is started.
This configuration registers the PGAdapter container as a dependency of the main application container. This ensures that PGAdapter is started before the application container. Cloud Run will also use the startup probe to verify that PGAdapter has successfully started before starting the application container. The startup probe checks that it can successfully make a TCP connection on port 5432, which is the default port where PGAdapter listens for incoming connections.
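Putting the pieces above together, a minimal sketch of such a service.yaml, written and deployed from the shell, might look like the following. The service name, region, and application image are placeholders, and the PGAdapter image path is an assumption; check the PGAdapter documentation for the published container image.

# service.yaml is a sketch; replace placeholder names and images with your own.
cat > service.yaml <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app
  annotations:
    run.googleapis.com/launch-stage: BETA
spec:
  template:
    metadata:
      annotations:
        # Start PGAdapter before the application container.
        run.googleapis.com/container-dependencies: '{"app": ["pgadapter"]}'
    spec:
      containers:
      - name: app
        image: us-docker.pkg.dev/my-project/my-repo/my-app    # placeholder application image
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: sockets-dir
          mountPath: /sockets
      - name: pgadapter
        image: gcr.io/cloud-spanner-pg-adapter/pgadapter      # assumed image path; verify in the PGAdapter docs
        args: ["-dir", "/sockets"]
        startupProbe:
          tcpSocket:
            port: 5432
        volumeMounts:
        - name: sockets-dir
          mountPath: /sockets
      volumes:
      - name: sockets-dir
        emptyDir:
          medium: Memory
EOF
gcloud run services replace service.yaml --region us-central1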
The main application connects to PGAdapter using a standard PostgreSQL driver. This example uses Go and the pgx driver. See this directory for a full list of samples using other programming languages and drivers.
The application connects to PGAdapter using a Unix domain socket on the shared in-memory volume mounted at ‘/sockets’. The pgx driver automatically knows that it should use a Unix domain socket instead of a TCP socket, because the host begins with a slash.
The Cloud Spanner database name that is used in the connection string must be the fully qualified database name in the form projects/my-project/instances/my-instance/databases/my-database.
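For illustration, the same fully qualified name and socket directory could be used from psql against a locally running PGAdapter; the project, instance, and database names are placeholders.

psql -h /sockets -p 5432 -d "projects/my-project/instances/my-instance/databases/my-database"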
This sample application starts a simple HTTP server that listens on port 8080. Whenever it receives a request, it will create a connection to PGAdapter and execute a query that returns ‘Hello World!’. This message is then printed to the response stream of the HTTP server.
The full sample code can be found here.
Similar samples for other programming languages can be found here:
The introduction of Cloud Run sidecars significantly simplifies the deployment of scalable applications with Cloud Spanner.
Cloud Spanner’s PostgreSQL interface provides developers with access to Spanner’s consistency and availability at scale using the SQL they already know. PGAdapter adds the ability to use off-the-shelf PostgreSQL drivers and frameworks with Cloud Spanner, the same ones developers use in their PostgreSQL applications today. Cloud Run complements Spanner with elastic application scaling. The addition of sidecar containers provides additional flexibility to automatically scale related processes with the application. Using Unix domain sockets with in-memory volumes from your sidecar provides the lowest possible latency for communication between your application and PGAdapter.
Learn more about what makes Spanner unique and how it’s being used today. Or try it yourself for free for 90 days, or for as little as $65 USD/month for a production-ready instance that grows with your business without downtime or disruptive re-architecture.
Read More for the details.
The ability to execute Dataflow pipelines directly from within the developer IDE is a powerful capability that allows data engineers and data scientists to build out complete Dataflow pipelines. It smooths the Dataflow pipeline development flow for new and existing users by enabling a rich development experience in the IDE, so developers, data engineers, and data scientists can reduce the learning curve and ramp-up time for building Dataflow streaming pipelines.
We are excited to announce Cloud Code Plugin integrations for Dataflow that make it extremely easy for users to create and execute Dataflow pipelines using the IntelliJ IDE, with the following features.
- Executing Dataflow pipelines from the IntelliJ IDE – With this new feature, you can now create and run Dataflow pipelines directly from your IntelliJ IDE. This means that you can develop and execute your pipelines in the same environment where you write your code, making it easier to get your pipelines up and running quickly.
- Error checking and diagnostics – While executing the Dataflow pipeline, all information related to diagnostics (e.g. errors, warnings, info) is now available to review and act on from inside the IntelliJ IDE. This improves the Dataflow developer experience because, while working in the IntelliJ IDE, users do not have to switch between the IDE and the GCP web console, saving time and effort.
- Step-by-step onboarding on various aspects of the Dataflow pipeline (e.g. source, sink, permissions) – This feature provides a step-by-step wizard to create the infrastructure required to run a Dataflow pipeline, depending on the pipeline's source and sink. This will help users understand the different infrastructure requirements needed to execute the Dataflow pipeline, make it easier to get started with Dataflow, and provide them with the tools they need to develop and execute data pipelines. As part of this release, we are excited to provide step-by-step onboarding for a Dataflow pipeline that uses Pub/Sub as the data source and BigQuery as the data sink.
Cloud Code is a set of IDE plugins for popular IDEs that make it easier to create, deploy, and integrate applications with Google Cloud. It supports your favorite IDE: VSCode, IntelliJ, PyCharm, GoLand, WebStorm, and Cloud Shell Editor. It speeds up your GKE and Cloud Run development with Skaffold integration.
Google Cloud Dataflow is a fully-managed service that meets your data processing needs in both batch (historical) and stream (real-time) modes. Apache Beam is an open-source SDK used to describe directed acyclic graphs (DAGs). Dataflow executes Beam pipelines, and aims to compute transformations & analytics for the lowest cost & lowest latency possible.
Getting Started
Installation of the Cloud Code Plugin
For first-time users of Cloud Code, please follow the instructions given here to install the Cloud Code Plugin in the IntelliJ IDE.
If you’ve been working with Cloud Code, you likely already have it installed in IntelliJ. You just need to make sure you have the latest version of Cloud Code (this feature is available on 23.6.2 and beyond).
You can check the Cloud Code version from IntelliJ by going to “Tools > Google Cloud Code > Help/About > About Cloud Code…”
Getting Started with IntelliJ
In this section, we’ll show you how to set up and use the plug-in in a step by step guide.
1. Open IntelliJ and Create Dataflow Project
If IntelliJ is not open, start the IntelliJ IDE. You will see the following screen. If you are already in the IntelliJ IDE, please click on “File>New>Project” and you will see the following screen.
Enter the project name and location. Click on Create button at the bottom.
Click on the “New Window” button.
Go through the readme file for the details on the feature.
2. Sign in to the Google Cloud Platform and Configure Billable Project
2a. Sign in to the Google Cloud Project using the menu selection
From the Menu select: Tools -> Google Cloud Code -> Sign in to Google Cloud Platform.
2b. Set the Google Cloud SDK configuration to the GCP project where you want to create the Dataflow pipeline
From the Menu select: Tools -> Google Cloud Code -> Tools -> Cloud Code Terminal
Execute the following command, where <Your Google Cloud Project ID> is a Google Cloud project to which you have the IAM role to provision resources.
gcloud config set project <Your Google Cloud Project ID>
3. Provision required resources
Open the readme.md file from the working directory in the IntelliJ IDE and click on the following image to trigger a step by step tutorial on how to create infrastructure resources needed for running a Dataflow pipeline.
The tutorial will open in the browser window showing Google Cloud Console. Select the GCP Project where you would like to create the infrastructure resources.
Click on the start button to go through the following steps:
- Setup environment
- Provision the network
- Provision data source and data sink resources
Copy the command at the bottom and paste/run in the cloud shell to execute.
Please wait for the command to finish before you move to the next step of the tutorial.
Follow the same steps for the rest of the tutorial to provision the network and data source, data sink resources.
4. Acquire gradle.properties file
The previous step, provisioning the required resources, generates this file for you and downloads it to your machine. Move the downloaded gradle.properties file to your IDE working directory on your local machine.
Append the downloaded gradle.properties configuration to the gradle.wrapper.properties file located at <working directory>/gradle/gradle-wrapper.properties
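For example, from the working directory, assuming the file landed in your Downloads folder (adjust the paths as needed):

cat ~/Downloads/gradle.properties >> gradle/gradle-wrapper.properties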
5. Select the included IntelliJ Run Configuration
This sample comes with an IntelliJ Run Configuration. Find it by clicking on “Edit Configurations…” and selecting it as shown in the screenshot below.
6. Click the Run Button
After selecting the included IntelliJ Run Configuration, click the green run button as shown in the screenshot in the previous step.
This runs the sample and submits the Dataflow Job to your Google Cloud project, referencing values in the gradle.properties file. Best practice is to instead use Dataflow templates. However, this sample provides you the gradle.properties file for convenience.
7. Open the Dataflow Job URL printed in the output
After you clicked the run button in the previous step, you should see output related to submitting the Dataflow Job. At the end, it will print a URL that you can open to navigate to your Dataflow Job.
8. Examine the Dataflow Job Graph
After opening the Dataflow Job URL provided by the code output, you should see the Dataflow Job Graph in the Google Cloud console. The pipeline will take some time before it starts running the code. There is no action you need to take here; this step is just informational.
9. Examine data in BigQuery
When the pipeline finally runs, it writes data to a BigQuery table. Navigate to BigQuery in the Google Cloud console and notice that the pipeline created a new table. Note that the Infrastructure-as-Code created the dataset earlier.
The pipeline also automatically creates the table schema.
Finally, as the pipeline consumes messages from the Pub/Sub subscription, originating from the public Pub/Sub taxicab topic, it writes the data into the BigQuery table. A preview of this is shown below.
10. Clean up Google Cloud Billable Resources
The following instructions describe how to delete the Google Cloud billable resources:
- Stop the Dataflow job
- Delete the Pub/Sub subscription
- Delete the BigQuery dataset
What next?
Simplifying Dataflow pipeline development not only rejuvenates your development teams, but also refocuses their attention on innovation and value creation. Get started developing data pipelines with Cloud Dataflow today.
Read More for the details.
October marks the 20th anniversary of Cybersecurity Awareness Month, and the perfect time to hold Google Cloud Security Talks, our quarterly digital event on all things security.
On October 25, we’ll bring together experts from Google Cloud and the broader Google security community to share insights, best practices, and ways to help increase resilience against modern risks and threats.
We’ll kick things off with a keynote session on the overall state of Google Security with Royal Hansen, vice president, Privacy, Safety and Security Engineering. Hansen will provide Google’s perspective and insight on two topics that are top of mind for almost every organization today: using AI to enhance security and securing AI from attacks.
He’ll be followed by sessions on the latest threat intel, and how AI — specifically Google Cloud Security AI Workbench and Duet AI — can help address pervasive and fundamental security challenges: the exponential growth in threats, the toil it takes for security teams to achieve desired outcomes, and the chronic shortage of security talent.
You will also hear from experts on how Google red teams are ensuring Pixel, AI systems, and more stay secure, and learn about all the latest Google Cloud Security product innovations. We want you to walk away with a better understanding of today’s threat actors and potential attack vectors, and fresh ideas for detecting, investigating, and responding to threats faster.
Here’s a quick peek at the insightful sessions you can look forward to on our agenda:
Cloud security threat briefing with Mandiant: Learn how the threat landscape is shifting and how to be more proactive in defense by understanding how the tactics, techniques and procedures are evolving to include cloud infrastructure.
Reduce exposures before threat actors exploit them: Explore how security teams can be better prepared to defend against attacks with a continuous program of identifying and remediating threats and exposures.
A blueprint for modern security operations: Learn how organizations can combine reactive and proactive techniques, gain context into threat actor activity, and leverage AI as a force multiplier to keep themselves safer.
Supercharging defenders with Security AI Workbench and Duet AI: Explore ways you can use generative AI to protect your organizations from cyberattacks with an overview of Google Cloud Security AI Workbench and Duet AI, and how they can help defenders like you address threat overload, toilsome tools, and the cyber workforce skills gap.
Work safer with new Zero Trust and digital sovereignty controls in Google Workspace: Learn how to enable safer work in your organization with Google Workspace. We will show you how our cloud-native, Zero Trust architecture and Google AI help prevent account takeovers, enforce contextual access, and prevent data loss.
Why red teams play a central role in helping organizations secure AI systems: Discuss what red teaming is in the context of AI systems and why it’s important; what types of attacks AI red teams simulate; and lessons we have learned that we can share with you.
Over the air, under the radar: Attacking and securing the Pixel modem: Hear examples of over-the-air (OTA) remote code execution (RCE) attacks, and learn how the Android Red Team works closely with both internal and external partners to secure the modem implementation for millions of devices by implementing additional hardening and security mitigations in the modem code and by evangelizing the development of a fuzzing program within the manufacturer’s organization.
Still need one last reason to join us? Security Talks is 100% digital and free to attend — so make sure you sign up now to grab a virtual front-row seat to learn about our latest insights and solutions. We look forward to seeing you on October 25.
Read More for the details.
AWS Lambda now allows Lambda functions to access resources in dual-stack VPC (outbound connections) over IPv6, at no additional cost. With this launch, and Lambda’s support for public IPv6 endpoints (inbound connections) in 2021, Lambda enables you to scale your application without being constrained by the limited number of IPv4 addresses in your VPC, and to reduce costs by minimizing the need for translation mechanisms.
Read More for the details.
Amazon Textract is a machine learning service that automatically extracts printed text, handwriting, and data from any document or image. Today, we are pleased to announce Custom Queries, a new Amazon Textract feature that enables customers to adapt the Queries feature and improve extraction accuracy for their business-specific documents. Queries is a feature within the Analyze Document API that lets you extract specific pieces of information from documents using natural language questions. Custom Queries allows customers to quickly adapt the Queries feature to meet their business-specific needs without requiring them to have machine learning expertise.
Read More for the details.
AWS Network Load Balancer (NLB) now supports Availability Zone DNS affinity, disabling connection termination for unhealthy targets, and UDP connection termination by default.
Read More for the details.
Today, AWS announced the opening of a new AWS Direct Connect location within the Digital Realty ICN10 data center in Seoul, South Korea. By connecting your network to AWS at the new location, you gain private, direct access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones.
Read More for the details.
AWS Step Functions announces an Optimized Integration for Amazon EMR Serverless, adding support for the Run a Job (.sync) integration pattern with 6 EMR Serverless API Actions (CreateApplication, StartApplication, StopApplication, DeleteApplication, StartJobRun, and CancelJobRun).
Read More for the details.
Amazon SageMaker Canvas is a service for business analysts to generate machine learning (ML) and artificial intelligence (AI) predictions without having to write a single line of code. As announced on October 5, customers can access and evaluate foundation models (FMs) to generate and summarize content.
Read More for the details.