AWS – Announcing availability of AWS Outposts rack in Panama
AWS Outposts rack can now be shipped and installed at your data center and on-premises locations in Panama.
Read More for the details.
The new version of the Google logging client library for Go has been released. Version 1.5 adds new features and bug fixes including new structured logging capabilities that complete last year’s effort to enrich structured logging support in Google logging client libraries.
Here are a few of the new features in v1.5:
A faster and more robust way to detect and capture Google Cloud resources that the application is running on.
Automatic source location detection to support log observability for debugging and troubleshooting.
Support for the W3C traceparent header for capturing tracing information within the logged entries.
Better control over batched ingestion of log entries by supporting the partialSuccess flag within Logger instances.
Support for out-of-process ingestion with redirection of logs to stdout and stderr using a structured logging format.
Let’s take a closer look at each of them.
Resource detection is an existing feature of the logging library: it detects the resource on which an application is running, retrieves the resource’s metadata, and implicitly adds this metadata to each log entry the application ingests using the library. It is especially useful for applications that run on Google Cloud, since it collects many of the resource’s attributes from the resource’s metadata server. These attributes enrich ingested logs with additional information, such as the location of a VM, the name of a container, or the service ID of an App Engine service. The JSON below shows a sample of the retrieved information after detecting the resource as a GKE container and retrieving resource metadata according to the documentation.
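For illustration, detecting a GKE container produces a monitored resource shaped like this (all values here are placeholders; a real detection reflects your cluster):

    {
      "type": "k8s_container",
      "labels": {
        "cluster_name": "my-cluster",
        "container_name": "my-container",
        "location": "us-central1-a",
        "namespace_name": "default",
        "pod_name": "my-pod",
        "project_id": "my-project-id"
      }
    }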
The implementation is optimized to avoid performance degradation during the data collection process. Previously, the heuristic for identifying the resource relied heavily on environment variables, which could produce many false positives, and the implementation performed too many queries to the metadata server, which could sometimes delay responses. In the 1.5 release, the heuristic was updated to use additional artifacts besides the environment variables, and the number of queries to the metadata server was reduced to a bare minimum. As a result, false detection of GCP resources decreased by an order of magnitude, and the performance penalty of running the heuristic on non-GCP resources decreased as well. The change does not affect the ingestion process and does not require any changes in the application’s code.
It is often useful to capture the location in the code where a log entry was ingested. While the main use is in troubleshooting and debugging, it can help in other circumstances as well. In this version of the library, you can configure your logger instance to capture source location metadata for each log entry ingested using the Logger.Log() or Logger.LogSync() functions. Just pass the output of SourceLocationPopulation() as a LoggerOption argument in the call to Client.Logger() when creating a new logger instance. The following snippet creates a logger instance that adds source location metadata into each ingested log with severity set to Debug:
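(A minimal sketch; the project and log IDs are placeholders.)

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/logging"
    )

    func main() {
        ctx := context.Background()
        // "my-project-id" and "my-log" are placeholders.
        client, err := logging.NewClient(ctx, "my-project-id")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Capture source location only for Debug-severity entries.
        logger := client.Logger("my-log",
            logging.SourceLocationPopulation(logging.PopulateSourceLocationForDebugEntries))

        logger.Log(logging.Entry{Severity: logging.Debug, Payload: "hello with source location"})
    }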
The function SourceLocationPopulation() accepts the following constants:
logging.DoNotPopulateSourceLocation ‒ the default configuration; prevents capturing the source location in the ingested logs.
logging.PopulateSourceLocationForDebugEntries ‒ adds the source location metadata into logs with Debug severity.
logging.AlwaysPopulateSourceLocation ‒ populates the source location in all ingested logs.
This feature has to be enabled explicitly because capturing the source location in Go may increase the total execution time of log ingestion by a factor of two. Enabling it for all ingested logs is strongly discouraged.
Previous versions of the library already let you add tracing information to your logs. You could do it directly, by providing the trace and span identifiers and, optionally, the sampling flag. The following code demonstrates setting the trace and span identifiers manually:
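(A sketch reusing the logger from the earlier snippet; the trace and span identifiers are hypothetical.)

    logger.Log(logging.Entry{
        Payload: "a traced message",
        // Trace is the fully qualified trace resource name.
        Trace:        "projects/my-project-id/traces/0123456789abcdef0123456789abcdef",
        SpanID:       "000000000000004a",
        TraceSampled: true,
    })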
Or indirectly, by passing an instance of http.Request as part of the HTTP request metadata:
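(A sketch of this variant inside an HTTP handler; it reuses the logger from above and requires importing net/http.)

    func handle(w http.ResponseWriter, r *http.Request) {
        logger.Log(logging.Entry{
            Payload: "handled request",
            // The library pulls tracing information from the request headers.
            HTTPRequest: &logging.HTTPRequest{Request: r},
        })
    }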
In the latter case, the library will try to pull tracing information from the X-Cloud-Trace-Context header. Starting with this release, the library also supports the W3C tracing context header. If both headers are present, the tracing information is captured from the W3C traceparent header.
By default, the library supports synchronous and asynchronous log ingestion by calling the Cloud Logging API directly. In some cases, log ingestion is better handled by an external logging agent or by a managed service’s built-in log collection. In this release, you can configure a logger instance to write logs to stdout or stderr instead of ingesting them into Cloud Logging directly. The following example creates a logger that redirects logs to stdout as specially formatted JSON strings:
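(A sketch reusing the client from the first snippet; the log ID is a placeholder, and os must be imported.)

    logger := client.Logger("stdout-log", logging.RedirectAsJSON(os.Stdout))
    logger.Log(logging.Entry{Severity: logging.Info, Payload: "hello world"})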
The above code will print something like the following line to the standard output:
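(Illustrative only; the field names follow the structured logging format, but the exact output may differ.)

    {"message":"hello world","severity":"INFO","timestamp":{"seconds":1660920409,"nanos":0}}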
In some circumstances, when the standard output cannot be used for printing logs, the logger can be configured to redirect output to the standard error (os.Stderr) with the same effect.
There are a couple of things to be aware of when you use the out-of-process logging:
Methods Logger.Log() and Logger.LogSync() behave the same way when the logger is configured with the out-of-process logging option: they write the JSON-formatted logs to the provided io.Writer, and an external logging agent handles the collection and ingestion of the logs.
You do not have control over the Log ID. All logs that are ingested by the logging agent or the built-in support of the managed service (e.g. Cloud Run) will use the Log ID that is determined out-of-process.
When you ingest logs using the Logger.Log() function, the asynchronous ingestion batches multiple log entries together and ingests them using the entries.write Logging API. If the ingestion of any of the aggregated logs fails, no logs get ingested. Starting with this release, you can control this logic by opting in to the partialSuccess flag. When the flag is set, the Logging API tries to ingest all logs, even if some log entry fails due to a permanent error such as INVALID_ARGUMENT or PERMISSION_DENIED. You can opt in when creating a new logger by using the PartialSuccess logger option:
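(A sketch, again with a placeholder log ID; entries ingested through this logger are sent with the partialSuccess flag set on the entries.write call.)

    logger := client.Logger("batched-log", logging.PartialSuccess())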
When you upgrade to version 1.5, you get a more robust and deterministic resource detection algorithm while the behavior of the library remains unchanged. Additional functionality, such as out-of-process ingestion, source location capture, and batch ingestion control, can be opted in to using the logger options.
Learn more about the release at pkg.go.dev. Please also visit the library’s project on GitHub.
Read More for the details.
Amazon WorkMail now supports invoking AWS Lambda for user availability through Custom Availability Provider Lambdas (CAP Lambdas). CAP Lambdas are a new way for WorkMail to get availability information from external availability sources. A customer can use these CAP Lambdas to give WorkMail access to availability information for users on other calendaring providers they own, even if their endpoints are private, or if they do not have an Exchange Web Services (EWS) endpoint.
Read More for the details.
Today, AWS announced the ability to toggle routes on and off when using AWS Migration Hub Refactor Spaces. This feature lets customers create inactive routes which can be activated after creation once the route’s targeted service is ready to receive traffic. Customers can use route toggling to fine-tune their routing approach and deliver just-in-time route changes as applications are incrementally refactored.
Read More for the details.
AWS Database Migration Service (AWS DMS) now supports IBM Db2 z/OS as a source for the full load operational mode. Using the AWS Schema Conversion Tool (AWS SCT), you can convert schemas and code objects from IBM Db2 z/OS to Aurora MySQL, Aurora PostgreSQL, MySQL, and PostgreSQL targets. Once you have the schema and objects in a format compatible with the target database, you can use AWS DMS to migrate data from IBM Db2 running on the z/OS operating system to any AWS DMS supported target.
Read More for the details.
AWS Database Migration Service (AWS DMS) has expanded functionality by adding support for Babelfish for Aurora PostgreSQL as a target. Babelfish for Aurora PostgreSQL is a new translation layer for Amazon Aurora PostgreSQL-Compatible Edition that enables Aurora to understand commands from applications written for Microsoft SQL Server. Using AWS DMS, you can now perform full load migrations to Babelfish for Aurora PostgreSQL with minimal downtime.
Read More for the details.
Microsoft Purview in-place data sharing for Azure Data Lake Storage (ADLS Gen2) and Azure Blob Storage is now in public preview.
Read More for the details.
When you deploy an application on Kubernetes, it runs as a service account — a system user understood by the Kubernetes control plane. The service account is the basic tool for configuring what an application is allowed to do, analogous to the concept of an operating system user on a single machine. Within a Kubernetes cluster, you can use role-based access control to configure what a service account is allowed to do (“list pods in all namespaces”, “read secrets in namespace foo”). When running on Google Kubernetes Engine (GKE), you can also use GKE Workload Identity and Cloud IAM to grant service accounts access to GCP resources (“read all objects in Cloud Storage bucket bar”).
How does this work? How does the Kubernetes API, or Cloud Storage know that an HTTP request is coming from your application, and not Bob’s? It’s all about tokens: Kubernetes service account tokens, to be specific. When your application uses a Kubernetes client library to make a call to the Kubernetes API, it attaches a token in the Authorization header, which the server then validates to check your application’s identity.
How does your application get this token, and how does the authentication process work? Let’s dive in and take a closer look at this process, at some changes that arrived in Kubernetes 1.21 that will enhance Kubernetes authentication, and how to modify your applications to take advantage of the security capabilities.
Let’s spin up a pod and poke around. If you’re following along, make sure that you are doing this on a 1.20 (or lower) cluster. Listing /var/run/secrets/kubernetes.io/serviceaccount inside the container reveals three files: ca.crt, namespace, and token.
What are these files? Where did they come from? They certainly don’t seem like something that ships in the Debian base image:
ca.crt is the trust anchor needed to validate the certificate presented by the Kubernetes API Server in this cluster. Typically, it will contain a single, PEM-encoded certificate.
namespace contains the namespace that the pod is running in — in our case, default.
token contains the service account token — a bearer token that you can attach to API requests. Eagle-eyed readers may notice that it has the tell-tale structure of a JSON Web Token (JWT): <base64>.<base64>.<base64>.
An aside for security hygiene: Do not post these tokens anywhere. They are bearer tokens, which means that anyone who holds the token has the power to authenticate as your application’s service account.
To figure out where these files come from, we can inspect our pod object as it exists on the API server:
The API server has added… a lot of stuff. But the relevant portion for us is:
When the pod was scheduled, an admission controller injected a secret volume into each container in our pod.
The secret contains keys and data for each file we saw inside the pod.
Let’s take a closer look at the token. Here’s a real example, from a cluster that no longer exists.
As mentioned earlier, this is a JWT. If we pop it into our favorite JWT inspector, we can see that the token has the following claims:
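(Reconstructed for illustration from the breakdown that follows; the original token is elided and these values are placeholders.)

    {
      "iss": "kubernetes/serviceaccount",
      "kubernetes.io/serviceaccount/namespace": "default",
      "kubernetes.io/serviceaccount/secret.name": "default-token-abc12",
      "kubernetes.io/serviceaccount/service-account.name": "default",
      "kubernetes.io/serviceaccount/service-account.uid": "<service-account-uid>",
      "sub": "system:serviceaccount:default:default"
    }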
Breaking them down:
iss (“issuer”) is a standard JWT claim, meant to identify the party that issued the JWT. In Kubernetes legacy tokens, it’s always hardcoded to the string “kubernetes/serviceaccount”, which is technically compliant with the definition in the RFC, but not particularly useful.
sub (“subject”) is a standard JWT claim that identifies the subject of the token (your service account, in this case). It’s the standard string representation of your service account name (the one also used when referring to the service account in RBAC rules): system:serviceaccount:<namespace>:<name>. Note that this is technically not compliant with the definition in the RFC, since it is neither globally unique, nor unique in the scope of the issuer; two service accounts with the same namespace and name but from two unrelated clusters will have the same issuer and subject claims. This isn’t a big problem in practice, though.
kubernetes.io/serviceaccount/namespace is a Kubernetes-specific claim; it contains the namespace of the service account.
kubernetes.io/serviceaccount/secret.name is a Kubernetes-specific claim; it names the Kubernetes secret that holds the token.
kubernetes.io/serviceaccount/service-account.name is a Kubernetes-specific claim; it names the service account.
kubernetes.io/serviceaccount/service-account.uid is a Kubernetes-specific claim; it contains the UID of the service account. This claim allows someone verifying the token to notice that a service account was deleted and then recreated with the same name. This can sometimes be important.
When your application talks to the API server in its cluster, the Kubernetes client library loads this JWT from the container filesystem and sends it in the Authorization header of all API requests. The API Server then validates the JWT signature and uses the token’s claims to determine your application’s identity.
This also works for authenticating to other services. For example, a common pattern is to configure Hashicorp Vault to be able to authenticate callers using service account tokens from your cluster. To make the task of the relying party (the service seeking to authenticate you) easier, Kubernetes provides the TokenReview API; the relying party just needs to call TokenReview, passing the token you provided. The return value indicates whether or not the token was valid; if so, it also contains the username of your serviceaccount (again, in the form system:serviceaccount:<namespace>:<name>).
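A sketch of such a relying-party check using client-go (the clientset setup is omitted, and the caller is assumed to be authorized to create TokenReviews):

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authentication/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func reviewToken(ctx context.Context, cs kubernetes.Interface, token string) error {
        tr := &authv1.TokenReview{Spec: authv1.TokenReviewSpec{Token: token}}
        // The API server validates the token and fills in Status.
        result, err := cs.AuthenticationV1().TokenReviews().Create(ctx, tr, metav1.CreateOptions{})
        if err != nil {
            return err
        }
        if !result.Status.Authenticated {
            return fmt.Errorf("token rejected: %s", result.Status.Error)
        }
        fmt.Println("authenticated as", result.Status.User.Username)
        return nil
    }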
Great. So what’s the catch? Why did I ominously title this section “legacy” tokens? Legacy tokens have downsides:
Legacy tokens don’t expire. If one gets stolen, or logged to a file, or committed to GitHub, or frozen in an unencrypted backup, it remains dangerous until the end of time (or the end of your cluster).
Legacy tokens have no concept of an audience. If your application passes a token to service A, then service A can just forward the token to service B and pretend to be your application. Even if you trust service A to be trustworthy and competent today, because of point 1, the tokens you pass to service A are dangerous forever. If you ever stop trusting service A, you have no practical recourse but to rotate the root of trust for your cluster.
Legacy tokens are distributed via Kubernetes secret objects, which tend not to be very strictly access-controlled, and which usually aren’t encrypted at rest or in backups.
Legacy tokens require extra effort for third-party services to integrate with; they generally need to explicitly build support for Kubernetes because of the custom token claims and the need to validate the token with the TokenReview API.
These issues motivated the design of Kubernetes’ new token format: bound service account tokens.
Launched in Kubernetes 1.13, and becoming the default format in 1.21, bound tokens address all of these limitations of legacy tokens, and more:
The tokens themselves are much harder to steal and misuse; they are time-bound, audience-bound, and object-bound.
They adopt a standardized format: OpenID Connect (OIDC), with full OIDC Discovery, making it easier for service providers to accept them.
They are distributed to pods more securely, using a new Kubelet projected volume type.
Let’s explore each of these properties in turn.
We’ll repeat our earlier exercise and dissect a bound token. It’s still a JWT, but the structure of the claims has changed:
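(Reconstructed for illustration from the claims described below; all values are placeholders, and on GKE the issuer and audience follow the URL pattern shown later in this post.)

    {
      "aud": ["https://container.googleapis.com/v1/projects/PROJECT/locations/LOCATION/clusters/NAME"],
      "exp": 1626710000,
      "iat": 1626706400,
      "nbf": 1626706400,
      "iss": "https://container.googleapis.com/v1/projects/PROJECT/locations/LOCATION/clusters/NAME",
      "kubernetes.io": {
        "namespace": "default",
        "pod": {
          "name": "my-pod",
          "uid": "<pod-uid>"
        },
        "serviceaccount": {
          "name": "default",
          "uid": "<service-account-uid>"
        }
      },
      "sub": "system:serviceaccount:default:default"
    }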
Time-binding is implemented by the exp (“expiration”), iat (“issued at”), and nbf (“not before”) claims; these are standardized JWT claims. Any external service can use its own clock to evaluate these fields and reject tokens that have expired. Unless otherwise specified, bound tokens default to a one-hour lifetime. The Kubernetes TokenReview API automatically checks if a token is expired before deciding that it is valid.
Audience binding is implemented by the aud (“audience”) claim; again, a standardized JWT claim. An audience strongly associates the token with a particular relying party. For example, if you send service A a token that is audience-bound to the string “service A”, A can no longer forward the token to service B to impersonate you. If it tries, service B will reject the token because it expects an audience of “service B”. The Kubernetes TokenReview API allows services to specify the audiences they accept when validating a token.
Object binding is implemented by the kubernetes.io group of claims. The legacy token only contained information about the service account, but the bound token contains information about the pod the token was issued to. In this case, we say that the token is bound to the pod (tokens can also be bound to secrets). The token will only be considered valid if the pod is still present and running according to the Kubernetes API server — sort of like a supercharged version of the expiration claim. This type of binding is more difficult for external services to check, since they don’t have (and you don’t want them to have) the level of access to your cluster necessary to check the condition. Fortunately, the Kubernetes TokenReview API also verifies these claims.
Bound service account tokens are valid OpenID Connect (OIDC) identity tokens. This has a number of implications, but the most consequential can be seen in the value of the iss (“issuer”) claim. Not all implementations of Kubernetes surface this claim, but for those that do (including GKE), it points to a valid OIDC Discovery endpoint for the tokens issued by the cluster. The upshot of this is that the external services do not need to be Kubernetes-aware in order to authenticate clients using Kubernetes service accounts; they only need to support OIDC and OIDC Discovery. As an example of this type of integration, the OIDC Discovery endpoints underlie GKE Workload Identity, which integrates the Kubernetes and GCP identity systems.
As a final improvement, bound service account tokens are deployed to pods in a more scalable and secure way. Whereas legacy tokens are generated once per service account, stored in a secret, and mounted into pods via a secret volume, bound tokens are generated on-the-fly for each pod, and injected into pods using the new Kubelet serviceAccountToken volume type. To access them, you add the volume spec to your pod and mount it into the containers that need the token.
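A sketch of such a volume built with the Go client types (a pod manifest would express the same thing in YAML); the volume name, audience, and path here are hypothetical:

    import corev1 "k8s.io/api/core/v1"

    func boundTokenVolume() corev1.Volume {
        expiry := int64(3600) // one hour, the default lifetime
        return corev1.Volume{
            Name: "vault-token",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            Audience:          "vault", // the relying party this token is for
                            ExpirationSeconds: &expiry,
                            Path:              "token",
                        },
                    }},
                },
            },
        }
    }

A corresponding volumeMount then exposes the token to the container, for example at /var/run/secrets/tokens/token.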
Note that we have to choose an audience for the token up front, and that we also have control over the token’s validity period. The audience requirement means that it’s fairly common to mount multiple bound tokens into a single pod, one for each external party that the pod will be communicating with.
Internally, the serviceAccountToken projected volume is implemented directly in Kubelet (the primary Kubernetes host agent). Kubelet handles communicating with kube-apiserver to request the appropriate bound token before the pod is started, and periodically refreshes the token when its expiry is approaching.
To recap, bound tokens are:
Significantly more secure than legacy tokens due to time, audience, and object binding, as well as using a more secure distribution mechanism to pods.
Easier for external parties to integrate with, due to OIDC compatibility.
However, the way you integrate with them has changed. Whereas there was a single legacy token per service account, always accessible at /var/run/secrets/kubernetes.io/serviceaccount/token, each pod may have multiple bound tokens. Because the tokens expire and are refreshed by Kubelet, applications need to periodically reload them from the filesystem.
Bound tokens have been available since Kubernetes 1.13, but the default token issued to pods continued to be a legacy token, with all the security downsides that implied. In Kubernetes 1.21, this changes: the default token is a bound service account token. Kubernetes 1.22 finishes off the migration by promoting bound service account tokens by default to GA.
In the next sections, we will take a look at what these changes mean for users of Kubernetes service account tokens, first for clients, and then for service providers.
In Kubernetes 1.21, the default token available at /var/run/secrets/kubernetes.io/serviceaccount/token is changing from a legacy token to a bound service account token. If you use this token as a client, by sending it as a bearer token to an API, you may need to make changes to your application to keep it working.
For clients, there are two primary differences in the new default token:
The new default token has a cluster-specific audience that identifies the cluster’s API server. In GKE, this audience is the URL https://container.googleapis.com/v1/projects/PROJECT/locations/LOCATION/clusters/NAME.
The new default token expires periodically, and must be refreshed from disk.
If you only ever use the default token to communicate with the Kubernetes API server of the cluster your application is deployed in, using up-to-date versions of the official Kubernetes client libraries (for example, using client-go and rest.InClusterConfig), then you do not need to make any changes to your application. The default token will carry an appropriate audience for communicating with the API server, and the client libraries handle automatically refreshing the token from disk.
If your application currently uses the default token to authenticate to an external service (common with Hashicorp Vault deployments, for example), you may need to make some changes, depending on the precise nature of the integration between the external service and your cluster.
First, if the service requires a unique audience on its access tokens, you will need to mount a dedicated bound token with the correct audience into your pod, and configure your application to use that token when authenticating to the service. Note that the default behavior of the Kubernetes TokenReview API is to accept the default Kubernetes API server audience, so if the external service hasn’t chosen a unique audience, it might still accept the default token. This is not ideal from a security perspective — the purpose of the audience claim is to protect yourself by ensuring that tokens stolen from (or used nefariously by) the external service cannot be used to impersonate your application to other external services.
If you do need to mount a token with a dedicated audience, you will need to create a serviceAccountToken projected volume, and mount it to a new path in each container that needs it. Don’t try to replace the default token. Then, update your client code to read the token from the new path.
Second, you must ensure that your application periodically reloads the token from disk. It’s sufficient to just poll for changes every five minutes, and update your authentication configuration if the token has changed. Services that provide client libraries might already handle this task in their client libraries.
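A minimal reload loop along these lines (the path and interval are illustrative):

    import (
        "os"
        "time"
    )

    // watchToken re-reads the projected token file and calls update
    // whenever the content changes.
    func watchToken(path string, update func(token string)) {
        var last string
        for {
            if b, err := os.ReadFile(path); err == nil {
                if tok := string(b); tok != last {
                    last = tok
                    update(tok)
                }
            }
            time.Sleep(5 * time.Minute)
        }
    }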
Your application uses an official Kubernetes client library to read and write Kubernetes objects in the local cluster: Ensure that your client libraries are up-to-date. No further changes are required; the default token already carries the correct audience, and the client libraries automatically handle reloading the token from disk.
Your application uses Google Cloud client libraries and GKE Workload Identity to call Google Cloud APIs: No changes are required. While Kubernetes service account tokens are required in the background, all of the necessary token exchanges are handled by gke-metadata-server.
Your application uses the default Kubernetes service account token to authenticate to Vault: Some changes are required. Vault integrates with your cluster by calling the Kubernetes TokenReview API, but performs an additional check on the issuer claim. By default, Vault expects the legacy token issuer of kubernetes/serviceaccount, and will reject the new default bound token. You will need to update your Vault configuration to specify the new issuer. On GKE, the issuer follows the pattern https://container.googleapis.com/v1/projects/PROJECT/locations/LOCATION/clusters/NAME.
Currently, Vault does not expect a unique audience on the token, so take care to protect the default token. If it is compromised, it can be used to retrieve your secrets from Vault.
Your application uses the default Kubernetes service account token to authenticate to an external service: In general, no immediate changes are required, beyond ensuring that your application periodically reloads the default token from disk. The default behavior of the Kubernetes TokenReview API ensures that authentication keeps working across the transition. Over time, the external service may update to require a unique audience on tokens, which will require you to mount a dedicated bound token as described above.
Services that authenticate clients using the default service account token will continue to work as clients upgrade their clusters to Kubernetes 1.21, due to the default behavior of the Kubernetes TokenReview API. Your service will begin receiving bound tokens with the default audience, and your TokenReview requests will default to validating the default audience. However, bound tokens open up two new integration options for you.
First, you should coordinate with your clients to start requiring a unique audience on the tokens you accept. This benefits both you and your clients by limiting the power of stolen tokens:
Your clients no longer need to trust you with a token that can be used to authenticate to arbitrary third parties (for example, their bank or payment gateways).
You no longer need to worry about holding these powerful tokens, and potentially being held responsible for breaches. Instead, the tokens you accept can only be used to authenticate to your service.
To do this, you should first decide on a globally-unique audience value for your service. If your service is accessible at a particular DNS name, that’s a good choice. Failing that, you can always generate a random UUID and use that. All that matters is that you and your clients agree on the value.
Once you have decided on the audience, you need to update your TokenReview calls to begin validating the audience. In order to give your clients time to migrate, you should conduct a phased migration:
Update your TokenReview calls to specify both your new audience and the default audience in the spec.audiences list (see the sketch after this list). Remember that the default audience is different for every cluster, so you will either need to obtain it from your client, or guess it based on the kube-apiserver endpoint they provide you. As a reminder, for GKE clusters, the default audience is https://container.googleapis.com/v1/projects/PROJECT/locations/LOCATION/clusters/NAME. At this point, your service will accept both the old and the new audience.
Have your clients begin sending tokens with the new audience, by mounting a dedicated bound token into their pods and configuring their client code to use it.
Update your TokenReview calls to specify only your new audience in the spec.audiences list.
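Extending the earlier TokenReview sketch, the first phase might look like this (the audience values are examples):

    tr := &authv1.TokenReview{
        Spec: authv1.TokenReviewSpec{
            Token: token,
            // Phase 1: accept both our new audience and the cluster default.
            Audiences: []string{
                "https://my-service.example.com",
                defaultClusterAudience, // per-cluster, e.g. the GKE URL above
            },
        },
    }
    // Phase 3: drop defaultClusterAudience and keep only the new audience.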
Second, if instances of your service integrate with thousands of individual clusters, need to support high authentication rates, or aim to federate with many non-Kubernetes identity sources, you can consider integrating with Kubernetes using the OpenID Connect Discovery standard, rather than the Kubernetes TokenReview API.
This approach has both benefits and downsides. The benefits are:
You do not need to manage Kubernetes credentials for your service to authenticate to each federated cluster (in general, OpenID Discovery documents are served publicly).
Your service will cache the JWT validation keys for federated clusters, allowing you to authenticate clients even if kube-apiserver is down or overloaded in their clusters.
This cache also allows your service to handle higher call rates from clients, with lower latency, by taking the federated kube-apiservers off of the critical path for authentication.
Supporting OpenID Connect gives you the ability to federate with additional identity providers beyond Kubernetes clusters.
The downsides are:
You will need to operate a cache for the JWT validation keys for all federated clusters, including proper expiry of cached keys (clusters can change their keys without advance warning).
You lose some of the security benefits of the TokenReview API; in particular, you will likely not be able to validate the object binding claims.
In general, if the TokenReview API can be made to work for your use case, you should prefer it; it’s much simpler operationally, and sidesteps the deceptively difficult problem of properly acting as an OpenID Connect relying party.
Read More for the details.
Last October we rolled out a new web experience to help you better understand the products and solutions best suited for your projects, connect you directly to the developer documentation for each product to get started quickly, and help you visualize usage and associated costs to have a better idea of what to expect before getting started. Now we’re rolling out a new solution finder to make it even easier for you to understand what products you need to build what you want.
Once on the Google Maps Platform website, you can access the Solution Finder in three ways: hover over the Resources tab in the top navigation bar, click “find your solution” in the solutions navigation, or click the Solution Finder card in the main hero section of the site. Then, all you need to do is select the use case and industry that apply to your project, and you’ll automatically see the products and solutions right for you.
Happy mapping!
For more information on Google Maps Platform, visit our website.
Read More for the details.
AWS IoT Greengrass is an Internet of Things (IoT) edge runtime and cloud service that helps customers build, deploy, and manage device software. We are excited to announce our version 2.6 release, which adds edge support for MQTT version 5, an updated device-to-device communication specification that includes many additional feature improvements over the MQTT version 3.1.1 protocol.
Read More for the details.
AWS CloudFormation announces the general availability (GA) of AWS CloudFormation Guard 2.1 (cfn-guard), which enhances Guard 2.0 with new features. CloudFormation Guard is an open-source domain-specific language (DSL) and command line interface (CLI) that helps enterprises keep their AWS infrastructure and application resources in compliance with their company policy guidelines. CloudFormation Guard provides compliance administrators with a simple, policy-as-code language to define rules that can check for both required and prohibited resource configurations. It enables developers to validate their templates (CloudFormation Templates, K8s configurations, and Terraform JSON configurations) against those rules.
Read More for the details.
QuickSight authors can now try, learn, and experience Q before signing up. Authors can choose from six different sample topics to explore relevant dashboard visualizations and ask questions about the data, making it easy to understand and fully explore Q’s capabilities before signing up.
Read More for the details.
Amazon EventBridge cross-Region routing allows customers to consolidate events from numerous Regions into one central Region. This makes it easier for customers to centralize their events in the destination Region and write code that reacts to them, or to replicate events from source to destination Regions to help synchronize data across Regions. Today, we are excited to announce availability of cross-Region routing in AWS GovCloud (US) Regions.
Read More for the details.
Journeys in Amazon Pinpoint now allows customers to define a schedule for channel communications based on day of the week and day of the year. In addition, Amazon Pinpoint has added two new journey sending limits to help customers control the volume of communications sent to a user. Amazon Pinpoint journeys are multi-step campaigns that send users on communication paths based on their actions or attributes. Journeys can use multiple channels, including SMS, email, push, and voice. Journeys are intended for customers who have user engagement use cases and want to send targeted communications that drive high-value user actions.
Read More for the details.
AWS Well-Architected Tool now integrates with AWS Organizations enabling cloud architects to share their workloads and custom lenses more broadly across their organization. AWS Organizations is an account management service that allows customers to consolidate multiple AWS accounts into a single, centrally managed organization. This update will increase efficiency and make it easier to share lenses and workloads with multiple accounts.
Read More for the details.
Amazon Connect now allows you to further personalize the automated self-service customer experience using Amazon Lex intent confidence scores as a branch within your flows. Amazon Lex allows customers to create intelligent chatbots that turn their Amazon Connect flows into natural conversations. By branching flows on Lex confidence scores, you can present the right solutions to your customers to help solve their issues faster. For example, when a confidence score is high, you may want to present customers with a self-service option immediately rather than requesting additional information or transferring them to an agent. This new functionality can be set up using the “Check contact attributes” flow block.
Read More for the details.
Amazon Connect now allows you to further personalize the automated, self-service customer experience by leveraging Amazon Lex customer sentiment analysis as a branch within your flows. Amazon Lex allows customers to create intelligent chatbots that turn their Amazon Connect flows into natural conversations. With this launch, you can now build flows based on whether the customer expresses positive or negative utterances to your Lex bot. For example, you may want customers who express positive sentiment to be presented with additional upsell opportunities, or customers who express negative sentiment to be put directly in queue to speak with an agent. The new functionality can be set up using the “Get Customer Input” or the “Check contact attributes” flow blocks. In addition, all Lex related attributes (e.g., Intent, Slots, Sentiment) within flow blocks are now consolidated under one “type” within attribute selection to help simplify building your Lex experience within flows.
Read More for the details.
With the release of new granular permission APIs, Amazon FinSpace customers can now fully manage user access within their FinSpace environment using the AWS SDK and CLI. This allows customers to integrate configuration of FinSpace access controls into their identity orchestration workflows to keep FinSpace in sync with their organization’s access policies.
Read More for the details.
Azure Active Directory (Azure AD) authentication for Azure Monitor Application Insights helps ensure that only authenticated telemetry is ingested into your Application Insights resources.
Read More for the details.
Amazon Connect Customer Profiles now allows you to automatically merge duplicate customer records based on confidence scores. Each time the identity resolution feature finds duplicate records, it provides a confidence score on a scale of zero to one to represent the accuracy of a match, where a score of one represents the most accurate match and zero the least accurate. You can select a threshold anywhere between zero and one to automatically merge duplicate records into a unified customer profile.
Read More for the details.