AWS – Amazon Kinesis Data Firehose is now available in Israel (Tel Aviv) AWS Region
Starting today, customers can use Amazon Kinesis Data Firehose in the Israel (Tel Aviv) AWS Region.
Read More for the details.
You can now access Claude 2, the latest version of Anthropic’s large language model (LLM), on Amazon Bedrock. Claude 2 can take up to 100,000 tokens in each prompt, meaning it can work over hundreds of pages of text, or even an entire book. Claude 2 can also write longer documents—on the order of a few thousand tokens—compared to its prior version, giving you even greater ways to develop generative AI applications using Amazon Bedrock.
Read More for the details.
Editor’s note: The post is part of a series showcasing our partner solutions that are Built with BigQuery.
In a world where data-driven decision-making is the key to success, have you ever considered the impact that weather can have on your organization? Weather-related economic and insured losses have been measured at more than $600 billion annually. While there aren’t any solutions on the market that let us control the weather, being able to predict, mitigate, and capitalize on weather risk is another story.
Most businesses don’t fully realize the effect of changing and anomalous weather patterns on their business, or lack the resources to integrate weather data into their models. And that’s no surprise. Complex weather analysis in the world of big data can be overwhelming, but done right, it can not only offer opportunities to mitigate operational or supply chain interruptions, but also uncover new opportunities that can be harnessed to give you a competitive edge. Weather Source makes weather analytics simple and accessible – by providing globally uniform data for generating insights and business intelligence that your organization can act on, fast and at scale.
By far, the biggest challenge we see organizations facing, ahead of unlocking insights from weather data, is the complexity of data sets. Real and reliable weather information management often requires high-performance distributed systems that are costly and require skilled personnel to run. This, as well as a lack of time, manpower, or expertise can hinder businesses from utilizing weather and climate data effectively. While weather data analytics can be difficult if organizations try to do it alone, Weather Source has done the heavy lifting by offering a single source of truth for all things weather and climate.
Weather Source, backed by Google’s serverless enterprise data warehouse BigQuery, solves the challenges of weather data analytics with a powerful and scalable data warehousing solution fit for a wide range of industries. BigQuery handles resource provisioning, upgrades, security, and infrastructure management, allowing users to focus on data analysis, while Weather Source makes weather and climate data easily and instantly accessible to businesses globally, enabling insights and business intelligence.
We are excited to announce we are making Weather Source datasets available in Analytics Hub and on the Google Cloud Marketplace. This will make it easier and faster for customers to find, purchase, and deploy these datasets directly into their BigQuery environment, without having to move data or maintain expensive ETL processes, all at the scale that consumers need.
The foundation of Weather Source is the proprietary OnPoint Grid, which consists of millions of grid points covering every land mass on the globe and up to 200 miles offshore. At each grid point, data is collected from a multitude of advanced weather sensing technologies including airport reporting stations, satellites, radar systems, and trusted weather models from ECMWF and NOAA. By integrating and unifying input data from these diverse sources, Weather Source is able to provide accurate and comprehensive weather information that is both temporally and spatially complete, as well as globally uniform.
Businesses across industries have found success with Weather Source by using it to quantify the historical impact of weather and climate on various business KPIs, and then create predictive models to improve business operations, increase sales, reduce waste, predict risks, and prevent future losses – just to name a few outcomes.
Weather Source works with leading organizations across industries and use cases. A few examples include:
Healthcare
A large pharmaceutical company used Weather Source data in BigQuery to create an influenza forecast. By correlating historical weather data with historical flu transmission data, the customer was able to identify which weather parameters (temperature, humidity, and precipitation) and at what severity level, resulted in increased flu transmission. After the historical analysis was complete, a predictive flu forecast model was created that used forecast data to trigger advertising campaigns in areas that were forecast to have a flu outbreak. Its algorithms predicted when and where the flu would strike with up to 97% accuracy.
FinTech
A large quantitative firm used Weather Source historical data to quantify the impact of a snow forecast on the sale of a large ski resort’s season passes. The analysis focused on anomalous early starts or late ends to the ski season and determined that early forecasts of snow resulted in increased purchases of season passes and late season forecasts of snow resulted in higher-than-normal early purchases of season passes for the next season.
Retail
A large tire manufacturer used Weather Source historical actuals and historical forecast data to understand how many tires would be sold on the first snow forecast, and whether purchasing behavior changed if it actually didn’t snow. Using historical weather information correlated with historical tire-sale information, the analysis revealed that 88% of annual snow tire sales were on the first snow forecast and the purchasing volume did not change if it didn’t snow. The results of the analysis also showed that the manufacturer sold considerably fewer tires — an average of 2% per month — after the initial snow forecast. As a result of the analysis, the manufacturer began to focus considerable marketing efforts on the first snow forecast in locations where it sold tires.
Energy and Utilities
A large energy company uses OnPoint Weather in BigQuery for energy-demand forecasts. The company used OnPoint Climatology (the statistical representation of weather over time) as a baseline for energy demand during average conditions. By differencing historical actuals from the climatology data, the company is able to identify departures from normal (anomalies). By correlating the anomalies to energy demand, it can then understand historic energy demand during normal or average conditions and, more importantly, during anomalous weather conditions.
Unlike other providers that rely solely on singular weather sensing inputs (e.g., airport reporting stations) and then use simple interpolation methods to extend the data to your location of interest, Weather Source approaches weather in a markedly different way. On a daily basis, Weather Source ingests multiple terabytes of data from thousands of weather sensing inputs, then unifies the inputs on a high-resolution grid, and maps the weather information to precise business locations with low latency – and BigQuery is the enabler behind it:
BigQuery’s scalability and serverless architecture allow Weather Source to deliver petabyte-sized data at scale, meeting the needs of users on demand.
The ability to store, explore, and run queries on generated data from servers, sensors, and other devices using BigQuery provides the flexibility to transform data and gain business intelligence quickly.
Analytics Hub enables businesses to easily subscribe to Weather Source datasets and securely access the latest Weather Source data in their own BigQuery instance as a linked dataset.
From there, BigQuery gives data analysts access to ML and geospatial capabilities via SQL commands to query any location of interest (i.e., latitude / longitude coordinate, ZIP or Postal Code, etc.) and resource (i.e., past, present or forecast weather data) and combine them with their own data. This enables them to quantify the impact of weather and climate and create models to predict and mitigate risk and prevent future losses and business disruption.
Do you want to start gaining some control over the uncontrollable? With Weather Source making weather and climate data easily accessible, and BigQuery analyzing and generating insights within a secure cloud platform, the forecast for your organization looks promising. Learn more about Weather Source.
Google is helping tech companies like Weather Source build innovative applications on Google’s data cloud with simplified access to technology, helpful and dedicated engineering support, and joint go-to-market programs through the Built with BigQuery initiative. Participating companies can:
Accelerate product design and architecture through access to designated experts who can provide insight into key use cases, architectural patterns, and best practices.
Amplify success with joint marketing programs to drive awareness, generate demand, and increase adoption.
BigQuery gives ISVs the advantage of a powerful, highly scalable data warehouse that’s integrated with Google Cloud’s open, secure, sustainable platform. And with a huge partner ecosystem and support for multi-cloud, open source tools and APIs, Google provides technology companies the portability and extensibility they need to avoid data lock-in.
Read More for the details.
In this post I will be looking at how we can use Google Workflows to orchestrate tasks and processes. I specifically want to detail how to process data within a Workflow. This is useful when looking to parse incoming data for information relevant to one or more next stages within the Workflow.
Google Workflows allows the orchestration of hosted or custom services like Cloud Functions, Cloud Run, BigQuery or any HTTP-based API, in a defined order of steps.
Therefore, Workflows are useful where we want to control the flow of execution of steps.
Workflows are also useful when we are looking to pass information like event data, error codes, response body, onto the other orchestrated services.
But how do we capture and examine the incoming data to the Workflow and use it in the Workflow?
Let’s consider a simple online flower store. In this scenario, our online store accepts orders for flowers – name, productID, quantity – and generates an orderID.
To process each order successfully in the system, we want to capture the orderID of an order of flowers, and pass that value on to any other services in our online store process.
The sample order data we are looking to process is a simple one-value array, as follows:
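A minimal example of such a payload (the orderID matches the output shown later in this post; the other field values are illustrative):
{"flowers": [{"name": "Roses", "productID": "12", "quantity": "2", "orderID": "001233"}]}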
To achieve this using Google Workflows we have set up the following architecture:
First, we will publish our flower order to a Pub/Sub topic. This Pub/Sub topic will then trigger a Workflow, using Eventarc.
Then the input data – our flower order information – in the Pub/Sub message is processed by the Workflow.
This can then be passed as input to other services orchestrated by the Workflow, for example a Cloud Function or a Cloud Run service.
Let’s set this up by first creating the Pub/Sub topic, then the Workflow and finally the Eventarc trigger to link them together.
First, let’s enable the relevant APIs in Cloud Shell:
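For example, enabling the Pub/Sub, Workflows, Workflow Executions, and Eventarc APIs (adjust the list to the services you plan to orchestrate):
$ gcloud services enable \
    pubsub.googleapis.com \
    workflows.googleapis.com \
    workflowexecutions.googleapis.com \
    eventarc.googleapis.com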
Create the Pub/Sub topic:
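A command along these lines does it:
$ gcloud pubsub topics create new_pusub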
where new_pusub is the name of your Pub/Sub topic
To create a Workflow, we need a configuration file. This is where all the logic for the Workflow steps and data processing we require is written.
Create a file called workflow.yaml and paste in the following content. Save the file.
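Here’s a sketch of what that content can look like, reconstructed from the steps described below (the init_vars default is illustrative):
main:
  params: [event]
  steps:
    - init_vars:
        assign:
          # Global variable that will hold the orderID we extract
          - input: ""
    - decode_pubsub_message:
        assign:
          # Decode the base64-encoded Pub/Sub message carried in the event
          - inbound_message: ${base64.decode(event.data.message.data)}
          # Parse the decoded text into a JSON object
          - full_msg: ${json.decode(text.decode(inbound_message))}
          # Query the first (and only) flowers entry for its orderID
          - input: ${full_msg.flowers[0].orderID}
    - finish_workflow:
        return: ${input}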
The Workflows service receives incoming events and converts them to a JSON object (following the CloudEvents specification). This is passed to the Workflow execution request as a runtime argument and used in the Workflows workspace.
Note the following key steps in our Workflow:
params – runtime arguments passed to the Workflow as part of the execution request. This names the variable that stores the data – in this case ‘event’.
init_vars – defining global variables
decode_pubsub_message – decoding and parsing the incoming event data. This is a key step for our use case and is described below.
finish_workflow – return output
The message processing is handled in the decode_pubsub_message step in a few stages. Let’s go through each one:
Firstly, the ‘inbound_message’ variable is assigned the message data value of the decoded inbound event data ${base64.decode(event.data.message.data)};
The JSON format of the event is documented in the CloudEvents – JSON event format document.
Then the ‘full_msg’ variable is assigned the decoded JSON from the ‘inbound_message’ variable: ${json.decode(text.decode(inbound_message))}
Lastly, the ‘input’ variable is assigned the orderID value that we want in the Workflow. We query the decoded message for the first value of the flowers array – flowers[0] – and then for its orderID value: ${full_msg.flowers[0].orderID}
This can then be used as input to steps listed in the Workflow. Here, we simply return this value as output of the Workflow ( ${input} ).
So let’s create the Workflow:
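For example, deploying the definition we just saved (the location is illustrative):
$ gcloud workflows deploy new-workflow \
    --source=workflow.yaml \
    --location=us-central1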
where new-workflow is the workflow name
OK, now we have looked at the required logic, and built our simple architecture, let’s see this in action.
Create a service account used to invoke our workflow:
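For example (the service account name workflow-invoker is illustrative):
$ gcloud iam service-accounts create workflow-invoker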
Enable the relevant permissions:
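At a minimum, the service account needs permission to invoke Workflow executions, for example:
$ gcloud projects add-iam-policy-binding ${PROJECT_ID} \
    --member="serviceAccount:workflow-invoker@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="roles/workflows.invoker"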
where ${PROJECT_ID} is the ID of your GCP project
Create an Eventarc trigger that will invoke the Workflow execution when we publish a message to the Pub/Sub topic:
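A trigger along these lines ties the topic, service account, and Workflow together (the trigger name and location are illustrative):
$ gcloud eventarc triggers create new-trigger \
    --location=us-central1 \
    --destination-workflow=new-workflow \
    --destination-workflow-location=us-central1 \
    --event-filters="type=google.cloud.pubsub.topic.v1.messagePublished" \
    --transport-topic=projects/${PROJECT_ID}/topics/new_pusub \
    --service-account="workflow-invoker@${PROJECT_ID}.iam.gserviceaccount.com"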
where ${PROJECT_ID} is the ID of your GCP project
Now we have everything we need, let’s run it.
Publish a message to the Pub/Sub topic:
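For example, publishing the sample order shown earlier:
$ gcloud pubsub topics publish new_pusub \
    --message='{"flowers": [{"name": "Roses", "productID": "12", "quantity": "2", "orderID": "001233"}]}'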
From the console open Workflows / new-workflow / Executions. Here we can see the successful execution of the Workflow:
We can also see the output of the message in the latest execution by clicking the execution ID link:
Note the output on the right hand side of the panel, showing our total flower order, and the separate orderID (001233).
So, to wrap up, we have passed input data to a Workflow, looked at the logic required to extract the input event data within the Workflow, and finally returned this as output. The ability to share data in the Workflow workspace across all the steps within it extends the ability of Workflows to orchestrate complex solutions, with end-to-end observability.
Interested in finding out more about Workflows? Check out the Google Codelab Triggering Workflows with Eventarc. There are also lots of Workflow code examples on the All Workflows code samples page. Finally, a comprehensive tutorial on building event-driven systems with Workflows and Eventarc is available on YouTube: Build an event-driven orchestration with Eventarc and Workflows.
Read More for the details.
In the rapidly changing world of technology, it is essential for professionals in the DevOps, IT Ops, Platform Engineering, and SRE domains to stay up-to-date on the latest innovations and best practices. Google Cloud Next is the ideal place to do just that! Think of it as a golden opportunity to gain insights, expand your knowledge, and connect with like-minded peers. Read on for five compelling reasons why attending Next ‘23 is a must for operations professionals this year.
A recent IDC survey1 found that IT operations is the area with the most potential to benefit from generative AI assistance. From automating routine tasks to predicting and preventing potential issues, generative AI could revolutionize the way IT teams work. At Next ‘23, you’ll have the chance to delve into the latest AI breakthroughs and explore how they can help supercharge IT operations. Our expert speakers will share real-world success stories of building an AIOps platform on Google Cloud, revealing the immense potential AI holds for optimizing workflows, enhancing system reliability, and driving efficiency in IT environments.
Platform engineering has emerged as a crucial discipline for organizations aiming to build robust, scalable, and efficient software delivery platforms. Next ‘23 is a great place for professionals to dive deep into the principles and practices of Platform Engineering. Through sessions such as a panel with industry-leading thought leaders, and platform engineering customer success stories, you’ll gain the expertise you need to architect and build your own software delivery platform for your developers.
Google Kubernetes Engine fans, you don’t want to miss out on this opportunity to gain insights into the latest GKE features, best practices, and real-world use cases from your peers. As the first managed Kubernetes offering, GKE has won the hearts of countless customers, and at Next ‘23 this year, you’ll get the chance to discover why! GKE customers like ANZ Bank, Equifax, Ordaōs Bio, Etsy, and Moloco will share their success stories. More than that, we’ll also unveil the latest innovations that we’re adding to GKE to help you run modern workloads.
Learning from peers is invaluable for technology professionals. At Next ‘23, you’ll be exposed to customer success stories about how leading organizations have leveraged Google Cloud’s solutions to drive business growth and innovation. Hear from Charles Schwab on their best practices operating Google Cloud services; learn from SAP about how they control cost and build financial resilience with automation; listen to Wayfair on how they cut costs by 64% by moving from open source to Google Cloud operational tools. Snap will share their approach to observability; Priceline will discuss how they optimize Kubernetes for reliability and cost-efficiency; and Uber will show you how they build an AIOps platform with Google Cloud services.
Networking and community building are an integral part of the Google Cloud Next experience, and this year, it gets even better! For the first time since the pandemic, our premier event is going back to its roots as an in-person gathering. Engage with your peers at the Innovator Hive’s specialized zone for DevOps and IT Ops professionals. Get your hands on the interactive demos, try the micro-challenges, throw yourself into immersive learning, play the games, and meet with Google experts and your peers for architecture deep dives. We’re confident these interactions will open doors to new perspectives, ideas, and future partnerships.
Google Cloud Next is not just another tech event; it’s an immersive experience that has the power to transform your career and ignite your passion for all things cloud and technology. From exploring the latest GKE innovations to discovering the magic of AI in IT operations, from embracing platform engineering to learning from your peers, Next ‘23 is brimming with opportunities to expand your horizons and elevate your skills.
So, don’t miss your chance to be a part of this extraordinary event. Mark your calendar, pack your enthusiasm, and join us at Google Cloud Next. Together, we’ll unleash the full potential of cloud technology and pave the way for a brighter, more innovative future.
1: US – Generative AI, IDC, April 2023; N=200; Base=All Respondents; Notes: Managed by IDC’s Global Primary Research Group; Data Not Weighted. Use caution when interpreting small sample sizes.
Read More for the details.
Go is the world’s leading programming language for cloud-based development, used by millions of developers to build and scale their cloud applications and business-critical cloud infrastructure. Whether it’s building CLIs, web applications, or cloud and network services, developers find Go easy to learn, easy to maintain, and packed with useful features such as built in concurrency and a robust standard library.
And now, getting started with Go on Google Cloud is a little bit easier. Following Go’s recent announcement of gonew, an experimental tool for instantiating new projects in Go from predefined templates, Google Cloud is releasing four templates for gonew that developers can use to bootstrap their Go applications across several Google Cloud products. This includes common use cases such as creating a simple HTTP handler Cloud Function, subscribing to a Cloud Pub/Sub topic, or creating an HTTP Server on Cloud Run.
httpfn: A basic HTTP handler (Cloud Function)
pubsubfn: A function that is subscribed to a Pub/Sub topic, handling a CloudEvent (Cloud Function)
microservice: An HTTP server that can be deployed to a serverless runtime (Cloud Run)
taskhandler: A basic app that handles tasks from requests (App Engine)
Make sure you have Go installed on your machine; if not, you can download and install it from here.
Start by installing gonew using go install:
$ go install golang.org/x/tools/cmd/gonew@latest
To generate a basic HTTP Handler Cloud Function, instantiate the existing httpfn template by running gonew in your new project’s parent directory. As of now, gonew’s syntax expects two arguments: first, the path to the template you wish to invoke, and second, the module name of the project you are creating. For example:
$ gonew github.com/GoogleCloudPlatform/go-templates/functions/httpfn yourdomain.com/httpfn
Deploy this new Go module to Cloud Functions by following the steps listed here; you can also try this in the Cloud Shell editor.
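As a rough sketch, a 2nd gen deployment might look like the following (the region and runtime are illustrative, and the entry point must match the handler name exported by the generated code):
$ gcloud functions deploy httpfn \
    --gen2 \
    --runtime=go121 \
    --region=us-central1 \
    --source=. \
    --entry-point=YOUR_HANDLER_NAME \
    --trigger-http \
    --allow-unauthenticated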
We have big plans for Go on Google Cloud, with several new enhancements on our roadmap. In the meantime, please try out these new templates, deploy them to Google Cloud using the instructions provided and let us know how we can make them better. Please use this to report any issues.
Read More for the details.
Enhanced Security and Compliance (ESC) add-on helps simplify the complexity of meeting security and regulatory requirements for Azure Databricks customers.
Read More for the details.
Starting today, Local Write Forwarding is generally available for Amazon Aurora MySQL-Compatible Edition 3 (with MySQL 8.0 compatibility). This new capability makes it simple to scale read workloads that require read-after-write consistency. Customers can now issue transactions containing both reads and writes on Aurora read replicas, and the writes will be automatically forwarded to the single writer instance for execution. Applications requiring read scale can utilize up to 15 Aurora Replicas for scaling reads without the need to maintain complex application logic that separates reads from writes. Check out this blog to find out how local write forwarding can help reduce the complexity of your application code.
Read More for the details.
Starting today, Amazon Aurora MySQL-Compatible Edition 3 (with MySQL 8.0 compatibility) will support MySQL 8.0.28. In addition to several security enhancements and bug fixes, MySQL 8.0.28 includes several improvements, such as Instant DDL support for rename column operations, support for multi-threaded DDL operations, support for the TLS v1.3 protocol, and performance schema monitoring enhancements. For more details, refer to the Aurora MySQL 3 and MySQL 8.0.28 release notes.
Read More for the details.
Amazon Connect now supports custom flow block titles that make it easy to identify and distinguish blocks within your flow. Custom flow block titles can be set within the flow designer UI or via API. For example, you could rename a “Play Prompt” flow block to “Welcome message” or a “Get customer input” flow block to “Hotel booking Lex bot”. Custom defined flow block titles also show up in CloudWatch logs making it easier to diagnose where errors may be occurring.
Read More for the details.
Today, AWS IoT Core announced support for new algorithms for certificate signing and key generation, expanding the list of already supported asymmetric X.509 client certificate signature schemes. AWS IoT Core is a managed service that allows customers to connect billions of Internet of Things (IoT) devices to AWS and uses X.509 certificates as one of the means to authenticate client and device connections to the AWS cloud. Support for Rivest Shamir Adleman Signature Scheme with Appendix based on the Probabilistic Signature Scheme (RSASSA-PSS) signing and P-521 elliptic curve key algorithms provides developers more flexibility to strengthen the security posture of their IoT solutions and comply with their organization’s specific cryptographic standard compliance requirements.
Read More for the details.
In this blog post, we’re highlighting Decathlon Digital for the DevOps achievements that earned them the ‘Aligning to accelerate’ award in the 2022 DevOps Awards. If you want to learn more about the winners and how they used DORA metrics and practices to grow their businesses, start here.
Founded in Lille, France in 1976, Decathlon is a company whose international and distributed team is dedicated to making sport accessible to the many through digital & technology. With over 1,750 stores in 70 countries and regions, we are the largest multisport goods retailer in the world.
As an international leader in the sports retail industry, Decathlon Digital had already recognized the strengths of operating in the cloud. We were already part of teams that used good Agile practices for more regular and better software delivery.
While this seems like a cloud success story, we started noticing areas that needed improvement, specifically around aligning teams and technology. We found it essential to incorporate an agile practice that doesn’t merely focus on individual achievements, but rather promotes well-being and psychological safety through effective teamwork. Our goal was to create a synergistic environment that fosters collaboration and alignment, empowering teams to deliver maximum value to our products for our customers and sports enthusiasts swiftly and skillfully.
When we came across the Accelerate State of DevOps Report, its promises of enhanced performance, job satisfaction, and improved teamwork resonated with us. We saw this study as a pragmatic way to help our team members grow while maintaining agility.
However, we wanted to avoid a top-down approach that would force changes upon teams without their understanding or requiring unnatural efforts. We had witnessed enough “miracle” solutions that, at best, fizzled out and, at worst, demotivated everyone. Our previous successes were driven by a sense of purpose, so we aimed to continue embracing the essence of agility instead of just going through the motions — no Cargo Cult mentality here.
When Decathlon discovered Google’s Accelerate program — and specifically a workshop run by our Premier Partner, Zenika — we found that the DevOps practices they taught could help our teams improve production speed, efficiency, and quality. To achieve the benefits we learned about through Accelerate, we knew we needed to:
Better understand our own processes and identify areas of improvement
Embrace a culture of sharing learnings and best practices
To facilitate DORA adoption by people and teams and finally change the culture and mindset, we had to build an approach for and by teams that:
Brings meaning
Explains DORA mechanics and metrics
Arouses interest in the teams and gives rise to initiatives and improved outcomes
We therefore implemented an approach that engages teams, centered on three major pillars of the value stream: initiate, invest, and improve.
Through the training and insights Zenika shared in their Accelerate workshop, our teams learned about the DevOps principles that could help us figure out how to improve our processes. We determined that the four key aspects we needed to focus on were lean management, leadership and culture, technology and architecture, and lean product development. The transformation process that followed took 6 months, and included not just developers and tech people, but everyone in the value chain, including product managers, designers, communications, and business executive leads.
After this first step of bringing teams together and spreading learnings from Accelerate, the next step was to better understand our own technology and processes so that we could develop an informed transformation strategy. This led to the development of Delivery Metrics, an automated measurement tool that uses BigQuery to collect data from every tool used in delivery. Once this data has all been collected, Google Data Studio makes it all available through a dashboard that can graph out and illustrate results. With this system, teams had a centralized, complete, and synchronized view that they could use to see what was working and where they could make improvements.
With these first steps underway, the third step was putting it all into action with an eye toward continuous improvements. At least once a month, all 21 teams share the metrics from Delivery Metrics and discuss their wins and learnings to inspire improvement action plans and to share the best practices they’ve developed. This not only helps other teams, but by having a repository of best practices, onboarding new team members is much quicker and more thorough.
One area that has really benefited from this three-step plan is the movement toward automation. By automating repetitive tasks, the potential for human error is greatly reduced and teams can spend less time on deployment and testing, and more on innovating.
It also unlocked two best practices that greatly improve team efficiency and productivity. The first is focusing on small-batch work while allowing the technical teams to handle the larger ones, letting them spend more time and energy on making sure potential weak points are as error-free as possible. This is also the thought process behind the other best practice — putting work-in-progress limits on the number of objectives, items embedded in sprints, and so on.
Alignment has created huge transformations across the business, taking a process that originally only involved four product teams and expanding it out to 21 teams — including 15 which are fully involved in cultivating best practices. These learnings then spread out to the larger team, to the point that 70 teams now measure themselves with Delivery Metrics and at least 45 are actively tracking and creating reports on best practices.
And because this is all built off of a culture of data sharing without blaming, teams aren’t afraid to be transparent and show where they are having difficulty. Teams can now give product management and the business more predictability than the old method of subjective timing and velocity estimates. It also gives teams a more informed starting point when investigating metrics that are too hard to automate, like change failure rate and MTTR.
Here are some of the quantifiable results of how aligning through principles introduced in Accelerate has already helped our teams:
Deployment frequency: After a few months, our teams improved from one or two deliveries a month to averaging more than four a week.
Lead time for changes: Our best teams have improved the lead time from more than 15 days to less than two days.
Change failure rate: This stayed at a continuous level of less than 15%.
Time to restore service: Decreased from more than five days to less than one day.
Stay tuned for the rest of the series highlighting the DevOps Award Winners and read the 2022 State of DevOps report to dive deeper into the DORA research.
Read More for the details.
Amazon Inspector now provides enhanced vulnerability intelligence as a part of its findings. The enhanced vulnerability intelligence includes names of known malware kits used to exploit a vulnerability, mapping to the MITRE ATT&CK® framework, the date the Cybersecurity and Infrastructure Security Agency (CISA) added the vulnerability to its Known Exploited Vulnerabilities (KEV) catalog, the Exploit Prediction Scoring System (EPSS) score, and evidence of public events associated with a vulnerability. This expands the currently provided vulnerability intelligence such as the Common Vulnerability Scoring System (CVSS) score and known public exploit information. Inspector collects this information from internal Amazon research, CISA, and our partner, Recorded Future. You can access the enhanced vulnerability intelligence in the finding details within the Amazon Inspector console.
Read More for the details.
Amazon Connect now supports archiving and deleting flows from the flow designer UI, making it easier to manage flows that are not in use or no longer needed. For example, flows used only during certain times of the year can be archived when not in use and then unarchived when needed. When a flow has been archived, you can then permanently delete the flow so it is no longer available within your list of flows.
Read More for the details.
Amazon Connect now supports restricting the use and access of attributes to a single flow. Now, you can granularly control when an attribute is associated with a contact (and shows up in the contact record) or if it can only be accessed by a specific flow (used only when that flow is executing a customer experience). For example, if your flow is automatically authenticating your customer’s identity using personally identifiable information (PII), you can use flow attributes to prevent the PII information from showing up in contact records or to an agent.
Read More for the details.
Amazon Connect flow designer now includes a toolbar with shortcuts to new editing capabilities such as undo (including a history of previous actions) and redo, along with existing shortcuts such as copy and paste. You can also now add notes to a flow, allowing you to document things like what the flow is doing or a to-do list of what updates you want to make. You can attach notes to a specific flow block and search notes using the toolbar.
Read More for the details.
Today, AWS Clean Rooms launches two new capabilities that give customers flexibility to generate richer insights: custom analysis rule and analysis templates. These capabilities enable customers to bring their own custom SQL queries into an AWS Clean Rooms collaboration based on their specific use cases. With the custom analysis rule, customers can create their own queries using advanced SQL constructs, as well as review queries prior to their collaboration partners running them. This workflow gives customers built-in control of how their data is used in collaborations upfront, in addition to reviewing query logs after analyses are complete. Using analysis templates, customers can create queries with parameters that provide reusability and flexibility to those running queries in a collaboration. This helps customers expand and automate the types of analyses they run frequently with multiple partners, minimizing the need to write new SQL code when analyzing collective data sets.
Read More for the details.
Amazon Connect scheduling now allows contact center managers to automatically generate agent schedules with a combination of fixed and flexible working days each week, i.e., agents have certain mandatory work days while other days are scheduled based on demand. Before this launch, managers had to manually adjust a subset of agent schedules to align with their flexible work contracts and regional labor laws. With this launch, the system automatically proposes flexible schedules, allowing managers to create more optimized, labor/union-compliant agent schedules and freeing up valuable time for more important tasks.
Read More for the details.
Amazon Connect scheduling now allows managers to generate agent schedules with an appropriate number of activities, including breaks or meals, based on the duration of agent shifts. Before this launch, schedules were generated with a fixed set of shift activities, leading to numerous shifts needing time-consuming manual adjustments based on agents’ work durations. With this launch, Amazon Connect scheduling automatically generates the required number of breaks and meals according to configured inputs reflecting shift durations and labor/union rules, saving time for managers.
Read More for the details.
Amazon Connect scheduling now offers new time off balance and group allowance features, empowering contact center managers and agents to handle time off more efficiently. Before this launch, managers had to manually cross-verify time off balances before approving or declining requests, and agents had to contact their managers via email or third-party tools to request or change their time off schedule. With this launch, managers can easily import agent time off balances and group allowances in bulk from third-party HR systems (e.g., 120 hours of vacation time, 40 hours of sick time), and select either an automated or manual approval workflow for their groups. Agents can request time off and receive automatic approvals (or declines) based on their time off balances and group allowances, in addition to other time off rules.
Read More for the details.