Azure – Public Preview: Azure Chaos Studio now available in East Asia
Azure Chaos Studio is now available in the East Asia region.
Read More for the details.
Amazon QuickSight now offers the hide collapsed column control to help users better manage collapsed columns in their pivot tables. Authors can use this control to automatically hide collapsed row header fields, eliminating the need for unnecessary scrolling and making pivot tables look more compact and organized. To learn more, click here.
Read More for the details.
You can now run OpenSearch and OpenSearch Dashboards version 2.5 in Amazon OpenSearch Service. With OpenSearch 2.5, OpenSearch Service adds several new features and enhancements, such as support for Security Analytics, Point in Time search, and improvements to observability and geospatial functionality.
Read More for the details.
Want to know what Service Directory is and why you should be using it in your Google Cloud environment?
Service Directory is a managed service that gives you a single place to publish, discover, and connect services.
Among the many reasons to use Service Directory, here are six of my favorite benefits:
Service Directory provides a single place that makes it easy to get visibility into, and control over, your services. This means you have quick access to your services to do things like enrich service data using names, annotations, and key-value pairs, and apply traffic management policies with Traffic Director. You can also query your services using DNS, HTTP, and gRPC, which gives you multiple ways to interact with them.
Service Directory supports multiple environments, such as Google services, third-party services, on-premises services, multicloud services, and more. Your administrator does not need to deal with the complexity that comes with using multiple applications to track your services.
Traffic Director is a managed control plane for application networking that can also integrate with Service Directory. Once configured, Traffic Director will query Service Directory for information on the registered service and how to reach it. This simple service binding allows Traffic Director to route traffic and apply policies to services you have registered.
This eliminates the complex configuration that would otherwise be needed for each environment the service runs in, e.g., Compute Engine, Google Kubernetes Engine, load balancers, multicloud, or on-premises. Refer to the Standardize traffic management: Service Directory and Traffic Director blog for more.
The configuration is simple.
Register the service with Service Directory.
Bind the service to a Traffic Director backend service.
For details on setting up the integration, check out the Set up integration with Service Directory document.
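To make the registration step concrete, here is a minimal sketch using the Service Directory Python client library (google-cloud-service-directory). The project, region, namespace, service, and endpoint names are placeholders, and the Traffic Director binding itself is configured separately on the backend service.

```python
from google.cloud import servicedirectory_v1

# Placeholder project and region for illustration only.
PROJECT, LOCATION = "my-project", "us-central1"

registration = servicedirectory_v1.RegistrationServiceClient()

# 1. Create a namespace to group related services.
namespace = registration.create_namespace(
    parent=f"projects/{PROJECT}/locations/{LOCATION}",
    namespace_id="payments",
    namespace=servicedirectory_v1.Namespace(),
)

# 2. Register the service, optionally enriching it with annotations.
service = registration.create_service(
    parent=namespace.name,
    service_id="checkout",
    service=servicedirectory_v1.Service(annotations={"env": "prod"}),
)

# 3. Add an endpoint (address and port) that clients should reach.
registration.create_endpoint(
    parent=service.name,
    endpoint_id="checkout-vm-1",
    endpoint=servicedirectory_v1.Endpoint(address="10.0.0.12", port=8443),
)

# Consumers can then resolve the service over gRPC/HTTP rather than DNS.
lookup = servicedirectory_v1.LookupServiceClient()
resolved = lookup.resolve_service(
    request=servicedirectory_v1.ResolveServiceRequest(name=service.name)
)
for endpoint in resolved.service.endpoints:
    print(endpoint.address, endpoint.port)
```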
Cloud DNS is Google Cloud’s highly scalable, managed DNS service. Cloud DNS integrates with Service Directory so that you can query your Service Directory namespace using a DNS name. Service Directory automatically updates Cloud DNS as service records change or new services are added to the Service Directory namespace.
The configuration requires the following:
Have an existing Service Directory namespace
Create a Cloud DNS zone
Select the option to “Use a service directory namespace”
For details on setting up the integration, check out the Configure a Service Directory DNS zone document.
Private Network Access allows Google Cloud product access to HTTP(S) endpoints in customers’ private networks without using the internet. An example of this would be allowing Dialogflow to communicate with a VM server using private network access connectivity.
The configuration requires the following:
A VPC
A Service Directory project
A Google Cloud service project with the configuration that invokes private network access
For details on setting this up, check out the Configure private network access and Using Service Directory for private network access documentation.
Service Directory can be enabled in multiple ways, including the console, gcloud commands, and auto registration. In addition to the integrations mentioned above, Service Directory integrates with several other Google Cloud services, including Load Balancers and Google Kubernetes Engine. This allows users to capitalize on these views to manage their services.
If you would like to check out the service, there is a hands-on lab available on the Skills Boost platform called Service Directory: Qwik Start. This allows you to complete several tasks in a virtual environment. The lab covers the following activities:
Configuring Service Directory, with a namespace, service, and endpoint
Configuring a Service Directory DNS zone
Using Cloud Logging with Service Directory
To learn more about enabling and using Service Directory check out the following:
Skills Boost: Service Directory: Qwik Start
Blog: Standardize traffic management: Service Directory and Traffic Director
Documentation: Service Directory
Want to ask a question, find out more or share a thought? Connect with me on LinkedIn and send me a message.
Read More for the details.
Here at Sumitovant, we’re developing next-generation therapies for treating cancer, life-threatening immune disorders in infants, women’s health issues, and other medical conditions. Thankfully, increasing access to anonymized medical data, pharmaceutical studies, clinical trial results, and other third-party data sources gives our medical teams and physician customers more information than ever before about treatments’ efficacy and adoption. However, as we’ve added data sources, we’ve learned that more data doesn’t automatically translate into better insights.
Because Sumitovant is a biopharmaceutical company, the scientific process directs everything we do. The proof or disproof of every hypothesis involves a sequence of ever-more-specific questions answered with available data. Because each question depends on the previous inquiry’s answer, it’s not possible to compile a complete list up-front. This meant that my team had to run a custom query to answer every question every researcher had as they refined their hypotheses.
For years, our back-and-forth query process worked, but as we added data sources, the process became too time-consuming. We had to write increasingly complex queries, and researchers waited several days to get answers to their questions. Realizing our ad hoc query process was no longer sustainable or scalable, we looked for a solution that enabled researchers to answer their own questions using raw data while meeting our global data security and privacy regulations.
We evaluated solutions and chose Looker because it:
Reduces the time to develop and scale flexible queries on massive data sets. In one day of development with Looker, we can accomplish what used to take two weeks of manual back-and-forth analysis.
Makes it easier to hook into and integrate more data sources, such as researchers’ Redshift and Athena cloud databases and our partners’ multicloud enterprise data lakes.
Supports our technology-agnostic strategies by providing APIs that enable deep integration with our Infrastructure as Code data platforms.
Simplifies the creation of governed dashboards that give people permissioned access to data and the freedom to explore it on their own.
For our initial Looker use case, I built dashboards that hook into raw third-party electronic health record data for a health economics and outcomes research (HEOR) team. The dashboards meet the diverse needs of the team’s researchers, pharmacists, medical professionals, and data scientists, and they give them answers in seconds rather than days. Enabling each team member to drill down into data themselves is also vital for greater exploration into treatment outcomes and generating data for publications that physicians rely on for treatment decisions.
The HEOR team’s early success with Looker is driving broader organic adoption across Sumitovant and its subsidiaries. That’s because in addition to saving time, Looker scales. I can easily build new dashboards that hook into raw data from Sumitovant, our subsidiaries, and our partners—and provide filters so that people can query the data for their specific disease research purposes. I recently created a natural language processing framework with Looker that uses topic modeling to analyze field notes from physician conversations about prostate cancer and women’s health treatments. This Looker dashboard allows our field medical team to quantify and identify the most pressing topics by understanding what physicians are saying across thousands of unstructured text comments and quotes.
Looker dashboards are a critical innovation for our company and our partners. Researchers can answer questions as they arise, work seamlessly with fewer delays, and accelerate the generation of data that providers rely on to improve patient outcomes and save lives.
Read More for the details.
Sift is a leader in digital trust and safety, empowering digital disruptors and Fortune 500 companies to unlock new revenue without risk by using machine learning to provide fraud detection services. Sift dynamically prevents fraud across many abuse categories through industry-leading ML-based technology, expertise, and an impressive global data network of over 70 billion monthly events.
Sift’s platform lets customers configure workflows using an intuitive UI to automate fraud detection, accommodating highly critical business actions like blocking a suspicious order, applying additional friction (such as requesting MFA) to a transaction, or processing a successful transaction. Customers can build and manage their business logic using workflows within the Sift UI, automate decision-making, and continue to streamline manual review. Sift’s customers process tens of millions of events in a single day through these workflows.
Below is an example of the Sift UI and how workflows enable customers to make risk-based decisions on user events, such as the creation of new incoming orders. In the example below, the workflow logic evaluates the payment abuse score of the incoming request and then redirects based on the level of severity. For example, as shown in route 3 (at the bottom of the image), if the score is greater than 50, email-based two-factor authentication is triggered to validate the incoming request; if it is greater than 90, the request is labeled as fraudulent and blocked.
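To make that route logic concrete, here is a tiny sketch of the decision in Python; the thresholds and action names come from the example described above, not from Sift’s actual implementation:

```python
def route_order(payment_abuse_score: float) -> str:
    """Hypothetical sketch of the workflow route described above."""
    if payment_abuse_score > 90:
        return "BLOCK"       # label as a fraudulent request and block it
    if payment_abuse_score > 50:
        return "EMAIL_2FA"   # add friction: email-based two-factor authentication
    return "ACCEPT"          # process the transaction normally


# An incoming order scored 72 would trigger email-based 2FA.
print(route_order(72))  # EMAIL_2FA
```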
When changes are being made to existing workflows, customers want to be confident that these changes will positively affect their core KPIs (user insult rate, fraud cost, monthly active users, customer lifetime value, etc.). Productionizing a workflow route that has a negative effect can reduce the ROI and potentially deny valid requests or non-fraudulent customers, which must be avoided at all costs.
The workflow backtesting architecture powered by BigQuery provides that functionality in a robust and scalable manner by enabling customers to:
Test changes for a route (as shown in the image above) within a running workflow
Self-serve and run “what-if” experiments with no code through the Sift UI
Enable fraud analysts to gauge a workflow’s performance for a route before publishing it in production
Here’s a high-level flow diagram of the fully managed and serverless Google Cloud building blocks utilized when issuing a Workflow Request:
Whenever the workflow service receives a request, it pushes the message containing the whole workflow request into a Cloud Pub/Sub topic. A Dataflow job gets the message and processes it through a parser (workflow request parser) that extracts all available fields from the message. This process enables Sift to transform a complex workflow request into a flattened, spreadsheet-like structure that is very convenient for later use. After that, a Dataflow job creates records in BigQuery (via Storage Write API) following a schema-agnostic design.
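A minimal Apache Beam (Python) sketch of that ingestion path is shown below; the topic, table, and flattening logic are illustrative placeholders, since Sift’s actual parser and schema-agnostic layout are more involved:

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def flatten_workflow_request(message: bytes) -> dict:
    """Placeholder parser: extract fields from a workflow request into a
    flat, spreadsheet-like record."""
    request = json.loads(message.decode("utf-8"))
    return {
        "client_id": request.get("client_id"),
        "workflow_id": request.get("workflow_id"),
        "payload": json.dumps(request),
    }


options = PipelineOptions(streaming=True)
with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadRequests" >> beam.io.ReadFromPubSub(
            topic="projects/example-project/topics/workflow-requests")
        | "FlattenRequest" >> beam.Map(flatten_workflow_request)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            table="example-project:workflows.requests",
            method=beam.io.WriteToBigQuery.Method.STORAGE_WRITE_API)
    )
```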
Conceptually, backtesting is implemented as dynamically generated SQL queries. In order to process a single backtesting call, Sift needs to run three or more queries that depend on each other and must be processed in a strictly defined order as a logical unit of work. The Sift console is utilized to perform backtesting async API calls. The orchestration service is a simple web application based on Spring Cloud that exposes an endpoint accepting JSON-formatted requests. Every request includes all backtesting queries packed along with all related query parameters and pre-generated JobIDs. The service parses the request and executes all queries against BigQuery in the correct order. For every single step, it writes a status record to a log table, which the API queries to determine whether the data is ready. The process is depicted in the following diagram:
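In code terms, the core of that orchestration sequence is simple: run each pre-generated query in order, then record its status in the log table. Sift’s service is built on Spring Cloud, but the same loop can be sketched with the BigQuery Python client (table and column names here are hypothetical):

```python
from google.cloud import bigquery

client = bigquery.Client()
LOG_TABLE = "example-project.backtesting.job_status"  # hypothetical status log table


def run_backtesting_sequence(steps):
    """steps: ordered list of (job_id, sql) pairs from a single backtesting request."""
    for job_id, sql in steps:
        job = client.query(sql, job_id=job_id)  # use the pre-generated JobID
        job.result()  # block: later queries depend on this one's output
        # Record completion so the API can report whether the data is ready.
        client.query(
            f"INSERT INTO `{LOG_TABLE}` (job_id, status, finished_at) "
            "VALUES (@job_id, 'DONE', CURRENT_TIMESTAMP())",
            job_config=bigquery.QueryJobConfig(
                query_parameters=[
                    bigquery.ScalarQueryParameter("job_id", "STRING", job_id)
                ]
            ),
        ).result()
```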
Every workflow request may contain tens of thousands of columns/fields that customers may decide to use for any backtesting request. The table sizes are in terabytes, with billions of records each. Furthermore, attributes are often added or removed prior to testing a new workflow, hence the need for a schema-agnostic design allowing for:
The introduction of new features without needing to modify the table’s structure
Changes to be implemented declaratively through modification of metadata vs. schemas
Deep customization and scalability allowing for more than ten thousand features of the same data type for customers that use many custom fields
As illustrated in the schema above, data is grouped into tables arranged by data type rather than a workflow-specific layout. Joining that data is accomplished by utilizing an associative metadata table, which can be thought of as a fact table linking to multiple data-type tables (dimension tables) in the context of a traditional data warehousing design.
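Under this layout, a backtesting query is assembled dynamically: the metadata table indicates which data-type table holds each requested feature, and the generated SQL joins only those tables. A simplified sketch of that query-building step follows (all table and column names are hypothetical):

```python
# Hypothetical mapping from data type to its type-specific table.
TYPE_TABLES = {
    "FLOAT": "workflows.features_float",
    "STRING": "workflows.features_string",
    "BOOL": "workflows.features_bool",
}


def build_backtest_sql(features, feature_types):
    """features: feature names requested for the backtest.
    feature_types: feature -> data type, as recorded in the metadata table."""
    selects, joins = [], []
    for i, feature in enumerate(features):
        table = TYPE_TABLES[feature_types[feature]]
        alias = f"f{i}"
        selects.append(f"{alias}.value AS {feature}")
        joins.append(
            f"LEFT JOIN `{table}` {alias} "
            f"ON {alias}.request_id = m.request_id "
            f"AND {alias}.feature_name = '{feature}'"
        )
    return (
        "SELECT m.request_id, " + ", ".join(selects)
        + "\nFROM `workflows.request_metadata` m\n" + "\n".join(joins)
    )


print(build_backtest_sql(["payment_abuse_score"], {"payment_abuse_score": "FLOAT"}))
```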
Data volumes are large, and schemas frequently change, making indexing labor-intensive and less applicable. During the evaluation of the initial workflow architecture, Sift performed comprehensive testing on deeply nested queries with large aggregations to identify the most suitable schema design allowing for performance and flexibility. Columnar storage formats fit well into this idea, as they minimize the IO necessary for loading data into memory and manipulating it via complex, dynamically generated queries. Before finalizing the design, Sift performed extensive benchmarking, running logically similar queries against comparable volumes of data managed by several data warehouse solutions and benchmarking the BigQuery-powered solution. The conclusion was that BigQuery’s Dremel engine and cluster-level file system (Colossus) provided the most performant and scalable architecture for their use case.
During several sessions of load testing and working through a set of “production-like” use cases with production volumes of data and throughput, the team reached satisfactory average response times of around 60 seconds for a single backtesting request and a max response time approaching 2 minutes for complex requests. Load testing also helped to estimate the number of BQ slots required to supply the expected workloads with enough computation resources. Below is an example of the initial JMeter test report (configuration: randomly chosen clientId/workflow configId/routeId):
Backtesting period: 45d
BQ Computation resources: 3000 BQ slots
Planned workload to reach: 30 requests per minute
Test duration: 30 mins
Additionally, the team started to experiment with the “BigQuery Slot Estimator” functionality, allowing for:
Visualization of slot capacity and utilization data for the workflow-specific project
Identification of periods of peak utilization when the most slots are used
Exploration of job latency percentiles (P90, P95) to understand query performance related to Workflow Backtesting Sequences
What-if modeling of how increasing or reducing slots might affect performance over time
Assessing the automatically generated cost recommendations based on historical usage patterns
Below is an example of utilizing BQ’s Slot Estimator to model how performance might change at different capacity levels, modeled at -100 and +100 slots of the current capacity:
From crypto and fintech companies to on-demand apps and marketplaces, customers of Sift see less fraud and more growth. Below is a small subset of examples of how Sift helps its customers reduce payment friction and focus on user growth.
Google’s data cloud provides a complete platform for building data-driven applications like the workflow backtesting solution developed by Sift. From simplified data ingestion, processing, and storage to powerful analytics, AI, ML, and data sharing capabilities, everything is integrated with the open, secure, and sustainable Google Cloud platform. With a diverse partner ecosystem, open-source tools, and APIs, Google Cloud can provide technology companies with the portability and differentiators they need to serve the next generation of customers.
Learn more about Sift on Google Cloud. Learn more about Google Cloud’s Built with BigQuery initiative.
We thank the Sift and Google Cloud team members who co-authored the blog: Pramod Jain, Senior Staff Software Engineer, Sift; Eduard Chumak, Staff Software Engineer, Sift; Christian Williams, Principal Architect, Cloud Partner Engineering; and Raj Goodrich, Cloud Customer Engineer at Google Cloud.
Read More for the details.
Casper Labs announced a collaboration with Google Cloud that will allow developers to launch public and/or private Casper nodes directly from Google Cloud. This enables a much more seamless and highly secure process for the millions of developers who want to build in blockchain environments without having to learn new, highly specialized programming languages. Additionally, Google Cloud will provide its scalable and reliable infrastructure to developers building on the Casper Protocol.
As blockchain technology matures, a growing number of businesses are embracing it as a key way to drive new efficiencies and realize cost savings.
According to a recent Casper Labs study, 87% of executives polled in the United States, United Kingdom, and China reported plans to invest in a blockchain solution in 2023. This is due in no small part to recent innovations that help organizations overcome the so-called Blockchain Adoption Trilemma, which previously held that it was impossible for any blockchain to be simultaneously a) decentralized, b) scalable, and c) secure.
Thanks to the rise of proof-of-stake blockchains like Casper, new models have emerged that enable a more scalable and secure architecture that no longer forces a compromise on decentralization.
Another trend facilitating these growing adoption rates is the rise of WebAssembly (WASM) as a baseline technology for newer blockchains, including Casper. WASM (created by the W3C) makes application development in blockchain environments far more accessible and interoperable for the millions of developers worldwide who specialize in languages like Java, JavaScript, C++, and Rust. Previously, any blockchain-based build required a high degree of specialized developer knowledge, which made it a much more challenging option for most organizations.
Casper is a permissionless, decentralized public blockchain based on WASM that was built explicitly to foster enterprise adoption of blockchain technology. Beyond its more accessible model, Casper is the first and only blockchain to offer native upgradable smart contracts. This means organizations have the option to securely and consistently update software code even after it is running on Casper, giving them the control and flexibility to use industry best practices, such as continuous integration and continuous deployment, which are already in use in their IT departments. Casper is also highly configurable and allows organizations to support public, private, and/or hybrid deployments.
Casper is also noteworthy for the presence of Casper Labs, a software development and professional services firm that supports organizations building on the Casper network. Unlike most blockchains, which follow a more traditional open-source project model, Casper Labs provides around-the-clock support and bespoke software development for enterprise organizations. Recently, Casper Labs helped patent management company IPwe execute the largest-ever blockchain deployment, featuring more than 25 million patents added as custom NFTs to the Casper blockchain.
Developers who want to start building on Casper can find a comprehensive series of tutorials here.
The Casper Association also recently announced a $25 million grant program to support projects and developers building on Casper. Interested participants can apply here.
Read More for the details.
Today, we are excited to announce the general availability of Dataplex data lineage — a fully managed Dataplex capability that helps you understand how data is sourced and transformed within the organization. Dataplex data lineage automatically tracks data movement across BigQuery, BigLake, Cloud Data Fusion (Preview), and Cloud Composer (Preview), eliminating operational hassles around manual curation of lineage metadata.
With rising data volume spread across data silos, it can be challenging for organizations to ensure users have a self-service mechanism to discover, understand and trust the data. Organizations constantly struggle with questions such as:
Is the data extracted from an authoritative source?
What is the impact if I drop this table?
The data in this table seems corrupted – where did this data come from, and when was it last refreshed?
How is sensitive information being moved or copied? Is it in adherence to data governance practices?
To answer the above questions, organizations need to track how data is sourced and transformed, which can be complex and requires significant effort.
Dataplex data lineage describes each lineage relationship by detailing what happened and when it happened in an interactive lineage graph, providing data observability.
Data analysts who want to know if a table originates from an authoritative source can now answer this in a self-service manner with a simple look-up of lineage for the concerned table — available in Dataplex and in BigQuery for in-context analysis.
Data engineers can reduce time to identify and resolve data issues through root cause analysis using the operational metadata trace asserting a lineage relationship. Data lineage also aids deterministic change management by providing the ability to evaluate the impact of a change and collaborate with the corresponding stakeholders to minimize any adverse impact.
Finally, data lineage provides a map of data movement which can become the foundation for data governance practice. It enables data stewards and owners to evaluate and enforce adherence to governance requirements, especially when tracking the movement of sensitive information.
Dataplex data lineage provides APIs for extensibility so that organizations can report lineage from various systems and have a single map of how data entries are related.
L’Oréal, the world’s largest cosmetics company, is on a mission to ‘create the beauty that moves the world.’ “Dataplex data lineage helps us understand how data moves across our organization,” said Sébastien Morand, Head of Data Engineering team, L’Oréal. “As a fully managed solution, it becomes the main entry point to diagnose data issues and evaluate the impact of a change or incident — providing insight on what happened and when it happened, including reference to the execution metadata. Directly integrated into our beauty tech data platform, data lineage helps us reduce data issues and also enables us to mitigate issues faster when they do happen.”
“At Wayfair, we treat data-as-a-product and are building a robust data platform that provides self-service access and compliance constructs,” said Vinit Rajopadhye, Associate Director on Data Infrastructure & Data Enablement at Wayfair. “We are excited about Dataplex data lineage as it helps our data consumers trust data based on where it originates and the transformations applied.”
Hurb is an online travel agency in Brazil with a mission to optimize travel through technology. “Hurb has a rapidly growing data platform, with new data assets created and registered daily to support business decision-making and Machine Learning models,” said Vinícius dos Santos Mello, Senior Data Engineer. “Thanks to Dataplex data lineage features, we have end-to-end data observability across data in BigQuery. We can proactively address schema changes, data quality issues, and asset depreciation that could otherwise negatively affect the business.”
You can get started with Dataplex data lineage by enabling the Data Lineage API on your project. You can learn more here.
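As an example of the self-service look-up described earlier, lineage can also be queried programmatically. The sketch below assumes the google-cloud-datacatalog-lineage Python client library and a hypothetical BigQuery table name; adjust the project, location, and fully qualified name to your environment:

```python
from google.cloud import datacatalog_lineage_v1

client = datacatalog_lineage_v1.LineageClient()

# Hypothetical project, location, and table. BigQuery tables are referenced
# by a fully qualified name of the form "bigquery:project.dataset.table".
parent = "projects/my-project/locations/us"
target = datacatalog_lineage_v1.EntityReference(
    fully_qualified_name="bigquery:my-project.sales.daily_revenue"
)

# List links where this table is the target, i.e. where its data came from.
request = datacatalog_lineage_v1.SearchLinksRequest(parent=parent, target=target)
for link in client.search_links(request=request):
    print(link.source.fully_qualified_name, "->", link.target.fully_qualified_name)
```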
Read More for the details.
Research supported by Google and conducted by Implement Consulting shows that digital technologies are critical to facilitating energy and emissions savings. Startups working on groundbreaking technologies for adaptation and climate solutions have an important role to play in accelerating climate action. At Google, we want to foster a thriving ecosystem for startups advancing sustainability via AI-powered technology, and connect them with mentorship opportunities, technical expertise, and cloud technology that will help them grow.
This is why today we’re excited to welcome the 14 teams selected to the inaugural Google for Startups Accelerator: Climate Change in Europe.
Out of the hundreds of amazing applications we received since announcing the program, we selected startups that are tackling problems from crop insurance, to clothing resale platforms, to emissions reporting with technology. These groundbreaking companies are headquartered across seven different countries, focused on multiple verticals, and come with a diverse group of founder and executive backgrounds.
What do they have in common? They all leverage cloud technologies like AI, geospatial data and analytics to help businesses, communities and people be more sustainable.
Climate change adaptation startups
Selected projects are using big data and AI to promote adaptation to the effects of climate change and preserve food security. Israeli AgroScout helps farmers monitor, detect, and report on crops and supply chains to ensure the quality of the produce and reduce carbon footprint and targeted chemical applications. It will be participating alongside Tel Aviv-based Albo Climate, which uses deep learning to map, measure, and monitor carbon sequestration to make it scalable. Also focused on agricultural ecosystems, Dutch Agcurate works with satellite-driven rural intelligence, offering crop assurance products to farmers and agri-retailers.
“Albo Climate’s vision is to adapt the mitigation potential of nature-based climate solutions by monitoring and forecasting carbon sequestration in vast ecosystems with high accuracy and transparency. The Google Accelerator is an incredible opportunity for Albo to bolster our go-to-market strategy, boost our proprietary AI models, and hone our UI interface. Integrating Google’s virtual machines will contribute to automating Albo’s system architecture and prediction process, allowing our clients to access the AI models and use them directly on the cloud, perform queries, and get their results quickly and securely,” said Dr Jacques Amselem, CEO, Albo Climate.
Helping manage natural resources more sustainably, Scottish Earth Blox leverages Google Earth Engine to provide accurate geospatial data to partners that need to assess forests or land cover, while Single.Earth, from Estonia, uses AI-based methods to evaluate and quantify forest health. Berlin-based Blok-Z traces, verifies, and matches renewable energy generation and consumption with blockchain.
“Our goal is to offer energy retailers a product differentiator that enables them to sell renewable energy with complete verification of its origin. This is an excellent opportunity to pick some of the best brains in tech and use their expertise to improve our user experience and platform’s capabilities,” said Selim Satici, CEO & Cofounder, Blokz UG. “We are especially interested in learning more about Google Cloud’s services and blockchain tools, which can help us improve our core infrastructure.”
Sustainable supply chain startups
Artificial intelligence is critical to setting up more sustainable supply chains. Headquartered in Amsterdam, Dayrize, which enables companies with large product ranges to rapidly conduct ESG impact assessments of consumer products, may find similar technical challenges to Spanish BCome, which empowers textile businesses to build greener value chains. In the fashion vertical as well, Belfast-based RESPONSIBLE provides a solution to avoid over-consumption, allowing consumers to buy and trade pre-loved streetwear.
“Our ultimate goal is to grow circularity in the fashion industry and keep garments out of landfills. We hope Google’s AI technology and expertise in e-commerce/shopping can help us deliver an MVP of an industry-changing product within the timeframe of the program,” said Mark Dowds, Co-founder & CEO, Responsible.
Environmental impact measurement startups
Startups in this cohort are also developing tools to assess environmental impact across different sectors. Danish Legacy, working to simplify the carbon accounting and impact assessments of real estate portfolios, might find opportunities to collaborate with companies like Mortar IO, based in London, which is helping customers identify low-risk, low-cost routes to sustainable retrofitting. The UK’s ESGgen allows businesses to audit their environmental, social and governance measures, while German ecolytiq helps financial institutions provide their customers with environmental reporting. Finally, Germany-based eevie rewards employees who participate in their company’s decarbonization campaigns.
The cohort will meet for a bootcamp this month in Munich, working closely with Google teams and other subject matter experts to address product, engineering, business development, and funding. Much of the program will focus on providing startups with:
Tailored training support from Google mentors and industry experts, including in-person and virtual activities, one-on-one mentoring, and group learning sessions.
Dedicated Google Cloud technical expertise on the innovations that are helping solve some of climate change’s toughest challenges — cloud computing, AI, machine learning and geospatial analysis — to help these early stage participants accelerate their work.
Exposure to potential partners, venture capital firms and investors interested in climate change solutions.
Product credits via the Google for Startups Cloud Program, with the first year of Google Cloud and Firebase usage covered with credits up to $100,000, and an additional 20% of costs covered up to $100,000 in year two.1
After 12 weeks, the program will culminate in a demo day on June 1, 2023. You can find more about the participants on the Google for Startups website.
1. Subject to terms and conditions
Read More for the details.
Accelerated Connections is a new product that enhances Accelerated Networking-enabled vNICs, enabling customers to increase connections per second and improve performance in scenarios with multiple active connections.
Read More for the details.
Today, AWS announced the opening of a new AWS Direct Connect location in Auckland, New Zealand. By connecting your network to AWS at the new Auckland location, you gain private, direct access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones.
Read More for the details.
Amazon Route 53 Resolver endpoints now support IPv6, allowing you to forward traffic to, and resolve traffic from, on-premises Domain Name System (DNS) servers with IPv6 addresses.
Read More for the details.
We are excited to announce support for customer-level metrics when running interactive Spark workloads via managed endpoints. Amazon EMR on EKS enables customers to run open-source big data frameworks such as Apache Spark on Amazon EKS. Amazon EMR on EKS customers can set up and use a managed endpoint (available in preview) to run interactive workloads using integrated development environments (IDEs) such as EMR Studio.
Read More for the details.
Amazon Elastic Kubernetes Service (EKS) customers can now create and manage clusters in the Asia Pacific (Melbourne) Region.
Read More for the details.
Starting today, Amazon Virtual Private Cloud (VPC) customers can create their own Prefix Lists in two additional AWS Regions: Europe (Zurich) and Middle East (UAE).
Read More for the details.
Amazon SageMaker Data Wrangler reduces the time it takes to aggregate and prepare data for machine learning (ML) from weeks to minutes in Amazon SageMaker Studio. Data Wrangler enables you to access data from a wide variety of popular sources (Amazon S3, Amazon Athena, Amazon Redshift, Amazon EMR Presto, Snowflake) and over 40 other third-party sources. Starting today, you can connect to Amazon EMR Hive as a big data query engine to bring in very large datasets for ML.
Read More for the details.
Amazon Simple Email Service (SES) announces the addition of two new metrics to its Email Receiving service without additional charges, providing customers with greater visibility into message processing workflows. The move comes in response to customer frustration over the limited set of metrics available, which made it difficult for customers to test their setup and identify potential issues.
Read More for the details.
Today, we are excited to announce Systems Manager Favorites, a quick way for customers to find and execute their most important and frequently used documents and runbooks. Documents and runbooks define the actions that Systems Manager performs on your managed instances and other AWS resources. Now, you can select up to 20 of your favorite documents or runbooks per category that will appear in a centralized favorites tab in the Systems Manager Automation or Documents consoles.
Read More for the details.
AWS License Manager extends support for delegated administrator to Linux subscriptions discovery and governance capabilities. Using delegated administrator, your management account can give access to another account in your organization to track Linux subscriptions on AWS. Delegated administrator gives you the flexibility to separate subscription management from billing and other central management tasks.
Read More for the details.
Amazon Connect now provides a new API to programmatically access the trailing 14 days of historical agent and contact metrics data. This new API extends the capabilities of the existing GetMetricData API, which provides historical metrics data only for the last 24 hours, by adding new metrics (e.g., number of contacts disconnected or callbacks attempted) and the ability to filter metrics with more granularity. For example, using the GetMetricDataV2 API, businesses can now build custom analytics dashboards to understand how many contacts were disconnected by an agent or because the customer hung up.
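As a hedged illustration, a boto3 call to the new API might look like the sketch below; the instance ARN, queue ID, and metric name are placeholders, and the full set of supported metrics and filter keys is documented in the Amazon Connect API reference:

```python
from datetime import datetime, timedelta

import boto3

connect = boto3.client("connect")

response = connect.get_metric_data_v2(
    ResourceArn="arn:aws:connect:us-east-1:123456789012:instance/EXAMPLE",  # placeholder
    StartTime=datetime.utcnow() - timedelta(days=14),  # trailing 14 days of history
    EndTime=datetime.utcnow(),
    Filters=[{"FilterKey": "QUEUE", "FilterValues": ["example-queue-id"]}],
    Groupings=["QUEUE"],
    Metrics=[{"Name": "CONTACTS_HANDLED"}],
)

for result in response["MetricResults"]:
    print(result["Dimensions"], result["Collections"])
```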
Read More for the details.