Azure – Public preview: Intra-account container copy for Azure Cosmos DB
Intra-account container copy helps you create offline copies of containers within Azure Cosmos DB SQL API and Cassandra API accounts.
Read More for the details.
Develop your applications with more flexibility with retryable writes in Azure Cosmos DB for MongoDB.
Read More for the details.
Assess databases, get right-sized recommendations for Azure migration targets, and migrate databases offline from on-premises SQL Server to Azure SQL Database.
Read More for the details.
New public preview features include reading Delta Lakes in fewer steps, debugging and monitoring training jobs, and performing data wrangling.
Read More for the details.
New GA features include the ability to automate auto-shutdown/auto-start schedules, configure and customize a compute instance, seamlessly build NLP/vision models, and assess AI systems.
Read More for the details.
Multivariate Anomaly Detection will allow you to evaluate multiple signals and the correlations between them to find sudden changes in data patterns.
Read More for the details.
Deploy Azure Database for PostgreSQL – Flexible Server workloads in two new China regions.
Read More for the details.
Starting today, you can request EC2 instances based on your workload’s network bandwidth requirements through attribute-based instance type selection.
Read More for the details.
Starting today, you can configure the set of instance types used when requesting EC2 capacity with attribute-based instance type selection.
Read More for the details.
Customers in the AWS Asia Pacific (Jakarta) Region can now use AWS Transfer Family.
Read More for the details.
Today, AWS announced the availability of next-generation General Purpose gp3 storage volumes for Amazon Relational Database Service (Amazon RDS). Amazon RDS gp3 volumes give you the flexibility to provision storage performance independently of storage capacity, paying only for the resources you need. Every gp3 volume lets you select from 20 GiB to 64 TiB of storage capacity, with a baseline storage performance of 3,000 IOPS included in the price of storage. For workloads that need even more performance, you can scale up to 64,000 IOPS for an additional cost.
Read More for the details.
Amazon QuickSight now supports monitoring of SPICE consumption by sending metrics to Amazon CloudWatch. QuickSight developers and administrators can use these metrics to observe SPICE consumption and proactively detect when a QuickSight account is approaching its SPICE capacity limit, which could otherwise result in failed dataset ingestions. This allows them to provide their readers with a consistent and uninterrupted experience on QuickSight.
Read More for the details.
Millions of organizations use Chrome every day to keep their businesses productive and secure. However, many companies aren’t actively managing the browsers they deploy. Managing the browser is vital because it allows you to customize it to the needs of your business and helps protect against security risks, such as data loss and shadow IT.
Chrome Browser Cloud Management is our no-cost, cloud-based management tool that allows you to manage Chrome across operating systems and devices. Hundreds of policies and controls can be configured for Chrome, right from the cloud. It can help IT and security teams with:
Efficiency: Manage with ease across platforms on and off network
Security: Secure by default with even more control
Visibility: Use insights to drive IT decision making
Flexibility: Customize to meet company needs
For this post, let’s focus on the most recent capabilities and improvements we’ve made to Chrome Browser Cloud Management:
We recently launched Chrome Guide modules to help new admins get started with Chrome Browser Cloud Management – and the best part – they’re right in the Google Admin console! You can get step-by-step instructions on enrolling your first browser, setting policies and viewing reports. If you are just getting started, these are especially helpful. Stay tuned for more guides on advanced Chrome Browser Cloud Management workflows coming soon!
Now you can get even more information about specific extensions and apps that are deployed across your organization in one easy place. Some key details include all the public information about the extension, a breakdown of the extension usage, what versions are installed, the requested permissions, and the websites an extension is permitted to run on.
With the move to a new extension platform (Manifest v3) fast approaching, we also added manifest versions of extensions to this view. We’ll also be adding a warning icon to the Apps & Extension Usage Report for extensions that are still on Manifest v2 in early 2023 to support enterprises through this transition. This will provide additional visibility to IT teams to have a change management plan in place for extensions that are still Manifest v2.
Security is more top of mind than ever before – and Chrome Browser Cloud Management supports how your teams approach security. The security investigation tool in the Google Admin console allows you to see all Chrome events in an Audit log format and set up custom alerts triggered by events. We’ve recently allowed admins to configure alerts for Extension Requests, and more events are in development, including extension installs and inactive browser deletions.
A newer capability we launched this year enables organizations to integrate other security solutions to get additional protections and insights on top of Chrome. This includes real-time security insights and reporting options, where enterprises can see critical security events and information from Chrome, like password breaches, unsafe visits and more, within Chrome Browser Cloud Management.
These are integrated with a variety of security solutions, including Google’s BeyondCorp Enterprise, Google Chronicle, and Google Cloud Pub/Sub, as well as leading security partners such as Splunk, Palo Alto Networks, and CrowdStrike, to make this data available where security teams need it.
One of the biggest benefits of Chrome Browser Cloud Management is all of the reports that are readily available to IT teams. Some customer favorites are Version Report, Apps & Extensions report and the managed browser list. We launched CSV export for these reports, which allows you to export your browser deployment data and make more data-driven decisions for your organization. We just increased the entry pagination limit in these reports to 150,000, which is extremely helpful if you have a lot of your browser data living in Chrome Browser Cloud Management.
To see all of these capabilities and more in action and get a sneak peek at what’s coming next, register for our Chrome Insider virtual event, where Chrome experts will share tips and demos.
To learn more about how you can use Chrome Browser Cloud Management in your organization and sign up to get started, visit our website. And if you’re wondering if it’s the right solution for your business, we have this handy assessment tool that recommends management options for your team.
Read More for the details.
Addresses are necessary to help find people and places, deliver goods, and in some cases, even to open bank accounts. Addresses with typos or misspellings can be difficult to spot. On the surface, addresses can seem simple and straightforward because we see and use them almost every day. But the reality is, addresses that are not corrected and formatted to local standards can lead to poor user experiences, failed deliveries, and costly, extensive customer support.
That’s why we’re announcing the general availability release of Address Validation, a new API that can help improve user experiences by removing friction at account sign-up or checkout, and save time and money for your business by helping reduce the impact of invalid addresses on your operations.
How Address Validation works
Address Validation helps developers detect inaccurate addresses by identifying missing or unconfirmed address components. Using Google Maps Platform’s Places data and knowledge of localized address formats, the API then standardizes the input, providing everything from typo corrections to street name completion to appropriate locality-specific formatting, and more.
Address Validation also returns valuable metadata about the processed address, such as individual address components with an accuracy confirmation level, as well as Plus Code, geocode, and Place ID for the address. In certain geographies, Address Validation can also differentiate a residential address from a commercial address, which is important when it comes to delivering packages during business hours. Address Validation aggregates data from multiple sources, including postal service data. Address Validation API is CASS Certified™ from USPS®, which means developers can match against U.S. Postal Service® data.
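To make the request and response shape concrete, here is a hedged sketch of calling the API over REST with Python’s requests library. The endpoint and field names follow our reading of the public v1 reference, and the API key placeholder, sample address, and field accesses are illustrative assumptions to verify against the documentation.

```python
# A hedged sketch: validate a deliberately misspelled US address.
# Endpoint and field names are assumptions based on the public v1 API reference.
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
URL = f"https://addressvalidation.googleapis.com/v1:validateAddress?key={API_KEY}"

payload = {
    "address": {
        "regionCode": "US",
        # "Mountan View" is misspelled on purpose to show typo correction.
        "addressLines": ["1600 Amphitheatre Pkwy", "Mountan View, CA 94043"],
    }
}

resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()
result = resp.json()["result"]

# The verdict summarises how complete/confirmed the address is; the geocode
# section carries the Place ID and Plus Code mentioned above.
print(result["verdict"])
print(result["address"]["formattedAddress"])
print(result["geocode"].get("placeId"))
```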
Explore the demo and see how the API responds to common address mistakes.
Representative experience of using Address Validation at checkout and order confirmation
Benefits of using Address Validation
Address Validation can provide benefits to a number of different use cases across industries. Here are a few examples:
Retail/e-commerce companies can reduce friction at checkout by offering shoppers an easy way to correct their addresses. Doing so can help reduce failed or mis-deliveries, or other costly complications like canceled orders and chargebacks.
Transportation & logistics companies can assess address deliverability upon the receipt of an order and help ensure packages reach the right destinations with more precise address components, such as apartment numbers. Drivers can save time and better predict delivery challenges.
Financial Services companies can use proof of address to help authenticate new account holders. By verifying that a customer’s address exists at account creation, companies can potentially detect fraudulent sign-ups.
Along with other products like Geocoding and Place Autocomplete, Address Validation makes Google Maps Platform a more comprehensive address and delivery point validation service.
Customers who have already implemented Address Validation have seen positive results. Slerp, an online ordering engine, enables hospitality brands to transact directly with customers from their own websites. Addresses are critical for Slerp’s business because they support a range of order types: click and collect, on-demand delivery, and nationwide delivery.
With Address Validation, Google Maps Platform’s knowledge of the real world helps ensure addresses are as accurate as possible, so you can focus on building differentiating experiences and improving operational efficiencies for your apps, services, and business processes. To learn more about Address Validation and to get started, check out our website, demo, and tutorial docs.
For more information on Google Maps Platform, visit our website.
1. Google Maps Platform is a non-exclusive Licensee of the United States Postal Service®.
2. The following trademarks are owned by the United States Postal Service® and used with permission: United States Postal Service®, USPS®, CASS™, CASS Certified™.
Read More for the details.
Organizations are increasingly using machine learning pipelines to streamline and scale their ML workflows. However, managing these pipelines can be challenging when an organization has multiple ML projects and pipelines at different stages of development. To solve this, we need a way to build upon DevOps concepts and apply them to this ML-specific problem. In this post, we’ll share some best practices on how to manage the codebase for your ML pipelines.
The guidance we’re sharing is based on our work with top Google Cloud customers and partners. We’ll provide a few best practices based on pipeline implementation patterns we’ve seen, but we recognize that every company’s solution will depend on many distinct factors. As a result, we don’t aim to provide a prescriptive approach. With that, let’s dive in and see how you can manage the development lifecycle of your ML pipelines.
For any software system, developers need to be able to experiment and iterate on their code, while maintaining the stability of the production system. Using DevOps best practices, systems should be rigorously tested before being deployed, and deployments automated as much as possible. ML pipelines are no exception.
The typical process of executing an ML pipeline in Vertex AI looks like the following:
Write your pipeline code in Python, using either the Kubeflow Pipelines or TFX DSL (domain-specific language)
Compile your pipeline definition to JSON format using the KFP or TFX library
Submit your compiled pipeline definition to the Vertex AI API to be executed immediately
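To make the first two steps concrete, here is a minimal, hypothetical sketch of a pipeline definition and its compilation using the KFP SDK (v2-style imports); submission is illustrated under Step 3 below. The component, pipeline, and file names are invented for this example.

```python
# A minimal sketch of writing and compiling a KFP pipeline for Vertex AI.
# Names are hypothetical; real pipelines chain many components together.
from kfp import compiler, dsl


@dsl.component
def say_hello(name: str) -> str:
    # Lightweight Python component; runs in its own container at execution time.
    return f"Hello, {name}!"


@dsl.pipeline(name="hello-pipeline")
def hello_pipeline(recipient: str = "world"):
    # A single-step pipeline; outputs of one component can feed the next.
    say_hello(name=recipient)


if __name__ == "__main__":
    # Compile the Python definition into a JSON pipeline spec (step 2).
    compiler.Compiler().compile(
        pipeline_func=hello_pipeline,
        package_path="hello_pipeline.json",
    )
```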
How can we effectively package up these steps into a reliable production system, while giving ML practitioners the capabilities they need to experiment and iterate on their pipeline development?
Step 1: Writing pipeline code
As with any software system, you will want to use a version control system (such as git) to manage your source code. There are a couple of other aspects you may like to consider:
Code reuse
Kubeflow Pipelines are inherently modular, and you can use this to your advantage by reusing pipeline components to accelerate the development of your ML pipelines. Be sure to check out the existing components in the Google Cloud library and the KFP library.
If you create custom KFP components, be sure to share them with your organization, perhaps by moving them to another repository where they can be versioned and referenced easily. Or, even better, contribute them to the open source community! Both the Google Cloud libraries and the Kubeflow Pipelines project welcome contributions of new or improved pipeline components.
Testing
As for any production system, you should set up automated testing to give you confidence in your system, particularly when you come to make changes later. Run unit tests for your custom components using a CI pipeline whenever you open a Pull Request (PR). Running an end-to-end test of your ML pipeline can be very time-consuming, so we don’t recommend running these tests every time you open a PR (or push a subsequent commit to an open PR). Instead, require manual approval to run them on an open PR, or alternatively run them only when your code is merged and deployed into a dedicated test environment.
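As a small illustration, here is a hedged sketch of the kind of unit test a CI pipeline might run on every PR. The helper function is hypothetical; keeping component logic in plain Python functions makes it straightforward to test without executing the pipeline itself.

```python
# test_components.py - a minimal sketch of a unit test run in CI (e.g. with pytest).
def clamp_score(score: float) -> float:
    """Plain Python logic that a KFP component wrapper would call."""
    return max(0.0, min(1.0, score))


def test_clamp_score_stays_in_unit_interval():
    # The score should always be clamped to [0, 1].
    assert clamp_score(1.5) == 1.0
    assert clamp_score(-0.2) == 0.0
    assert clamp_score(0.42) == 0.42
```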
Step 2: Compiling your pipeline
As you would with other software systems, use a CI/CD pipeline (in Google Cloud Build, for example) to compile your ML pipelines, using the KFP or TFX library as appropriate. Once you have compiled your ML pipelines, you should publish those compiled pipelines into your environment (test/production). Since the Vertex AI SDK allows you to reference compiled pipelines that are stored in Google Cloud Storage (GCS), this is a great place to publish your compiled pipelines at the end of your CD pipeline. Alternatively, publish your compiled pipeline to Artifact Registry as a Vertex AI Pipeline template.
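As a concrete illustration of the publish step, the final stage of a CD run might copy the compiled JSON into a GCS bucket with the Cloud Storage client library; the project, bucket, and object paths below are hypothetical.

```python
# A hedged sketch: publish a compiled pipeline definition to GCS at the end
# of a CD pipeline. Project, bucket, and paths are hypothetical placeholders.
from google.cloud import storage

client = storage.Client(project="my-project")
bucket = client.bucket("my-pipelines-bucket")

# Upload the JSON produced by the compile step so it can be referenced later
# by the "triggering" code via its gs:// URI.
blob = bucket.blob("pipelines/hello_pipeline.json")
blob.upload_from_filename("hello_pipeline.json")
print(f"Published to gs://{bucket.name}/{blob.name}")
```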
It’s also worth compiling your ML pipelines as part of your Pull Request checks (CI) – they are quick to compile so it’s an easy way to check for any syntax errors in your pipelines.
Step 3: Submit your compiled pipeline to the Vertex AI API
To submit your ML pipeline to be executed by Vertex, you will need to use the Google Cloud Vertex AI SDK (Python). Since you want to execute ML pipelines that have been compiled as part of your CI/CD, you will need to separate your Python ML pipeline and compilation code from the “triggering” code that uses the Vertex AI SDK.
You may like your “triggering” code to run on a fixed schedule (e.g. if you want to retrain your ML model every week), or perhaps instead in response to certain events (e.g. on the arrival of new data into BigQuery). Both Cloud Build and Cloud Functions let you do this, each with its own benefits. You may prefer Cloud Build if you are already using it for your CI/CD pipelines; however, you will need to build a container yourself containing your “triggering” code. With a Cloud Function, you can just deploy the code itself and GCP will take care of packaging it into a Cloud Function.
Both can be triggered using a fixed schedule (Cloud Scheduler + Pub/Sub), or triggered from a Pub/Sub event. Cloud Build can provide additional flexibility for event-based triggers, as you can interpret the Pub/Sub events using variable substitution in your Cloud Build triggers, rather than needing to interpret the Pub/Sub events in your Python code. In this way you can set up different Cloud Build triggers to kick off your ML pipeline in response to different events using the same Python code.
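As an illustration of the Cloud Function approach, here is a hedged sketch of a Pub/Sub-triggered function (1st gen signature) that submits a previously compiled pipeline with the Vertex AI SDK. The project, region, bucket paths, and pipeline name are hypothetical placeholders.

```python
# main.py - a hedged sketch of "triggering" code deployed as a Pub/Sub-triggered
# Cloud Function. All resource names are hypothetical placeholders.
import base64
import json

from google.cloud import aiplatform

PROJECT = "my-project"
REGION = "europe-west2"
TEMPLATE_PATH = "gs://my-pipelines-bucket/pipelines/hello_pipeline.json"
PIPELINE_ROOT = "gs://my-pipelines-bucket/pipeline-root"


def trigger_pipeline(event, context):
    """Entry point: submit the compiled pipeline when a Pub/Sub message arrives."""
    # Pub/Sub message data arrives base64-encoded; optional parameters can be
    # passed in the message payload as JSON.
    payload = {}
    if event.get("data"):
        payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))

    aiplatform.init(project=PROJECT, location=REGION)
    job = aiplatform.PipelineJob(
        display_name="hello-pipeline",
        template_path=TEMPLATE_PATH,
        pipeline_root=PIPELINE_ROOT,
        parameter_values=payload.get("parameter_values", {}),
    )
    # submit() returns immediately; use run() instead to block until completion.
    job.submit()
```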
If you just want to schedule your Vertex AI pipelines, you can also use Datatonic’s open-source Terraform module to create Cloud Scheduler jobs that don’t require the use of a Cloud Function or other “triggering” code.
In partnership with Google’s Vertex AI product team, Datatonic has developed an open-source template for taking your AI use cases to production with Vertex AI Pipelines. It incorporates:
Example ML pipelines for training and batch scoring using XGBoost and TensorFlow frameworks (more frameworks to follow!)
CI/CD pipelines (using Google Cloud Build) for running unit tests of KFP components, end-to-end pipeline tests, compiling and publishing ML pipelines into your environment
Pipeline triggering code that can be easily deployed as a Google Cloud Function
Example code for an Infrastructure-as-Code deployment using Terraform
Make scripts to help accelerate the development cycle
The template can act as the starting point for your codebase to take a new ML use case from POC to production. Learn how Vodafone is using the templates to help slash the time from POC to production from 5 months to 4 weeks for their hundreds of Data Scientists across 13+ countries. To get started, check out the repository on GitHub and follow the instructions in the README.
If you’re new to Vertex AI and would like to learn more about it, check out the following resources to get started:
Vertex AI Pipelines documentation
Intro to Vertex AI Pipelines codelab
Vertex AI Pipelines & ML Metadata codelab
Finally, we’d love your feedback. For feedback on Vertex AI, check out the Vertex AI support page. For feedback on the pipeline templates, please file an issue in the GitHub repository. And if you’ve got comments on this blog post, feel free to reach out to us.
Special thanks to Sara Robinson for sharing this great opportunity.
Read More for the details.
Today, we are announcing the general availability of AWS Wavelength on the Vodafone 4G/5G network in Manchester, United Kingdom. Wavelength Zones are now available in two locations in the U.K., including the previously announced Wavelength Zone in London.
Read More for the details.
In October, we announced the opening of a new Google Cloud region in Israel. The region, me-west1, joins our global network of cloud regions delivering high-performance, low-latency access to cloud services for customers of all sizes and across industries.
As we support customers in this new region, we want to help provide the confidence that when you use our services, you can have control, transparency, and are able to support your compliance and residency requirements. Today, at the Israel Cloud Summit, we are excited to announce the public Preview of Assured Workloads for Israel to help address these needs.
Assured Workloads is a Google Cloud service that helps customers create and maintain controlled environments that accelerate running secure and compliant workloads, including helping enforcement of data residency, administrative and personnel controls, and managing encryption keys.
The Preview of Assured Workloads for Israel provides:
Data residency in our Israel Cloud region
Cryptographic control over data, including customer-managed encryption keys. Data encrypted with keys stored within a region can only be decrypted in that region
Service usage restrictions to centrally administer which Google Cloud products are available within the Assured Workloads environment
Before Assured Workloads, customers had to manually define, create, and monitor control requirements across their environments. Customers had to continually audit environments to ensure that controls were not altered, bypassed, or tampered with. Concerns remained about missed controls or new projects created outside the maintained compliance boundary. Assured Workloads can help address these concerns by automating the deployment, enforcement, and monitoring of key controls, and can be configured in just a few clicks. Here’s how:
Assured Workloads functions at the folder level of an organization, allowing specific controls to be applied and enforced selectively for cloud workloads to help support compliance requirements. The first step in creating an Assured Workloads folder is to choose where data will be stored:
Selecting the Israel option provides access to the Israel Regions and Support control, which allows customers to restrict storage of their data to the Israel region:
Controls are applied at the folder level, allowing you to apply them selectively, only for workloads where they are needed. Any subfolders and projects within the folder that is created will inherit the defined controls.
The public preview of Assured Workloads for Israel is now open. Interested customers can fill out the Assured Workloads for Israel preview access form. You can also read about Assured Workloads in our documentation and learn more by watching our Assured Workloads video walkthrough series.
Read More for the details.
You can now register domain names and auto-configure the Domain Name System (DNS) on Amazon Lightsail. Amazon Lightsail is the easiest way to get started with AWS for users who need a secure, high-performance, and reliable virtual private server (VPS) solution with a simple management interface and predictable pricing. With the addition of domain registration, Lightsail users are able to create a unique online address for their website or web application to establish their own personal or business identity on the internet. With domain registration, users also get Lightsail’s DNS management functionality.
Read More for the details.
Starting today, AWS Firewall Manager supports the Import Existing Network Firewall feature, which enables customers to discover existing AWS Network Firewalls and bring them under the central management of AWS Firewall Manager. With this feature, you can see the security coverage provided by existing firewalls across your AWS organization and manage those firewalls without having to instantiate new ones.
Read More for the details.
You can now accelerate repeat queries in Amazon Athena with Query Result Reuse, a new caching feature released today. Repeat queries are SQL queries submitted within a short period of time that produce the same results as one or more previously run queries. In use cases like business intelligence, where interactive analysis in a dashboard can cause multiple identical queries to be run, repeat queries can increase time to insight, as each query needs time to read and process data before returning results to the user.
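As a hedged illustration, enabling Query Result Reuse when starting a query via the AWS SDK for Python (boto3) might look like the sketch below; the database, output location, and reuse window are hypothetical, and the parameter shape should be checked against the Athena API reference.

```python
# A hedged sketch: start an Athena query with Query Result Reuse enabled.
# Database, output location, and max age are hypothetical placeholders.
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT region, SUM(sales) FROM orders GROUP BY region",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    # Reuse results of an identical query run within the last 60 minutes.
    ResultReuseConfiguration={
        "ResultReuseByAgeConfiguration": {"Enabled": True, "MaxAgeInMinutes": 60}
    },
)
print(response["QueryExecutionId"])
```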
Read More for the details.