Azure – Public preview: Azure Dedicated Host restart
Azure Dedicated Host restart is now in public preview.
Read More for the details.
Serverless SQL for Azure Databricks provides instant compute to users for their BI and SQL workloads without waiting for clusters to start up or scale out.
Read More for the details.
Update management center is the next iteration of the Azure Automation Update Management solution. It works out of the box, with zero onboarding steps and has no dependency on Azure Automation and Log Analytics.
Read More for the details.
Introducing the migration capability to move existing replications from classic to modernized experience for disaster recovery of VMware virtual machines, enabled using Azure Site Recovery.
Read More for the details.
Digital tools offered by cloud computing are fueling transformation around the world, including in Asia Pacific. In fact, IDC expects that total spending on cloud services in Asia Pacific (excluding Japan) will reach 282 billion USD by 2025.1 To meet growing demand for cloud services in Asia Pacific, we are excited to announce our plans to bring three new Google Cloud regions to Malaysia, Thailand, and New Zealand — on top of six other regions that we previously announced are coming to Berlin, Dammam, Doha, Mexico, Tel Aviv, and Turin.
When they launch, these new regions will join our 34 cloud regions currently in operation around the world — 11 of which are located in Asia Pacific — delivering high-performance services running on the cleanest cloud in the industry. Enterprises across industries, startups, and public sector organizations across Asia Pacific will benefit from key controls that enable them to maintain low latency and the highest security, data residency, and compliance standards, including specific data storage requirements.
“The new Google Cloud regions will help to address organizations’ increasing needs in the area of digital sovereignty and enable more opportunities for digital transformation and innovation in Asia Pacific. With this announcement, Google Cloud is providing customers with more choices in accessing capabilities from local cloud regions while aiding their journeys to hybrid and multi-cloud environments,” said Daphne Chung, Research Director, Cloud Services and Software Research, IDC Asia/Pacific.
From retail and media & entertainment to financial services and public sector, leading organizations come to Google Cloud as their trusted innovation partner. The new Google Cloud regions in Malaysia, Thailand, and New Zealand will help our customers continue to enable growth and solve their most critical business problems. We will work with our customers to ensure the cloud region fits their evolving needs.
“Kami was born out of the digital native era, where in order to scale globally we needed a partner like Google Cloud who could support us on our ongoing innovation journey. We have since delivered an engaging and dependable experience for millions of teachers and students around the world, so it’s incredibly exciting to hear about the new region coming to New Zealand. This investment from Google Cloud will enable us to deliver services with lower latency to our Kiwi users, which will further elevate and optimize our free premium offering to all New Zealand schools.” – Jordan Thoms, Chief Technology Officer, Kami
“Our customers are at the heart of our business, and helping Kiwis find what they are looking for, faster than ever before, is our key priority. Our collaboration with Google Cloud has been pivotal in ensuring the stability and resilience of our infrastructure, allowing us to deliver world-class experiences to the 650,000 Kiwis that visit our site every day. We welcome Google Cloud’s investment in New Zealand, and are looking forward to more opportunities to partner closely on our technology transformation journey.” – Anders Skoe, CEO, Trade Me
“Digital transformation plays a key role in helping Vodafone deliver better customer experiences and connect all Kiwis. We welcome Google Cloud’s investment in New Zealand and look forward to working together to offer more enriched experiences for local businesses, and the communities we serve,” said Jason Paris, CEO, Vodafone New Zealand
“Our journey with Google Cloud spans almost half a decade, with our most recent partnership and co-innovation initiatives paving the way for AirAsia and Capital A to disrupt the digital platform arena in the same vein as we did airlines. The announcement of a new cloud region that’s coming to Malaysia – and Thailand too if I may add – showcases Google Cloud’s continuous desire to expand its in-region capabilities to complement and support our aspiration of establishing the airasia Super App at the center of our e-commerce, logistics and fintech ecosystem, while enriching the local community and giving all 700 million people in Asean inclusivity, accessibility, and value. I couldn’t be more excited about this massive milestone and the new possibilities that Google Cloud’s growing network of cloud regions will create for us, our peers, and the common man.” – Tony Fernandes, CEO, Capital A
“Google Cloud’s world-class cloud-based analytics and artificial intelligence (AI) tools have enabled Media Prima to embed a digital DNA across our organization, deliver trusted and real-time news updates during peak periods when people need them the most, and implement whole new engagement models like content commerce, thereby allowing us to diversify our revenue streams and remain at the forefront of an industry in transition. By allowing us to place our digital infrastructure and applications even closer to our audiences, this cloud region will supercharge data-driven content production and distribution, and our ability to enrich the lives of Malaysians by informing, entertaining, and engaging them through new and innovative mediums.” – Rafiq Razali, Group Managing Director, Media Prima
“Google Cloud’s global network has been playing an integral role in Krungthai Bank’s adoption of advanced data analytics, cybersecurity, AI, and open banking capabilities to earn and retain the trust of the 40 million Thais who use our digital services to meet their daily financing needs. This new cloud region is a fundamentally important milestone that will help accelerate our continuous digital reinvention and sustainable growth strategy within the local regulatory framework, thereby allowing us to reach and serve Thais at all levels, including unbanked consumers and small business owners, no matter where they may be.” – Payong Srivanich, CEO, Krungthai Bank
“Having migrated our operations and applications onto Google Cloud’s superior data cloud infrastructure, we are already delivering more personalized services and experiences to small business owners, delivery riders, and consumers than ever before – and in a more cost efficient and sustainable way. With the new cloud region, we will be physically closer to the computing resources that Google Cloud has to offer, and able to access cloud technologies in a faster and even more complete way. This will help strengthen our mission: to build a homegrown ‘super app’ that assists smaller players and revitalizes the grassroots economy.” – Thana Thienachariya, Chairman of the Board, Purple Ventures Co., Ltd. (Robinhood)
These new cloud regions represent our ongoing commitment to supporting digital transformation across Asia Pacific. We continue to invest in expanding connectivity throughout the region by working with partners in the telecommunications industry to establish subsea cables — including Apricot, Echo, JGA South, INDIGO, and Topaz — and points of presence in major cities.
Learn more about our global cloud infrastructure, including new and upcoming regions.
1. Source: Asia/Pacific (Excluding Japan) Whole Cloud Forecast, 2020—2025, Doc # AP47756122, February 2022
Read More for the details.
Amazon Aurora Serverless v1 now supports PostgreSQL major version 11. PostgreSQL 11 includes improvements to partitioning, parallelism, and performance enhancements such as faster column additions with a non-null default.
Read More for the details.
Google is one of the largest identity providers on the Internet. Users rely on our identity systems to log into Google’s own offerings, as well as third-party apps and services. For our business customers, we provide administratively managed Google accounts that can be used to access Google Workspace, Google Cloud, and BeyondCorp Enterprise. Today we’re announcing that these organizational accounts support single sign-on (SSO) from multiple third-party identity providers (IdPs), available in general availability immediately. This allows customers to more easily access Google’s services using their existing identity systems.
Google has long provided customers with a choice of digital identity providers. For over a decade, we have supported SSO via the SAML protocol. Currently, Google Cloud customers can enable a single identity provider for their users with the SAML 2.0 protocol. This release significantly enhances our SSO capabilities by supporting multiple SAML-based identity providers instead of just one.
There are many reasons for customers to federate identity to multiple third-party identity providers. Often, organizations have multiple identity providers resulting from mergers and acquisitions, or due to differing IT strategies across corporate divisions and subsidiaries. Supporting multiple identity providers allows the users from these different organizations to all use Google Cloud without time-consuming and costly migrations.
Another increasingly common use case is data sovereignty. Companies that need to store the data of their employees in specific jurisdictional locations may need to use different identity providers.
Migrations are yet another common use case for supporting multiple identity providers. Organizations transitioning to new identity providers can now keep their old system active with the new one during the transition phase.
“The City of Los Angeles is launching a unified directory containing all of the city’s workforce. Known as “One Digital City,” the directory provides L.A. city systems with better security and a single source for authentication, authorization, and directory information,” said Nima Asgari, Google Team Manager for the City of Los Angeles. “As the second largest city in the United States, this directory comes at a critical time for hybrid teleworkers, allowing a standard collaboration platform based on Google Docs, Sheets, Slides, Forms, and Sites. From our experience, Google Cloud’s support of multiple identity providers has saved us from having to create a number of custom solutions that would require valuable staff time and infrastructure costs.”
To use these new identity federation capabilities, Google Cloud Administrators must first configure one or more identity provider profiles in the Google Cloud Admin console; we support up to 100 profiles. These profiles require information from your identity provider, including a sign-in URL and an X.509 certificate. Once these profiles have been created, they can then be assigned to the root level for your organization or to any organizational unit (OU). In addition, profiles can be assigned to a Group as an override for the OU. It is also possible to configure an Organizational Unit or group to sign in with Google usernames and passwords instead of a third-party IdP.
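As a rough sketch of that flow, the snippet below assumes the Cloud Identity API’s inboundSamlSsoProfiles and inboundSsoAssignments resources are used for this configuration; the customer ID, org unit ID, URLs, and field names are illustrative, so treat the documentation linked below as authoritative.

```python
from googleapiclient.discovery import build  # pip install google-api-python-client

# Assumed: Application Default Credentials with admin privileges.
ci = build("cloudidentity", "v1")

# 1. Create a SAML profile for one of the third-party IdPs (up to 100 profiles).
op = ci.inboundSamlSsoProfiles().create(body={
    "customer": "customers/C0123abcd",                # illustrative customer ID
    "displayName": "Subsidiary-A IdP",
    "idpConfig": {
        "entityId": "https://idp.example.com/entity",
        "singleSignOnServiceUri": "https://idp.example.com/sso",  # sign-in URL
    },
}).execute()
profile_name = op.get("response", {}).get("name", "")  # create() is a long-running operation

# 2. Upload the IdP's X.509 signing certificate for the profile.
ci.inboundSamlSsoProfiles().idpCredentials().add(
    parent=profile_name,
    body={"pemData": open("idp-certificate.pem").read()},
).execute()

# 3. Assign the profile to an organizational unit (a Group can override the OU).
ci.inboundSsoAssignments().create(body={
    "targetOrgUnit": "orgUnits/03abc123",             # illustrative OU ID
    "ssoMode": "SAML_SSO",
    "samlSsoInfo": {"inboundSamlSsoProfile": profile_name},
}).execute()
```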
For detailed information on configuring SSO with third-party IdPs, see the documentation here.
Currently, SSO supports the popular SAML 2.0 protocol. Later this year, we plan on adding support for OIDC. OIDC is becoming increasingly popular for both consumer and corporate SSO. By supporting OIDC, Google Cloud customers can choose which protocol is best for the needs of their organization. OIDC works alongside the multi-IdP support being released now, so administrators can configure IdPs using both SAML and OIDC.
Read More for the details.
“Any device, anytime, anywhere.” A cohort of CIOs within Airbus believed that the cloud, combined with new ways of working, could provide the foundation for this vision. Google Workspace and Google Cloud have played a pivotal role in helping Airbus realize this new path, transforming security, data management, and collaboration along the way.
In adopting Google Workspace and Google Cloud Airbus needed to ensure a robust, zero-trust security model that works across the entire organization, even when employees are working outside the office. Google Workspace provides a single login that enables secure access to data, based on device and user information, as well as contextual inputs that inform the security risk of each login and user action. Airbus admins also use Google Workspace to define trust rules that govern what information and files can be shared within and outside the organization, making it easy for employees to comply with best practices from anywhere.
Encryption also plays a central role in keeping information secure and private. By default, Google Workspace uses the latest cryptographic standards to encrypt all data at rest and in transit. Google Workspace also offers client-side encryption, which Airbus uses for their most sensitive projects, giving them authoritative control over their data as the sole owner of their encryption keys.
And to help protect the organization against hackers, Airbus has implemented sharding as a standard practice, splitting data across multiple servers and data centers. Because the company works with incredibly sensitive information, including government and military information, the ability to keep all of that data within European data centers continues to be a necessity.
Managing an enormous volume and variety of data, Airbus needs to ensure complete compliance with internal policies, as well as with external standards, like the General Data Protection Regulation (GDPR). Given this context, Airbus requires a solution that has strong built-in governance controls. Airbus also leverages the Drive labels feature, along with manual classification, to ensure that every file added to Google Drive is tagged and labeled correctly. In turn, these labels define the loss-prevention policies assigned to each file.
Before the pandemic, nearly every employee spent their workdays at an Airbus facility. When remote work became mandatory, the company made the pivot to Google Chat and Google Meet as an essential part of supporting real-time and asynchronous collaboration. Gmail also played a significant role in secure, anywhere-anytime communication, with its built-in anti-spam and anti-malware protections. Customizable filters let administrators protect against suspicious attachments, untrustworthy links, and countless other forms of malicious content. While Gmail blocks more than 99.9% of spam and phishing messages from ever reaching users’ inboxes, more advanced security measures like sandboxing can be put in place for specific use cases.
Google Cloud’s sustainability efforts are just as important to Airbus as data security. Google has been working to keep its climate footprint, and that of the customers who use its services, as low as possible: all of Google is currently carbon neutral, with a goal of running on carbon-free energy 24/7 at all of our data centers by 2030. And with our smarter, more efficient data centers, we’re already on that path, delivering more than six times the computing power for the same amount of electrical power as five years ago.
By combining Google Workspace with Google Cloud, Airbus has been able to live up to its vision of “any device, anytime, anywhere.” The new model is a core foundation for its evolving future of work. Not only has Airbus adopted a zero-trust model across the organization, it’s also transformed how data is secured, managed, and accessed by employees working across a broad range of locations. The new flexible approach has also led to changes in how collaboration happens. Google is deeply gratified to have supported Airbus as they implement these changes and we continue to be proud that we’re running the cleanest cloud in the industry.
Want to learn more about how Google Workspace helps businesses like yours do more while keeping your data secure? Read this whitepaper to find out about our zero-trust model and other ways we protect organizations.
Read More for the details.
For over seven years, Functions-as-a-Service has changed how developers create solutions and move toward a programmable cloud. Functions made it easy for developers to build highly scalable, easy-to-understand, loosely-coupled services. But as these services evolved, developers faced challenges such as cold starts, latency, connecting disparate sources, and managing costs. In response, we are evolving Cloud Functions to meet these demands, with a new generation of the service that offers increased compute power, granular controls, more event sources, and an improved developer experience.
Today, we are announcing the general availability of the 2nd generation of Cloud Functions, enabling a greater variety of workloads with more control than ever before. Since the initial public preview, we’ve equipped Cloud Functions 2nd gen with more powerful and efficient compute options, granular controls for faster rollbacks and new triggers from over 125 Google and third-party SaaS event sources using Eventarc. Best of all, you can start to use 2nd gen Cloud Functions for new workloads, while continuing to use your 1st gen Cloud Functions.
Let’s take a closer look at what you’ll find in Cloud Functions 2nd gen.
Organizations are choosing Cloud Functions for increasingly demanding and sophisticated workloads that require increased compute power and more granular controls. Functions built on Cloud Functions 2nd gen have the following features and characteristics:
Instance concurrency – Process up to 1000 concurrent requests with a single instance. Concurrency can drastically reduce cold starts, improve latency and lower cost.
Fast rollbacks, gradual rollouts – Quickly and safely roll back your function to any prior deployment or configure how traffic is routed across revisions. A new revision is created every time you deploy your function.
6x longer request processing – Run your 2nd gen HTTP-triggered Cloud Functions for up to one hour. This makes it easier to run longer request workloads such as processing large streams of data from Cloud Storage or BigQuery.
4x larger instances – Leverage up to 16GB of RAM and 4 vCPUs on 2nd gen Cloud Functions, allowing larger in-memory, compute-intensive and more parallel workloads. 32GB / 8 vCPU instances are in preview.
Pre-warmed instances – Configure a minimum number of instances that are always ready to serve, cutting cold starts and making sure your application’s bootstrap time doesn’t impact its performance.
More regions – 2nd gen Cloud Functions will be available in all 1st gen regions plus new regions including Finland (europe-north1) and Netherlands (europe-west4).
Extensibility and portability – By harnessing the power of Cloud Run’s scalable container platform, 2nd gen Cloud Functions let you move your function to Cloud Run or even to Kubernetes if your needs change.
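A rough sketch of a function that takes advantage of these settings, assuming the Functions Framework for Python; the deploy command in the header comment and its flag values are illustrative, not prescriptive.

```python
# Hypothetical deploy command, assuming the documented gen2 flags
# (--gen2, --concurrency, --memory, --cpu, --timeout, --min-instances):
#   gcloud functions deploy report-generator --gen2 --runtime=python311 \
#     --region=europe-west4 --trigger-http \
#     --memory=16GiB --cpu=4 --timeout=3600s --concurrency=1000 --min-instances=1
import threading

import functions_framework

# With instance concurrency, a single instance serves many requests at once,
# so module-level state is shared across requests and must be thread-safe.
_requests_handled = 0
_lock = threading.Lock()


@functions_framework.http
def report_generator(request):
    """HTTP-triggered gen2 function; long-running work fits the 60-minute limit."""
    global _requests_handled
    with _lock:
        _requests_handled += 1
        n = _requests_handled
    return f"Request {n} handled by this instance\n"
```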
As more workloads move to the cloud, you need to connect more event sources together. Using Eventarc, 2nd gen Cloud Functions supports 14x more event sources than 1st gen, supporting business-critical event-driven workloads.
Here are some highlights of events in 2nd gen Cloud Functions:
125+ Event sources: 2nd gen Cloud Functions can be triggered from a growing set of Google and third-party SaaS event sources (through Eventarc) and events from custom sources (by publishing to Pub/Sub directly).
Standards-based Event schema for consistent developer experience: These event-driven functions are able to make use of the industry-standard CloudEvents format. Having a common standards-based event schema for publishing and consuming events can dramatically simplify your event-handling code.
CMEK support: Eventarc supports customer-managed encryption keys, allowing you to encrypt your events using your own managed encryption keys that only you can access.
As Eventarc adds new event providers, they become available in 2nd gen Cloud Functions as well. Recently, Eventarc added Firebase Realtime Database, DataDog, Check Point CloudGuard, LaceWork and ForgeRock, as well as the Firebase Stripe / Revenuecat extensions as event sources.
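Whatever the provider, an event-driven function receives its events in the CloudEvents format. A minimal sketch, assuming an Eventarc trigger for Cloud Storage object finalization (the payload fields shown are specific to that provider and illustrative):

```python
import functions_framework


# Eventarc delivers the event as a CloudEvent: standard attributes (id, type,
# source, ...) plus a provider-specific data payload.
@functions_framework.cloud_event
def on_object_finalized(cloud_event):
    """Handles a Cloud Storage object-finalized event routed through Eventarc."""
    data = cloud_event.data or {}
    print(f"Event type:    {cloud_event['type']}")
    print(f"Event source:  {cloud_event['source']}")
    print(f"Bucket/object: {data.get('bucket')}/{data.get('name')}")
```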
You can use the same UI and gcloud commands for your 2nd gen functions as for 1st gen, helping you get started quickly from one place. That’s not to say we didn’t make some big improvements to the UI:
Eventarc subtask – Allows you to easily discover and configure how your function is triggered during creation.
Deployment tracker – Enables you to view the status of your deployment and spot any errors quickly if they occur during deployment.
Improved testing tab – Simplifies calling your function with sample payloads.
Customizable dashboard – Gives you important metrics at a glance, while accessibility updates improve the experience for screen readers.
As with 1st gen, you can drastically speed up development time by using our open source Functions Framework to develop your functions locally.
2nd gen Cloud Functions allows developers to connect anything from anywhere to get important work done. This example shows an end-to-end architecture for an event-driven solution that uses new features in 2nd gen Cloud Functions and Eventarc.
It starts with identifying the data sources to which you want to programmatically respond. These can be any of the 125+ Google Cloud or third-party sources supported by Eventarc. Then you’re able to configure the trigger and code the function while specifying instance size, concurrency and processing time based on your workload. Your function can process and store the data using Google Cloud’s AI and data platforms to transform data into actionable insights.
We built Cloud Functions to be the future of how organizations build enterprise applications. Our 2nd generation incorporates feedback we’ve received from customers to meet their needs for more compute, control and event sources with an improved developer experience. We’re excited to see what you build with 2nd gen functions. You can learn more about Cloud Functions in the documentation and get started using Quickstarts: Cloud Functions.
Read More for the details.
In today’s hybrid office environments, it can be difficult to know where your most valuable, sensitive content is, who’s accessing it, and how people are using it. That’s why Egnyte focuses on making it simple for IT teams to manage and control a full spectrum of content risks, from accidental data deletion to privacy compliance.
I used to be an Egnyte customer before joining the team, so I’ve experienced first-hand the transformative effects that Egnyte can have on a company. Because data is fundamental to a company’s success, we take the trust of our 16,000 clients very seriously. There is no room for error with a cloud governance platform, which means that the technology providers we work with can’t fail either. That’s why we work with Google Cloud.
Since Egnyte was founded in 2007, we have delivered our services to clients 24/7. We do this by running our own data centers: two in the USA and one in Europe. But as the company continued its steady growth, owning and managing these data centers became unsustainable. There’s a tremendous amount of work that goes into managing everything that we need from a data center. Not only were we constantly building, maintaining, and paying for all this infrastructure, but we’d have to constantly expand our data centers to accommodate our business growth. This caused a never-ending pipeline issue because we had to predict how many businesses we were going to win over the next 12 to 18 months. What if we planned to grow the business by 20%, and ended up growing by 25% instead?
We knew that being limited to our own data centers was going to negatively impact our business, so we looked for alternatives. To gain scalability and introduce another layer of reliability to our business, we decided to collaborate with a reputable cloud provider who could reliably back up our data. We examined the offerings of every cloud provider, and found that in every category that we analyzed, Google was hands-down the winner.
One of these categories is the reach of the network. With its own transoceanic fiber and points of presence in all markets where we’re currently doing business, as well as markets where we intend to do business one day, Google’s network is second to none. Another important criterion for us was flexibility in the product offering, so we could better manage the financial risk of this large-scale data migration. For a while, we needed to pay for both our new cloud infrastructure and our old on-premises one while they overlapped during the migration, but Google Cloud made it easier for us to plan for this.
By December 2021, we had completed our full migration to Google Cloud. This significant migration was completed gradually and without disrupting our services at any point. Our close collaboration with the Google Cloud team is one of the big reasons we completed this so successfully. Google Cloud was able to anticipate some of the problems we’d likely be facing and helped us overcome them along the way.
We were able to shut off our last data center in February 2022, and the beneficial changes to the business are already obvious. Capacity planning, which used to be our biggest challenge on-premises, is now a problem of the past. The ability to spin up new resources on Google Cloud means we no longer need to buy additional resources a year in advance and wait for them to be shipped.
Using Google Cloud means that we no longer rely on aging infrastructure, which is a very limiting factor when you’re developing and engineering a platform as complex as Egnyte. Our entire platform is now always operating on the latest storage, processing, network, and services available on Google Cloud.
Additionally, we have services embedded in our infrastructure such as Cloud SQL, Cloud Bigtable, BigQuery, Dataflow, Pub/Sub, and Memorystore for Redis, which means we no longer need to build services from scratch, or shop for, install, and integrate them into the product and company workflow. There’s a long list of Google Cloud services that have significantly simplified our processes and that now support our flagship products, Egnyte Collaborate and Secure and Govern.
Looking ahead, we’ll continue to take advantage of what Google Cloud has to offer. Our migration has impacted not only our business but also our clients. We can offer even higher reliability and faster scalability to our clients whenever they need our platform to protect and manage critical content on any cloud or any app, anywhere in the world. We look forward to seeing what’s next.
Read More for the details.
For many Google Maps Platform developers, the map is a core part of their user experience that needs to strike a balance between the right look and just the right amount of information. For example, a travel app developer might want to show more restaurants and landmarks while hiding other points of interest that are less relevant to travelers, but they need to experiment to get the ratios just right for their user experience. In many cases, the best way to find the optimal set of customizations is to experiment and iterate.
Today we are launching version history for Cloud-based maps styling to make it easier to prototype and experiment with styles. This new feature is available to all developers in the Map Style Editor under the ‘Settings’ menu.
What is version history?
With version history, each time you save or publish a map style in the Map Style Editor, a new version of your work is automatically created.
This means now you can confidently change a map style and, if something isn’t quite right, switch to one of the previous versions and restore it with a click of a button. You can also duplicate any of the previous versions into a new map style, so you can have both an old and a new version that you can use to run controlled experiments on your app or website.
Using version history
To use version history, open a map style in the Map Style Editor in the Cloud console, update using any of the hundreds of available customizations, then click the ‘Save’ button to create a new draft, or the ‘Publish’ button to create a new published version. Your new version will appear in the ‘version history’ pane to the right of the map.
To switch versions, select the version you want from the list, click the ‘Restore’ button, then click the ‘Publish’ button to automatically apply the change across all your web and mobile apps that use a Map ID associated with that map style. You can also use any version as the starting point to create a new version with further customization.
Version history is created for all map style changes as of the release of this feature. Changes made before this release are not available in version history.
Creating great, custom map experiences
Last year, we announced general availability of Cloud-based maps styling that makes it easy to not only customize the map to your users’ needs but also decouple the design of the map from your code. You design the map in the Cloud console and deploy it to your app with a click of the “Publish” button.
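One way to picture the decoupling: the application only references a Map ID, so restoring or publishing a different style version changes the rendered map with no code change. A minimal sketch, assuming the Maps Static API also accepts the map_id parameter for cloud-based styling (the Map ID, API key, and coordinates are placeholders):

```python
import requests  # pip install requests

# The style lives in the Cloud console; the request only carries the Map ID.
params = {
    "map_id": "YOUR_MAP_ID",      # placeholder Map ID associated with the style
    "center": "52.5200,13.4050",
    "zoom": "12",
    "size": "640x400",
    "key": "YOUR_API_KEY",        # placeholder API key
}
resp = requests.get("https://maps.googleapis.com/maps/api/staticmap", params=params)
resp.raise_for_status()
with open("styled-map.png", "wb") as f:
    f.write(resp.content)
```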
Since this rollout, we have seen developers taking advantage of the cloud-based approach to prototype more engaging map experiences, experiment in production, and achieve a measurable increase in user engagement and conversion. Whether it’s searching for a vacation stay, hailing a ride, or locating a store, a well-designed map helps users achieve their goals faster and more easily.
With version history, you now have more control than ever to iterate, experiment, and create experiences that will delight your users.
To learn more, check out our developer documentation.
For more information on Google Maps Platform, visit our website.
Read More for the details.
You can now use Amazon SageMaker Model Building Pipelines with AWS Resource Access Manager (AWS RAM) to securely share pipeline entities across AWS accounts and access shared pipelines through direct API calls. A multi-account strategy helps achieve data, project, and team isolation while supporting software development lifecycle steps. Cross-account pipeline sharing can support a multi-account strategy without the added hassle of logging in and out of multiple accounts. For example, cross-account pipeline sharing can improve machine learning testing and deployment workflows by sharing resources across staging and production accounts.
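As an illustrative sketch, the snippet below shares a pipeline from the owning (staging) account via AWS RAM and then starts it from the consumer (production) account; the ARN and account ID are placeholders, and which SageMaker calls accept the cross-account ARN should be confirmed against the documentation.

```python
import boto3

PIPELINE_ARN = "arn:aws:sagemaker:us-east-1:111122223333:pipeline/train-and-register"  # placeholder

# In the pipeline-owning (staging) account: share the pipeline via AWS RAM.
ram = boto3.client("ram")
ram.create_resource_share(
    name="ml-pipeline-share",
    resourceArns=[PIPELINE_ARN],
    principals=["444455556666"],   # placeholder consumer (production) account ID
)

# In the consumer (production) account, after accepting the share:
# access the shared pipeline through direct API calls.
sagemaker = boto3.client("sagemaker")
sagemaker.start_pipeline_execution(PipelineName=PIPELINE_ARN)
```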
Read More for the details.
AWS Glue now supports a new execution option that allows customers to reduce the costs of their pre-production, test, and non-urgent data integration workloads by up to 34%. With Flex, Glue jobs run on spare capacity in AWS.
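A minimal sketch, assuming the Flex option is selected through an ExecutionClass parameter on the job run (the job name is a placeholder):

```python
import boto3

glue = boto3.client("glue")

# Run a pre-production, non-urgent job on spare capacity instead of standard capacity.
run = glue.start_job_run(
    JobName="nightly-pre-prod-etl",   # placeholder job name
    ExecutionClass="FLEX",            # assumed parameter for the Flex execution option
)
print(run["JobRunId"])
```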
Read More for the details.
AWS Direct Connect now supports connections to AWS Transit Gateway at speeds of 500 megabits per second (Mbps) and lower, providing more cost-effective choices for Transit Gateway users when higher speed connections are not required. With this change, customers using Direct Connect at connection speeds of 50, 100, 200, 300, 400, and 500 Mbps can now connect to their Transit Gateway.
Read More for the details.
Amazon Relational Database Service (Amazon RDS) Custom for Oracle now supports the promotion of a managed replica that was created using the replica function. When you promote a managed replica, it is converted from a physical standby database and activated as a standalone read/write primary database instance.
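A minimal sketch, assuming promotion goes through the standard RDS PromoteReadReplica API against the RDS Custom for Oracle replica (the instance identifier is a placeholder):

```python
import boto3

rds = boto3.client("rds")

# Convert the managed replica into a standalone read/write primary instance.
rds.promote_read_replica(DBInstanceIdentifier="orcl-custom-replica-1")  # placeholder

# Promotion is asynchronous; wait until the instance is available again.
rds.get_waiter("db_instance_available").wait(
    DBInstanceIdentifier="orcl-custom-replica-1"
)
```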
Read More for the details.
Amazon DocumentDB (with MongoDB compatibility) is a database service that is purpose-built for JSON data management at scale, fully managed and integrated with AWS, and enterprise-ready with high durability.
Read More for the details.
The new Amazon S3 condition key enables you to write policies that help you control the use of server-side encryption with customer-provided keys (SSE-C). Using Amazon S3 condition keys, you can specify conditions when granting permissions in the optional ‘Condition’ element of a bucket or an IAM policy. One such condition is to require server-side encryption (SSE) using your preferred encryption method.
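As a hedged example, the policy sketch below assumes the condition key is s3:x-amz-server-side-encryption-customer-algorithm and uses it to deny object uploads that supply SSE-C keys (the bucket name is a placeholder); confirm the exact key name in the S3 documentation.

```python
import json

import boto3

BUCKET = "example-bucket"  # placeholder bucket name

# Deny any PutObject request that supplies a customer-provided encryption key (SSE-C).
# "Null": "false" means the condition key must be present in the request for the Deny to match.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenySSECUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {
            "Null": {"s3:x-amz-server-side-encryption-customer-algorithm": "false"}
        },
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```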
Read More for the details.
Amazon QuickSight launched a new look and feel for the dashboard experience. The new interface enhances the reader experience by improving the discoverability, predictability, and the overall polish of the dashboards. The new dashboard experience includes:
Simplified toolbar with updated icons for key actions for better visual clarity
Discoverable visual menu visible on-hover to improve the discoverability of drills, export, and filter restatements
New controls, menu, and submenus to provide a better visual experience
Non-blocking right pane for secondary experiences like filters, threshold alerts, and downloads to improve focus on the content of the dashboard
Read More for the details.
AWS IoT Greengrass is an Internet of Things (IoT) edge runtime and cloud service that helps customers build, deploy, and manage device software. We are excited to announce our version 2.7 release with the following features:
System Telemetry Enhancements – The Stream Manager agent component now has the ability (enabled by default) to send system telemetry metrics to Amazon EventBridge. System telemetry data is diagnostic data that can help you monitor the performance of critical operations on your AWS IoT Greengrass core devices. You can create projects and applications to retrieve, analyze, transform, and report telemetry data from your edge devices. Domain experts, such as process engineers, can use these applications to gain insights into their fleet health based on device data uploaded through Stream Manager to AWS services such as Amazon Kinesis, Amazon Simple Storage Service (Amazon S3), AWS IoT Analytics, AWS IoT SiteWise, and more. For more information, see the Gathering System Telemetry section in the developer guide.
Local Deployment Improvements – These improvements enable the AWS IoT Greengrass nucleus to send near-real-time deployment status updates to the AWS IoT Greengrass cloud service. For instance, using the ListInstalledComponents API, customers can now observe the status of locally deployed components for a connected device.
Additional Support for Client Certificates – Device certificates signed by a custom certificate authority (CA) that isn’t registered with AWS IoT are now supported, giving customers the flexibility to use a private CA of their choice. To use this feature, set the new greengrassDataPlaneEndpoint configuration option to iotdata. For more information, see Use a device certificate signed by a private CA.
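As a loose sketch (it assumes greengrassDataPlaneEndpoint is merged into the nucleus component configuration through a Greengrass v2 deployment; the target ARN and nucleus version are placeholders):

```python
import json

import boto3

gg = boto3.client("greengrassv2")

# Merge the assumed greengrassDataPlaneEndpoint option into the nucleus configuration
# so device certificates signed by an unregistered private CA can be used.
gg.create_deployment(
    targetArn="arn:aws:iot:us-east-1:111122223333:thinggroup/CoreDevices",  # placeholder
    deploymentName="set-data-plane-endpoint",
    components={
        "aws.greengrass.Nucleus": {
            "componentVersion": "2.7.0",  # placeholder version
            "configurationUpdate": {
                "merge": json.dumps({"greengrassDataPlaneEndpoint": "iotdata"})
            },
        }
    },
)
```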
Read More for the details.
Starting today, Amazon EC2 C6g and C6gd instances are available in Asia Pacific (Osaka) region. Additionally, M6gd instances are now available in Europe (Stockholm) region. C6g and C6gd instances are ideal for compute-intensive workloads such as high performance computing (HPC), batch processing, ad serving, video encoding, gaming, scientific modelling, distributed analytics, and CPU-based machine learning inference. M6gd instances are ideal for general purpose applications such as application servers, microservices, mid-size data stores, and caching fleets. C6gd and M6gd instances offer up to 50% more NVMe storage GB/vCPU over comparable x86-based instances and are ideal for applications that need high-speed, low latency local storage.
Read More for the details.