Azure – Generally available: Event Grid upgrade enhancements for AKS
You can now receive and programmatically handle AKS-generated upgrade events in Event Grid.
Read More for the details.
Starting today, memory-optimized Amazon Elastic Compute Cloud (Amazon EC2) X2idn instances are available in the Middle East (UAE) and Europe (Spain) regions. These instances, powered by 3rd generation Intel Xeon Scalable processors and built on the AWS Nitro System, are designed for memory-intensive workloads. They deliver improvements in performance, price performance, and cost per GiB of memory compared to previous-generation X1 instances. These instances are SAP-certified for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, SAP BW/4HANA, and SAP NetWeaver workloads on any database.
Read More for the details.
About three years ago, JCB, one of the largest Japanese payment companies, launched a project to develop new high-value services with agility. We adopted a policy of starting small, from scratch, without relying on the existing systems, a concept we call "Dejima," and focused on improving areas such as team structure, risk management, and our application and platform development processes.
Until now, large Japanese enterprises have built decision-making systems focused on eliminating unnecessary business processes and efficiently increasing quarterly profits. As a result, we are seeing more organizational structures that make it difficult to take on new challenges or to experiment with trial and error. We wanted to breathe new life into this situation, and that is how the concept of Dejima came up. In the Edo period, Japan closed its borders to other countries under its national isolation policy. At the time, Dejima was the only area where special rules applied, allowing people from different cultures to come and go and to trade. These special rules fostered a culture of inclusion and led to Dejima's prosperity. Like Dejima, we believe that creating an organization that is independent of other business practices can be effective in enabling digital transformation for the organization.
We have been able to make this transformation with the direct help of Google Cloud and its products, such as Google Kubernetes Engine (GKE), Cloud Spanner, and Anthos Service Mesh, applying domain-driven design and a microservices architecture. We named this the "JCB Digital Enablement Platform (JDEP)," and it now hosts multiple business-critical production services.
A key benefit of GKE is that the team can easily add resources and release them when they are finished, giving us the flexibility to accommodate busy periods and off-seasons. Meanwhile, Anthos Service Mesh helps us manage complex environments easily. With containerization and managed services, we are prepared for the future, when more services go into production, because maintenance and version upgrade support will be easy to provide. At the same time, Cloud Spanner ensures that we maintain 99.99% availability at all times.
Our initial motivation for introducing SRE practices was to break down the proverbial walls between business, development, and operations, and that was a success. Now we are focused on ensuring reliability and maintaining customer satisfaction through our SRE practices.
To ensure the success of the SRE practices we implemented, there were a few areas we needed to address, from defining the organizational culture and practices to making sure the policies attached to the new models were practical enough to be implemented at the ground level, so that the Dejima concept remains sustainable over the long run.
Here, "appropriate" reliability is the key. The conventional way of thinking at JCB was that "service failure must not occur" and "the SLA should be kept as high as possible." We started by discussing what "appropriate reliability" our customers really needed, but it was not as easy as we thought, because the level of reliability needed for user satisfaction differs from application to application.
Eventually, the business, development, and operations teams formulated specific SLIs and SLOs together, something we would never have been able to do if we had discussed them separately. The business side had to accept lower service levels, since our reliability standards used to be set too high, and collaboration between the development and operations teams was necessary to understand how our systems behave when users interact with them.
After Google Cloud helped us run a series of workshops where all teams participated, we saw change within the organization. The business team started evangelizing SRE to other members in the business department, and the development and operations teams started collaborating autonomously. We felt like we were working at Google speed, accomplishing so much in a short amount of time.
A company-wide understanding of SRE is necessary to make progress. We are now creating internal training materials to spread the SRE concept throughout the company.
With the cooperation of Google Cloud, we have created a Team Charter that defines the team’s mission, values and engagement models. We also created policy documents that include Incident Response Policy, Postmortem Policy, On-call Policy, Toil Policy and Error Budget Policy, to eliminate ambiguity in day-to-day operations.
For example, when an incident occurs, we can identify exactly how severe it is, which roles are assigned to each person, and the order in which steps need to be followed. When should a postmortem be held, and who owns it? What should we do if the error budget is exhausted? How do other teams reach out to SRE when they have problems? Written policy documents dramatically improve efficiency and motivate teams to adopt a culture of learning from failures.
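For readers new to error budgets, the arithmetic behind such a policy is simple. Here is a tiny, purely illustrative Go example; the 99.9% SLO and the downtime figures are made-up numbers, not JCB's actual targets.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Illustrative numbers only: a 99.9% availability SLO over a 30-day window.
	const slo = 0.999
	window := 30 * 24 * time.Hour

	// The error budget is the downtime the SLO still allows (about 43 minutes here).
	budget := time.Duration(float64(window) * (1 - slo))
	observedDowntime := 25 * time.Minute

	remaining := budget - observedDowntime
	fmt.Printf("budget: %v, used: %v, remaining: %v\n", budget, observedDowntime, remaining)
	if remaining <= 0 {
		fmt.Println("budget exhausted: the Error Budget Policy dictates what happens next, e.g. pausing feature releases")
	}
}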
The format for such policies is described in Google's SRE book, but when we adopt it, we need to take into account the circumstances specific to our company. Simply copying an existing policy won't work, which is why it's important to formulate a policy that fits the situation each team is in.
Based on these policies, JCB's SRE organization has two sub-teams. One is called Sheriff, which works as the platform SRE and provides infrastructure services for the application teams. The other is called Diplomat, which works as the embedded SRE and joins application teams to lead productionisation. There is also a separate team called Architecture, whose role is to consult with the SRE teams on system design and to review architecture.
The SRE team was a single role when it was first launched, but it now has two sub-teams. As the number of application teams increases, the number of support tasks also increases, which can result in a shortage of people available to work on overall improvements. Securing people who are not interrupted by day-to-day support tasks, and who can focus on that main work, improves efficiency.
While both sub-teams share on-call duty, some engineers cannot take part in it because their contracts do not allow them to be paged. For those who cannot participate in on-call duty, we created what we call a Toil Shift, which lets them focus on resolving tickets in our backlogs instead.
This works well so far, but we will keep evolving as our business grows.
Read More for the details.
Private Service Connect (PSC) allows private communications between service consumers and service producers. In this blog we will discuss a few ways you can use PSC for private communication.
PSC comprises several components, explained below:
Consumers – Access managed services via private IP from within their own VPC.
Producers – Expose services to consumers via service attachments.
Service attachments – These link to producer load balancers. Security can be applied with a consumer accept list. Consumers can configure endpoints linked to a service attachment to establish a private connection from within their VPC.
Endpoints – These are private IP addresses in a consumer VPC that are mapped to a service attachment and forward requests to the attached service.
Backends – These use PSC Network Endpoint Groups (NEGs) and reference a producer service attachment or a regional Google API.
Google APIs – These are services created by Google that are accessible via public APIs and reside on the Google network.
Published services – These are services that are not classified as Google APIs.
PSC offers benefits such as:
Private, direct connectivity between consumers and producer managed services.
No overlapping IP constraints, as NAT is used between the communicating networks.
The ability to enforce authorization control.
Enhanced line-rate performance by removing intermediate hops.
The diagram shows the options to connect to a producer using PSC. You can create an endpoint or backend to target the necessary services.
In this design the consumer initiates the request to the producer service. The producer and consumer can be in separate organizations, with their own VPC, IPs, and projects. The producer exposes the service via a service attachment and allows access based on the allow list option.
On the consumer side, they create a PSC endpoint, assign a private IP address, and link it to the service attachment. Once the connection is established, clients in the consumer network can access the service via the PSC endpoint address in their VPC.
See documentation About accessing published services through endpoints.
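In practice the endpoint is usually created with gcloud or Terraform. Purely as a rough sketch of the same operation with the Cloud Client Libraries for Go, with placeholder project, network, reserved address, and service attachment values (treat the exact field names as an assumption to verify against the client library docs):

package main

import (
	"context"
	"log"

	compute "cloud.google.com/go/compute/apiv1"
	computepb "cloud.google.com/go/compute/apiv1/computepb"
	"google.golang.org/protobuf/proto"
)

func main() {
	ctx := context.Background()
	client, err := compute.NewForwardingRulesRESTClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// A PSC endpoint is a forwarding rule whose target is the producer's service attachment.
	req := &computepb.InsertForwardingRuleRequest{
		Project: "consumer-project",
		Region:  "us-west1",
		ForwardingRuleResource: &computepb.ForwardingRule{
			Name:      proto.String("psc-endpoint"),
			Network:   proto.String("projects/consumer-project/global/networks/consumer-vpc"),
			IPAddress: proto.String("projects/consumer-project/regions/us-west1/addresses/psc-endpoint-ip"),
			Target:    proto.String("projects/producer-project/regions/us-west1/serviceAttachments/producer-attachment"),
		},
	}
	op, err := client.Insert(ctx, req)
	if err != nil {
		log.Fatal(err)
	}
	if err := op.Wait(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("PSC endpoint created; clients in the consumer VPC can reach the producer via its private IP")
}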
This is similar to the example above but the configuration on the consumer end is different.
On the consumer side, they create a PSC Network Endpoint Group (NEG), link it to the producer's service attachment, and expose the PSC NEG via a supported load balancer type.
See documentation About Private Service Connect backends.
In this design the consumer enables global access on the endpoint, which makes it available to resources in other regions. In this case, on-premises clients are connected to Google Cloud via Cloud Interconnect in the us-east1 region. With global access enabled, they can send traffic to the endpoint located in us-west1 and connect to the producer service.
See documentation Global Access.
Private Service Connect continues to evolve. To learn more about PSC check out the following:
Hands-on labs – Codelabs
Documentation – https://cloud.google.com/vpc/docs/private-service-connect
YouTube Demo – https://youtu.be/8sGs3b5zFOE
Want to ask a question, find out more or share a thought? Please connect with me on LinkedIn.
Read More for the details.
Editor’s note: Bitly, the link & QR Code management platform, migrated 80 billion rows of link data to Cloud Bigtable. Here’s how and why they moved this data from a MySQL database, all in just six days.
Our goal at Bitly is simple: Make your life easier with connections. Whether you’re sharing links online, connecting the physical and digital worlds with custom QR Codes or creating a killer Link-in-bio, we’re here to help make that happen.
Bitly customers run the gamut from your local middle school to most of the Fortune 500 with every use case you can think of in play. On a typical day, we process approximately 360 million link clicks and QR Code scans and support the creation of 6-7 million links or QR Codes for more than 350,000 unique users.
At the heart of our platform is link data consisting of three major datasets for the approximately 40 billion active links in our system (and counting). For years we stored this data in a self-managed, manually sharded MySQL database. While that setup served us well over time, it did present some challenges as we looked to expand and move into the future.
First, performing operational actions like software and security upgrades or database host provisioning — all while keeping the databases 100% available for our customers — was challenging to say the least.
Additionally, the backup and restore process was both costly and time-consuming. Daily backups for the growing dataset took almost an entire day to complete. And if we ever had to restore the entire data set, it would have undoubtedly taken at least two people working for several days — not a fun prospect and thankfully something we never had to do.
While manual sharding for MySQL offers good key distribution across physical and logical shards, it’s pretty high-maintenance — we tried to avoid touching our sharding config because of how error-prone making changes was. And even with a forward-thinking shard allocation, we found this approach to have a limited lifespan.
And finally, when we turned our attention towards multi-region and global distribution, manually sharded MySQL proved to be a major hurdle, especially compared to the convenience of a managed service that handles geo-distribution for us.
As we thought about the future of our data and how to best meet our expansion goals, we determined that an update was necessary. We needed a system more set up for growth and increased reliability, preferably one with built-in functionality for replication and scaling. Investigation and research brought us to Cloud Bigtable, Google Cloud’s enterprise-grade NoSQL database service.
Before we dive into the Bigtable migration, it’s useful to understand the basics of our link management system. When a user shortens a long URL with Bitly, information is saved to the backend data stores through our API layer. This information consists of a Bitly short link and the corresponding destination link for proper routing, as well as some additional metadata. When a user clicks the Bitly short link, our services query these same backend data stores for the redirect destination for the request. This basic flow, which also includes other Bitly link options like custom back-halves, is the backbone of the Bitly link management platform.
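To make the read path concrete, here is a purely illustrative Go sketch of a redirect lookup against Bigtable. The table name, column family, and row-key scheme are hypothetical; Bitly's actual schema is not described in this post.

package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/bigtable"
)

func main() {
	ctx := context.Background()
	client, err := bigtable.NewClient(ctx, "my-project", "links-instance")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	tbl := client.Open("links")

	// Resolving a short link is a single-row lookup keyed on the short-link identifier.
	row, err := tbl.ReadRow(ctx, "3abcXYZ")
	if err != nil {
		log.Fatal(err)
	}
	for _, item := range row["meta"] {
		if item.Column == "meta:long_url" {
			fmt.Println("redirect to:", string(item.Value))
		}
	}
}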
As mentioned, after researching our options, the solution that best fit our needs was Bigtable. It offered the features we were looking for, including:
A 99.999% SLA
Limitless scale
Single-digit millisecond latency
A built-in monitoring system
Multi-region replication
Geo-distribution, which allows for seamless replication of data across regions and reduces latency
On-demand scaling of compute resources and storage, which adjusts to user traffic and allows our system to grow and scale as needed
Seamless integration with our general architecture; we use Google Cloud services for many other parts of our system, including the APIs that interact with these databases
A NoSQL database that doesn’t require relational semantics, as the datasets we’re migrating are indexed on a single primary key in our applications
We targeted three of our major, self-managed datasets to migrate. The larger two were organized in a sharded database architecture. The first thing we did for migration was prepare the new Bigtable database. We iterated over a schema design process and conducted a thorough performance analysis of Bigtable to ensure an uninterrupted user experience during and after the migration. After that, we made minor adjustments to our application code so that it could seamlessly integrate and interact with Bigtable. Finally, we implemented a robust post-migration disaster recovery process to mitigate any potential risks.
During the actual migration, we enabled our applications to start a “dual writes” phase. This involved concurrently writing new link data to both our existing MySQL and the new Bigtable tables. Once data started writing to our Bigtable instance, we ran our migration scripts. We used a Go script to walk each of the existing MySQL datasets and insert each row into Bigtable. This enabled us to clean up outdated information and backfill older records with newer field data.
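The migration scripts themselves are Bitly's own; the following is a simplified Go sketch of that kind of backfill loop, walking rows out of MySQL and writing them to Bigtable in batches. The table, column-family, and key names are hypothetical, and the real scripts also handled sharding, retries, and concurrency.

package main

import (
	"context"
	"database/sql"
	"log"

	"cloud.google.com/go/bigtable"
	_ "github.com/go-sql-driver/mysql"
)

func main() {
	ctx := context.Background()

	db, err := sql.Open("mysql", "user:pass@tcp(mysql-host:3306)/links")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	btClient, err := bigtable.NewClient(ctx, "my-project", "links-instance")
	if err != nil {
		log.Fatal(err)
	}
	defer btClient.Close()
	tbl := btClient.Open("links")

	rows, err := db.QueryContext(ctx, "SELECT short_key, long_url FROM links")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	flush := func(keys []string, muts []*bigtable.Mutation) {
		if len(muts) == 0 {
			return
		}
		if _, err := tbl.ApplyBulk(ctx, keys, muts); err != nil {
			log.Fatal(err)
		}
	}

	var keys []string
	var muts []*bigtable.Mutation
	for rows.Next() {
		var shortKey, longURL string
		if err := rows.Scan(&shortKey, &longURL); err != nil {
			log.Fatal(err)
		}
		mut := bigtable.NewMutation()
		mut.Set("meta", "long_url", bigtable.Now(), []byte(longURL))
		keys = append(keys, shortKey)
		muts = append(muts, mut)

		if len(muts) == 1000 { // write in batches of 1,000 mutations
			flush(keys, muts)
			keys, muts = nil, nil
		}
	}
	flush(keys, muts)
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}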
In the process of migration, we were actually able to free up a huge amount of storage. Because an early feature of the Bitly platform had been eliminated, we could exclude a little less than half of the total data stored in MySQL from the migration to Bigtable. Since we were creating a completely clean dataset, we had the opportunity to simply skip those unneeded rows during the migration.
Altogether, the migration process walked through 80 billion MySQL rows, which resulted in just over 40 billion records finding their new home in Bigtable. In the end, our starting point with Bigtable is a 26 TB dataset, not including replication. A set of concurrent Go scripts running in parallel on a handful of machines allowed us to complete this migration project in six days. (Go rarely disappoints.)
Next up was the data validation and cutover period when we started returning data from Bigtable, but continued to write to MySQL as a precaution in case we needed to roll back at any point.
As we dove into the validation process, we compared the data between MySQL and Bigtable and noted any discrepancies whenever a link was clicked or created. After verifying that all our responses were stable, we proceeded with a gradual cutover process, rolling out in percentages until we reached 100% Bigtable for all writes and reads. After a comfortable run period, we’ll turn off the dual writes completely and finally decommission our workhorse MySQL hosts to live on a farm upstate.
Our data is our lifeline, and we’re doing everything we can to ensure it’s always protected. We put together a redundancy plan using both Bigtable backups as well as a process for keeping a copy of the data outside Bigtable for true disaster recovery.
The first line of defense involves a switchover to the backup Bigtable dataset in case we need it. Beyond that, we’ve implemented two more layers of defense to protect against instance failure, corrupted data, and any other data failure that would require a restore of one or more tables from backup.
For this process, we start by creating daily Bigtable backups of our tables that we store for a fixed number of days. Second, we execute a Dataflow job to export our data from Bigtable into Cloud Storage approximately every week. And, if the need arises, we can use Dataflow to import our data back from Cloud Storage into a new Bigtable table.
While running the Dataflow jobs to export from Bigtable to Cloud Storage, we’ve seen an impressive export speed of 7-8 million rows read per second on average and up to 15 million per second at times. All the while, our production reads and writes continued without disruption. When we tested the Cloud-Storage-to-Bigtable restore job, the write speed, expectedly, increased with instance scale — at the maximum node quota for our regions, we observed an average of just under 2 million rows per second written to our new table.
As mentioned above, not only did Bigtable meet our technical requirements and operational needs, but we also chose Bigtable because it sets us up for future growth. Its ability to scale seamlessly over time while improving our system availability SLA was a major factor in our decision.
As we increase our scale by 5x, 10x, or more, it’s imperative that our data backbone scales accordingly and that the SLAs we provide to customers stay stable or even add another coveted “9”. We have big plans in the coming years and Bigtable will help us achieve them.
Interested in learning more? We found the following resources to be useful in our journey to Bigtable evaluation and ultimate adoption:
Evaluate Bigtable schema design for your relational database and application workloads
Understand how to increase availability and distribute data globally with Bigtable replication in any region combination of your choice
Learn more about Bigtable backups and importing/exporting data with Bigtable Dataflow templates
Bonus: If you’re interested in learning more of the nuts and bolts of how we migrated the data, I’ll be talking all about that very topic at GopherCon 2023 this September in San Diego!
Read More for the details.
Time-series forecasting is one of the most important model types across a variety of industries, such as retail, telecom, entertainment, and manufacturing. It serves many use cases, such as forecasting revenue and predicting inventory levels. It's no surprise that time series is one of the most popular models in BigQuery ML. Defining holidays is important in any time-series forecasting model to account for variations and fluctuations in the time-series data. In this blog post we will discuss how you can take advantage of recent enhancements to define custom holidays and get better explainability for your forecasting models in BigQuery ML.
You could already specify HOLIDAY_REGION when creating a time-series model, and the model would use the holiday information within that HOLIDAY_REGION to capture the holiday effect. However, we heard from our customers that they want to understand the holiday effect in more detail: which holidays are used in modeling and how much each individual holiday contributes to the model. They also want the ability to customize or create their own holidays for modeling.
To address these, we recently launched the preview of custom holiday modeling capabilities in ARIMA_PLUS and ARIMA_PLUS_XREG. With these capabilities, you can now do the following:
Access all the built-in holiday data by querying the BigQuery public dataset bigquery-public-data.ml_datasets.holidays_and_events_for_forecasting or by using the table-valued function ML.HOLIDAY_INFO, so you can inspect the holiday data used to fit your forecasting model
Customize the holiday data (e.g. primary date and holiday effect window) using standard GoogleSQL to improve time series forecasting accuracy
Explain the contribution of each holiday to the forecasting result
Before we dive into using these features, let's first understand custom holiday modeling and why you might need it. Say you want to forecast the number of daily page views of the Wikipedia page for Google I/O, Google's flagship event for developers. Given the large attendance of Google I/O, you can expect significantly increased traffic to this page around the event days. Because these are Google-specific dates and are not included in the default HOLIDAY_REGION, the forecasted page views will not explain the spikes around those dates well. You need the ability to specify custom holidays in your model so that you get better explainability for your forecasts. With custom holiday modeling, you can now build more powerful and accurate time-series forecasting models using BigQuery ML.
The following sections show some examples of the new custom holiday modeling for forecasting in BigQuery ML. In this example, we explore the bigquery-public-data.wikipedia dataset, which contains pageview data for the Google I/O page, create a custom holiday for the Google I/O event, and then use the model to forecast daily pageviews based on historical data while factoring in the customized holiday calendar.
"The bank would like to utilize a custom holiday calendar as it has 'tech holidays' due to various reasons like technology freezes, market instability freeze etc. And, it would like to incorporate those freeze calendars while training the ML model for Arima," said a data scientist at a large US-based financial institution.
BigQuery hosts hourly Wikipedia pageview data across all languages. As a first step, we aggregate the views by day across all languages.
Now we do a regular forecast. We use the daily page view data from 2017 to 2021 and forecast into 2022.
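As a rough illustration of this step, here is a minimal Go sketch that creates an ARIMA_PLUS model with the BigQuery client library. The project, dataset, table, and column names are placeholders, and we assume the hourly public data has already been aggregated into a daily table as described above.

package main

import (
	"context"
	"log"

	"cloud.google.com/go/bigquery"
)

// Placeholder names: mydataset.googleio_daily_views is assumed to hold the
// daily, all-language pageview totals aggregated from the public hourly data.
const createBaselineModel = `
CREATE OR REPLACE MODEL mydataset.googleio_views_arima
OPTIONS (
  model_type = 'ARIMA_PLUS',
  time_series_timestamp_col = 'view_date',
  time_series_data_col = 'total_views',
  holiday_region = 'US'
) AS
SELECT view_date, total_views
FROM mydataset.googleio_daily_views
WHERE view_date BETWEEN DATE '2017-01-01' AND DATE '2021-12-31'`

func main() {
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, "my-project")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Run the CREATE MODEL statement and wait for the training job to finish.
	job, err := client.Query(createBaselineModel).Run(ctx)
	if err != nil {
		log.Fatal(err)
	}
	status, err := job.Wait(ctx)
	if err != nil {
		log.Fatal(err)
	}
	if err := status.Err(); err != nil {
		log.Fatal(err)
	}
}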
We can visualize the result from ml.explain_forecast using Looker Studio.
As we can see, the forecasting model captures the general trend pretty well. However, it does not capture the increased traffic related to previous Google I/O events, and it cannot generate an accurate forecast for 2022 either.
Google I/O took place on specific dates between 2017 and 2022, and we would like to instruct the forecasting model to consider those dates as well.
As we can see, we provide the full list of Google I/O's event dates to our forecasting model. In addition, we adjust the holiday effect window to cover four days around each event date to better capture potential view traffic before and after the event.
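The original post shows the exact statement; as a sketch of its shape, continuing the Go example above with the same placeholder names, the custom holiday is supplied as a named custom_holiday subquery alongside training_data. The subquery structure and column names (region, holiday_name, primary_date, preholiday_days, postholiday_days) reflect our reading of the preview documentation and should be checked against the current BigQuery ML docs; the event dates are illustrative.

// Continues the previous sketch; run this statement with the same
// client.Query(...).Run and job.Wait pattern shown above.
const createCustomHolidayModel = `
CREATE OR REPLACE MODEL mydataset.googleio_views_arima_custom
OPTIONS (
  model_type = 'ARIMA_PLUS',
  time_series_timestamp_col = 'view_date',
  time_series_data_col = 'total_views',
  holiday_region = 'US'
) AS (
  training_data AS (
    SELECT view_date, total_views
    FROM mydataset.googleio_daily_views
    WHERE view_date BETWEEN DATE '2017-01-01' AND DATE '2021-12-31'
  ),
  custom_holiday AS (
    SELECT
      'US' AS region,
      'GoogleIO' AS holiday_name,
      primary_date,
      2 AS preholiday_days,   -- widen the holiday effect window around the event
      2 AS postholiday_days
    FROM UNNEST([DATE '2017-05-17', DATE '2018-05-08', DATE '2019-05-07',
                 DATE '2021-05-18', DATE '2022-05-11']) AS primary_date
  )
)`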
After visualizing the new results in Looker Studio, the improvement is clear.
Our custom holiday significantly boosted the performance of the forecasting model, which now captures the increase in page views caused by Google I/O.
You can further inspect the holiday effect contributed by each individual holiday by using ml.explain_forecast.
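Continuing the sketch above, a query along these lines surfaces the decomposition; the per-holiday contribution columns come from the preview feature, so inspect the output schema rather than relying on exact column names.

// Continues the previous sketch; ML.EXPLAIN_FORECAST returns the decomposed
// time series for the fitted model, including holiday-effect components.
const explainCustomHolidayModel = `
SELECT *
FROM ML.EXPLAIN_FORECAST(MODEL mydataset.googleio_views_arima_custom,
                         STRUCT(30 AS horizon, 0.9 AS confidence_level))
ORDER BY time_series_timestamp`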
As the results show, Google I/O indeed contributes a large holiday effect to the overall forecast on those custom holiday dates.
Finally, we use ml.evaluate to compare the performance of the previous model, created without the custom holiday, against the new model created with it. Specifically, we would like to see how the new model performs when forecasting a future custom holiday, so we set the evaluation time range to the week of Google I/O in 2022.
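As a sketch of that comparison, again continuing the Go example with placeholder names, you can run ML.EVALUATE for each model over the 2022 Google I/O week and compare the resulting accuracy metrics; the perform_aggregation and horizon arguments follow the ARIMA_PLUS evaluation syntax.

// Continues the previous sketch; run once per model and compare the metrics.
const evaluateCustomHolidayModel = `
SELECT *
FROM ML.EVALUATE(
  MODEL mydataset.googleio_views_arima_custom,
  (SELECT view_date, total_views
   FROM mydataset.googleio_daily_views
   WHERE view_date BETWEEN DATE '2022-05-09' AND DATE '2022-05-15'),
  STRUCT(TRUE AS perform_aggregation, 7 AS horizon))`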
The results demonstrate the significant performance boost of the new model.
In the previous example, we demonstrated how to use custom holidays in forecasting and how to evaluate their impact on a forecasting model. The public dataset and the ML.HOLIDAY_INFO table-valued function are also helpful for understanding which holidays are used to fit your model. Some of the gains brought by this feature are as follows:
You can configure custom holidays easily using standard GoogleSQL, while benefiting from BigQuery's scalability, data governance, and more.
You get elevated transparency and explainability of time series forecasting in BigQuery.
Custom holiday modeling in forecasting models is now available for you to try in preview. Check out the tutorial in BigQuery ML to learn how to use it. For more information, please refer to the documentation.
Acknowledgements: Thanks to Xi Cheng, Haoming Chen, Jiashang Liu, Amir Hormati, Mingge Deng, Eric Schmidt and Abhinav Khushraj from the BigQuery ML team. Also thanks to Weijie Shen, Jean Ortega from the Fargo team of Resource Efficiency Data Science team.
Read More for the details.
New features now available in Public Preview enable you to create AI workflows that connect to various language models and data sources and use prompt flow and Azure OpenAI models to build LLM applications.
Read More for the details.
A new feature now available in GA enables you to create compute clusters in locations that are different from the location of the workspace.
Read More for the details.
Amazon Lex is a service for building conversational interfaces into any application, using voice and text. With Amazon Lex, you can quickly and easily build sophisticated natural language conversational bots (“chatbots”), virtual agents, and interactive voice response (IVR) systems. We are excited to announce the general availability of Analytics on Amazon Lex. Analytics gives access to rich insights and prebuilt dashboards related to Lex conversations, intents, slots, and utterances.
Read More for the details.
Amazon Connect now provides pre-defined Contact Lens conversational analytics metrics, available via the GetMetricDataV2 API, enabling contact center managers to analyze aggregate contact quality and agent performance. These new metrics allow managers to understand how many times an agent interrupted a customer, the talk-time for a contact overall, by agent, or by customer, and how long it took for an agent to first greet the customer on chat. For example, using these metrics, customers can create custom dashboards to analyze if an agent’s average handle time is high due to higher than expected agent talk time, and if so, provide agent coaching to improve.
Read More for the details.
On July 18, 2023 Amazon announced quarterly security and critical updates for Amazon Corretto Long-Term Supported (LTS) and Feature (FR) versions of OpenJDK. Corretto 20.0.2, 17.0.8, 11.0.20, 8u381 are now available for download. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK.
Read More for the details.
For Azure partners and customers seeking to develop business continuity and disaster recovery solutions, crash-consistent VM restore points serve as feature-rich building blocks, offering an agentless solution available natively on the Azure platform.
Read More for the details.
Authentication with Microsoft Active Directory for Amazon Aurora with PostgreSQL compatibility is now available in the AWS GovCloud (US-East and US-West) Regions.
Read More for the details.
AWS Mainframe Modernization is now Payment Card Industry Data Security Standard (PCI DSS) compliant, enabling customers to use AWS Mainframe Modernization for applications that store, process, and transmit information for use cases such as payment processing that are subject to PCI DSS.
Read More for the details.
With vector search, developers can store, index, and deliver search applications over vector representations of organizational data, including text, images, audio, video, and more.
Read More for the details.
At Inspire, we are announcing Extended Security Updates (ESUs) enabled by Azure Arc. Customers will be able to purchase and seamlessly deploy ESUs through Azure Arc in on-premises or multicloud environments, right from the Azure portal.
Read More for the details.
Starting today, Llama 2 foundation models from Meta are available in Amazon SageMaker JumpStart, a machine learning (ML) hub that offers pretrained models, built-in algorithms, and pre-built solutions to help you quickly get started with ML. You can deploy and use Llama 2 foundation models with a few clicks in SageMaker Studio or programmatically through the SageMaker Python SDK.
Read More for the details.
Data is critical to the decision-making process across most types of organizations, but connecting people to the data they need in their workflow can be a challenge. Our vision is to enable users to access and leverage trusted metrics, based on accurate, up-to-date data, through the tools they already use. That vision becomes even more real today as we announce the general availability of Looker's governed data connector for Power BI. This new integration enables Power BI users to access centrally defined metrics and data relationships from Looker's semantic layer through Power BI Desktop.
Looker's semantic layer serves as a business representation of the underlying data, separating visualization of that data from its original source and enabling data engineers and analysts to create business logic and calculations for everyone in an organization to use and reuse. A semantic layer helps scale technical expertise, brings reusability and version control to SQL, and handles changes and updates efficiently, while also empowering self-service. Users can tap into this layer to access centralized metrics and run queries against the database on demand, all without knowing SQL themselves or navigating the technical complexities of the data.
When a user connects to a Looker semantic model from Power BI, they’re accessing the dimensions, metrics, and join logic defined within Looker, which helps provide consistency across reports. For example, one user might define Total Revenue as a simple sum of sales, while another user might know that promotions and discounts need to be subtracted from the sum of sales when calculating Total Revenue. Defining revenue centrally, with filters and logic applied by default, helps reduce the potential for discrepancies and confusion. And if the definition of a metric changes over time, it can be updated once and propagated across all reporting upon refresh.
This integration marks Power BI as the third visualization tool the Looker semantic layer works with, in addition to Looker Studio and Connected Sheets. This connectivity is available as of Looker version 23.10 for Looker-hosted customers.
To get started, you will need to do the following:
Verify usage requirements and recommendations
Enable the connector within Looker
Download and save the connector file
Set up the connector as a custom connector in Power BI
Once the connector is set up, users can connect to a Looker instance from Power BI Desktop, select an Explore, and then explore that data set using Power BI’s drag-and-drop interface.
To learn more about Looker and this new offering, visit cloud.google.com/looker.
Read More for the details.
You may have your summer reading list. Now it’s time for your summer learning list! We’ve compiled the top no-cost courses and labs from Google Cloud Skills Boost to help you on your cloud career path. Build the in-demand cloud skills for top cloud roles, while also boosting your resume with credentials to validate your knowledge. Enrich your current position, or work towards your targeted destination.
Plus, visit The Arcade for additional opportunities to work on your technical abilities directly in Google Cloud around featured monthly topics. There is no cost to play the games! Keep reading to learn more.
These introductory courses take 90 – 120 minutes to complete. They consist of a mix of videos, documents and quizzes, and you’ll earn a shareable badge upon completion. Enhance your resume and LinkedIn profile by sharing your badges. To share, ensure that you’ve set your Google Cloud Skills Boost profile to ‘public’ in your settings and follow the steps in the video below.
#1 – Digital Transformation with Google Cloud (Introductory, 90 minutes) – If you’re new to cloud or are in a cloud-adjacent role, like sales, marketing or HR, be sure to check out this meaningful and impactful course. You’ll learn why cloud technology is transforming businesses, some fundamental cloud concepts and about the major cloud computing models.
#2 – Preparing for your Google Cloud Certification journey (Introductory through intermediate, 90 – 120 minutes) – Google Cloud certification has many benefits, including showing potential employers that you are proficient in a given technology; plus, certified practitioners report that they are more involved in decision making and earn higher pay. When choosing the best certification, start by looking at your current role, and then the role you’d like to have. Align with the certification that fits most closely. Check out these no-cost courses to help you prepare. And, evaluate which Google Cloud certification is right for you: Cloud Digital Leader, Associate Cloud Engineer, or one of nine Google Cloud Professional Certifications.
#3 – Innovating with Data and Google Cloud (Introductory, 120 minutes) – Learn about data and machine learning in the cloud. You’ll also be introduced to structured and unstructured data, databases, data warehouses and data lakes.
#4 – Infrastructure and Application Modernization with Google Cloud (Introductory, 120 minutes) – Learn about modernizing legacy and traditional IT infrastructure. This course covers compute options and the benefits of each, APIs, and Google Cloud solutions that can help businesses better manage their systems.
#5 – Understanding Google Cloud Security and Operations (Introductory, 90 minutes) – Learn about cost management, security, and operations in the cloud. Explore the choice between owned infrastructure and cloud services, responsibility of data security, and the best way to manage IT resources.
Want to get hands-on practice in Google Cloud? Whether you're new to cloud or an experienced professional who wants to expand your skill set, labs are a great way to get directly into Google Cloud technology and build your technical skills.
#1 – A Tour of Google Cloud Hands-on Labs (Introductory, 45 minutes) – Identify key features of Google Cloud and learn about the details of the lab environment.
#2 – A Tour of Google Cloud Sustainability (Introductory, 60 minutes) – Find out why Google Cloud is the cleanest cloud in the industry by exploring and utilizing sustainability tools.
#3 – Google Cloud Pub/Sub: Qwik Start – Console (Introductory, 30 minutes) – Learn about this messaging service for exchanging event data among applications and services.
#4 – BigQuery: Qwik Start – Console – (Introductory, 30 minutes) – Query public tables and load sample data into BigQuery.
#5 – Predict Visitor Purchases with a Classification Model in BigQuery ML (Intermediate, 75 minutes) – Use data to run some typical queries that businesses would want to know about their customers’ purchasing habits.
Come play games with us in The Arcade for another chance to build your cloud skills — no quarters required! Every month we feature two games that include labs from Google Cloud Skills Boost. These give you experience that you can use immediately in a real-world cloud environment. You’ll earn digital badges, and accumulate points that you can use twice per year to claim Google Cloud swag. And there is never a cost to play, so keep coming back to learn more!
The items above are just a sampling of the over 700 on-demand training opportunities available in the full catalog on Google Cloud Skills Boost. While everything we've shared above has no cost to complete, some trainings do require credits. For $299 a year, a Google Cloud Innovators Plus subscription gets you over $1,500 in benefits, including access to the entire Google Cloud Skills Boost catalog. It also includes up to $1,000 in Google Cloud credits, a certification voucher, special access to Google Cloud experts and execs, live learning events, and more.* Check it out to invest in your learning and accelerate your cloud career.
* subject to eligibility limitations
Read More for the details.
Risk managers know there is one assessment type that’s foundational for every risk management program: the vendor risk assessment. Understanding the risk posture of your vendors and third parties, including your cloud providers, is an important part of an effective risk management program. While collecting and analyzing information can often be time-consuming for risk managers, Google Cloud collaborates with third-party risk management (TPRM) providers to make the process easier.
These TPRM organizations provide independent due diligence services and platforms to help automate vendor risk management, based on their inspection of security, privacy, business continuity, and operational resiliency controls, aligned with industry standards and regulatory compliance. The ultimate goal is to help our customers scale and accelerate their assessments of Google Cloud.
We enable trusted TPRM providers, like CyberGRX, to examine Google Cloud's controls (such as privacy, operational, and management controls) and operations. Based on their observations, CyberGRX provides a validated cyber risk assessment of Google Cloud's security posture. Like assessments performed by individual customers, the CyberGRX assessment of Google Cloud details our adherence to industry standards and the security protocols built into our infrastructure.
Using a standardized approach like this, CyberGRX can quickly provide access to a security assessment of Google Cloud. CyberGRX's validation process focuses on measuring the accuracy of a third party's assessment answers, in this case Google Cloud's. CyberGRX analysts and partners evaluate evidence provided by Google Cloud to confirm we have implemented certain critical controls as indicated by their assessment. The assessment of Google Cloud is available to organizations via the CyberGRX website.
CyberGRX's assessment covers more than 200 controls, and integrates Google Cloud's responses with analytics, threat intelligence, and risk models. Additionally, CyberGRX's Framework Mapper provides further functionality by mapping the cyber risk assessment of Google Cloud to more than 20 commonly used industry frameworks and standards. This enables our customers to view the cyber risk assessment of Google Cloud against their specific local compliance regime requirements, including the MITRE ATT&CK framework.
CyberGRX's Framework Mapper provides broad coverage of standards and requirements.
The CyberGRX mapping technology enables customers to see a mapping that is based on their specific needs, aggregated into a single assessment. This saves customers time and effort by eliminating the need for customers to create and repeatedly perform customized assessments of Google Cloud. Customers can now map the cyber risk assessment of Google Cloud to the frameworks they’re accustomed to using.
MITRE ATT&CK is a strongly supported knowledge base that helps model adversarial behavior, tactics, and techniques; it currently includes 13 tactics and 192 techniques.
In June 2022, Google Cloud announced our support and investment in a research partnership with MITRE Engenuity Center for Threat-Informed Defense, which included facilitating the mapping of the MITRE ATT&CK framework to Google Cloud security capabilities.
CyberGRX also recognizes the value of the MITRE ATT&CK framework and maps their foundational assessment to the MITRE ATT&CK framework. This allows organizations to review their security controls and gain visibility into gaps in their defenses. Security leaders can rapidly and easily identify critical problems for remediation.
There are multiple benefits to using the MITRE ATT&CK framework when accessing Google Cloud’s risk assessment through CyberGRX, including:
Uncovering previously unreported gaps by leveraging MITRE techniques to create kill chains or use cases.
Integrating results into internal risk and threat management programs that already align with MITRE ATT&CK.
Increasing the credibility and defensibility of CyberGRX risk findings to support third-party decisions and relationships, due to the connection to MITRE-based analytics.
CyberGRX’s independent security assessment of Google Cloud is available to Google Cloud customers, and is an easy way for organizations to scale and accelerate their cloud assessments. CyberGRX provides a comprehensive and objective view of Google Cloud’s security posture based on a number of local compliance regime requirements and the MITRE ATT&CK framework. CyberGRX’s centralized assessment supports our customers’ annual vendor risk management processes and reduces the review time.
Read More for the details.