Azure – Generally available: Resource instance rules for access to Azure Storage
Resource instance rules enable secure connectivity to Azure Storage by only allowing access from specific resources of select Azure services.
Read More for the details.
We are excited to announce that two new widgets (Latest announcements and Recent AWS blog posts) are now available on AWS Console Home. Using these widgets, you can more easily learn about new AWS capabilities and get the latest news about AWS launches, events, and more. The AWS blog posts and launch announcements shown are related to the services used in your applications.
Read More for the details.
We are excited to announce that you can now filter your data by its dimensions while using Amazon Lookout for Metrics. Amazon Lookout for Metrics uses machine learning (ML) to automatically monitor the metrics that are most important to businesses with greater speed and accuracy than traditional methods used for anomaly detection. The service also makes it easier to diagnose the root cause of anomalies like unexpected dips in revenue, high rates of abandoned shopping carts, spikes in payment transaction failures, increases in new user sign-ups, and many more.
Read More for the details.
Authenticate your Stream Analytics jobs to connect to Service Bus using system-assigned managed identities.
Read More for the details.
The native output connector for Azure Database for PostgreSQL lets you easily build real-time applications with the database of your choice.
Read More for the details.
You can now connect your Stream Analytics jobs running on a dedicated cluster to your Azure Synapse dedicated SQL pool using managed private endpoints.
Read More for the details.
Your Stream Analytics jobs now get up to a 45% performance boost in CPU utilization by default.
Read More for the details.
Amazon Connect Voice ID now detects fraud risk from voice-based deception techniques such as voice manipulation during customer calls, helping you make your voice interactions more secure. For example, Voice ID can detect, in real-time, if an imposter is using a speech synthesizer to spoof a caller’s voice and bluff the agent or Interactive Voice Response (IVR) system. When such fraud is detected, Voice ID flags these calls as high risk in the Amazon Connect agent application, enabling you to take additional security measures or precautions. This feature works out-of-the-box once Voice ID fraud detection is enabled for a contact, and no additional configuration is required.
Read More for the details.
AWS App Runner now supports Amazon Route 53 alias records for creating a root domain name. App Runner makes it easy for developers to quickly deploy containerized web applications and APIs to the cloud, at scale, and without having to manage infrastructure. When you create an App Runner service, by default, App Runner allocates a domain name to your service. If you have your own domain name, you can associate it to your App Runner service as a custom domain name. Now, you can use Amazon Route 53 alias records to create a root domain or subdomain for your App Runner service. For example, with alias records your App Runner service can now listen directly on example.com, which was not possible with CNAME records alone, since those require you to prepend a hostname such as acme.example.com.
Read More for the details.
An additional 5 AWS Controllers for Kubernetes (ACK) service controllers have graduated to generally available status. Customers can now provision and manage AWS resources using ACK controllers for Amazon Relational Database Service (RDS), AWS Lambda, AWS Step Functions, Amazon Managed Service for Prometheus (AMP), and AWS Key Management Service (KMS).
Read More for the details.
BigQuery is a leading data warehouse solution in the market today, and is valued by customers who need to gather insights and advanced analytics on their data. Many common BigQuery use cases involve the storage and processing of Personally Identifiable Information (PII)—data that needs to be protected within Google Cloud from unauthorized and malicious access.
Too often, the process of finding and identifying PII in BigQuery data relies on manual PII discovery and duplication of that data. One common way to do this is by taking an extract of columns used for PII and copying them into a separate table with restricted access. However, creating unnecessary copies of this data and processing it manually to identify PII increases the risks of failure and subsequent security events.
In addition, the security of PII data is often mandated by multiple regulations and failure to apply appropriate safeguards may result in heavy penalties. To address this issue, customers need solutions that 1) identify PII in BigQuery and 2) automatically implement access control on that data to prevent unauthorized access and misuse, all without having to duplicate it.
This blog discusses a solution developed by Google Professional Services that leverages Google Cloud DLP to inspect and classify sensitive data, and shows how those insights can be used to automatically tag and protect data in BigQuery tables.
Automatic DLP can help to identify sensitive data, such as PII, in BigQuery. Organizations can leverage Automatic DLP to automatically search across their entire BigQuery data warehouse for tables that contain sensitive data fields and report detailed findings in the console (see Figure 1 below), in Data Studio, and in a structured format (such as a BigQuery results table). Newly created and updated tables can be discovered, scanned, and classified automatically in the background without a user needing to invoke or schedule it. This way you have an ongoing view into your sensitive data.
In this blog, we show how a new open source solution called BigQuery Auto Tagging Solution solves our second goal—automating access control on data. This solution sits as a layer on top of Automatic DLP and automatically enforces column-level access controls to restrict access to specific sensitive data types based on user-defined data classification taxonomies (such as high confidentiality or low confidentiality) and domains (such as Marketing, Finance, or ERP System). This solution minimizes the risk of unrestricted access to PII and ensures that there is only one copy of data maintained with appropriate access control applied down to the column level.
The code for this solution is available on GitHub at GoogleCloudPlatform/bq-pii-classifier. Please note that while Google does not maintain this code, you can reach out to your Sales Representative to get in contact with our Professional Services team for guidance on how to implement it.
BigQuery and Data Catalog Policy Tags (now Dataplex) have some limitations that you should be aware of before implementing this solution to ensure that it will work for your organization:
Taxonomies and Policy Tags are not shared across regions: If you have data in multiple regions, you will need to create or replicate your taxonomy in each region where you want to apply policy tags.
Maximum of 40 taxonomies per project: If you require different taxonomies for different business domains, or replicate taxonomies to support multiple Cloud regions, those will count against this quota.
Maximum of 100 policy tags per taxonomy: Cloud DLP supports up to 150 infoTypes for classification; however, a single policy taxonomy can only hold up to 100 policy tags, including any nested categories. If you need to support more than 100 data types, you may need to split them across more than one taxonomy.
The solution is composed mainly of the following components: the Dispatcher Requests topic, the Dispatcher service, the BigQuery Policy Tagger Requests topic, the BigQuery Policy Tagger service, and logging components.
The Dispatcher service is a Cloud Run service that expects a BigQuery scope expressed as inclusion and exclusion lists of projects, datasets, and tables. The Dispatcher queries Automatic DLP Data Profiles to check whether the in-scope tables have data profiles generated. For those tables, it publishes one request per table to the “BigQuery Policy Tagger Requests” Pub/Sub topic. This topic enables rate limiting of BigQuery column tagging operations and applies auto-retries with backoff.
The “BigQuery Policy Tagger” service is also a Cloud Run service; it receives the DLP scan results for a BigQuery table. This service determines the final infoType of each column and applies the appropriate policy tags as defined in the infoType-to-policy-tag mapping. Only one infoType is selected per column, and the service assigns the associated policy tag.
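To make the tagging step concrete, here is a minimal sketch (not the solution’s actual code) of how a policy tag can be attached to a BigQuery column with the Python client library; the project, table, column, and policy tag resource names below are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholders: the table to tag and the policy tag chosen for the column's
# detected infoType (e.g. a "High confidentiality" tag in your taxonomy).
TABLE_ID = "my-project.my_dataset.customers"
POLICY_TAG = "projects/my-project/locations/eu/taxonomies/123/policyTags/456"

table = client.get_table(TABLE_ID)

new_schema = []
for field in table.schema:
    if field.name == "email":  # e.g. a column classified as EMAIL_ADDRESS
        field = bigquery.SchemaField(
            name=field.name,
            field_type=field.field_type,
            mode=field.mode,
            description=field.description,
            policy_tags=bigquery.PolicyTagList(names=[POLICY_TAG]),
        )
    new_schema.append(field)

table.schema = new_schema
client.update_table(table, ["schema"])  # column-level access control now applies
```

Access to the tagged column is then governed by the fine-grained reader grants on the policy tag rather than by restricted copies of the data.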
Lastly, all Cloud Run services maintain structured logs that are exported by a log sink to BigQuery. There are multiple BigQuery views that help with monitoring and debugging Cloud Run call chains and tagging actions on columns.
After deploying the solution, it can be used in two different ways:
In the first mode, Automatic DLP is configured to send a Pub/Sub notification on each inspection job completion. The notification includes the resource name and triggers the Tagger service directly.
In the second mode, the Dispatcher service is invoked on a schedule with a payload representing a BigQuery scope; it lists the tables inspected by Automatic DLP and creates a tagging request per table. You could use Cloud Scheduler (or any orchestration tool) to invoke the Dispatcher service. If the solution is deployed within a VPC-SC perimeter, use a scheduler that supports VPC-SC (such as Cloud Composer or a custom app).
In addition, more than one Cloud Scheduler job or trigger could be defined to group projects/datasets/tables that share the same tagging schedule (such as daily or monthly).
To learn more about Automatic DLP, see our webpage. To learn more about the BQ classifier, see the open source project on GitHub: GoogleCloudPlatform/bq-pii-classifier and get started today!
Read More for the details.
Amazon DynamoDB transactions enable coordinated, all-or-nothing changes to multiple items both within and across tables. The maximum number of actions in a single transaction has now increased from 25 to 100.
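As a quick illustration of the higher limit, the boto3 sketch below writes 100 items in one transaction; the table name and key schema are assumptions, not part of the announcement.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Assumption: a table named "orders" with a string partition key "pk".
actions = [
    {
        "Put": {
            "TableName": "orders",
            "Item": {"pk": {"S": f"order#{i}"}, "status": {"S": "NEW"}},
        }
    }
    for i in range(100)  # up to 100 actions per transaction (previously 25)
]

dynamodb.transact_write_items(TransactItems=actions)
```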
Read More for the details.
New Azure Data Explorer offering: the Kusto Emulator is a Docker container encapsulating the Kusto Query Engine.
Read More for the details.
New features are now available in the Stream Analytics no-code editor public preview, including Azure SQL Database as a reference data input and an output sink, diagnostic logs for troubleshooting, and a newly designed ribbon.
Read More for the details.
Dataflow is the industry-leading unified platform for batch and stream processing. It is a fully managed service with flexible development options (from Flex Templates and Notebooks to Apache Beam SDKs for Java, Python, and Go) and a rich set of built-in management tools. It integrates seamlessly with Google Cloud products such as Pub/Sub, BigQuery, Vertex AI, Cloud Storage, Spanner, and Bigtable, as well as third-party services and products such as Kafka and AWS S3, to best meet your data movement use cases.
While our customers value these capabilities, they continue to push us to innovate and provide more value as the best batch and streaming data processing service to meet their ever-changing business needs.
Observability is a key area where the Dataflow team continues to invest more based on customer feedback. Adequate visibility into the state and performance of the Dataflow jobs is essential for business critical production pipelines.
In this post, we will review Dataflow’s key observability capabilities:
Job visualizers – job graphs and execution details
New metrics & logs
New troubleshooting tools – error reporting, profiling, insights
New Datadog dashboards & monitors
There is no need to configure or manually set up anything; Dataflow offers observability out of the box within the Google Cloud Console, from the time you deploy your job. Observability capabilities are seamlessly integrated with Google Cloud Monitoring and Logging along with other GCP products. This integration gives you a one-stop shop for observability across multiple GCP products, which you can use to meet your technical challenges and business goals.
Questions: What does my pipeline look like? What’s happening in each step? Where’s the time spent?
Solution: Dataflow’s Job graph and Execution details tabs answer these questions to help you understand the performance of various stages and steps within the job
Job graph illustrates the steps involved in the execution of your job, in the default Graph view. The graph shows how Dataflow has optimized your pipeline’s code for execution, after fusing (optimizing) steps into stages. The Table view provides more detail about each step: the associated fused stages, the time spent in each step, and their statuses as the pipeline executes. Each step in the graph displays more information, such as the input and output collections and output data freshness; these help you analyze the amount of work done at this step (elements processed) and its throughput.
Execution Details has all the information to help you understand and debug the progress of each stage within your job. In the case of streaming jobs, you can view the data freshness of each stage. The Data freshness by stages chart includes anomaly detection: it highlights “potential slowness” and “potential stuckness” to help you narrow down your investigation to a particular stage. Learn more about using the Execution details tab for batch and streaming here.
Questions: What’s the state and performance of my jobs? Are they healthy? Are there any errors?
Solution: Dataflow offers several metrics to help you monitor your jobs.
A full list of Dataflow job metrics can be found in our metrics reference documentation. In addition to the Dataflow service metrics, you can view worker metrics, such as CPU utilization and memory usage. Lastly, you can generate Apache Beam custom metrics from your code.
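For example, a custom counter and distribution can be emitted from a DoFn with the Beam Python SDK; this is a minimal sketch and the metric names are arbitrary. The resulting values surface alongside the built-in job metrics.

```python
import apache_beam as beam
from apache_beam.metrics import Metrics


class ParseEvents(beam.DoFn):
    """Counts parsed elements and tracks payload sizes as custom metrics."""

    def __init__(self):
        self.parsed = Metrics.counter("pipeline", "events_parsed")
        self.payload_bytes = Metrics.distribution("pipeline", "payload_bytes")

    def process(self, element):
        self.parsed.inc()
        self.payload_bytes.update(len(element))
        yield element
```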
Job metrics is the one-stop shop to access the most important metrics for reviewing the performance of a job or troubleshooting a job. Alternatively, you can access this data from the Metrics Explorer to build your own Cloud Monitoring dashboards and alerts.
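If you prefer to pull the same data programmatically, the sketch below reads one Dataflow metric through the Cloud Monitoring API; the project ID is a placeholder, and the metric type shown (job/element_count) is one example from the metrics reference, so substitute the metric you care about.

```python
import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"  # placeholder

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)

series = client.list_time_series(
    request={
        "name": project_name,
        "filter": 'metric.type = "dataflow.googleapis.com/job/element_count"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for ts in series:
    print(dict(ts.resource.labels), len(ts.points), "points in the last hour")
```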
Job and worker Logs are one of the first things that you can look at when you deploy a pipeline. You can access both these log types in the Logs panel on the Job details page.
Job logs include information about startup tasks, fusion operations, autoscaling events, worker allocation, and more. Worker logs include information about work processed by each worker within each step in your pipeline.
You can configure and modify the logging level and route the logs using the guidance provided in our pipeline log documentation.
Logs are seamlessly integrated into Cloud Logging. You can write Cloud Logging queries, create log-based metrics, and create alerts on these metrics.
Questions: Is my pipeline slowing down or getting stuck? How is my code impacting the job’s performance? How are my sources and sinks performing with respect to my job?
Solution: We have introduced several new metrics for Streaming Engine jobs that help answer these questions, such as backlog seconds and data freshness by stage. All of these are now instantly accessible from the Job metrics tab.
The engineering teams at the Renault Group have been using Dataflow for their streaming pipelines as a core part of their digital transformation journey.
“Deeper observability of our data pipelines is critical to track our application SLOs,” said Elvio Borrelli, Tech Lead – Big Data at the Renault Digital Transformation & Data team. “The new metrics, such as backlog seconds and data freshness by stage, now provide much better visibility about our end-to-end pipeline latencies and areas of bottlenecks. We can now focus more on tuning our pipeline code and data sources for the necessary throughput and lower latency”.
To learn more about using these metrics in the Cloud console, please see the Dataflow monitoring interface documentation.
To learn how to use these metrics to troubleshoot common symptoms within your jobs, watch this webinar on Dataflow Observability: Dataflow Observability, Monitoring, and Troubleshooting
Problem: There are a couple of errors in my Dataflow job. Is it my code, data, or something else? How frequently are these happening?
Solution: Dataflow offers native integration with Google Cloud Error Reporting to help you identify and manage errors that impact your job’s performance.
In the Logs panel on the Job details page, the Diagnostics tab tracks the most frequently occurring errors. This is integrated with Google Cloud Error Reporting, enabling you to manage errors by creating bugs or work items or by setting up notifications. For certain types of Dataflow errors, Error Reporting provides a link to troubleshooting guides and solutions.
Problem: What part of my code is taking more time to process the data? What operations are consuming more CPU cycles or memory?
Solution: Dataflow offers native integration with Google Cloud Profiler, which lets you profile your jobs to understand the performance bottlenecks using CPU, memory, and I/O operation profiling support.
Is my pipeline’s latency high? Is it CPU intensive, or is it spending time waiting for I/O operations? Or is it memory intensive? If so, which operations are driving this up? The flame graph helps you find answers to these questions. You can enable profiling for your Dataflow jobs by specifying a flag during job creation or while updating your job. To learn more, see the Monitor pipeline performance documentation.
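As a sketch of how that flag can be passed with the Beam Python SDK (assuming the enable_google_cloud_profiler service option documented for Dataflow; project, region, and bucket are placeholders):

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",                 # placeholder
    region="us-central1",                 # placeholder
    temp_location="gs://my-bucket/tmp",   # placeholder
    # Service option that turns on Cloud Profiler for the job's workers.
    dataflow_service_options=["enable_google_cloud_profiler"],
)

with beam.Pipeline(options=options) as p:
    _ = p | beam.Create(range(10)) | beam.Map(lambda x: x * x)
```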
Problem: What can Dataflow tell me about improving my job performance or reducing its costs?
Solution: You can review Dataflow Insights to improve performance or to reduce costs. Insights are enabled by default on your batch and streaming jobs; they are generated by auto-analyzing your jobs’ executions.
Dataflow Insights is powered by Google Active Assist’s Recommender service. It is automatically enabled for all jobs and is available free of charge. Insights include recommendations such as enabling autoscaling, increasing maximum workers, and increasing parallelism. Learn more in the Dataflow Insights documentation.
Problem: I would like to monitor Dataflow in my existing monitoring tools, such as Datadog.
Solution: Dataflow’s metrics and logs are accessible in observability tools of your choice, via Google Cloud Monitoring and Logging APIs. Customers using Datadog can now leverage the out of the box Dataflow dashboards and recommended monitors to monitor their Dataflow jobs alongside other applications within the Datadog console. Learn more about Dataflow Dashboards and Recommended Monitors in their blog post on how to monitor your Dataflow pipelines with Datadog.
ZoomInfo, a global leader in modern go-to-market software, data, and intelligence, is partnering with Google Cloud to enable customers to easily integrate their business-to-business data into Google BigQuery. Dataflow is a critical piece of this data movement journey.
“We manage several hundreds of concurrent Dataflow jobs,” said Hasmik Sarkezians, ZoomInfo Engineering Fellow. “Datadog’s dashboards and monitors allow us to easily monitor all the jobs at scale in one place. And when we need to dig deeper into a particular job, we leverage the detailed troubleshooting tools in Dataflow such as Execution details, worker logs and job metrics to investigate and resolve the issues.”
Dataflow is leading the batch and streaming data processing industry with best-in-class observability experiences.
But we are just getting started. Over the next several months, we plan to introduce more capabilities such as:
Memory observability to detect and prevent potential out of memory errors.
Metrics for sources & sinks, end-to-end latency, bytes being processed by a PTransform, and more.
More insights – quota, memory usage, worker configurations & sizes.
Pipeline validation before job submission.
Debugging user-code and data issues using data sampling.
Autoscaling observability improvements.
Project-level monitoring, sample dashboards, and recommended alerts.
Got feedback or ideas? Shoot them over, or take this short survey.
To get started with Dataflow see the Cloud Dataflow quickstarts.
To learn more about Dataflow observability, review these articles:
Using the Dataflow monitoring interface
Building production-ready data pipelines using Dataflow: Monitoring data pipelines
Beam College: Dataflow Monitoring
Beam College: Dataflow Logging
Beam College: Troubleshooting and debugging Apache Beam and GCP Dataflow
Read More for the details.
Integrating your ArgoCD deployment with Connect Gateway and Workload Identity provides a seamless path to deploy to Kubernetes on many platforms. ArgoCD can easily be configured to centrally manage various cluster platforms including GKE clusters, Anthos clusters, and many more. This promotes consistency across your fleet, saves time in onboarding a new cluster, and simplifies your distributed RBAC model for management connectivity. Skip to steps below to configure ArgoCD for your use case.
Cloud customers who choose ArgoCD as their continuous delivery tool may opt into a centralized architecture, whereby ArgoCD is hosted in a central K8s cluster and responsible for managing various distributed application clusters. These application clusters can live on many cloud platforms which are accessed via Connect Gateway to simplify ArgoCD authentication, authorization, and interfacing with the K8s API server. See the below diagram which outlines this model:
The default authentication behavior when adding an application cluster to ArgoCD is to use the operator’s kubeconfig for the initial control plane connection, create a local KSA in the application cluster (`argo-manager`), and escrow the KSA’s bearer token in a K8s secret. This presents two unique challenges for the pipeline administrator:
Principal management: GCP doesn’t provide a federated Kubernetes Service Account that is scoped across multiple clusters, so we have to create one KSA for each cluster managed by ArgoCD. Each KSA is local to its control plane and requires an individual lifecycle management process, e.g. credential rotation, redundant RBAC, etc.
Authorization: The privileged KSA secrets are stored in the ArgoCD central cluster. This introduces operator overhead and security risks that must be guarded against.
These two challenges are directly addressed/mitigated by using Connect Gateway and Workload Identity. The approach:
Let’s not create a cluster admin KSA at all – instead, we’ll use a GSA and map it to the ArgoCD server and application controller workloads in the `argocd` namespace (via Workload Identity). This GSA will be authorized as a cluster admin on each application cluster in an IAM policy or K8s RBAC policy.
The K8s secret will be replaced with a more secure mechanism that allows ArgoCD to obtain a Google OAuth token and authenticate to Google Cloud APIs. This eliminates the need for managing KSA secrets and automates rotation of the obtained OAuth token.
The ArgoCD central cluster and the managed application clusters have no mutual dependencies; that is to say, these clusters don’t need to be in the same GCP project, nor do they require intimate network relationships, e.g. VPC Peering, SSH tunnels, HTTP proxies, or bastion hosts.
Target application clusters can reside on different cloud platforms. The same authentication/authorization and connection models are extended to these clusters as well.
First, some quick terminology. I’m going to call one of the clusters `central-cluster`, and the other `application-cluster`. The former will host the central ArgoCD deployment, the latter will be where we deploy applications/configuration.
At least one bootstrapped GCP project containing a VPC network, subnets with sufficient secondary ranges to support a VPC native cluster, and required APIs activated to host a GKE and/or Anthos cluster.
Two clusters deployed with Workload Identity enabled and reasonably sane defaults. I prefer Terraform based deployments and chose to use the safer-cluster module.
If using Anthos clusters, it is assumed that they are fully deployed and registered with a Fleet (and thus reachable via Connect Gateway)
gcloud SDK must be installed, and you must be authenticated via gcloud auth login
Optional: set the PROJECT_ID environment variable in order to quickly run the gcloud commands below without substitution.
In this section, we set up a GSA which is the identity used by ArgoCD when it interacts with Google Cloud.
1. Create a Google service account and set the required permissions
2. Create IAM policy allowing the ArgoCD namespace/KSA to impersonate the previously created GSA
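As a hedged sketch of this binding in Python (the GSA name argocd-fleet-admin and the project are placeholders; this is equivalent to adding a roles/iam.workloadIdentityUser binding on the GSA for the argocd-server and argocd-application-controller KSAs):

```python
from googleapiclient.discovery import build

# Placeholders: your project, the ArgoCD GSA, and the ArgoCD KSAs.
PROJECT_ID = "my-fleet-project"
GSA_EMAIL = f"argocd-fleet-admin@{PROJECT_ID}.iam.gserviceaccount.com"
KSAS = ["argocd-server", "argocd-application-controller"]

iam = build("iam", "v1")
resource = f"projects/{PROJECT_ID}/serviceAccounts/{GSA_EMAIL}"

# Fetch the current IAM policy on the GSA and append the Workload Identity binding.
policy = iam.projects().serviceAccounts().getIamPolicy(resource=resource).execute()
policy.setdefault("bindings", []).append(
    {
        "role": "roles/iam.workloadIdentityUser",
        "members": [
            f"serviceAccount:{PROJECT_ID}.svc.id.goog[argocd/{ksa}]" for ksa in KSAS
        ],
    }
)
iam.projects().serviceAccounts().setIamPolicy(
    resource=resource, body={"policy": policy}
).execute()
```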
In this section, we enroll a GKE cluster into a Fleet so it can be managed by ArgoCD via Connect Gateway.
1. Activate the required APIs in the desired GCP project
2. Grant ArgoCD permissions to access and manage the application-cluster
3. List the application cluster(s) URI(s) to be used for registration
4. Register the application-cluster(s) with the Fleet Project. We use gcloud SDK commands as an example here. If you prefer other tools like Terraform, please refer to this document.
5. View the application-cluster(s) Fleet membership
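If you want to inspect memberships programmatically rather than in the console, a sketch like the following can list them, assuming the google-cloud-gke-hub client library exposes the GkeHubClient surface; the fleet project is a placeholder.

```python
from google.cloud import gkehub_v1  # assumption: google-cloud-gke-hub is installed

client = gkehub_v1.GkeHubClient()
parent = "projects/my-fleet-project/locations/global"  # placeholder

for membership in client.list_memberships(parent=parent):
    # Each membership corresponds to a registered application cluster.
    print(membership.name)
```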
In this section, we discuss the steps to enroll a non-GKE cluster (e.g. Anthos-on-VMware, Anthos-on-BareMetal, etc.) into ArgoCD.
Similar to the GKE scenario, the first step is to register the cluster to a Fleet project. Most recent Anthos versions already have the clusters registered when the cluster is created. If your cluster is not registered yet, refer to this guide to register it.
1. View the application-cluster(s) Fleet membership
2. Grant ArgoCD permissions to access and manage the application-cluster. The ArgoCD GSA from step 1 will need to be provisioned in `application-cluster` RBAC in order for ArgoCD to connect to and manage this cluster. Replace <CONTEXT> and <KUBECONFIG> with appropriate values.
ArgoCD version 2.4.0 or higher is required. It is recommended that you use Kustomize to deploy ArgoCD to the central-cluster, as this allows for the most succinct configuration patching required to enable Workload Identity and Connect Gateway.
1. Create a namespace to deploy ArgoCD in
2. GKE cluster: If the central-cluster is hosted on GKE, annotate the argocd-server and argocd-application-controller KSAs with the GSA created in step 1 (for example, via a Kustomize overlay that adds the iam.gke.io/gcp-service-account annotation to those ServiceAccounts).
3. Anthos Cluster: If the central-cluster is hosted on an Anthos cluster, we need to configure Workload Identity as referenced here. The prerequisite is the central-cluster must be registered to a Fleet project. Based on this condition, the ArgoCD workloads can be configured to impersonate the desired GSA. An example Kustomize overlay manifest to achieve this can be found on this repo.
4. Optional: Expose the ArgoCD server externally (or internally) with a Service of type LoadBalancer or with an Ingress.
5. Deploy ArgoCD to the central-cluster argocd namespace created in step 1 using your preferred tooling, for example by building and applying a Kustomize overlay (where overlays/demo is the path containing the overlay manifests).
You should now have a functioning ArgoCD deployment with its UI optionally exposed externally and/or internally. The following steps will add the application cluster as an ArgoCD “external cluster”, and confirm that the integration is working as expected.
1. Create a cluster type secret representing your application cluster and apply it to the central-cluster argocd namespace (e.g. kubectl apply -f cluster-secret.yaml -n argocd)
Note: This step is best suited for an IaC pipeline that is responsible for the application cluster lifecycle. This secret should be dynamically generated and applied to the central-cluster once an application cluster has been successfully created and/or modified. Likewise, the secret should be deleted from the central-cluster once an application cluster is deleted.
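For illustration only (the exact manifest is in the original post and the ArgoCD documentation), the sketch below renders such a Secret pointing ArgoCD at a fleet membership through Connect Gateway and authenticating via argocd-k8s-auth; the project number, membership name, and URL format are assumptions to verify against the Connect Gateway docs.

```python
import json

import yaml  # pip install pyyaml

PROJECT_NUMBER = "123456789012"     # placeholder
MEMBERSHIP = "application-cluster"  # placeholder fleet membership name

# Assumed Connect Gateway endpoint format for a GKE fleet membership.
gateway_url = (
    "https://connectgateway.googleapis.com/v1/"
    f"projects/{PROJECT_NUMBER}/locations/global/gkeMemberships/{MEMBERSHIP}"
)

secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {
        "name": f"{MEMBERSHIP}-cluster",
        "labels": {"argocd.argoproj.io/secret-type": "cluster"},
    },
    "stringData": {
        "name": MEMBERSHIP,
        "server": gateway_url,
        # argocd-k8s-auth exchanges the workload's Google credentials for a token.
        "config": json.dumps(
            {
                "execProviderConfig": {
                    "command": "argocd-k8s-auth",
                    "args": ["gcp"],
                    "apiVersion": "client.authentication.k8s.io/v1beta1",
                }
            }
        ),
    },
}

with open("cluster-secret.yaml", "w") as f:
    yaml.safe_dump(secret, f)
# Then: kubectl apply -f cluster-secret.yaml -n argocd
```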
2. Log in to the ArgoCD console to confirm that the application cluster has been successfully added. The initial admin password for ArgoCD can be read from the argocd-initial-admin-secret Secret in the argocd namespace, for example as sketched below.
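A minimal sketch with the Kubernetes Python client (assuming your kubeconfig currently points at the central-cluster):

```python
import base64

from kubernetes import client, config

config.load_kube_config()  # context for the central-cluster
core = client.CoreV1Api()

secret = core.read_namespaced_secret("argocd-initial-admin-secret", "argocd")
print(base64.b64decode(secret.data["password"]).decode())
```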
3. Push a test application from ArgoCD and confirm that the application cluster is synced successfully. See repository here for example applications you can deploy for a quick test.
The above steps demonstrate how to enable continuous deployment to distributed Kubernetes platforms using Connect Gateway and Workload Identity. This centralized architecture helps simplify and secure your configuration and application delivery pipelines, and promotes consistency across your fleet.
If you would like to step through a proof of concept that focuses on how to use ArgoCD and Argo Rollouts to automate the state of a Fleet of GKE clusters, please check out this post where you can construct key operator user stories.
Add a new application cluster to the Fleet with zero touch beyond deploying the cluster and giving it a particular label. The new cluster should install a baseline set of configurations for tooling and security along with any applications that align with the clusters label.
Add a new application to the Fleet that inherits baseline multi-tenant configurations for the dev team that delivers the application and binds Kubernetes RBAC to that team’s Identity Group.
Progressively rollout a new version of an application across groups, or waves, of clusters with a manual gate in between each wave.
Read More for the details.
One thing I love about cloud is that it’s possible to succeed as a cloud engineer from all kinds of different starting points. It’s not necessarily easy; our industry remains biased toward hiring people who check a certain set of boxes such as having a university computer science degree. But cloud in particular is new enough, and has such tremendous demand for qualified talent, that determined engineers can and do wind up in amazing cloud careers despite coming from all sorts of non-traditional backgrounds.
But still – it’s scary to look at all the experienced engineers ahead of you and wonder “How will I ever get from where I am to where they are?”
A few months ago, I asked some experts at Google Cloud to help me answer common questions people ask as they consider making the career move to cloud. We recorded our answers in a video series called Cracking the Google Cloud Career that you can watch on the Google Cloud Tech YouTube channel. We tackled questions like…
How do I go from a traditional IT background to a cloud job?
You have a superpower if you want to move from an old-school IT job to the cloud: You already work in tech! That may give you access to colleagues and situations that can level up your cloud skills and network right in your current position.
But even if that’s not happening, you don’t have to go back and start from square one. Your existing career will give you a solid foundation of professional experience that you can layer cloud skills on top of. Check out my video to see what skills I recommend polishing up before you make the jump to cloud interviews:
How do I move from a help desk job to a cloud job?
The help desk is the classic entry-level tech position, but moving up sometimes seems like an insurmountable challenge. Rishab Kumar graduated from a help desk role to a Technical Solutions Specialist position at Google Cloud. In his video, he shares his story and outlines some takeaways to help you plot your own path forward.
Notably, Rishab calls out the importance of building a portfolio of cloud projects: cloud certifications helped him learn, but in the job interview he got more questions about the side projects he had implemented. Watch his full breakdown here:
How do I switch from a non-technical career to the cloud?
There’s no law that says you have to start your tech career in your early twenties and do nothing else for the rest of your career. In fact, many of the strongest technologists I know came from previous backgrounds as disparate as plumbing, professional poker, and pest control. That’s no accident: those fields hone operational and people skills that are just as valuable in cloud as anywhere else. But you’ll still need a growth mindset and lots of learning to land a cloud job without traditional credentials or previous experience in the space.
Google Cloud’s Stephanie Wong came to tech from the pageant world and has some great advice about how to build a professional network that will help you make the switch to a cloud job. In particular, she recommends joining the no-cost Google Cloud Innovators program, which gives you inside access to the latest updates on Google Cloud services alongside a community of fellow technologists from around the globe.
Stephanie also points out that you don’t have to be a software engineer to work in the cloud; there are many other roles like developer relations, sales engineers and solutions architects that stay technical and hands-on without building software every day.
You can check out her full suggestions for transitioning to a tech career in this video:
How do I get a job in the cloud without a computer-related college degree?
No matter your age or technical skill level, it can be frustrating and intimidating to see role after role that requires a bachelor’s degree in a field such as IT or computer science. I’m going to let you in on a little secret: once you get that first job and add some experience to your skills, hardly anybody cares about your educational background anymore. But some recruiters and hiring managers still use degrees as a shortcut when evaluating people for entry-level jobs.
Without a degree, you’ll have to get a bit creative in assembling credentials.
First, consider getting certified. Cloud certifications like the Google Cloud Associate Cloud Engineer can help you bypass degree filters and get you an interview. Not to mention, they’re a great way to get familiar with the workings of your cloud. Google Cloud’s Priyanka Vergadia suggests working toward skill badges on Google Cloud Skills Boost; each skill badge represents a curated grouping of hands-on labs within a particular technology that can help you build momentum and confidence toward certification.
Second, make sure you are bringing hands-on skills to the interview. College students do all sorts of projects to bolster their education. You can do this too – but at a fraction of the cost of a traditional degree. As Priyanka points out in this video, make sure you are up to speed on Linux, networking, and programming essentials before you apply:
No matter your background, I’m confident you can have a fulfilling and rewarding career in cloud as long as you get serious about these two things:
Own your credibility through certification and hands-on practice, and
Build strong connections with other members of the global cloud community.
In the meantime, you can watch the full Cracking the Google Cloud Career playlist on the Google Cloud Tech YouTube channel. And feel free to start your networking journey by reaching out to me anytime on Twitter if you have cloud career questions – I’m happy to help however I can.
Read More for the details.
Amazon Connect now provides a new API to search for queues in your Amazon Connect instance. This new API provides a programmatic and flexible way to search for queues by name, description, or tags. For example, you can now use this API to search for all queues with “priority” in the description. To learn more about this new API, see the API documentation.
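A hedged boto3 sketch of the new API (the instance ID is a placeholder, and the search criteria shape should be checked against the API documentation):

```python
import boto3

connect = boto3.client("connect")

# Placeholder instance ID; find queues whose description contains "priority".
response = connect.search_queues(
    InstanceId="12345678-1234-1234-1234-123456789012",
    SearchCriteria={
        "StringCondition": {
            "FieldName": "description",
            "Value": "priority",
            "ComparisonType": "CONTAINS",
        }
    },
)

for queue in response["Queues"]:
    print(queue["Name"], queue["QueueArn"])
```

The companion routing profile search API announced below follows the same pattern.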
Read More for the details.
Amazon Connect now provides a new API to search for routing profiles in your Amazon Connect instance. This new API provides a programmatic and flexible way to search for routing profiles by name, description, or tags. For example, you can now use this API to search for all routing profiles containing “sales” in the description. To learn more about this new API, see the API documentation.
Read More for the details.
Amazon Managed Service for Prometheus now provides Alert Manager & Ruler logs to help customers troubleshoot their alerting pipeline and configuration in Amazon CloudWatch Logs. Amazon Managed Service for Prometheus is a fully managed Prometheus-compatible monitoring service that makes it easy to monitor and alarm on operational metrics at scale. Prometheus is a popular Cloud Native Computing Foundation open source project for monitoring and alerting that is optimized for container environments. The Alert Manager allows customers to group, route, deduplicate, and silence alarms before routing them to end users via Amazon Simple Notification Service (Amazon SNS). The Ruler allows customers to define recording and alerting rules, which are queries that are evaluated at regular intervals. With Alert Manager and Ruler logs, customers can troubleshoot issues in their alerting pipelines including missing Amazon SNS topic permissions, misconfigured alert manager routes, and rules that fail to execute.
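A hedged boto3 sketch of turning these logs on for a workspace (the workspace ID and log group ARN are placeholders, and the parameter names should be verified against the current API reference):

```python
import boto3

amp = boto3.client("amp")

# Placeholders: your workspace ID and a CloudWatch Logs log group ARN.
amp.create_logging_configuration(
    workspaceId="ws-12345678-abcd-1234-abcd-123456789012",
    logGroupArn="arn:aws:logs:us-east-1:111122223333:log-group:amp-alerts:*",
)
```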
Read More for the details.