Azure – Azure Security Center: Public preview updates for September 2021
Public preview enhancements and updates released for Azure Security Center in September 2021.
Read More for the details.
Azure Form Recognizer now offers a pre-built version of the service in preview for document extraction as well as the following capabilities in preview: signature detection, hotel receipts processing, and deeper extraction of content from US driver’s licenses. In addition, the service is now easier to use with Form Recognizer Studio and new REST APIs.
Read More for the details.
Azure Availability Zones are now generally available in the Korea Central region. These three new zones provide you with options for additional resiliency and tolerance to infrastructure impact.
Read More for the details.
At Etsy.com, our mission is to Keep Commerce Human by providing a global marketplace that connects 5.2 million sellers with more than 90.5 million active buyers looking for unique items with a human touch. For our vibrant global community to continue to grow and thrive, interactions on the platform must be safe, private, and secure.
Like many other online businesses, Etsy saw a sharp increase in traffic over the past year as people turned to e-commerce during the COVID-19 pandemic. With this surge, we wanted to get ahead of any potential challenges that could impact our brand, revenue, and customers.
With increased traffic we observed elevated bot traffic attempting credential stuffing attacks. We anticipated that credential stuffers would try to use lists of compromised passwords from other companies' data breaches and test those credentials on Etsy, since password reuse is common across many websites. We also thought attackers might attempt to abuse any unauthenticated forms, such as password reset forms and mailing list sign-ups.
These are only a few examples of the types of fraud we look for, as these risks are always evolving and increasing in complexity. For that reason, we continually reevaluate our tooling and regularly research third-party products to improve our security posture.
One of the most important products we've added since the pandemic began to protect against fraud is Google Cloud's reCAPTCHA Enterprise, a frictionless bot management solution that works by classifying fraudulent HTTP requests. One reason we chose reCAPTCHA Enterprise is the flexibility it grants us. Instead of dictating what action to take (for example, blocking the request), it provides an assessment that contains information classifying a particular user interaction. This data can be combined with our internal automated controls to make informed decisions in near real time.
While we primarily use risk scores, reCAPTCHA Enterprise also provides reason codes and additional features to securely check if a user has reused their password on another site that has been compromised. Because reCAPTCHA Enterprise protects over 5 million sites, the system is also able to recognize attack patterns that we may not be able to identify on our own. We can leverage all this information without causing friction for our users, as reCAPTCHA Enterprise doesn’t require any user interaction. This is a huge win for our security team because we can add reCAPTCHA Enterprise to any page without any design concerns and with minimal effort.
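To make the assessment flow concrete, here is a minimal sketch using the reCAPTCHA Enterprise Python client (google-cloud-recaptcha-enterprise). The project ID, site key, token, and function name are placeholders; this illustrates the public API rather than Etsy's production code.

```python
# Minimal sketch: create a reCAPTCHA Enterprise assessment and read the score.
from google.cloud import recaptchaenterprise_v1

def assess_request(project_id: str, site_key: str, token: str) -> float:
    client = recaptchaenterprise_v1.RecaptchaEnterpriseServiceClient()

    # The event wraps the token generated by the reCAPTCHA JavaScript on the
    # page, together with the site key it was minted for.
    event = recaptchaenterprise_v1.Event(token=token, site_key=site_key)
    request = recaptchaenterprise_v1.CreateAssessmentRequest(
        parent=f"projects/{project_id}",
        assessment=recaptchaenterprise_v1.Assessment(event=event),
    )
    response = client.create_assessment(request)

    # A token can be malformed, expired, or already used.
    if not response.token_properties.valid:
        raise ValueError(str(response.token_properties.invalid_reason))

    # Reason codes explain the score; both can be logged for later analysis.
    print(list(response.risk_analysis.reasons))

    # Scores range from 0.0 (likely automated) to 1.0 (likely legitimate).
    return response.risk_analysis.score
```

The score and reason codes are what get combined with internal signals and stored for later analysis, as described above.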
In the past, potentially malicious requests were shown a CAPTCHA challenge: reCAPTCHA v1 used a text-based challenge, while reCAPTCHA v2 uses an image-based one. Image recognition has improved rapidly, however, and today's image challenges may be obsolete within a few years. With reCAPTCHA Enterprise, we can protect our web pages while maintaining a frictionless customer experience.
reCAPTCHA Enterprise's flexibility allows us to decide when to block suspicious behavior while keeping the process invisible to our end users. If we need additional confirmation of a user's intent on a web page, we can request email or SMS verification. This adaptability makes us the ultimate decision makers in how we apply reCAPTCHA Enterprise to our pages.
After making the choice to integrate reCAPTCHA Enterprise, we added it to several points of the user workflow, including signing in and opening a shop. We started by logging and storing the assessments in our databases, which then get replicated to Google’s BigQuery for later analysis. All this is done seamlessly without disrupting the user experience.
We immediately saw results. reCAPTCHA Enterprise provides graphs that tell us which parts of the website have the highest risk scores, indicating potential abuse and allowing us to prioritize appropriately. In addition, we combined the stored assessment data with our existing tools to lock out malicious users and thereby prevent any potential harm to our community. These data points are available to our Trust & Safety and Security teams, enabling them to help identify bot activity like credential stuffing.
In the case that an assessment incorrectly classifies a legitimate user as fraudulent, reCAPTCHA Enterprise provides an Annotation API where you can annotate previous assessments with additional information. This will train the underlying engine to better understand our traffic and improve the detection of inauthentic bot activity.
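As a sketch of what that feedback loop might look like with the same Python client, the call below marks a previous assessment as legitimate; the assessment name is a placeholder:

```python
# Minimal sketch: feed ground truth back through the Annotation API.
from google.cloud import recaptchaenterprise_v1

def mark_legitimate(assessment_name: str) -> None:
    # assessment_name looks like "projects/PROJECT_ID/assessments/ASSESSMENT_ID"
    client = recaptchaenterprise_v1.RecaptchaEnterpriseServiceClient()
    request = recaptchaenterprise_v1.AnnotateAssessmentRequest(
        name=assessment_name,
        annotation=recaptchaenterprise_v1.AnnotateAssessmentRequest.Annotation.LEGITIMATE,
    )
    client.annotate_assessment(request)
```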
After adding reCAPTCHA Enterprise to our login flow, we saw dramatic results and it solidified our confidence in the tool. Once we had the basic structure down, we packaged the Etsy-specific code into a reusable library so that we could then quickly add reCAPTCHA Enterprise to other parts of the platform, such as conversations and the forgot password page. This allows us to quickly add protection to web pages and address attacks before they happen.
In addition, we are well adapted to rapidly defend against a bot attack should the need arise. Overall, we achieved many wins by using reCAPTCHA Enterprise as a tool in our bot management strategy and believe it to be a worthwhile investment in keeping the Etsy community safe.
Read More for the details.
The newly released 2021 Accelerate State of DevOps Report found that teams who excel at modern operational practices are 1.4 times more likely to report greater software delivery and operational performance and 1.8 times more likely to report better business outcomes. A foundational element of modern operational practices is having monitoring tooling in place to track, analyze, and alert on important metrics. Today, we’re announcing a new capability that makes it easier than ever to monitor your Google Kubernetes Engine (GKE) deployments: GKE workload metrics.
For applications running on GKE, we’re excited to introduce the preview of GKE workload metrics. This fully managed and highly configurable pipeline collects Prometheus-compatible metrics emitted by workloads running on GKE and sends them to Cloud Monitoring. GKE workload metrics simplifies the collection of metrics exposed by any GKE workload, such as a CronJob or a Deployment, so you don’t need to dedicate any time to the management of your metrics collection pipeline. Simply configure which metrics to collect, and GKE does everything else.
Benefits of GKE workload metrics include:
Easy setup: With a single kubectl apply command to deploy a PodMonitor custom resource, you can start collecting metrics. No manual installation of an agent is required. (See the example manifest after this list.)
Highly configurable: Adjust scrape endpoints, frequency and other parameters.
Fully managed: Google maintains the pipeline, lowering total cost of ownership.
Control costs: Easily manage Cloud Monitoring costs through flexible metric filtering.
Open standard: Configure workload metrics using the PodMonitor custom resource, which is modeled after the Prometheus Operator’s PodMonitor resource.
HPA support: Compatible with the Stackdriver Custom Metrics Adapter to enable horizontal scaling on custom metrics.
Better pricing: More intuitive, predictable, and lower pricing.
Autopilot support: GKE workload metrics is available for both GKE Standard and GKE Autopilot clusters.
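To make the setup concrete, here is a minimal sketch of a PodMonitor manifest. The resource is modeled after the Prometheus Operator's PodMonitor, but the apiVersion, labels, and port name below are illustrative assumptions; check the GKE workload metrics guide for the exact schema.

```yaml
# Illustrative PodMonitor for GKE workload metrics; field names assumed to
# mirror the Prometheus Operator's PodMonitor. Verify against the GKE guide.
apiVersion: monitoring.gke.io/v1alpha1
kind: PodMonitor
metadata:
  name: example-pod-monitor
  namespace: default
spec:
  selector:
    matchLabels:
      app: example-app        # which pods to scrape (hypothetical label)
  podMetricsEndpoints:
  - port: metrics             # named container port serving Prometheus metrics
    path: /metrics
    interval: 30s             # scrape frequency
```

Deploying it is the single command mentioned above: kubectl apply -f pod-monitor.yaml.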
Customers are already seeing the benefits of this simplified model.
“With GKE workload metrics, we no longer need to deploy and manage a separate Prometheus server to scrape our custom metrics – it’s all managed by Google. We can now focus on leveraging the value of our custom metrics without hassle!” – Carlos Alexandre, Cloud Architect, NOS SGPS S.A., a Portuguese telecommunications and media company.
Follow these instructions to enable the GKE workload metrics pipeline in your GKE cluster:
GKE workload metrics is currently available in Preview, so be sure to use the gcloud beta command.
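As we understand the preview, enabling the pipeline is a single cluster update; the flag below reflects our reading of the beta command and should be verified against the current documentation:

```
gcloud beta container clusters update CLUSTER_NAME \
  --zone=ZONE \
  --monitoring=SYSTEM,WORKLOAD
```

Here SYSTEM keeps the default GKE system metrics flowing, while WORKLOAD turns on collection of Prometheus-compatible metrics from your workloads.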
See the GKE workload metrics guide for details about how to configure which metrics are collected as well as a guide for migrating to GKE workload metrics from the Stackdriver Prometheus sidecar.
Ingestion of GKE workload metrics into Cloud Monitoring is not currently charged, but it will be charged starting December 1, 2021. See more about Cloud Monitoring pricing.
Once GKE workload metrics are ingested into Cloud Monitoring, you can start using all of the great features of the service including global scalability, long-term (24 month) storage options, integration with Cloud Logging, custom dashboards, alerting, and SLO monitoring. These same benefits already exist for GKE system metrics, which are non-chargeable and are collected by default from GKE clusters and made available to you in the GKE Dashboard.
If you have any questions or want to provide feedback, please visit the operations suite page on the Google Cloud Community.
Read More for the details.
AWS Glue DataBrew, a visual data preparation tool that makes it easy for data analysts and data scientists to clean and normalize data for analytics and machine learning, is now available in the AWS Africa (Cape Town) Region. See where DataBrew is available by using the AWS Region Table.
Read More for the details.
Azure NetApp Files, one of the fastest-growing bare-metal Azure services, is now available to Azure customers directly from the Azure portal, CLI, API, or SDK, without having to go through a waitlist approval process.
Read More for the details.
AWS Firewall Manager now enables you to configure logging for your AWS Network Firewalls provisioned using a Firewall Manager policy. When you set up a Firewall Manager policy for Network Firewall, you can now enable logging for all the accounts that are in scope of the policy and have the logs centralized under your Firewall Manager administrator account. This makes it easy to enable logging for AWS Network Firewall across multiple accounts and VPCs through a single Firewall Manager policy.
Read More for the details.
The COVID-19 pandemic has impacted all industries over the last year-and-a-half, and research institutions were no exception. In fact, making advancements in medicine and science became an even more urgent priority. Many private sector and government agencies around the globe turned to the cloud to help their remote employee base stay connected and collaborate with cloud tools like chat, video, large file sharing, live document editing, and more. But some scientific research still requires face-to-face collaboration in a lab environment.
This is why we wanted to dig deeper to understand how COVID may have impacted the progress researchers have been making in various critical fields, including medical research, geophysics, climate science, chemistry, computer engineering and more. We commissioned the Harris Poll to explore how the pandemic may have impacted academic researchers around the world. The study surveyed 1,591 respondents across the United States, Canada, Mexico, Argentina, Colombia, Brazil, France, Spain, Germany, United Kingdom, Singapore and Australia. All were employed in either a private or government lab, medical center, or PhD-level program.
Here are the four main takeaways:
Researchers across all 12 countries and age groups are struggling to manage their workloads without face-to-face interaction. The COVID-19 pandemic has taken a toll on productivity, especially in terms of innovation and collaboration. Globally, 67% of researchers reported making less progress in 2020 due to the pandemic. Eighty-five percent of respondents said they struggled to innovate effectively, and 77% said they struggled to test, compute, and collaborate effectively while working remotely. This was true across all types of institutions surveyed.
The pandemic accelerated the demand for collaboration and communication tools, both cloud-based and virtual, for the majority of researchers. An overwhelming majority of researchers (98%) said the pandemic accelerated their need for cloud-based tools. More specifically, they cited the "lack of collaboration tools to replace face-to-face meetings" as one of their biggest challenges. Due to this demand, and despite the lack of tools, the use of virtual collaboration and communication tools increased significantly. Respondents indicated that virtual meetings increased 91% and chat use increased 62% globally.
Usage of all disruptive technologies, including artificial intelligence (AI) and machine learning (ML), increased substantially during the pandemic. The vast majority of those surveyed (96%) reported increased usage of at least one of the following tools: cloud, data and analytics, digital productivity tools, or AI / ML. Of those, the cloud saw the largest increase in usage during the pandemic.
Globally, more than half of researchers surveyed anticipate an increased investment in cloud solutions due to the COVID-19 crisis. Sixty-one percent of researchers reported that their institutions were “not very” or “not at all” prepared for COVID conditions, though those already using the cloud were best prepared. More than 93% of researchers across all work environments agree that COVID-19 has deepened the current and future needs for cloud computing in their organizations. Just over half (52%) of respondents believe their organizations will increase their investment in cloud technologies in the next 12 months.
Survey data also reveal some differences among institutions and regions. For example, researchers employed by private laboratories were more likely than those in other types of research facilities to report an increased use of cloud. Regionally, organizations in Colombia, the U.S., and Australia were the most likely to increase their investment in cloud solutions. Overall, the survey revealed the dramatic impact of the pandemic on research and researchers everywhere, though it also documents how cloud-based tools and strategies helped organizations adapt.
With new SARS-CoV-2 variants making in-person work more difficult, there are a number of actions research facilities, private or government-funded, can take to continue their important research:
Scale as needed. Cloud technologies can help research institutions implement a support-at-scale model under rapidly changing conditions. Flexible cloud infrastructures make data more accessible and secure so researchers can work together from anywhere, at any scale.
Leverage AI / ML. As the use of AI / ML tools for scientific research increases, research centers need to ensure the quality, safety, and efficiency of their workloads. Cloud provides a platform to quickly build and deliver modern applications and other connected experiences for clinical trials, research studies, patient care, and more.
Optimize data. As academic research centers expand and incorporate new sources of data across multiple clouds, the task of consolidating this data and information becomes a challenge. Cloud technologies enable researchers to break down data silos to gain insight across all teams and ensure a consistent data experience on a fully managed infrastructure.
Maximize ROI. Securing funding is not easy. In fact, 44% of respondents said funding decreased or was redirected during the pandemic. By processing documents and data quickly and securely, cloud-enabled automation tools help institutions increase the speed of decision-making, reduce costs of data entry, and accelerate discoveries.
To learn more about these findings and more, download our infographic. To ramp up your own research project with Google Cloud, apply now for free credits in select countries.
Research methodology: The survey was conducted online and by phone by the Harris Poll on behalf of Google Cloud, from April 22, 2021 to May 17, 2021, among 1,591 researchers in academic laboratories/medical centers, government laboratories, private sector laboratories, colleges/universities, or hospitals/health care systems. Researchers were based in the United States (n=501), Canada (n=100), Mexico (n=100), Colombia (n=100), Argentina (n=100), Brazil (n=77), France (n=105), Spain (n=100), Germany (n=102), UK (n=100), Singapore (n=100), and Australia (n=105). All had been employed in a lab or hospital/medical center, or were PhD students at least one year into their program, and worked as a researcher or decision maker for research facilities.
Read More for the details.
Azure NetApp Files backup expands the data protection capabilities of Azure NetApp Files by providing a fully managed backup solution for long-term recovery, archive, and compliance.
Read More for the details.
Amazon CodeGuru is a developer tool powered by machine learning that provides intelligent recommendations for improving code quality and identifying an application’s most expensive lines of code.
Read More for the details.
Azure VMware Solution NFS datastores on Azure NetApp Files is currently in private preview and is coming soon. The solution provides more choice to optimize and scale storage for Azure VMware Solution environments.
Read More for the details.
Are you looking for the best way to migrate and replicate your mainframe data? Google Cloud and Confluent have teamed up to provide an end-to-end solution for connecting your mainframe application data with the advanced analytics capabilities of Google Cloud.
In this article, we will discuss how you can use Confluent Connect to replicate messages from IBM MQ and Db2 to Google Cloud. This allows you to work with your mainframe data in the cloud, and enables you to build new applications and analytical capabilities using Google Cloud’s machine learning solutions. You also benefit by reducing impact on your production mainframe workloads, and reducing general purpose compute costs. In other words, you can continue using your mainframe to run your mission-critical business workloads while setting your data in motion for innovation.
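For a flavor of what that replication setup involves, here is an illustrative configuration for Confluent's IBM MQ Source Connector. The hostname, queue manager, channel, queue, and topic names are placeholders, and the exact property set should be checked against Confluent's connector documentation:

```
{
  "name": "mainframe-mq-source",
  "config": {
    "connector.class": "io.confluent.connect.ibm.mq.IbmMQSourceConnector",
    "kafka.topic": "mainframe.transactions",
    "mq.hostname": "mq.example.internal",
    "mq.port": "1414",
    "mq.queue.manager": "QM1",
    "mq.channel": "DEV.APP.SVRCONN",
    "jms.destination.name": "TRANSACTIONS.QUEUE",
    "jms.destination.type": "queue",
    "tasks.max": "1"
  }
}
```

Replication from Db2 would typically go through a separate CDC- or JDBC-based connector rather than this MQ source.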
Here’s an example use case that demonstrates how using the Confluent MQ connector with Google Cloud can impact your bottom line. One of our customers is saving millions of dollars per year on mainframe cycles by leveraging z Integrated Information Processor (zIIP) engines for data processing.
Moving these workloads to zIIP engines, off of general purpose (GP) compute and away from Channel Initiator (CHINIT) routes, directly reduces MSU licensing costs. As an example, a customer in the financial services industry saw a 50% reduction in CPU usage per message. These cost savings can enable you to direct budget resources toward differentiating activities, such as commercializing your valuable mainframe data to open up new revenue streams and improve customer service.
On the technical side, Confluent guarantees exactly-once message semantics, preserves message order, and unleashes that data to be accessed by existing and new applications that need a high-throughput, low-latency, event-driven architecture. This means that you can rely on the accuracy and consistency of your data in Google Cloud as if you were querying it directly from your mainframe database.
Once you have this data in your Confluent cluster, you can leverage the combined capabilities of Confluent and Google Cloud. You can modernize the way your consumers access your data by providing a single, standard source of truth without impacting production services. Confluent integrates directly with Apigee, Google Cloud’s API platform for developing and managing APIs.
Because Confluent integrates with BigQuery, you can also leverage the advanced analytical capabilities of BigQuery ML and Vertex AI to realize value from your latent mainframe data, and build new systems of insight that were not possible on the mainframe. And most of all, you can open up new avenues for innovation by allowing consumers to access the data when they need it, speeding up time to value and enabling faster business decisions.
You now have a bridge to cloud for your mainframe application data. Get started by deploying Confluent from the Google Cloud marketplace.
Read More for the details.
The AWS Solutions team recently updated Amazon WorkSpaces Cost Optimizer, a solution that analyzes all of your Amazon WorkSpaces usage data and automatically converts each WorkSpace to the most cost-effective billing option (hourly or monthly), depending on your individual usage. This solution also helps you monitor your WorkSpace usage and optimize costs.
Read More for the details.
Azure Stream Analytics is a fully managed, real-time analytics service designed to help you analyze and process fast-moving streams of data that can be used to get insights, build reports, or trigger alerts and actions. The service is now available in two new China regions.
Read More for the details.
Azure VMware Solution has now expanded availability to Brazil South and East US 2. This update is in addition to the existing availability across multiple Azure regions in the US, Europe, Australia, Japan, the UK, Canada, and Southeast Asia.
Read More for the details.
VMware HCX is the primary migration solution for organizations moving VMware workloads natively to a cloud service like Azure VMware Solution. HCX Enterprise Edition, now generally available with Azure VMware Solution, is a premium HCX service that includes Replication Assisted vMotion and Mobility Optimized Networking.
Read More for the details.
AWS License Manager announces Delegated Administrator support for Managed entitlements. This feature allows license administrators to manage and distribute licenses across their AWS accounts from a delegated account outside of the management account. Using delegated administrator, you can grant licenses from AWS Marketplace and Independent Software Vendors across your organization and benefit from the administrative capabilities previously afforded to the management account only.
Read More for the details.
Amazon Relational Database Service (Amazon RDS) for PostgreSQL now supports PostGIS major version 3.1. This new version of PostGIS is available on PostgreSQL versions 13.4, 12.8, 11.13, 10.18, and higher.
Read More for the details.