AWS – AWS Cloud WAN is now available in AWS Asia Pacific (Seoul) Region
Starting today, AWS Cloud WAN is available in the AWS Asia Pacific (Seoul) Region.
Read More for the details.
Starting today, Amazon EC2 High Memory instances with 3TiB (u-3tb1.56xlarge), 6TiB (u-6tb1.56xlarge, u-6tb1.112xlarge), 9TiB (u-9tb1.112xlarge), and 12TiB of memory (u-12tb1.112xlarge) are available in Asia Pacific (Tokyo) region. Customers can start using these new High Memory instances with On Demand and Savings Plan purchase options.
Read More for the details.
AWS Backup now offers a new Backup Vault Lock console experience that provides a more intuitive way to configure your vault lock details. AWS Backup Vault Lock allows you to deploy and manage your vault’s immutability policies, protecting your backups from accidental or malicious deletions. Depending on your data retention needs, AWS Backup Vault Lock lets you set governance mode or compliance mode to configure your vault’s immutability policies with greater flexibility and multiple levels of security. Under governance mode, users with the appropriate role-based permissions can test and change retention policies or even remove the lock completely. In compliance mode, the user can specify a lock date after which the vault is locked immutably. Once locked, the acceptable retention periods cannot be changed and the lock cannot be disabled, even by the root user. With this feature, the console also provides visibility into your vaults’ lock status and facilitates reporting across all locked vaults.
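For teams that manage vault locks programmatically rather than through the new console experience, a minimal boto3 sketch of the underlying API looks like the following; the vault name and retention values are placeholders:

```python
import boto3

backup = boto3.client("backup")

# Compliance-mode lock: after the ChangeableForDays cooling-off period expires,
# the retention bounds below become immutable and the lock cannot be removed.
# Omitting ChangeableForDays configures a governance-mode lock instead.
backup.put_backup_vault_lock_configuration(
    BackupVaultName="prod-backups",   # placeholder vault name
    MinRetentionDays=30,
    MaxRetentionDays=365,
    ChangeableForDays=3,
)
```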
Read More for the details.
In June, we announced the creation of Google Public Sector, a new Google division focusing on helping U.S. public sector entities—including federal, state, and local governments, and educational institutions—accelerate their digital transformations. It is our belief that Google Public Sector, and our industry partner ecosystem, will play an important role in applying cloud technology to solve complex problems for our nation.
Today, I’m proud to announce one of our first big partnerships following the launch of this new subsidiary: Google Public Sector will provide Google Workspace to up to 250,000 active-duty personnel across the U.S. Army workforce. The government has asked for more choice in cloud vendors who can support its missions, and Google Workspace will equip today’s military with a leading suite of collaboration tools to get their work done.
In the Army, personnel often need to work across remote locations, in challenging environments, with seamless collaboration key to their success. Google Workspace was designed with these challenges in mind and can be deployed quickly across a diverse set of working conditions, locations, jobs, and skill levels. More than three billion users already rely on Google Workspace, which means these are familiar tools that require little training or extended ramp-up time for Army personnel, ultimately helping Soldiers and employees communicate better and more securely than ever before.
Working with Accenture Federal Services under the Army Cloud Account Management Optimization contract and our implementation partner SADA, we’re excited to help the Army deploy a cloud-first collaboration solution, improving upon more traditional technologies with unparalleled security and versatility. Google Workspace is not only “born in the cloud” with secure-by-design architecture, but also provides a clear path to future features and innovations.
One of the key reasons we are able to serve the U.S. Army is that Google Workspace recently received an Impact Level 4 authorization from the DoD. IL4 is a DoD security designation related to the safe handling of controlled unclassified information (CUI). That means government officials and others can use Google Workspace with more confidence and ease than ever before.
With Google Public Sector, we are committed to building our relationship with the U.S. Army and with other public sector agencies. In fact, we recently announced a partnership with Acclima to provide New York State with hyperlocal air quality monitoring and an alliance with Arizona State University to deliver an immersive online K-12 learning technology to students in the United States and around the world.
This is just the start. Google Public Sector is dedicated to helping U.S. public sector customers become experts in Google Cloud’s advanced cybersecurity products, protecting people, data, and applications from increasingly pervasive and challenging cyber threats. We have numerous training and certification programs for public sector employees and our partners in digital and cloud skills, and we continue to expand our ecosystem of partners capable of building new solutions to better serve U.S. public sector organizations.
Delivering the best possible public services means making life better and work more fulfilling for millions of people, inside and outside of government. We’re thrilled by what we are accomplishing at Google Public Sector, particularly with today’s partnership with the U.S. Army, and look forward to announcing even more great developments in the future.
Learn more about how the government is accelerating innovation at our Google Government Summit, taking place in person on Nov. 15 in Washington, D.C. Join agency leaders as they explore best practices and share industry insights on using cloud technology to meet their missions.
Read More for the details.
Azure Firewall Basic provides cost-effective, enterprise-grade network security for small and medium businesses (SMBs).
Read More for the details.
Amazon S3 Object Lambda now supports adding your own code to S3 HEAD and LIST API requests, in addition to S3 GET requests. With S3 Object Lambda, you can modify the data returned by S3 GET requests to filter rows, dynamically resize images, redact confidential data, and much more. Now, you can also use S3 Object Lambda to modify the output of S3 LIST requests to create a custom view of all objects in a bucket and S3 HEAD requests to modify object metadata such as object name and size. With this update, S3 Object Lambda now uses AWS Lambda functions to automatically process the output of S3 GET, HEAD, and LIST requests.
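As a rough sketch of what the GET path looks like, the Lambda function below fetches the original object via the presigned URL that S3 Object Lambda supplies, applies a placeholder redaction, and returns the transformed body with WriteGetObjectResponse; the redaction logic itself is illustrative only:

```python
import boto3
import urllib3

http = urllib3.PoolManager()
s3 = boto3.client("s3")

def handler(event, context):
    """S3 Object Lambda handler for GET requests."""
    ctx = event["getObjectContext"]

    # Fetch the original object through the presigned URL provided by S3 Object Lambda.
    original = http.request("GET", ctx["inputS3Url"]).data.decode("utf-8")

    # Placeholder transformation: redact a sensitive field before returning the object.
    transformed = original.replace("123-45-6789", "***-**-****")

    # Return the transformed object to the caller of the Object Lambda Access Point.
    s3.write_get_object_response(
        Body=transformed,
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
    )
    return {"statusCode": 200}
```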
Read More for the details.
Amazon Nimble Studio now provides an Amazon Machine Image (AMI) running Windows Server 2022 in the AWS Marketplace that contains the same pre-installed software as our existing AMIs. Nimble Studio users can further customize this baseline AMI for studio-specific workflows. Microsoft Windows Server 2022 helps studios take advantage of the latest Windows features and support for content creation software, such as Epic’s Unreal Engine and Adobe Creative Cloud. The complete list of features and improvements is available in the official Microsoft documentation for Windows Server 2022 here. The supported Windows Server 2022 AMI is an addition to our supported Windows Server 2019 AMI and Linux AMI.
Read More for the details.
Amazon Virtual Private Cloud (VPC) has introduced two new networking metrics: Network Address Usage and Peered Network Address Usage. These new metrics help network administrators plan for expansion of their VPC architecture while proactively managing service quotas.
Read More for the details.
AWS IoT SiteWise has increased quota limits for Assets and Asset Models to support larger and more complex equipment representations, and allow customers to perform bulk operations on resources.
Read More for the details.
IAM Access Analyzer policy validation helps you author secure and functional policies. Now, we are extending policy validation to role trust policies to make it easier to author and validate the policy that determines who can assume a role. The new IAM console experience for role trust policies guides you through adding each element of the policy, such as the list of available actions for role trust policies, and offers context-specific documentation. As you author your policy, IAM Access Analyzer policy validation evaluates it for issues, making it easier for you to write secure policies. This includes new policy checks specific to role trust policies, such as validating the format of your identity provider. Before you save the policy, IAM Access Analyzer generates preview findings for the external access granted by the role trust policy. This helps you review external access, such as access granted to a federated identity provider, and ensure only the intended access is granted when the policy is created.
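You can also run these checks outside the console. The boto3 sketch below validates a sample SAML trust policy with IAM Access Analyzer's ValidatePolicy API; the ARN and condition values are placeholders, and passing the trust policy as a RESOURCE_POLICY is an assumption about how trust documents map onto the API's policy types:

```python
import json

import boto3

analyzer = boto3.client("accessanalyzer")

# A role trust policy for a SAML identity provider (all values are placeholders).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Federated": "arn:aws:iam::123456789012:saml-provider/ExampleIdP"},
            "Action": "sts:AssumeRoleWithSAML",
            "Condition": {"StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}},
        }
    ],
}

# Trust policies are resource-based documents attached to a role; treating them
# as RESOURCE_POLICY here is an assumption for illustration.
response = analyzer.validate_policy(
    policyDocument=json.dumps(trust_policy),
    policyType="RESOURCE_POLICY",
)
for finding in response["findings"]:
    print(finding["findingType"], finding["issueCode"], finding["findingDetails"])
```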
Read More for the details.
Over the years, vast amounts of satellite data have been collected, and ever more granular data are being collected every day. Until recently, those data have been an untapped asset in the commercial space, largely because the tools required for large-scale analysis of this type of data were not readily available, and neither was the satellite imagery itself. Thanks to Earth Engine, a planetary-scale platform for Earth science data & analysis, that is no longer the case.
The platform, which was recently announced as a generally available Google Cloud Platform (GCP) product, now allows commercial users across industries to operationalize remotely sensed data. Some Earth Engine use cases that are already being explored include sustainable sourcing, climate risk detection, sustainable agriculture, and natural resource management. Developing spatially focused solutions for these use cases with Earth Engine unlocks distinct insights for improving business operations. Automating those solutions produces insights faster, removes toil and limits the introduction of error.
The automated data pipeline discussed in this post brings data from BigQuery into Earth Engine and is in the context of a sustainable sourcing use case for a fictional consumer packaged goods company, Cymbal. This use case requires two types of data. The first is data that Cymbal already has and the second is data that is provided by Earth Engine and the Earth Engine Data Catalog. In this example, the data owned by Cymbal is starting in BigQuery and flowing through the data pipeline into Earth Engine through an automated process.
A helpful way to think about combining these data is as a layering process, similar to assembling a cake. Let’s talk through the layers for this use case. The base layer is satellite imagery, or raster data, provided by Earth Engine. The second layer is the locations of palm plantations provided by Cymbal, outlined in black in the image below. The third and final layer is tree cover data from the data catalog, the pink areas below. Just like the layers of a cake, these data layers come together to produce the final product. The goal of this architecture is to automate the aggregation of the data layers.
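In Earth Engine's Python API, assembling those layers can look roughly like the sketch below; the plantation asset path is hypothetical, and Sentinel-2 plus the Hansen Global Forest Change dataset stand in for whichever imagery and tree-cover layers you actually use:

```python
import ee

ee.Initialize()

# Layer 1: the satellite imagery base (a Sentinel-2 surface reflectance composite here).
base = (
    ee.ImageCollection("COPERNICUS/S2_SR")
    .filterDate("2022-06-01", "2022-09-01")
    .median()
)

# Layer 2: Cymbal's palm plantation boundaries, ingested as an EE table asset (hypothetical path).
plantations = ee.FeatureCollection("projects/my-project/assets/palm_plantations")

# Layer 3: tree cover from the public data catalog (Hansen Global Forest Change).
tree_cover = ee.Image("UMD/hansen/global_forest_change_2021_v1_9").select("treecover2000")

# Stack the raster layers and clip them to the plantation boundaries for inspection.
composite = base.addBands(tree_cover).clip(plantations)
```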
Another example of a use case where this architecture could be applied is in a methane emission detection use case. In that case, the first layer would remain the same. The second layer would be facility location details (i.e. name and facility type) provided by the company or organization. Methane emission data from the data catalog would be the third layer. As with methane detection and sustainable supply chain, most use cases will involve some tabular data collected by companies or organizations. Because the data are tabular, BigQuery is a natural starting point. To learn more about tabular versus raster data and when to use BigQuery versus Earth Engine, check out this post.
Now that you understand the potential value of using Earth Engine and BigQuery together in an automated pipeline, we will go through the architecture itself. In the next section, you will see how to automate the flow of data from GCP products, like BigQuery, into Earth Engine for analysis using Cloud Functions. If you are curious about how to move data from Earth Engine into BigQuery you can read about it in this post.
Cymbal has the goal of gaining more clarity in their palm oil supply chain which is primarily located in Indonesia. Their specific goal is to identify areas of potential deforestation. In this section, you will see how we can move the data Cymbal already has about the locations of palm plantations into Earth Engine in order to map those territories over satellite images to equip Cymbal with information about what is happening on the ground. Let’s walk through the architecture step by step to better understand how all of the pieces fit together. If you’d like to follow along with the code for this architecture, you can find it here.
Architecture
Step by Step Walkthrough
1. Import Geospatial data into BigQuery
Cymbal’s Geospatial Data Scientist is responsible for the management of the data they have about the locations of palm plantations and how it arrives in BigQuery.
2. A Cloud Scheduler task sends a message to a Pub/Sub topic
A Cloud Scheduler task is responsible for starting the pipeline in motion. Cloud Scheduler tasks are cron tasks and can be scheduled at any frequency that fits your workflow. When the task runs it sends a message to a Pub/Sub topic.
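Creating that scheduler task programmatically might look like the following sketch, assuming the Cloud Scheduler Python client and placeholder project, region, and topic names:

```python
from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = "projects/my-project/locations/us-central1"  # placeholder project and region

job = scheduler_v1.Job(
    name=f"{parent}/jobs/start-ee-pipeline",
    schedule="0 6 * * *",       # run every day at 06:00
    time_zone="Etc/UTC",
    pubsub_target=scheduler_v1.PubsubTarget(
        topic_name="projects/my-project/topics/ee-pipeline",  # placeholder topic
        data=b"start",
    ),
)
client.create_job(parent=parent, job=job)
```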
3. The Pub/Sub topic receives a message and triggers a Cloud Function
4. The first Cloud Function transfers the data from BigQuery to Cloud Storage
The data must be moved into Cloud Storage so that it can be used to create an Earth Engine asset.
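A Pub/Sub-triggered Cloud Function that performs this step can be sketched with the BigQuery client library's extract_table job as below; the table and bucket names are placeholders:

```python
from google.cloud import bigquery

def export_to_gcs(event, context):
    """Pub/Sub-triggered Cloud Function: export the plantation table to Cloud Storage."""
    client = bigquery.Client()

    source_table = "my-project.supply_chain.palm_plantations"              # placeholder table
    destination_uri = "gs://my-ee-pipeline-bucket/plantations/part-*.csv"  # placeholder bucket

    extract_job = client.extract_table(source_table, destination_uri, location="US")
    extract_job.result()  # block until the export finishes
```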
5. The data arrives in the Cloud Storage bucket and triggers a second Cloud Function
6. The second Cloud Function makes a call to the Earth Engine API and creates an asset in Earth Engine
The Cloud Function starts by authenticating with Earth Engine. It then makes an API call to create an Earth Engine asset from the geospatial data in Cloud Storage.
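A rough sketch of that second function is shown below: it authenticates with a service account key and starts a table ingestion from the newly arrived file. The service account, asset path, and manifest fields are illustrative placeholders rather than a definitive recipe:

```python
import ee

def create_ee_asset(event, context):
    """Cloud Storage-triggered Cloud Function: ingest the exported file as an EE asset."""
    # Authenticate as the pipeline's service account (placeholder email and key file).
    credentials = ee.ServiceAccountCredentials(
        "ee-pipeline@my-project.iam.gserviceaccount.com", "privatekey.json"
    )
    ee.Initialize(credentials)

    # Build a minimal table-ingestion request; the manifest fields are illustrative.
    request_id = ee.data.newTaskId()[0]
    manifest = {
        "name": "projects/my-project/assets/palm_plantations",  # placeholder asset path
        "sources": [{"uris": [f"gs://{event['bucket']}/{event['name']}"]}],
    }
    ee.data.startTableIngestion(request_id, manifest)
```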
7. An Earth Engine App (EE App) is updated when the asset gets created in Earth Engine
This EE App is primarily for decision makers at Cymbal who are interested in high-impact metrics. The application is a dashboard giving the user visibility into metrics and visualizations without having to get bogged down in code.
8. A script for advanced analytics is made accessible from the EE App
An environment for advanced analytics in the Earth Engine code editor is created and made available through the EE App for Cymbal’s technical users. The environment gives the technical users a place to dig deeper into any questions that arise from decision makers about areas of potential deforestation.
9. Results from analysis in Earth Engine can be exported back to Cloud Storage
When a technical user is finished with their further analysis in the advanced analytics environment they have the option to run a task and export their findings to Cloud Storage. From there, they can continue their workflow however they see fit.
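That export step maps onto Earth Engine's batch export API, roughly as follows; the results asset and bucket names are placeholders:

```python
import ee

ee.Initialize()

# Findings produced during the advanced analysis (placeholder asset path).
findings = ee.FeatureCollection("projects/my-project/assets/deforestation_findings")

task = ee.batch.Export.table.toCloudStorage(
    collection=findings,
    description="deforestation-findings",
    bucket="my-ee-pipeline-bucket",      # placeholder bucket
    fileNamePrefix="findings/deforestation",
    fileFormat="CSV",
)
task.start()  # runs as an Earth Engine batch task
```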
With these nine high-level steps, an automated workflow is achieved that provides a solution for Cymbal, giving them visibility into their palm oil supply chain. Not only does the solution address the company wide goal, it also keeps in mind the needs of various types of users at Cymbal.
We’ve just walked through the architecture for an automated data pipeline from BigQuery to Earth Engine using Cloud Functions. The best way to deepen your understanding of this architecture and how all of the pieces fit together is to walk through building the architecture in your own environment. We’ve made building out the architecture easy by providing a Terraform Script available on GitHub. Once you have the architecture built out, try swapping out different elements of the pipeline to make it more applicable to your own operations. If you are looking for some inspiration or are curious to see another example, be sure to take a look at this post which brings data from Earth Engine into BigQuery. The post walks through creating a Cloud Function that pulls temperature and vegetation data from the Landsat satellite imagery within the GEE Catalog from SQL in BigQuery. Thanks for reading.
Read More for the details.
Google Earth Engine (GEE) is a groundbreaking product that has been available for research and government use for more than a decade. Google Cloud recently launched GEE to General Availability for commercial use. This blog post describes a method to utilize GEE from within BigQuery’s SQL allowing SQL speakers to get access to and value from the vast troves of data available within Earth Engine.
We will use Cloud Functions to allow SQL users at your organization to make use of the computation and data catalog superpowers of Google Earth Engine. So, if you are a SQL speaker and you want to understand how to leverage a massive library of earth observation data in your analysis then buckle up and read on.
Before we get started, let’s spend thirty seconds setting the geospatial context for our use case. BigQuery excels at doing operations on vector data. Vector data are things like points and polygons: data that you can fit into a table. We use the PostGIS syntax, so users who have used spatial SQL before will feel right at home in BigQuery.
BigQuery has more than 175 public datasets available within Analytics Hub. After doing analysis in BigQuery, users can use tools like GeoViz, Data Studio, Carto, and Looker to visualize those insights.
Earth Engine is designed for raster or imagery analysis, particularly satellite imagery. GEE, which holds more than 70PB of satellite imagery, is used to detect changes, map trends, and quantify differences on the Earth’s surface. GEE is widely used to extract insights from satellite images to make better use of land, based on its diverse geospatial datasets and easy-to-use application programming interface (API).
By using these two products in conjunction with each other you can expand your analysis to incorporate both vector and raster datasets to combine insights from 70PB of GEE and 175+ datasets from BigQuery. For example, in this blog we’ll create a Cloud Function that pulls temperature and vegetation data from the Landsat satellite imagery within the GEE Catalog and we’ll do it all from SQL in BigQuery. If you are curious about how to move data from BigQuery into Earth Engine you can read about it in this post.
While our example is focused on agriculture this method can apply to any industry that matters to you.
Agriculture is transforming with the implementation of modern technologies. Technologies such as GPS and satellite image dissemination allow researchers and farmers to gain more information and to monitor and manage agricultural resources. Satellite imagery can be a reliable source for tracking how a field is developing.
A common analysis of imagery used in agricultural tools today is the Normalized Difference Vegetation Index (NDVI). NDVI is a measurement of plant health computed from the near-infrared and red bands as (NIR - Red) / (NIR + Red) and displayed on a scale from -1 to +1. Negative values are indicative of water and moisture, while high NDVI values suggest a dense vegetation canopy. Imagery and yield tend to have a high correlation; thus, NDVI can be combined with other data like weather to drive seeding prescriptions.
As an agricultural engineer you are keenly interested in crop health for all the farms and fields that you manage. The healthier the crop the better the yield and the more profit the farm will produce. Let’s assume you have mapped all your fields and the coordinates are available in BQ. You now want to calculate the NDVI of every field, along with the average temperature for different months, to ensure the crop is healthy and take necessary action if there is an unexpected fall in NDVI. So the question is how do we pull NDVI and temperature information into BigQuery for the fields by only using SQL?
Using GEE’s ready-to-go Landsat 8 imagery, we can calculate NDVI for any given point on the planet. Similarly, we can use the publicly available ERA5 dataset of monthly climate for global terrestrial surfaces to calculate the average temperature for any given point.
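As a small illustration of the NDVI side of that calculation, the Earth Engine Python sketch below builds a monthly Landsat 8 composite over a field polygon and reduces the normalized difference of the near-infrared and red bands to a single mean value; the field coordinates are made up:

```python
import ee

ee.Initialize()

# A field boundary (made-up coordinates).
field = ee.Geometry.Polygon(
    [[[-93.51, 41.72], [-93.49, 41.72], [-93.49, 41.70], [-93.51, 41.70]]]
)

# Monthly Landsat 8 surface reflectance composite; SR_B5 is near-infrared, SR_B4 is red.
image = (
    ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
    .filterDate("2020-07-01", "2020-07-31")
    .filterBounds(field)
    .median()
)

ndvi = image.normalizedDifference(["SR_B5", "SR_B4"]).rename("NDVI")
mean_ndvi = ndvi.reduceRegion(
    reducer=ee.Reducer.mean(), geometry=field, scale=30
).get("NDVI")
print(mean_ndvi.getInfo())
```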
Cloud Functions are a powerful tool to augment the SQL commands in BigQuery. In this case we are going to wrap a GEE script within a Cloud Function and call that function directly from BigQuery’s SQL. Before we start, let’s get the environment set up.
Before you proceed, you will need the following environment setup:
A Google Cloud project with billing enabled. (Note: this example cannot run within the BigQuery Sandbox as a billing account is required to run Cloud Functions)
Ensure your GCP user has access to Earth Engine and can create service accounts and assign roles. You can sign up for Earth Engine at Earth Engine Sign Up. To verify that you have access, check whether you can view the Earth Engine Code Editor with your GCP user.
At this point Earth Engine and BigQuery are enabled and ready to work for you. Now let’s set up the environment and define the cloud functions.
1. Once you have created your project in GCP, select it on the console and click on Cloud Shell.
2. In Cloud Shell, you will need to clone a git repository which contains the shell script and assets required for this demo. Run the following command in Cloud Shell:
3. Edit config.sh – In your editor of choice update the variables in config.sh to reflect your GCP project.
4. Execute setup_sa.sh. You will be prompted to authenticate and you can choose “n” to use your existing auth.
If the shell script has executed successfully, you should now have a new Service Account created, as shown in the image below.
5. A Service Account (SA) in the format <PROJECT_NUMBER>-compute@developer.gserviceaccount.com was created in the previous step. You need to sign up this SA for Earth Engine at EE SA signup. The last line of the screenshot above lists the SA name.
The screenshot below shows how the signup process looks for registering your SA.
6. Execute deploy_cf.sh; it should take around 10 minutes for the deployment to complete.
You should now have a dataset named gee and table land_coords under your project in BigQuery along with the functions get_poly_ndvi_month and get_poly_temp_month.
You will also see a sample query output in Cloud Shell, as shown below.
7. Now execute the command below in Cloud Shell
and you should see something like this
If you are able to get a similar output to one shown above, then you have successfully executed SQL over Landsat imagery.
Now navigate to the BigQuery console and your screen should look something like this:
You should see a new external connection us.gcf-ee-conn, two external routines called get_poly_ndvi_month, get_poly_temp_month and a new table land_coords.
Next navigate to the Cloud functions console and you should see two new functions polyndvicf-gen2 and polytempcf-gen2 as shown below.
At this stage your environment is ready. Now you can go to the BQ console and execute queries. The query below calculates the NDVI and temperature for July 2020 for all the field polygons stored in the table land_coords
The output should look something like this:
When the user executes the query in BQ, the functions get_poly_ndvi_month and get_poly_temp_month trigger remote calls to the Cloud Functions polyndvicf-gen2 and polytempcf-gen2, which initiate the script on GEE. The results from GEE are streamed back to the BQ console and shown to the user.
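Under the hood, a BigQuery remote function posts a JSON body with a "calls" array (one entry per row) to its Cloud Function and expects a "replies" array of the same length back. The sketch below shows what the NDVI function's handler could look like under that contract; the argument layout (a GeoJSON polygon string plus year and month) is an assumption, not the repository's exact code:

```python
import json

import ee
import functions_framework

ee.Initialize()  # the deployed function would initialize with its service account

@functions_framework.http
def polyndvicf(request):
    """BigQuery remote function backend: one NDVI reply per row in 'calls'."""
    calls = request.get_json()["calls"]
    replies = []
    for polygon_geojson, year, month in calls:
        geom = ee.Geometry(json.loads(polygon_geojson))
        image = (
            ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
            .filterDate(f"{year}-{month:02d}-01", f"{year}-{month:02d}-28")
            .filterBounds(geom)
            .median()
        )
        ndvi = image.normalizedDifference(["SR_B5", "SR_B4"])
        mean_ndvi = ndvi.reduceRegion(ee.Reducer.mean(), geom, 30).get("nd").getInfo()
        replies.append(mean_ndvi)
    return json.dumps({"replies": replies})
```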
You can now plot this data on a map in Data Studio or GeoViz and publish it to your users.
Now that your data is within BigQuery, you can join it with your private datasets or other public datasets within BigQuery and build ML models using BigQuery ML to predict crop yields or generate seed prescriptions.
The example above demonstrates how users can wrap GEE functionality within Cloud Functions so that GEE can be executed exclusively from SQL. The method we have described requires someone who can write GEE scripts. The advantage is that once the script is built, all of your SQL-speaking data analysts, scientists, and engineers can do calculations on vast troves of satellite imagery in GEE directly from the BigQuery UI or API.
Once the data and results are in BigQuery, you can join them with other tables in BigQuery or with the data available through Analytics Hub. Additionally, with this method, users can combine GEE data with other functionality such as geospatial functions or BQML. In the future, we’ll expand our examples to include these other BigQuery capabilities.
Thanks for reading, and remember, if you are interested in learning more about how to move data from BigQuery to Earth Engine together, check out this blog post. The post outlines a solution for a sustainable sourcing use case for a fictional consumer packaged goods company trying to understand their palm oil supply chain which is primarily located in Indonesia.
Acknowledgements: Shout out to David Gibson and Chao Shen for valuable feedback.
Read More for the details.
Migrating data to the cloud can be a daunting task. Especially moving data from warehouses and legacy environments requires a systematic approach. These migrations usually need manual effort and can be error-prone. They are complex and involve several steps such as planning, system setup, query translation, schema analysis, data movement, validation, and performance optimization. To mitigate the risks, migrations necessitate a structured approach with a set of consistent tools to help make the outcomes more predictable.
Google Cloud simplifies this with the BigQuery Migration Service – a suite of managed tools that allow users to reliably plan and execute migrations, making outcomes more predictable. It is free to use and generates consistent results with a high degree of accuracy.
Major brands like PayPal, HSBC, Vodafone and Major League Baseball use BigQuery Migration Service to accelerate time to unlock the power of BigQuery, deploy new use cases, break down data silos, and harness the full potential of their data. It’s incredibly easy to use, open and customizable. So, customers can migrate on their own or choose from our wide range of specialized migration partners.
BigQuery Migration Service automates most of the migration journey for you. It divides the end-to-end migration journey into four components: assessment, SQL translation, data transfer, and validation. Users can accelerate migrations through each of these phases often just with the push of a few buttons. In this blog, we’ll dive deeper into each of these phases and learn how to reduce the risk and costs of your data warehouse migrations.
Step 1: Assessment
BigQuery Migration Service generates a detailed plan with a view of dependencies, risks, and the optimized migrated state on BigQuery by profiling the source workload logs and metadata.
During the assessment phase, BigQuery Migration Service guides you through a set of steps using an intuitive interface and automatically generates a Google Data Studio report with rich insights and actionable steps. Assessment capabilities are currently available for Teradata and Redshift, and will soon be expanded for additional sources.
Step 2: SQL Translation
This phase is often the most difficult part of any migration. BigQuery Migration Service provides fast, semantically correct, human readable translations from most SQL flavors to BigQuery. It can intelligently translate SQL statements in high-throughput batch and Google-translate-like interactive modes from Amazon Redshift SQL, Apache HiveQL, Apache Spark SQL, Azure Synapse T-SQL, IBM Netezza SQL/NZPLSQL, MySQL, Oracle SQL/PL/SQL/Exadata, Presto SQL, PostgreSQL, Snowflake SQL, SQL Server T-SQL, Teradata SQL/SPL/BTEQ and Vertica SQL.
Unlike most existing offerings, which rely on regular-expression parsing, BigQuery’s SQL translation is truly compiler-based, with advanced customizable capabilities to handle macro substitutions, user-defined functions, output name mapping, and other source-context-aware nuances. The output is detailed and prescriptive with clear next actions. Data engineers and data analysts save countless hours leveraging our industry-leading automated SQL translation service.
Step 3: Data Transfer
BigQuery offers a data transfer service that moves data from source systems into BigQuery using a simple guided wizard. Users create a transfer configuration and choose a data source from the drop-down list.
Destination settings walk the user through connection options to the data sources and securely connect to the source and target systems.
A critical feature of BigQuery’s data transfer is the ability to schedule jobs. Large data transfers can impose additional burdens on operational systems and impact the data sources. BigQuery Migration Service provides the flexibility to schedule transfer jobs to execute at user-specified times to avoid any adverse impact on production environments.
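For teams scripting this rather than using the wizard, a scheduled transfer can be sketched with the BigQuery Data Transfer Service client as below; the project, dataset, and connection parameters are placeholders, and the exact params keys depend on the data source you pick:

```python
from google.cloud import bigquery_datatransfer_v1

client = bigquery_datatransfer_v1.DataTransferServiceClient()
parent = "projects/my-project/locations/us"  # placeholder project and location

transfer_config = bigquery_datatransfer_v1.TransferConfig(
    destination_dataset_id="migrated_warehouse",   # placeholder dataset
    display_name="Nightly warehouse transfer",
    data_source_id="redshift",                     # pick the source you are migrating from
    params={
        # Placeholder connection settings; the real keys come from the transfer wizard/docs.
        "jdbc_url": "jdbc:redshift://example.host:5439/db",
        "database_username": "migration_user",
    },
    schedule="every day 02:00",                    # run in an off-peak window
)

client.create_transfer_config(parent=parent, transfer_config=transfer_config)
```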
Step 4: Validation
This phase ensures that data at the legacy source and in BigQuery are consistent after the migration is completed. Validation supports highly configurable, orchestratable rules to perform a granular per-row, per-column, or per-table left-to-right comparison between the source system and BigQuery. Labeling, aggregation, group-by, and filtering enable deep validations.
If you would like to leverage BigQuery Migration Service for an upcoming proof-of-concept or migration, reach out to your GCP partner, your GCP sales rep or check out our documentation to try it out yourself.
Read More for the details.
EyecareLive transforms the healthcare ecosystem with Enhanced Support, a support service from the Google Cloud Customer Care portfolio.
Telemedicine is now mainstream. It exploded during the COVID-19 pandemic. A 2022 survey by Jones Lang Lasalle (registration required) found that 38% of U.S. patients were using some form of telemedicine. This number is expected to grow as consumers are demanding more convenient and immediate access to care, and doctors are seeking efficiencies, cost savings, and to forge closer relationships with patients.
But because the eye-care field is so heavily regulated, optometrists and ophthalmologists face more technical hurdles to perform telemedicine than their peers in other medical practices.
To join the telemedicine revolution, a generic technology solution wouldn’t do. Eye-care professionals need a more carefully architected and rigorously secure platform – one that ensures a very high degree of compliance and privacy.
EyecareLive provides exactly that. Their comprehensive cloud-based solution was built specifically for eye-care telemedicine practices. They not only facilitate telemedicine visits with patients via video, but help providers stay in compliance with complex industry regulations.
What’s more, EyecareLive is the only platform in the industry that conducts vision screening using Food and Drug Administration (FDA)-registered tests to check a patient’s vision before connecting them to a doctor through a video call. The doctor can thus triage any issues immediately and quickly determine the right next steps for proper care. In addition, their platform digitally connects optometrists and ophthalmologists to the entire eye-care ecosystem, including other doctors for referrals, insurance companies, hospitals, pharmaceutical firms, pharmacies, and, of course, patients.
On top of all of this, EyecareLive’s automated back office for eye-care practices consolidates electronic health records (EHRs), clinical workflows, billing, coding, and more into one platform. EyecareLive streamlines operations and frees up doctors to focus on delivering the highest possible eye healthcare and on building stronger relationships with patients.
“Considering the number of plug-and-play services that Google has built into the Google Cloud Healthcare solutions, Google is basically supporting the entire healthcare industry from an infrastructure provider point of view.” — Raj Ramchandani, CEO, EyecareLive
EyecareLive is truly cloud first. They had operated entirely in the AWS cloud since opening their doors in 2017. Several years in, they decided to look for an additional cloud provider with broader support for digital health platforms. They specifically wanted to migrate to one they could rely on to deliver plug-and-play services, which would accelerate innovation of their platform. Rather than re-architecting for a new cloud, EyecareLive wanted a cloud platform that would offer compatible services they could use to meet their needs for reliability and availability.
“If we want to deploy a new conversational bot or build AI models that assist doctors to diagnose based on a retina image, Google Cloud provides these services which are reliable and tested by Google Cloud Healthcare solutions in many cases.” — Raj Ramchandani, CEO, EyecareLive
Versatility was another requirement. The EyecareLive platform must fulfill the demands of a variety of organizations — doctors, pharmaceutical companies, clinics, and others. EyecareLive also has an international deployment strategy that goes far beyond offering a domestic telehealth solution. Therefore EyecareLive needed a cloud functionality that extended into the broader global eye-care ecosystem.
EyecareLive chose Google Cloud. The most compelling reason was the deep industry expertise found in Google Cloud for healthcare and life sciences. This distinguished Google Cloud from all other possible cloud providers considered by EyecareLive. “We like Google Cloud because of the differentiations such as Google Cloud Healthcare solutions, computer vision, and AI models that can be used out of the box,” says Raj Ramchandani, CEO of EyecareLive. “We found these features more robust for our use cases on Google Cloud than any other.”
“Google is heavily into its Healthcare Cloud. That’s what differentiates it. We love that part because we can tap into innovative healthcare cloud functionality quickly.” — Raj Ramchandani, CEO, EyecareLive
As a cloud-born company, EyecareLive had an exceedingly tech-savvy team. But the migration was a complex one that involved migrating third-party software and networking products that were tightly integrated into EyecareLive’s own code. The team knew it needed expert help with the migration. What’s more, doctors, patients, and other users required 24/7 access to the platform, and any interruptions to availability or infrastructure hiccups during the migration would disrupt their online experiences. However, the EyecareLive team was already stretched by continuing to grow and innovate the business, so they asked Google Cloud for help.
EyecareLive purchased Enhanced Support, a support service offered in the Google Cloud Customer Care portfolio and specifically designed for small and midsized businesses (SMBs). Enhanced Support gave EyecareLive unlimited, fast access to expert support from a team of experienced Google Cloud engineers during the intricate, multifaceted migration.
“It was my top priority to engage Google Cloud Customer Care to help us keep the platform always available for our doctors and users,” says Ramchandani. “The level of detail to the answers, the clarifications of having the Enhanced Support experts tell us to do it a certain way has been enormously helpful.”
For example, one of the valuable features delivered by Enhanced Support is Third-Party Technology Support, which gives EyecareLive access to experts with specialized knowledge of third-party technologies, such as networking, MongoDB, and infrastructure. This meant all components in EyecareLive’s infrastructure could be seamlessly migrated to Google Cloud, and afterward EyecareLive could lean on Enhanced Support experts to continue to troubleshoot and mitigate issues as necessary.
“The response times to the questions and issues we had when going live was fantastic. It was the best experience with a tech vendor we’ve had in a long time.” — Raj Ramchandani, CEO, EyecareLive
With Enhanced Support at their side, EyecareLive was able to get up and running quickly in preparation for their international expansion by using Google Cloud’s prebuilt AI models, load balancers, and networking technologies that were designed to be easily deployed across multiple regions throughout the globe. “We know exactly how to implement data locality to scale our deployment into different regions and into different countries, because we’ve learned that from the Google Cloud support team.” — Raj Ramchandani, CEO, EyecareLive
EyecareLive then proceeded to rapidly scale their business, knowing that Google Cloud would ensure they could meet compliance standards in whatever country or region they expanded into.
“Since we’ve moved to Google Cloud and chose Enhanced Support, we’ve had 100% availability. That’s zero downtime, which is incredible.” — Raj Ramchandani, CEO, EyecareLive
Enhanced Support also provided the capabilities for EyecareLive to:
Resolve issues and minimize any unplanned downtime to maintain a high-quality, secure experience for doctors and patients during and after migration
Acquire fast responses to questions from technical support experts
Learn from guidance from the Enhanced Support team beyond immediate technical issues
By working closely with the Google Cloud Enhanced Support team, EyecareLive was able to successfully migrate their platform.
“If you ask any of my engineers which cloud provider they prefer, they’d all respond ‘Google Cloud,’” says Ramchandani. “The documentation is there, the sample code is there, everything that we need to get started is available.”
EyecareLive was then able to go on to grow and scale their business in the cloud in the following ways:
Successfully managed a complex migration with minimal disruption and maximum availability, ensuring a consistent, secure, and compliant-ready experience for doctors and patients
Gained the trust of both doctors and patients – they know that EyecareLive protects their sensitive medical data
Kept EyecareLive agile and focused on innovating forward rather than building new features from scratch by supporting the team as they took advantage of Google’s tailored, plug-and-play technologies
Analyzed performance over time to plan for future growth by partnering with Enhanced Support for the long term
“We know we can rely on Google Cloud from a security point of view. We love the fact that Google Cloud Healthcare solution is HIPAA compliant. Those are the things that make us trust Google to do the right thing.” — Raj Ramchandani, CEO, EyecareLive
With the help of Enhanced Support, EyecareLive brings digital transformation to eye care within the healthcare industry by integrating the entire ecosystem of eye-care partners onto one platform, making EyecareLive a leader in its industry.
Learn more about Google Cloud Customer Care services and sign up today.
Read More for the details.
Cybercrime costs companies 6 trillion dollars annually, with ransomware damage accounting for $20B alone [1]. A major source of attack vectors is vulnerabilities present in your open source software, and vulnerabilities are more common in popular projects. In 2021, the top 10% of most popular OSS project versions were 29% more likely on average to contain known vulnerabilities. Conversely, the remaining 90% of project versions were only 6.5% likely to contain known vulnerabilities [2]. Google understands the challenges of working with open source software. We’ve been doing it for decades and are making some of our best practices available to customers through our solutions on Google Cloud. Below are three simple ways to get started and leverage our artifact management platform.
Using Google Cloud’s native registry solution: Artifact Registry is the next generation of Container Registry and a great option for securing and optimizing storage of your images. It provides centralized management and lets you store a diverse set of artifacts with seamless integration with Google Cloud runtimes and DevOps solutions, letting you build and deploy your applications with ease.
Shift left to discover critical vulnerabilities sooner: By enabling automatic scanning of containers in Artifact Registry, you get vulnerability detection early on in the development process. Once enabled, any image pushed to the registry is scanned automatically for a growing number of operating system and language package vulnerabilities. Continuous analysis updates vulnerability information for the image as long as it’s in active use. This simple step allows you to shift security left and detect critical vulnerabilities in your running applications before they become more broadly available to malicious actors.
Deployments made easy and optimized for GKE: With regionalized repositories, your images are well positioned for quick and easy deployment to Google Cloud runtimes. You can further reduce the start-up latency of your applications running on GKE with image streaming.
Our native Artifact Management solutions have tight integration with other Google Cloud services like IAM and Binary Authorization. Using Artifact Registry with automatic scanning is a key step towards improving the security posture of your software development life cycle.
Leverage these Google Cloud solutions to optimize your container workloads and help your organization shift security left. Learn more about Artifact Registry and enabling automated scanning.
These features are available now.
1. Cyberwarfare In The C-Suite
2. State of the software supply chain 2021
Read More for the details.
Shape your query plans without changing the code.
Read More for the details.
Azure Ultra Disk Storage provides high performance along with sub-millisecond latency for your most demanding workloads, and is now available in China North 3.
Read More for the details.
You can now set an EC2 Amazon Machine Image (AMI) to use Instance Metadata Service Version 2 (IMDSv2) by default. IMDSv2 is an enhancement to instance metadata access that requires session-oriented requests to add defense in depth against unauthorized metadata access. IMDSv2 requires a PUT request to initiate a session to the instance metadata service and retrieve a token. To set your instances as IMDSv2-only, you previously had to configure Instance Metadata Options during instance launch or update your instance after launch using the ModifyInstanceMetadataOptions API.
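The sketch below shows both halves of this: registering an AMI that defaults launched instances to IMDSv2 (the ImdsSupport value reflects this launch; the other register_image parameters are placeholders) and the session-oriented token handshake an application performs from inside an instance:

```python
import boto3
import requests

ec2 = boto3.client("ec2")

# Register an AMI whose instances default to IMDSv2; snapshot ID and name are placeholders.
ec2.register_image(
    Name="my-imdsv2-default-ami",
    RootDeviceName="/dev/xvda",
    BlockDeviceMappings=[
        {"DeviceName": "/dev/xvda", "Ebs": {"SnapshotId": "snap-0123456789abcdef0"}}
    ],
    ImdsSupport="v2.0",
)

# IMDSv2 session flow (run from inside an instance): PUT for a token, then use it.
token = requests.put(
    "http://169.254.169.254/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
).text
instance_id = requests.get(
    "http://169.254.169.254/latest/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
).text
```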
Read More for the details.
Amazon SageMaker Canvas now supports quicker set up of time-series forecasting models with simplified administration of required permissions. SageMaker Canvas is a visual point-and-click service that enables business analysts to generate accurate machine learning (ML) models for insights and predictions on their own — without requiring any machine learning experience or having to write a single line of code.
Read More for the details.
AWS customers have been using Amazon ECS and Amazon EKS to run and optimize their Microsoft Windows Server workloads. ECS- and EKS-optimized Amazon Machine Images (AMIs) for Windows Server have traditionally used Mirantis Container Runtime (formerly Docker Engine – Enterprise) as the default runtime framework. AWS has now migrated all the latest (September 2022) container-optimized Windows Server AMIs to use Docker Community Edition (CE). This change helps customers avoid additional costs for support, bug fixes, and security patches for Mirantis Container Runtime, following Microsoft’s announcement that support for Mirantis Container Runtime transfers to Mirantis at the end of September 2022.
Read More for the details.