Azure – General availability: Azure Cosmos DB autoscale RU/s entry point is 4x lower
Set autoscale on your database or container with a new scale range of 100 RU/s – 1000 RU/s.
Read More for the details.
Assess, receive Azure recommendations, and then migrate your SQL Server databases using the Azure SQL Migration extension in Azure Data Studio.
Read More for the details.
What’s your role?
I lead the EMEA team of Cloud Solutions Architects, a team responsible for supporting & scaling the efforts of hundreds of customer sales engineers, specialists, partners and customers, across a range of industries. The goal is not simply to figure out technical solutions that have others stumped, but to share complex new initiatives with a range of stakeholders. We build solutions that, if we luck out, may evolve to become a new product feature but above all, help customers transform to digital organizations that gather, secure, and use data in richer ways.
Tell us about your path to Google.
Feeling stretched and challenged has been my happy place for as long as I can remember. I grew up in the London Borough of Hackney and studied Chemistry at University College, London. After a brief period as a licensed arsonist for the U.K. Ministry of Defense (I tested materials to analyze what was in the fumes they gave off and how long folks had to get away from it in the unfortunate case they caught fire…it took me a while to realize sometimes I was the only person in the building!) – I discovered the additional challenge of learning about computers, and how my fellow scientists could work with them. My curiosity took me to databases, networks, security and eventually platform architecture and the cloud.
Problem solving seems to be at the heart of all you do. What’s your process?
Usually when we start working with a customer, it’s a problem that their customer engineer can’t solve alone, or a specialist may need assistance with solving. When I’m brought in, I try to get right to the customer’s pain point and figure out how they talk about their problem. Once there’s some empathy going, we can talk about possible answers. When they start asking to see things in action, or about how they might train their people to work better with a solution, I can tell they’re starting to see the solution.
We’re like builders, laying the foundations and building on top. But, instead of bricks, we work with customers in digital transformation. It helps to be comfortable feeling challenged.
Working with customers involves the most complexity I’ve ever encountered, and my team thrives on it.
Do you have any passion projects?
On the side, I celebrate other kinds of complexity, writing short stories sparked by photographs that I’ve taken, and penning a technical blog called Grumpy Grace that explores my interest in the complexity of everything from analyzing my own Twitter feed to what the design of medieval castles tells us about contemporary computer security architectures.
What has been unique about Google?
I’d never been interviewed by a woman until I got here. At my previous company, I was the only female solutions architect for two years, globally – so that was a welcome change. We still have work to do, but we’re on the right path.
What is your advice for women engineers?
My most frequent advice to women in engineering is to take the initiative at first meetings, whether with customers or partners. You have to qualify yourself, right at the start. Say your name, and what you’ve done, where you’ve worked, what you’ve built. Provide context and tell your story first so people don’t make assumptions.
Read More for the details.
Ucraft is an easy-to-use website builder for anyone who wants to create a unique and powerful website. We currently have around 1 million users in total, including 450K active users, and we help online business owners, bloggers, startups, designers, freelancers, and entrepreneurs grow their brands. And Google Cloud has been there to help us grow our brand every step of the way.
Establishing a foothold in the website-builder market is a major undertaking for a startup like Ucraft. The field is extremely crowded, with dozens of entrenched players, and new vendors entering the fray seemingly every week. To gain market traction we needed to develop a unique and compelling offering. But like many startups, we had limited financial resources and had to spend wisely.
After considering the options, we selected Google Cloud to power our new business, in addition to becoming a Google Cloud Premier Partner. Google Cloud lets us avoid infrastructure cost and complexity, and free up technical staff and budget to focus on building differentiated capabilities to attract customers and grow the brand. Our ready-made page layouts and customizable templates, combined with our free hosting and affordable pricing, helped us stand out from the competition and rack up early customer wins.
Google Cloud lets us efficiently and cost-effectively evolve our platform from a turnkey managed service offering to a true Software-as-a-Service (SaaS) solution. In the early days we used Compute Engine to develop and hone the platform and support our initial customer deployments. Compute Engine helped us minimize total cost of ownership by avoiding upfront capital outlays and aligning ongoing operating expenses with evolving business demands. We expanded incrementally, implementing a unique compute instance for each customer.
As time went by, we adopted App Engine to support the next phase of our growth. Google Cloud’s fully managed, serverless platform lets us increase automation and simplify operations even further. App Engine dynamically scales compute resources in response to real-time traffic demands, enabling us to improve service agility, avoid over-provisioning or under-provisioning resources, and further optimize costs.
Ultimately, we migrated to a microservices architecture powered by Google Kubernetes Engine (GKE) to support our rapidly expanding customer base and long-term vision and development plans. GKE provides the scalability, security, and stability needed to deliver a global SaaS solution. The product’s autoscaling capabilities and load-balancing functionality help us optimize performance and economics. Built-in security features make it easy to segregate customers and protect confidential data. And the high-availability architecture boosts reliability and customer satisfaction. In fact, thanks to Google Cloud we deliver the highest uptime of any website builder in the industry.
GKE speeds up development and accelerates the pace of innovation. It allowed us to efficiently add eCommerce functionality to our platform to move up the value stack and expand revenues. Now, in addition to building websites, our customers can easily set up online stores to sell products and services on the web.
Going forward, Google Cloud AI capabilities are helping us completely reimagine the user experience. We use the Google Cloud Natural Language AI and Vision AI APIs to reinvent the way you build websites. Ucraft’s artificial designer assistant guides you through the entire process, suggesting site designs, content, and page layouts based on your input and feedback.
Ucraft Next, our next-generation offering, takes ease-of-use to the next level. With Ucraft Next, anyone can create a modern website with unmatched speed and simplicity. Our intelligent assistants do all the work, automatically designing and building the site for you. Ucraft Next helps you get your website or eCommerce site up and running quickly and easily, so you can focus on running your business.
With so many platforms out there now, launching an eCommerce store quickly should be easy, but the truth is, it’s not. The problem is that no one has addressed the two things that create the most delay: design and content production. This is where the heavy lifting happens and it gets worse with the indecision created when trying to get things just right, so we fixed it.
Ucraft Next is game-changing intelligent eCommerce that uses powerful AI to get your store launched faster than any other platform on the market. Our design and content bots do the hard work for you instantly and with incredible results. Your vision deserves its best shot at success. We make building and launching your online store something that happens in a matter of minutes, not weeks or months.
We’re excited about what we’ve already been able to accomplish by building on Google Cloud and are excited about the future. We are committed to creating a platform that lets any entrepreneur easily create a high-quality, high-impact online presence. It helps level the playing field so more people with great business ideas can begin to reach audiences everywhere with digital experiences that rival those of their much larger, established competitors.
If you want to learn more about how Google Cloud can help your startup, visit our page here to get more information about our program, and sign up for our communications to get a look at our community activities, digital events, special offers, and more.
Read More for the details.
Editor’s note: The Broad Institute of MIT and Harvard, a nonprofit biomedical research organization that develops genomics software, needs to keep pace with the latest scientific discoveries. Here’s how they use managed database services from Google Cloud to move fast and stay on the cutting edge.
The Broad Institute of MIT and Harvard is a nonprofit biomedical research organization that focuses on advancing the understanding and treatment of human disease. One of our major initiatives is developing genomics tools and making them available across the scientific ecosystem. The rapid pace of discovery means our data sciences team has to keep pace so that our software products enable the best research. Our ability to move fast is critical. And when we decided to pivot our focus during the pandemic to develop and process tens of millions of COVID-19 tests, speed was a driving factor. Fully managed database services and analytics solutions from Google Cloud helped us accelerate our pace of development.
One of our main products that uses Google Cloud services is Terra — a secure, scalable, open-source platform for biomedical research. We co-developed it with Microsoft and Verily to help researchers access public datasets, manage their private data, organize their research, and collaborate with others. After a long history of working with Google Cloud, it was natural for us to leverage Google Cloud services for the control plane of Terra.
For the backend, we use a number of cloud services including Cloud SQL for PostgreSQL and MySQL, as well as Firestore, to allow users to track their different data assets, methods, and research results, and to power the Terra control plane. Cloud SQL helps us accelerate development in two key areas. First, our developers can get these database services up and running quickly, without going through some centralized system that might become a bottleneck. And secondly, using Cloud SQL lowers our operational burden. We can keep managed services running and performing well using fewer of our own developers. Instead, these teams can focus on developing new features for users.
For much of our genomics analysis, we use BigQuery, Compute Engine, and Dataproc, but understanding the detailed costs of that research has been challenging. Billing data can be exported into BigQuery, but the costs wouldn’t be attributed to the specific analyses being performed. However, by adding billing labels to each cloud resource used and joining that information with detailed metadata in our relational Cloud SQL databases, we can provide extremely fine-grained cost information. As a result, for example, we’re able to tell a researcher that their virtual machine cost 17 cents as part of a certain analysis, research project, or sample. With these insights, our researchers have visibility into their costs and are able to decide where to focus their optimizations.
When the global pandemic hit, the Broad Institute volunteered to make our clinical testing and diagnostics facilities available to serve public health needs. We created a novel automation system for COVID-19 test processing that is scalable, modular, and high-throughput, in service of the public health needs of the Commonwealth of Massachusetts and surrounding areas. In the first several months of the pandemic, Broad processed more than 10 percent of all PCR tests in the United States, and today has processed more than 30 million tests, with turnaround times of less than 24 hours. Using serverless components with a Cloud SQL for PostgreSQL database at its core, we built a testing solution—going from an idea to launching our large-scale COVID-19 operation in just two weeks. On our first day, we delivered 140 tests. But a year later we were delivering up to 150,000 tests a day. That’s in part because our database solution was able to scale up really quickly.
With a few CLI commands, we enabled high availability and read replicas for our database, while backups and maintenance upgrades were handled automatically. This scalability made a big difference to us considering we were a small team working on very tight timelines.
Read More for the details.
Over the past decade, cybersecurity has posed an increasing risk for organizations. In fact, cyber incidents topped the recent Allianz Risk Barometer for only the second time in the survey’s history. The challenges in combating these risks only continue to grow. Adversaries tend to be agile and are consistently looking for new ways to land within your digital environments. They also drive attack vectors that work, which means enterprise risk leaders are now forced to look for new ways of securing infrastructure and data.
When cloud, in its various delivery models, was first introduced, it didn’t fit neatly into the security frameworks that had seemingly protected networks for many decades. Public cloud was the answer to ongoing IT challenges: scale, resources, security capabilities, and budget cycle limitations. Now, public cloud is meeting the increasing challenge of implementing cybersecurity controls and frameworks that are capable of protecting today’s global enterprise.
Cloud adoption – with all its scale and redistribution of longstanding security paradigms – is the optimal choice for infrastructure and security, particularly as organizations grapple with the need to engage in digital transformation. We assert that successful digital transformation is impossible without incorporating the use of the scale, security architecture, and resiliency of the cloud.
Consequently, cloud adoption becomes a necessary component of roadmap discussions and planning as your organization looks to reduce overall risk. Risk leaders and enterprise cybersecurity leaders must consider that moving data, digital processes, and priority workloads to the public cloud is a crucial step for meeting the current and future digital needs of the enterprise. Going forward, this digital transformation increasingly will include hybrid infrastructure environments composed of a combination of on-premises and cloud solutions.
As digital environments become more complex within a given organization, proactively countering adversaries becomes all the more difficult. It’s harder to implement, scale, and adhere to existing security and control frameworks. It’s also increasingly challenging to apply framework guidance to new applications, build and support infrastructure within a secure foundation, and maintain good cyber hygiene through the digital lifecycle.
As reported by TechTarget, the 2020 hack of the SolarWinds Orion IT performance monitoring system is a prime example. It grabbed headlines “not because a single company was breached, but because it triggered a much larger software supply chain incident.” This vulnerability in popular, commercially available, and widely utilized software compromised the data, networks, and systems of thousands of companies when a routine software update turned out to be backdoor malware.
A close look at the root problems behind high-profile security breaches reveals that it’s a lack of agility and an inability to scale resources that prohibit the modern security organization’s ability to respond quickly enough to counter new challenges. Look even closer and you’ll often find an insufficient implementation of best practices and ineffective solutions, leaving an organization continually chasing the next tool or solution and scrambling to stay ahead of emerging threats.
While the cost to individual businesses is high, most organizations struggle to find the skills and resources needed to rigorously maintain data security basics and ensure readiness for inevitable attacks. That’s especially true when you consider that maintaining an effective state of cybersecurity readiness is a costly practice that requires the continual development of expertise, the evaluation of new tools, and an ongoing element of vigilance.
Threat visibility is a big part of the problem. You can’t protect your company from what you can’t see. For individual enterprises – with critical data workloads housed in a combination of on-premises servers, a variety of endpoints, and both private and public cloud instances – staying ahead in the ongoing battle requires a new approach.
The identification of actionable alerts and other data contributes to a better overall state of readiness. Thought leadership and discussion around Autonomic Security Operations provide a promising outlook for security organizations willing to lean into the changing technology landscape – one that now benefits from the automation and machine learning already used in security stacks. Reducing the chance of introducing vulnerabilities or missing critical alerts starts with ensuring full visibility into an increasingly expansive and complex environment.
Industry megatrends are driving cloud adoption and with it a path to improved cybersecurity. Among these trends is the concept of shared fate as an evolution of the historical shared-responsibility model. Shared fate drives a flywheel of increasing trust which develops as more enterprises transition to the cloud. This compels an even higher security investment and a more vested interest from cloud service providers.
At Google Cloud, shared fate means we take an active stake in our customers’ security posture, offering capabilities and defaults that help ensure secure deployments and configurations in the public cloud. We also offer experience-based guidance on how to configure cloud workloads for security, and can assist with risk management, reduction, and transfer.
The Google Cloud Risk Protection Program represents the continuing evolution of the shared-fate model. The program offers a practical solution that provides the modern enterprise a snapshot comparison of its current security state against well-adopted cloud-security frameworks. It also gives you an opportunity to explore cyber insurance designed to meet your needs from our partners Allianz and Munich Re.
When performed with diligence, cloud adoption can help increase your overall cybersecurity effectiveness. Using a hybrid approach – and steadily reducing which data assets remain on premises – can strengthen your overall security posture and reduce risks to the organization.
In comparison to the enterprise-by-enterprise security scramble to protect data and workloads in individual private clouds, global public cloud solutions like Google Cloud can be a force multiplier when adhering to established best practices. By that, we mean, quite literally, that you get more security at every touchpoint – from infrastructure and software to access and data security.
Strong security in the public cloud starts with the foundational pieces: the hardware and design elements. At Google, for example, we take a security-by-design approach within both the data center and the purpose-built components themselves. Within Google Cloud, data is encrypted by default – both at rest and in transit. Google’s baseline security architecture adheres to zero trust principles, meaning that no network, device, person, or service is trusted by default.
Embarking on a zero trust architecture journey gives modern security practitioners the ability to methodically shut down traditional attack vectors. Zero trust also provides more granular visibility and control of rapidly expanding environments. The recent emphasis on this approach, set forth by the U.S. White House in an executive order on increasing cybersecurity resilience, is an example of the wide-scale recognition by both government and industry of its benefits.
Since adopting a zero trust approach more than a decade ago, Google has achieved a recognizable level of maturity, reflected by our internal infrastructure and multiple enterprise offerings, enabling different aspects of the zero trust security journey.
Privacy frameworks, regulatory compliance, and data sovereignty are driving critical elements of the cloud adoption cycle. Cloud providers must ensure they have the necessary controls, attestations, and abilities to audit in order to provide organizations with the tools to preemptively satisfy regulatory and compliance mandates across the globe.
This compliance capability is now consistently expected to be a design feature built into the cloud journey; it cannot simply be an add-on. The direction of this evolution promises to play more of a role in the future of cloud adoption, not less. Because this is an ongoing component of enterprise risk evaluation, your business must consider cloud providers that can partner on this critical aspect of the journey – and not leave you without the resources to respond to this growing, critical need.
Digital transformation is difficult because the modern enterprise must build and design for both today and tomorrow. From a security perspective, the challenge has often been that security industry practitioners cannot always predict what the future will look like. That said, there are clear steps you can take to mitigate all-around risk throughout the process.
How you approach the cloud is, of course, integral to your journey, but it doesn’t need to be an all-or-nothing proposition. And although technology debt continues to persist with legacy systems, that doesn’t mean you shouldn’t begin to move forward.
Google Cloud enables you to modernize at your own pace and understand what’s realistic. We recommend you move what data you can to a more secure public cloud today, followed by a phased approach to move more in the months and years that follow. The key tenets of our approach to security in the public cloud include:
The security-by-design posture of Google Cloud can help modern-day enterprises scale security capabilities and reduce risk with an architecture built on zero trust principles.
The Google Cloud approach to security and resiliency includes a framework to help you protect against adverse cyber events by using our comprehensive suite of solutions.
Google Cloud can help ensure your organization adheres to the requirements of a growing and increasingly complex regulatory and compliance environment.
The ideal model of a future organization is one where cloud plays a major role in infrastructure design and architecture. Your organization should begin to view public cloud as an enabler of the business and a core component of digital transformation.
As you transition more data to the public cloud, it’s paramount that trust is ingrained in every step you take with your cloud service provider. Many service providers readily take on a shared responsibility with your organization when it comes to security. At Google, we take it several steps further with our shared fate model to help ensure data security in the public cloud. Your future and ours are part of the same data security journey.
Read More for the details.
For 117 years, the U.S. Department of Agriculture’s Forest Service has been a steward of America’s forests, grasslands, and waterways. It directly manages 193 million acres and supports sustainable management on a total of 500 million acres of private, state, and tribal lands. Its impact reaches far beyond even that, offering its research and learning freely to the world.
At Google, we’re big admirers of the Forest Service’s mission. So we were thrilled to learn in 2011 that its scientists were using Google Earth Engine, our planetary-scale platform for Earth Science data and analysis, to aid its research, understanding, and effectiveness. In the years since, Google has worked with the Forest Service to meet its unique requirements for visual information about the planet. Using both historical and current data, the Forest Service built new products, workflows, and tools that help more effectively and sustainably manage our natural resources. The Forest Service also uses Earth Engine and Google Cloud to study the effects of climate change, forest fires, insects and disease, helping them create new insights and strategies.
Besides gaining newfound depths of insight, the Forest Service has also sped up its research dramatically, enabling everyone to do more. Using Google Cloud and Earth Engine, the Forest Service reduced the time it took to analyze 10 years’ worth of land-cover changes from three months to just one hour, using just 100 lines of code. The agency built new models for coping with change, then mapped these changes over time, in its Landscape Change Monitoring System (LCMS) project.
Emergency responders can now respond more effectively to new threats that arise after wildfires, hurricanes, and other natural disasters. Forest health specialists can detect and monitor the impacts of invasive insects, diseases, and drought. More Forest Service personnel can use new tools and products within Earth Engine, thanks to numerous training and outreach sessions within the Forest Service.
Researchers elsewhere also benefited when the Forest Service created new toolkits, and posted them to GitHub for public use. For example, there’s geeViz, a repository of Google Earth Engine Python code modules useful for general data processing, analysis, and visualization.
This is only the start. Recently, the Forest Service started using Google Cloud’s processing and analysis tools for projects like California’s Wildfire and Forest Resilience Action Plan. Forest Service researchers also use Google Cloud to better understand ecological conditions across landscapes in projects like Fuelcast, which provides actionable intelligence for rangeland managers, fire specialists, and growers, and the Scenario Investment Planning Platform for modeling local and national land management scenarios.
The Forest Service is a pioneer in building technology to help us better understand and care for our planet. With more frequent imaging, rich satellite data sets, and sophisticated database and computation systems, we can view and model the Earth as a large-scale dynamic system.
We are honored and excited to respond to the unique set of requirements of the scientists, engineers, rangers, and firefighters of the USFS, and look forward to years of learning about — and better caring for — our most precious resources.
*Image 1: The USDA Forest Service (USFS) Geospatial Technology and Applications Center (GTAC) uses science-based remote sensing methods to characterize vegetation and soil condition after wildland fire events. The results are used to facilitate emergency assessments to support hazard mitigation, to inform post-fire restoration planning, and to support the monitoring of national fire policy effectiveness. GTAC currently conducts these mapping efforts using long-established geospatial workflows. However, GTAC has adapted its post-fire mapping and assessment workflows to work within Google Earth Engine (GEE) to accommodate the needs of other users in the USFS. The spatially and temporally comprehensive coverage of moderate resolution multispectral data sources (e.g., Landsat, Sentinel 2) and analytical power provided by GEE allows users to create geospatial burn severity products quickly and easily. Box 1 shows a pre-fire Sentinel-2 false color composite image. Box 2 shows a post-fire Sentinel-2 false color composite image with the fire scar apparent in reddish brown. Box 3 shows a differenced Normalized Burn Ratio (dNBR) image showing the change between the pre- and post-fire images in Boxes 1 and 2. Box 4 shows a thresholded dNBR image of the burned area with four classes of burn severity (unburned to high severity), which is the final output delivered to forest managers.
*Image 2: Leveraging Google Earth Engine (GEE), the USDA Forest Service (USFS) Geospatial Technology and Applications Center (GTAC) and USFS Region 8 developed the Tree Structure Damage Impact Predictive (TreeS-DIP) modeling approach to predict wind damage to trees resulting from large hurricane events and produce spatial products across the landscape. TreeS-DIP results become available within 48 hours of a large storm event making landfall, to allow allocation of ground resources to the field for strategic planning and management. Boxes 1 and 3 above show TreeS-DIP modeled outputs with varying data inputs and parameters. Box 2 shows changes in greenness (Normalized Burn Ratio; NBR) measured with GEE during the recovery from Hurricane Ida, shown as a visual comparison to the rapidly available products from TreeS-DIP.
*Image 3: Severe drought conditions across the American West prompted concern about the health and status of pinyon-juniper woodlands, a vast and unique ecosystem. In a cooperative project between the USDA Forest Service (USFS) Geospatial Technology and Applications Center (GTAC) and Forest Health Protection (FHP), Google Earth Engine (GEE) was used to map pinyon pine and juniper mortality across 10 Western US States. The outputs are now being used to plan for future work including on-the-ground efforts, high-resolution imagery acquisitions, aerial surveys, in-depth mortality modeling, and planning for 2022 field season work.
Box 1 contains remote sensing change detection outputs (in white) generated with GEE, showing pinyon-juniper decline across the Southwestern US. Box 2 shows NAIP imagery from 2017, with box 3 showing NAIP imagery from 2021. NAIP imagery from these years shows trees changing from healthy and green in 2017 to brown and dying in 2021. In addition, boxes 2 and 3 show change detection outputs from box 1 for a location outside of Flagstaff, AZ, converted to polygons (in white). The polygon in box 2 is displayed as a dashed line to serve as a reference, while the solid line in box 3 shows the measured change in 2021. Converting rasters to polygons allows the data to be easily used on tablet computers and makes it possible to add information and photographs from field visits.
Read More for the details.
Climate-induced wildfires, massive storms, and deadly heat waves, along with the complexities of managing sustainable supply chains and emissions reduction, have awoken the corporate world to the stark realities of climate change.
Sustainability and environmental impact have become top of mind for executives across the world, with many starting to prioritize sustainable changes to how they operate. In a new global survey of 1,491 executives across 16 countries conducted by The Harris Poll for Google Cloud, business leaders shared their views on prioritization, challenges and opportunities for sustainability.
Environmental, Social, and Governance initiatives came out as a top organizational priority, on par with evolving or adjusting business models, with close to 10% of a company’s budget going to sustainability efforts. Executives are willing to grow their business in a way that is sustainable, even if it means lower revenue in the near future.
At face value, 80% of executives give their organization an above average rating for their environmental sustainability effort. Eighty-six percent (86%) believe their efforts are making a difference in advancing sustainability.
The research showed a troubling gap between how well companies think they’re doing, and how accurately they’re able to measure it. Only 36% of respondents said their organizations have measurement tools in place to quantify their sustainability efforts, and just 17% are using those measurements to optimize based on results.
Without accurate measurement, it’s hard to report genuine progress – 58% agree that green hypocrisy exists and that their organization has overstated its sustainability efforts, with executives in Financial Services and Supply Chain/Logistics reporting the highest rates, at 66% and 65% respectively. Roughly two-thirds (66%) questioned how genuine some of their organization’s sustainability initiatives are.
Businesses across industries struggle to quantify their sustainability efforts: 65% agree they want to advance sustainability efforts but don’t know how to actually do it, with executives in Supply Chain/Logistics and Healthcare/Life Science topping the list at 79% and 74% respectively, and Retail at just 54%.
Leadership towards sustainability starts at the top of the organizational chart. When asked which groups are enabling organizational sustainability, 53% pointed to board members and senior leaders. But they hunger for more: 82% agreed with the statement, “I wish our board or senior leadership gave us more room to prioritize sustainability.”
Executives want more transparency and opportunity to overcome their top barriers – 87% agree that if business leaders can be more honest about the issues they face with becoming more environmentally sustainable, they can make meaningful progress.
The majority (82%) of executives wish they had more room to prioritize sustainability.
74% of executives believe that, if these challenges can be overcome, sustainability can drive powerful business transformations. Technology and sustainability are the top two areas where executives plan to increase investment in 2022, and executives see them as uniquely intertwined: technology innovation is the top area executives believe will have the most impact in tackling sustainability challenges. Additionally, 78% of executives cite technology as critical for their future sustainability efforts, attesting that it helps transform operations, socialize their initiatives more broadly, and measure and report on the impact of their efforts.
The good news is that it’s still early in many companies’ sustainability journeys – more than half of executives say they are in the planning and early implementation phases of sustainability transformations, so there’s progress to be made. The challenging news is that we need urgent action from all industries now to prevent the worst impacts of climate change.
At Google Cloud, we’re committed to helping our customers use cloud technology to achieve their sustainability goals and do more for the planet. We operate the cleanest cloud in the industry, and we recognize that building a more sustainable business is not easy. Here, sustainability teams always have a seat at the planning table, so we can work together on using cloud technology to build a more sustainable future. For a deeper look at executive attitudes toward sustainability, check out the full report.
Read More for the details.
As you start scaling up your serverless applications, caching is one of the elements that can greatly improve your site’s performance and responsiveness.
With the release of Django 4.0, Redis is now a core supported caching backend. Redis is available as part of Memorystore, Google Cloud’s managed in-memory data store.
For Django deployments hosted on Cloud Run, you can add caching with Memorystore for Redis in just a few steps, providing low latency access and high throughput for your heavily accessed data.
A note on costs: While Cloud Run has a generous free tier, Memorystore and Serverless VPC have a base monthly cost. At the time of writing, the estimated cost of setting up this configuration is around USD $50/month, but you should confirm all costs when provisioning any cloud infrastructure.
If you’ve deployed complex services on Cloud Run before, you’ll be aware that managed products like Cloud SQL have a public IP address, meaning they can be accessed from the public internet. Cloud SQL databases can optionally be configured to have a private IP address (often looking like 10.x.x.x), meaning they are only accessible by entities on the same internal network. That internal network is also known as a Virtual Private Cloud Network, or VPC network.
Memorystore allows only internal IP addresses, so additional configuration is required to allow Cloud Run access.
In order for Cloud Run to be able to connect to Memorystore, you will need to establish a Serverless VPC Access connector, allowing for connectivity between Cloud Run and the VPC where your Memorystore instance lives.
To start, you will need to decide if you want to use the existing default network, or create a new one. The default network is automatically available in your Google Cloud project when you enable the Compute Engine API, and offers a /20 subnet (4,096 addresses) of network space for different applications to communicate with each other.
If you want to create a new network, or have other networking constraints, you can read more about creating and using VPC networks. This article will opt to use the default network.
To create a Serverless VPC Connector, go to the Create Connector page in the Google Cloud console, and enter your settings, connecting to your selected network.
If you have previously created anything on this network, ensure that whatever subnet range you enter does not overlap with any other subnets you have in that network!
You can also create a connector using gcloud, using the sample configuration below, which uses the default network, and suggested configurations from the console:
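For example, a command along these lines creates a connector on the default network (the connector name, region, and /28 range here are placeholders; pick a range that doesn’t overlap your existing subnets):

```sh
# Create a Serverless VPC Access connector on the default network.
# Replace the name, region, and IP range with values for your project.
gcloud compute networks vpc-access connectors create my-connector \
  --network default \
  --region us-central1 \
  --range 10.8.0.0/28
```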
Once your network and VPC connector are set up, you can then provision your Memorystore instance on that network.
While Memcache has been supported in core Django since version 3.2 (April 2021), the 2021 Django Developers Survey showed that developers who chose to use caching on their sites were nearly four times more likely to use Redis than Memcache.
This post will focus on deploying Redis caching, but you can read the Memcache implementation notes in the Django documentation to see how you could adapt this post for Memcache.
Following the Creating a Redis instance on a VPC Network instructions, you can provision an instance in moments after selecting your tier, capacity, and region.
You can also create a Redis instance from gcloud, using the sample minimum configuration below:
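For example, something like the following creates a 1 GB Basic Tier instance named myredis on the default network (the region is a placeholder; use the same region as your Cloud Run service):

```sh
# Create a minimal Memorystore for Redis instance on the default network.
# The network can also be given as a full resource path if name resolution fails.
gcloud redis instances create myredis \
  --size=1 \
  --region=us-central1 \
  --tier=basic \
  --network=default
```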
You can optionally configure Redis AUTH for this instance. You can read more about the configuration options in the reference documentation.
For reference, examples of configurations for the rest of the article will reference this instance as “myredis”.
One of the simplest ways to connect your Memorystore instance with Django is by implementing per-site caching. This will cache every page on your site for a default of 600 seconds. Django also has options for configuring per-view and template fragment caching.
To implement per-site caching, you will need to make some changes to your settings.py file to configure this.
After your DATABASES configuration, you’ll need to add a CACHES setting, with the backend and location settings. The backend will be “django.core.cache.backends.redis.RedisCache”, and the location will be the “redis://” scheme, followed by the IP and Port of your Redis instance. You can get the IP and port of your instance using gcloud:
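For example (the region is a placeholder):

```sh
# Print the host IP and port of the myredis instance.
gcloud redis instances describe myredis \
  --region=us-central1 \
  --format="value(host,port)"
```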
Your final CACHES entry will look something like this:
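Here is a minimal sketch, with a placeholder private IP standing in for the values returned above:

```python
# settings.py — per-site cache backed by Memorystore for Redis.
# The address below is a placeholder; use your instance's host and port.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://10.127.16.3:6379",
    }
}
```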
If you’re opting to use django-environ with Secret Manager (as we do in our tutorials), you would add the REDIS_URL to your configuration settings:
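For example, the settings secret might gain a line like this (the address is a placeholder):

```
REDIS_URL="redis://10.127.16.3:6379"
```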
Then use that value in your settings.py:
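A minimal sketch, assuming django-environ has already loaded the secret into the environment as in our tutorials (the env object may already exist in your settings.py):

```python
# settings.py — read the cache location from the REDIS_URL environment variable.
import environ

env = environ.Env()

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": env("REDIS_URL"),
    }
}
```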
(As of django-environ 0.8.1 there is a pending feature request to support this in the existing cache_url helper.)
In your existing MIDDLEWARE configuration, add UpdateCacheMiddleware before, and FetchFromCacheMiddleware after, your (probably) existing CommonMiddleware entry:
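As a sketch (your project will have additional middleware entries in between):

```python
# settings.py — cache middleware ordering for per-site caching.
MIDDLEWARE = [
    "django.middleware.cache.UpdateCacheMiddleware",    # must come before CommonMiddleware
    "django.middleware.common.CommonMiddleware",
    "django.middleware.cache.FetchFromCacheMiddleware", # must come after CommonMiddleware
    # ...the rest of your existing middleware...
]
```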
This order is important, as middleware is applied differently during different phases of the request-response cycle. You can read more about this in the Django documentation.
Even though Django handles a lot of the caching implementation for you, you will still need to add the Python bindings as dependencies to your application. Django suggests installing both redis-py (for native binding support) and hiredis-py (to speed up multi-bulk replies). To do this, add redis and hiredis to your requirements.txt (or pyproject.toml if you’re using Poetry).
While you’re in your dependencies, make sure you bump your Django dependency to the most recent version!
With the configurations now set, you can update your service, making sure to connect the VPC connector you set up earlier! Updating your deployment will depend on how you deploy.
If you’re using continuous deployment, it will be easier to manually update your service before committing your code.
If you haven’t discovered source-based deployments, they allow you to build and deploy a Cloud Run service in one step:
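For example, assuming a service named my-django-service and the connector created earlier (both names are placeholders):

```sh
# Build and deploy from source, attaching the Serverless VPC Access connector.
gcloud run deploy my-django-service \
  --source . \
  --region us-central1 \
  --vpc-connector my-connector
```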
If you originally set up your service to reference the latest version of the secret, you won’t need to make any changes to that setting now. But if you did set a specific version, you’ll need to make those changes now.
After deploying, you may feel like your site is much more speedy, but how do you confirm your cache is being used?
If you already have performance monitoring in place, you can check for improvements over time. Or if you have any pages or searches on your site, you can see if they run faster the second time.
But to confirm your cache is having entries written to it, you can check in a number of ways. In Memorystore, you can go to the Monitoring tab of your instance and check the “Calls” graph for statistics about the get and set operations. You can also export data to Cloud Storage, then read it using programs like rdbtools.
For optimal performance, caching should be in the same network as the primary data source. Since you’ve just set up a Serverless VPC connector, you should also ensure that your Cloud SQL database is in the same VPC. You’ll want to edit your database to set a private IP in the VPC you created, then update your secret settings to reference this new IP (rather than the /cloudsql/instance value).
Because the socket-based Cloud SQL connections were handled by Cloud Run’s built-in Cloud SQL connection option, you can also remove that setting.
As your serverless applications scale, you may find you need to expand your infrastructure to suit. Using hosted services like Memorystore in Google Cloud, you can adapt and upgrade your applications to grow as you do without adding to your operations burden.
You can learn more about Memorystore for Redis in this Serverless Expeditions episode.
Read More for the details.
You can now use AWS PrivateLink to privately access Amazon Connect Wisdom instances from your Amazon Virtual Private Cloud (Amazon VPC) without using public IPs, and without requiring the traffic to traverse across the internet.
Read More for the details.
The transition of telecommunications networks to the cloud has begun. Yet, simply containerizing network functions and running them on centralized or edge clouds will not maximize the cost savings and operational efficiencies promised by the cloud. A nominal change in technology from virtualized to containerized functions risks the same disappointments seen with the previous virtualization effort: minimal reductions in costs and continued rigidity in deployments.
Over the last two years, Communication Service Providers (CSPs) have been approaching Google Cloud to ask us how we created our network. How did we achieve economies of scale and build out a resilient, flexible network while flattening the cost curve? How can they seize the opportunity to do the same during this network transition to telco and edge cloud?
To address these questions, Google Cloud and the Linux Foundation are thrilled to announce the formation of project Nephio in partnership with leaders across the telecommunications industry to work towards true cloud native automation to drive scale, efficiency, and high reliability across network operations.
The terms ‘cloud’ and ‘cloud-native’ refer to a set of key principles, from high infrastructure and network programmability to application awareness and intent, all the way to scaled economics. To date, “telco cloud” has fallen short of fully realizing and implementing these principles because it lacks true cloud native automation.
The first thing to note is that just shifting the network technology base from “virtualized” to “containerized” does not flatten the curve. In fact, it can increase costs due to additional infrastructure, new operating models, a more fragmented ecosystem, migration work, and new skill and training requirements. Any shift in network technology should only be made to support a more flexible and efficient operational model. For Google Cloud, this means that our global infrastructure is designed for the automation of both provisioning and ongoing operation, along with Site Reliability Engineering (SRE) best practices.
Google Cloud’s strategy to achieve this global automation efficiency is through the use of declarative, intent-based automation. In this automation model, operators specify their intention for the network configuration, and allow automated systems to actually make that intention real. Since the intention does not change as faults occur, networks grow, and environments shift, human interventions are not needed to adjust to those changes. Instead, automation simply does what it always does: evaluate the current state, compare it to the intended state, and take the actions needed to reconcile the two. Operations teams spend their time improving the automation, rather than fighting fires and implementing routine changes.
Google Cloud exported this idea for applications when we released Kubernetes, and the result has been widely adopted throughout the technology world. Google Cloud is now applying the same principles to the telecommunications network. Kubernetes extensibility can be leveraged to jump-start this process. We strongly believe that, without applying these techniques, the industry will fail to realize the benefits of the transition to cloud.
The telecommunications industry is complex, however, there are three key groups of players that must come together to realize the benefits of this transition:
Cloud & Infrastructure Providers
Network Function Vendors
Communications Service Providers
Without any one of these groups mentioned above, the telecommunications industry will struggle to meet expectations.
Cloud and infrastructure providers must embrace open standards for configuration of their platforms, easing the burden on network function (NF) vendors of supporting each cloud. This also simplifies the automation; platform-neutral automation can be built and, in turn, delegate to the platform-specific components. This will require coordination between the platform vendors across the stack (silicon to cloud management).
Network function vendors must modernize their workloads. Notice the need to modernize, not simply containerize. Network functions must provide better separation from the infrastructure, or the automations become too complex. These workloads must become more transferable across hardware and cloud providers in order to enjoy the benefits of cloud-enabled scalability, self-healing, redundancy, and agility.
Finally, CSPs themselves must adapt and change their operational support models, embrace modern DevOps, intent-based design, and related methods. The way to truly achieve cost savings and agility is to let machines do the drudgery, and have operations teams focus on making the machines better.
Google Cloud’s experience has shown that scaling a global, resilient, and ever-changing network is possible without similarly scaling the costs, by implementing cloud native, intent-based automation. With the start of Nephio, along with an ecosystem of network function vendors, cloud and infrastructure providers, CSPs, and service orchestration partners, we are excited to work towards these goals together.
This project brings together these vital stakeholders to make sure we don’t miss this opportunity to dramatically reduce costs and increase agility. We really look forward to collaborating with the wider ecosystem in order to fully realize the overall business benefits of the journey to cloud via cloud native network automation.
To learn more about how Google Cloud is helping the Telecommunications industry go here.
Read More for the details.
AWS App Runner now supports streaming all request traces of applications running on App Runner to AWS X-Ray. App Runner makes it easy for developers to quickly deploy containerized web applications and APIs to the cloud, at scale and without having to manage infrastructure. X-Ray support in App Runner enables measuring these applications’ performance as they interact with AWS data services such as Amazon RDS, DynamoDB, or Elasticache, or with applications running on ECS, EKS, or Fargate. Further, this launch enables App Runner users to use X-Ray to get an end-to-end view of requests as they travel through your application, and gain insights to identify issues or optimization opportunities.
Read More for the details.
You can now automatically attach Amazon FSx file systems to new Amazon EC2 instances you create in the new EC2 launch experience, making it simple to use feature-rich and highly-performant FSx shared file storage with your EC2 instances.
Read More for the details.
Today, Amazon Elastic Container Registry Public (ECR Public) is announcing changes to the gallery, including navigation breadcrumbs, copying of the image identifier from a dropdown, the addition of the publisher alias to the image name, and updates to the “repository not found” page. These changes simplify the experience of discovering and pulling images, resulting in less time spent identifying and pulling images.
Read More for the details.
Starting today, you can now automatically attach Amazon EFS file systems to new Amazon EC2 instances created from the Configure storage section of the new and improved instance launch experience, making it simpler and easier to use serverless and elastic file storage with your EC2 instances. This integration helps you simplify the process of configuring EC2 instances to mount EFS file systems at launch time with recommended mount options. In addition, from the Configure storage section of the launch experience, you can also create new EFS file systems using the recommended settings without having to leave the Amazon EC2 console.
Read More for the details.
Migrating devices across device templates at scale is now supported in Azure IoT Central jobs.
Read More for the details.
If you adore databases like we do, the launch of a new major version is an event you celebrate. With each new major version comes cutting-edge database features, performance boosts, and other goodies for database lovers like you. You can hardly wait to spin up a Cloud SQL instance to test out the freshest release for your new services! But how about those existing databases of yours–is there a way to get them upgraded?
Cloud SQL now supports in-place upgrades for PostgreSQL and SQL Server in preview. With in-place upgrades, you can upgrade your databases in just a few minutes with a single command. When you perform an in-place upgrade, Cloud SQL keeps things simple for you and maintains your database’s IP address, user data, and settings, so that you don’t have more work to do after the upgrade completes. Cloud SQL even takes care of the mundane steps, such as running compatibility checks and backing up your data. If an issue does arise during an upgrade, Cloud SQL automatically rolls your database back to the previous version, getting you back up and running right away. When compared with migration-based upgrades, in-place upgrades are faster, easier, and more reliable.
If you use SQL Server, you may also use in-place upgrades to upgrade your edition and take advantage of features in a more premium SQL Server offering, such as SQL Server Enterprise.
In-place upgrades can be performed with a few clicks from the Cloud Console, or through gcloud or the API. MySQL support is coming soon.
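As a sketch, an in-place upgrade through gcloud might look something like this (the instance name is a placeholder, and the exact flags may vary by gcloud release, so check the documentation):

```sh
# Upgrade a Cloud SQL for PostgreSQL instance to PostgreSQL 14 in place.
gcloud sql instances patch my-postgres-instance \
  --database-version=POSTGRES_14
```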
Pretend you’re a database administrator at BuyLots, where you manage a fleet of Cloud SQL for PostgreSQL and SQL Server databases. The BuyLots fleet is on PostgreSQL 9.6 and SQL Server 2017–while that was state-of-the-art when you set up these databases four years ago, the software is now antiquated and you acknowledge it’s time for an update.
You do your homework and you decide to go straight to the latest and greatest with PostgreSQL 14 and SQL Server 2019. You see that PostgreSQL 14 offers major performance improvements for BuyLots’ connection-heavy workload and you can’t wait to try out SQL Server 2019’s accelerated database recovery feature. You check out the PostgreSQL and SQL Server documentation and find out that, fortunately, none of the incompatible changes introduced in the later versions are ones that you need to worry about. You know that upgrading incurs some downtime, so you check the calendar and decide that next month’s Saturday evening service update window–when BuyLots’ database activity is at its lowest–is the right moment to conduct the upgrade event.
In preparation for the upgrade event, you decide to do a dry run of your upgrade and kick the tires on PostgreSQL 14. You also want to get a sense for how long upgrading will take. You first clone one of your PostgreSQL 9.6 databases called inventory-db. After inventory-db-clone is created, you view the cloned instance’s Overview page from the Console and you click the Edit button. You spot the Upgrade button underneath the database version and you click it.
When you’re prompted to choose a major version to upgrade to, you click the drop down menu and select PostgreSQL 14.
On the next screen, you type in the cloned instance’s name to confirm that you are ready to start the upgrade.
Cloud SQL takes you back to the Overview page and you see the upgrade progress in real-time. You hold your breath for a moment–is it really this easy?
You leave for a brief moment to crush the daily Wordle (3 guesses!). When you click back a few minutes later: presto change-O! Your database is now on PostgreSQL 14!
You perform a few acceptance tests on the upgraded instance and things are running smoothly. You’re really impressed by how much faster things are running in PostgreSQL 14. You test out the upgrade procedure a few more times with other PostgreSQL and SQL Server instances in the fleet and everything looks good. You’re feeling ready as ever for the big upgrade event next month!
Learn more
With in-place upgrades, you can now upgrade your database major version and benefit from the latest database features, performance advances, and other improvements. To learn more about in-place upgrades, see our documentation.
Read More for the details.
Azure Sphere OS quality release 22.04 is available now.
Read More for the details.
To compete in a fast-moving, transforming, and increasingly digital world, every team, business, process, and individual needs to level up the way they think about data. That’s why this year’s Data Cloud Summit 2022 saw record turnout, in both volume and diversity of attendance. Our thanks go out to all the customers, partners, and members of the data community who made it such a great success!
Did you miss out on the live sessions? Not to worry – all the content is now available on demand.
Here are the five biggest areas to catch up on from Data Cloud Summit 2022:
Data is no longer solely the realm of the analyst. Every team, customer and partner needs to be able to interact with the data they need to achieve their goals. To help them do so, we announced 15 new products, capabilities and initiatives that help remove limits for our users. Here are some highlights:
BigLake allows companies to unify data warehouses and lakes to analyze data without worrying about the underlying storage format or system.
Spanner change streams tracks Spanner inserts, updates, and deletes, and streams the changes in real time across the entire Spanner database so that users always have access to the latest data.
Cloud SQL Insights for MySQL helps developers quickly understand and resolve MySQL database performance issues.
Vertex AI Workbench delivers a single interface for data and ML systems.
Connected Sheets for Looker and the ability to access Looker data models within Data Studio combine the best of both worlds of BI, giving you centralized, governed reporting where you need it, without inhibiting open-ended exploration and analysis.
More product news announced at Data Cloud Summit can be found here.
Customers are at the heart of everything we do, and that was evident at the Data Cloud Summit. Wayfair, Walmart, Vodafone, ING Group, Forbes, Mayo Clinic, Deutsche Bank, Exabeam and PayPal all spoke about their use of Google’s Data Cloud to accelerate data-driven transformation. Check out some of their sessions to learn more:
Unify your data for limitless innovation, featuring Wayfair and Vodafone
Unlocking innovation with limitless data, featuring Exabeam
Spotlight: Database strategy and product roadmap, featuring Paypal
We also heard from you directly! Here are some great quotes from the post-event survey:
“This is the first time that I have been exposed to some of these products. I am a Google Analytics, Data Studio, Search Console, Ads and YouTube customer…so this is all very interesting to me. I’m excited to learn about BigQuery and try it out.”
“The speakers are very knowledgeable, but I appreciate the diversity in speakers at these cloud insights.”
“Great experience because of the content and the way that it is presented.”
“This is definitely useful to new Google Admin Managers like I am.”
“This was a great overview of everything new in such a short time!”
Our partner ecosystem is critical to delivering the best experience possible for our customers. With more than 700 partners powering their applications using Google Cloud, we are continuously investing in the ecosystem. At Data Cloud Summit, we announced a new Data Cloud Alliance, along with the founding partners Accenture, Confluent, Databricks, Dataiku, Deloitte, Elastic, Fivetran, MongoDB, Neo4j, Redis, and Starburst, to make data more portable and accessible across disparate business systems, platforms, and environments—with a goal of ensuring that access to data is never a barrier to digital transformation. In addition, we announced a new Database Migration Program to accelerate your move to managed database services. Many of these partners delivered sessions of their own at Data Cloud Summit 2022:
Accelerate Enterprise AI adoption by 25-100x, featuring C3 AI
Rise of the Data Lakehouse in Google Cloud, featuring Databricks
The Connected Consumer Experience in Healthcare and Retail, featuring Deloitte
Investigate and prevent application exploits with the Elasticsearch platform on Google Cloud
Experts from Google Cloud delivered demos giving a hands-on look at a few of the latest innovations in Google’s Data Cloud:
Cross-cloud analytics and visualization with BigQuery Omni and Looker, with Maire Newton and Vidya Shanmugam
Build interactive applications that delight customers with Google’s data cloud, with Leigha Jarett and Gabe Weiss
Build a data mesh on Google Cloud with Dataplex, with Prajakta Damie and Diptiman Raichaudhuri
Additional demos are available here on-demand.
If you want to go even deeper than the Summit sessions themselves, we’ve put together a great list of resources and videos of on-demand content to help you apply these innovations in your own organization. Here are some of the highlights:
Guide to Google Cloud Databases (PDF)
How does Pokémon Go scale to millions of requests? (video)
MLOps in BigQuery ML using Vertex AI (video)
Database Engineer Learning Path (course)
Machine Learning Engineer Learning Path (course)
BI and Analytics with Looker (course)
Thanks again for joining us at this year’s Data Cloud Summit. Join us again at Applied ML Summit June 9, 2022!
Read More for the details.
If you’re trying to learn a new database, you’ll want to kick the tires by loading in some data and maybe doing a query or two. Cloud Bigtable is a powerful database service for scale and throughput, and it is quite flexible in how you store data because of its NoSQL nature. Because Bigtable works at scale, the tools that you use to read and write to it tend to be great for large datasets, but not so much for just trying it out for half an hour. A few years ago, I tried to tackle this by putting together a tutorial on importing CSV files using a Dataflow job, but that requires spinning up several VMs, which can take some time.
Here on the Bigtable team, we saw that the CSV import tutorial was a really popular example despite the need to create VMs, and we heard feedback from people wanting a faster way to dive in. So now we are excited to launch a CSV importer for Bigtable in the cbt CLI tool. The new importer takes a local file and then uses the Go client library to quickly import the data without the need to spin up any VMs or build any code.
If you already have the gcloud CLI with the cbt tool installed, you just need to ensure it is up to date by running gcloud components update. Otherwise, you can install gcloud, which provides the cbt tool as a component.
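For reference, the relevant commands look roughly like this:

```sh
gcloud components update        # bring installed components, including cbt, up to date
gcloud components install cbt   # install the cbt component if you don't have it yet
```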
If you’re unable to install the tools on your machine, you can also access them via the cloud shell in the Google Cloud console.
I have a CSV file with some time series data in the public Bigtable bucket, so I’ll use that for the example. Feel free to download it yourself to try out the tool too. Note that these steps assume you have created a Google Cloud project and a Cloud Bigtable instance.
You need to have a table ready for the import, so use this command to create one:
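For example, assuming a Bigtable instance named my-instance, a table named my-table, and a column family named cell_data (all placeholders):

```sh
# Create a table with one column family to hold the imported CSV columns.
cbt -instance my-instance createtable my-table "families=cell_data"
```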
Then, to import the data use the new cbt import command:
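Using the same placeholder names and a local file called my-data.csv, the import looks roughly like this (run cbt help import to confirm the arguments available in your version):

```sh
# Import the local CSV file into the table, writing cells into the cell_data family.
cbt -instance my-instance import my-table my-data.csv column-family=cell_data
```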
You will see some output indicating that the data is being imported. After it’s done you can use cbt to read a few rows from your table:
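For example:

```sh
# Read back a handful of rows to confirm the import worked.
cbt -instance my-instance read my-table count=5
```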
If you were following along, be sure to delete the table once you’re done with it.
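For example:

```sh
# Clean up the example table.
cbt -instance my-instance deletetable my-table
```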
The CSV file uses one row of headers specifying the column qualifiers and a blank for the rowkey. You can add an additional row of headers for the column families and then remove the column-family argument from the import command.
I hope this tool helps you get comfortable with Bigtable and can let you experiment with it more easily. Get started with Bigtable and the cbt command line with the Quickstart guide.
Read More for the details.