Azure – Azure Load Testing now available in public preview
You can now generate high scale load with ease and integrate it into existing CI/CD workflows.
Read More for the details.
Use the Flexible Server deployment option, which allows better control, flexibility, zone-redundant high availability, and cost optimization on Azure Database for PostgreSQL, a managed service running the open-source PostgreSQL database.
Read More for the details.
With Azure NetApp Files application volume group (AVG) for SAP HANA, you are able to deploy all volumes required to install and operate an SAP HANA system according to best practices.
Read More for the details.
Amazon SageMaker Canvas is a new capability of Amazon SageMaker that enables business analysts to create accurate machine learning (ML) models and generate predictions using a visual, point-and-click interface, no coding required.
Read More for the details.
AWS Lake Formation is excited to announce the general availability of three new capabilities that simplify building, securing, and managing data lakes. First, Lake Formation Governed Tables, a new type of table on Amazon S3, simplify building resilient data pipelines with multi-table transaction support. As data is added or changed, Lake Formation automatically manages conflicts and errors to ensure that all users see a consistent view of the data. This eliminates the need for customers to create custom error-handling code or batch their updates. Second, Governed Tables monitor and automatically optimize how data is stored so query times are consistent and fast. Third, in addition to tables and columns, Lake Formation now supports row- and cell-level permissions, making it easier to restrict access to sensitive information by granting users access to only the portions of the data they are allowed to see. Governed Tables and row- and cell-level permissions are now supported through Amazon Athena, Amazon Redshift Spectrum, AWS Glue, and Amazon QuickSight.
Read More for the details.
We are happy to announce the preview of Amazon EMR Serverless, a new serverless option in Amazon EMR that makes it easy and cost-effective for data engineers and analysts to run petabyte-scale data analytics in the cloud. Amazon EMR is a cloud big data platform used by customers to run large-scale distributed data processing jobs, interactive SQL queries, and machine learning applications using open-source analytics frameworks such as Apache Spark, Apache Hive, and Presto. With EMR Serverless, customers can run applications built using these frameworks with a few clicks, without having to configure, optimize, or secure clusters. EMR Serverless automatically provisions and scales the compute and memory resources required by the application, and customers only pay for the resources they use.
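As a hedged sketch of what job submission can look like (the application id, role ARN, and script location below are illustrative placeholders, not values from this announcement; the boto3 client is passed in rather than created so the request-building step stays inspectable without AWS credentials):

```python
# Minimal sketch: submitting a Spark job to an EMR Serverless application.
# All identifiers are placeholders for illustration only.

def spark_job_params(application_id: str, role_arn: str, script_uri: str) -> dict:
    """Build a StartJobRun request for a Spark entry point on EMR Serverless."""
    return {
        "applicationId": application_id,
        "executionRoleArn": role_arn,
        "jobDriver": {"sparkSubmit": {"entryPoint": script_uri}},
    }

def run_spark_job(emr_client, application_id: str, role_arn: str, script_uri: str):
    """Submit the job; EMR Serverless provisions and scales compute for it.

    `emr_client` is a boto3 client for the "emr-serverless" service and
    requires valid AWS credentials, so it is injected by the caller.
    """
    return emr_client.start_job_run(
        **spark_job_params(application_id, role_arn, script_uri)
    )
```

Because the service handles provisioning, the request carries only the job definition and an execution role; there is no cluster sizing to specify.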
Read More for the details.
Amazon Kinesis Data Streams is a serverless streaming data service that makes it easy to capture, process, and store streaming data at any scale. Kinesis Data Streams On-Demand is a new capacity mode for Kinesis Data Streams, capable of serving gigabytes of write and read throughput per minute without capacity planning. You can create a new on-demand data stream or convert an existing data stream into the on-demand mode with a single-click and never have to provision and manage servers, storage, or throughput. In the on-demand mode you pay for throughput consumed rather than for provisioned resources, making it easy to balance costs and performance.
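For illustration, a hedged sketch of that conversion step with boto3 (the stream ARN is a placeholder, and the client is injected so the request can be examined without AWS credentials):

```python
# Sketch: switching an existing Kinesis data stream to on-demand capacity
# mode. The ARN used by callers would identify a real stream; this one is
# purely illustrative.

def on_demand_params(stream_arn: str) -> dict:
    """Build the UpdateStreamMode request that flips a stream to on-demand."""
    return {
        "StreamARN": stream_arn,
        "StreamModeDetails": {"StreamMode": "ON_DEMAND"},
    }

def convert_to_on_demand(kinesis_client, stream_arn: str):
    """Apply the mode change; needs a boto3 "kinesis" client with credentials."""
    return kinesis_client.update_stream_mode(**on_demand_params(stream_arn))
```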
Read More for the details.
Amazon Redshift now provides a serverless option (preview) to run and scale analytics without having to provision and manage data warehouse clusters. With Amazon Redshift Serverless, all users including data analysts, developers, and data scientists can now use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver best-in-class performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.
Read More for the details.
Today we announced Amazon MSK Serverless in public preview, a new type of Amazon MSK cluster that makes it easier for developers to run Apache Kafka without having to manage its capacity. MSK Serverless automatically provisions and scales compute and storage resources and offers throughput-based pricing, so you can use Apache Kafka on demand and pay for the data you stream and retain.
Read More for the details.
Today, we are announcing the preview of AWS Private 5G, a new managed service that helps enterprises set up and scale private 5G mobile networks in their facilities in days instead of months. With just a few clicks in the AWS console, customers specify where they want to build a mobile network and the network capacity needed for their devices. AWS then delivers and maintains the small cell radio units, servers, 5G core and radio access network (RAN) software, and subscriber identity modules (SIM cards) required to set up a private 5G network and connect devices. AWS Private 5G automates the setup and deployment of the network and scales capacity on demand to support additional devices and increased network traffic. There are no upfront fees or per-device costs with AWS Private 5G, and customers pay only for the network capacity and throughput they request.
Read More for the details.
This update enables you to transform device data in IoT Central into your preferred structure and export that transformed data to an external destination.
Read More for the details.
Starting today, the new Amazon EC2 C7g instances powered by the latest generation custom-designed AWS Graviton3 processors are available in preview. Amazon EC2 C7g instances will provide the best price performance in Amazon EC2 for compute-intensive workloads such as high performance computing (HPC), gaming, video encoding, and CPU-based machine learning inference. These instances are the first in the cloud to feature the cutting edge DDR5 memory technology, which provides 50% more bandwidth compared to DDR4 memory. C7g instances provide 20% higher networking bandwidth compared to previous generation C6g instances based on AWS Graviton2 processors. They also support Elastic Fabric Adapter (EFA) for applications such as high performance computing that require high levels of inter-node communication.
Read More for the details.
AWS Mainframe Modernization is a unique platform for mainframe migration and modernization. It allows customers to migrate and modernize their on-premises mainframe workloads to a managed and highly available runtime environment on AWS. This service currently supports two main migration patterns – replatforming and automated refactoring – allowing customers to select their best-fit migration path and associated toolchains based on their migration assessment results.
Read More for the details.
Across all industries, the last few years have accelerated the need to transform to digital, acquire new talent and capabilities at a rapid pace, and adopt new operating models, such as cloud technology. The impetus for these changes has been not only to drive down costs but also to increase competitiveness and boost productivity.
Mergers & acquisitions (M&A), as well as restructurings such as carve-outs, divestitures, and spin-offs, have always been a common tool on the CEO and board agenda to deliver added growth, create and deliver synergies, reposition the company’s strategy, and rebalance the corporate portfolio to its most efficient and forward-looking uses. Increasingly, strategic access to innovative technologies, data acquisition and monetization, technical debt elimination, and capabilities such as artificial intelligence and machine learning (AI/ML) are some of the main reasons to pursue a deal. The post-merger integration of the new, bigger, and more complex technology estate is an increasingly important element in delivering that added value, provided the integration is completed successfully.
Google Cloud has a unique set of solutions, processes, partners and people to accelerate strategic deals and retain or create even more value than initially built into the deal’s valuation model. In this blog post, we expand on how Google Cloud acts as an accelerating agent for realizing additional value propositions.
Google Cloud can be a trusted advisor in tracing the strategic integration journey after an M&A deal. More concretely, Google Cloud’s value can be summarized in the following:
1. Seamless integration and single pane of control across Cloud Service Providers (CSPs) and companies’ physical data centers with Anthos.
What we typically observe is that after an M&A, customers end up with a fragmented technology stack across multiple CSPs and a private cloud on-premises. This increases friction in the software development lifecycle (SDLC) due to the different processes and skill sets required to develop, test, and release software across the various environments. Without a consistent platform, companies squander valuable technical resources and fall short of business demands for velocity and customer experience. In a recent study by Forrester Consulting (commissioned by Google Cloud), using Anthos as a managed platform to control the SDLC across environments led to a projected 4.8x return on investment (ROI), a 38% reduction in non-coding activities for technology teams, and a 75% increase in application migration and modernization.
2. Increased optionality, observability and control for IT and vendor rationalization in a post-merger landscape, with our API management platform Apigee.
For technology teams, the merger impact is felt immediately. The technology estate is bigger, more complex, and most probably has multiple pockets of duplication, which will likely be a shifting landscape in the years to come. APIs are a key element to the success of post-merger integration. By putting Apigee, an API management platform, in front of all HTTP application and data traffic, an organization can create a single pane of glass across all infrastructure, allowing companies undergoing an M&A to make more strategic, data-driven decisions. For instance, traffic to vendor or legacy systems can be centrally monitored and measured to make their use observable, which leads to more data-driven rationalization decisions. Also, introducing an API layer around pockets of duplication provides optionality and relaxes time constraints; both are likely critical elements of a successful integration.
3. Rapid and automated modernization of legacy technical debt, reducing the time post-merger teams spend gauging IT priorities.
At Google Cloud we offer three main levers to help with these post-merger situations:
Post-merger modernization: Using Google Cloud-owned tools, processes, and methodologies, which we refer to as the G4 Platform, Google Cloud can help automate and rapidly modernize legacy systems, e.g. mainframe modernization, by automatically translating code written in COBOL to modern languages such as Java that are easier and more cost-effective to maintain.
Attacking the post-merger backlog: Google Cloud Cortex Framework offers repeatable blueprints and reference architectures that can accelerate time to value. Backlogs often multiply after a merger; leveraging repeatable patterns is key to scaling the output of the technology teams.
Divide and conquer complexity: Decomposing legacy applications through containers opens the path for step-by-step migration and modernization, gradually moving processes to virtual machines in the cloud and then onward to containers to leverage the full power of Kubernetes.
4. Merging data and re-prioritizing data operations and licenses, while avoiding sunk costs and fixed, often duplicated, multi-year contracts with data providers.
Using market and alternative data as a service on Google Cloud, merged companies can quickly and efficiently rationalize their post-merger data licensing and infrastructure needs. For instance, financial institutions can leverage commercial market data readily available on Google Cloud in an analysis-ready state, while corporate sustainability teams can use geolocation datasets available through Google Earth Engine. The business can focus on new business opportunities instead of adapting and merging legacy data estates and operations. Additionally, with BigQuery Omni, any data infrastructure merge doesn’t have to be a big bang – the data can live in other cloud providers or on-premises while you still manage it from a single pane of control.
5. Placing security, operational resilience, and sovereignty at the center of the post-deal operations.
Post-M&A, merged companies will likely have to adhere to more jurisdictions and regulatory oversight than each individual entity had to adhere to and report to previously. This can pose significant challenges in ensuring that the merged company’s technology estate operates within the bounds of the data, operational, and software sovereignty restrictions of each jurisdiction. Google Cloud offers a series of characteristics that inherently help in this situation. For example, Google Cloud offers access transparency and data sovereignty, portability during stressed exit scenarios for jurisdictions that mandate it, and a Sovereign Cloud offering with trusted partners for projects and jurisdictions that demand the highest level of sovereignty assurances.
M&A can place significant pressures on technology teams in the race to identify and merge processes, technologies, and responsibilities. Google Cloud has a plethora of technologies, solutions, and patterns to help you through the journey and unlock the potential of the new combined entity to focus on what matters.
Acknowledgments
Special thanks for their contribution to this blog post to Mose Tronci, Solutions Architect – Financial Services and Prue Mackenzie, Key Account Director – Financial Services.
Read More for the details.
The term “serverless” has infiltrated most cloud conversations, shorthand for the natural evolution of cloud-native computing, complete with many productivity, efficiency and simplicity benefits. The advent of modern “Functions as a Service” platforms like AWS Lambda and Google Cloud Functions heralded a new way of thinking about cloud-based applications: a move away from monolithic, slow-moving applications toward more distributed, event-based, serverless applications based on lightweight, single-purpose functions where managing underlying infrastructure was a thing of the past.
With these early serverless platforms, developers got a taste for not needing to reason about, or pay for, raw infrastructure. Not surprisingly, that led them to apply the benefits of serverless to more traditional workloads. Whether it was simple ETL use cases or legacy web applications, developers wanted the benefits of serverless platforms to increase their productivity and time-to-value.
Needless to say, many traditional workloads turned out to be a poor fit for the assumptions of most serverless platforms, and the task of rewriting those large, critical, legacy applications into a swarm of event-based functions wasn’t all that appealing. What developers needed was a platform that could provide all the core benefits of serverless, without requiring them to rewrite their application — or really have an opinion at all about the workload they wanted to run.
With the introduction of Cloud Run in 2019, the team here at Google Cloud aimed to redefine how the market, and our customers, thought about serverless. We created a platform that is serverless at its core, but that’s capable of running a far wider set of applications than previous serverless platforms. Cloud Run does this by using the container as its fundamental primitive. And in the two years since launch, the team has released 80 distinct updates to the platform, averaging an update every 10 days. Customers have similarly accelerated their adoption: Cloud Run deployments more than quadrupled from September 2020 to September 2021.
The next generation of serverless platforms will need to maintain the core, high-value characteristics of the first generation, things like:
Rapid auto-scaling from, and to, zero
The option of pay-per-use billing models
Low barriers to entry through simplicity
Looking ahead, serverless platforms will need a much more robust set of capabilities to serve a new, broader range of workloads and customers. Here are the top five trends in serverless platforms that we see for 2022 and beyond.
Serverless’s value proposition isn’t limited to new applications, and shouldn’t require a wholesale rewrite of what is (and has been) working just fine. Developers ought to be able to apply the benefits of serverless to a wider range of workloads, including existing ones.
Cloud Run has been able to expand the range of workloads it can address with several new capabilities, including:
Per-instance concurrency. Many traditional applications run poorly when constrained to a single-request model that’s common in FaaS platforms. Cloud Run allows for up to 1,000 concurrent requests on a single instance of an application, providing a far greater level of efficiency.
Background processing. Current-generation serverless platforms often “freeze” the function when it’s not in use. This makes for a simplified billing model (only pay while it’s running), but can make it difficult to run workloads that expect to do work in the background. Cloud Run supports new CPU allocation controls, which allow these background processes to run as expected.
Any runtime. Modern languages or runtimes are usually appropriate for new applications, but many existing applications either can’t be rewritten, or depend on a language that the serverless platform does not support. Cloud Run supports standard OCI images and can run any runtime, or runtime version, that you can run in a container.
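To make the container-first model concrete, here is a minimal sketch of a Cloud Run-ready HTTP service using only the Python standard library. Cloud Run injects the listening port through the PORT environment variable; the per-instance concurrency limit is a service setting rather than application code, so an ordinary threading server suffices:

```python
# Minimal sketch of an HTTP app that runs in any OCI container on Cloud Run.
import os
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def greeting() -> str:
    """Response body; kept as a function so it is easy to test in isolation."""
    return "Hello from Cloud Run"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = greeting().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve() -> None:
    # Cloud Run sets PORT at runtime; default to 8080 for local testing.
    port = int(os.environ.get("PORT", "8080"))
    ThreadingHTTPServer(("", port), Handler).serve_forever()
```

Packaged into any standard container image, an app like this can be deployed with the `gcloud run deploy` command described later in this post.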
Recent high-profile hacks like SolarWinds, Mimecast/Microsoft Exchange, and Codecov have preyed on software supply chain vulnerabilities. Malicious actors are compromising the software supply chain — from bad code submission to bypassing the CI/CD pipeline altogether.
Cloud Run integrates with Cloud Build, which offers SLSA Level 1 compliance by default and verifiable build provenance. With code provenance, you can trace a binary to the source code to prevent tampering and prove that the code you’re running is the code you think you’re running. Additionally, the new Build Integrity feature automatically generates digital signatures, which can then be validated before deployment by Binary Authorization.
Workloads with highly variable traffic patterns, or those with generally low traffic, are a great fit for the rapid auto-scaling and scale-to-zero characteristics of serverless. But workloads with a more steady-state pattern can often be expensive when run with fine-grained pay-per-use billing models. In addition, as powerful as unbounded auto-scaling can be, it can make it difficult to predict the future cost of running an application.
Cloud Run includes multiple features to help you manage and reduce costs for serverless workloads. Organizations with stable, steady-state, and predictable usage can now purchase committed use contracts directly in the billing UI, for deeply discounted prices. There are no upfront payments, and these discounts can help you reduce your spend by as much as 17%.
The always-on CPU feature removes all per-request fees, and is priced 25% lower than the standard pay-per-request model. This model is generally preferred for applications with either more predictable traffic patterns, or those that require background processing.
For applications that require high availability with global deployments, traditional “fixed footprint” platforms can be incredibly costly, with each redundant region needing to carry the capacity for all global traffic. The scale-to-zero behavior of Cloud Run, together with its availability in all GCP regions, make it possible to have a globally distributed application without needing a fixed capacity allocation in any region.
A large part of increasing simplicity and productivity for developers is about reducing the barriers to entry so they can just focus on their code. This simplicity needs to extend beyond the “day one” operations, and provide an integrated DevOps experience.
Cloud Run supports an end-to-end DevOps experience, all the way from source code to “day-two” operations tooling:
Start with a container or use buildpacks to create container images directly from source code. In fact, you don’t even need to learn Docker or containers. With a single “gcloud run deploy” command, you can build and deploy your code to Cloud Run.
Built-in tutorials in Cloud Shell Editor and Cloud Code make it easy to come up to speed on serverless. No more switching between tabs, docs, your terminal, and your code. You can even author your own tutorials, allowing your organization to share best practices and onboard new hires faster.
Experiment and test ideas quickly. In just a few clicks, you can perform gradual rollouts and rollbacks, and perform advanced traffic management in Cloud Run.
Get access to distributed tracing with no setup or configuration, allowing you to find performance bottlenecks in production in minutes.
The code you write and the applications you run should not be tied to a single vendor. The benefits of the vendor’s platform should be applied to your application, without you needing to alter your application in unnecessary ways that lock you in to a particular vendor.
Cloud Run runs standard OCI container images. When deploying source code directly to Cloud Run, we use open-source buildpacks to turn your source code into a container. Your source code, the buildpacks used, and your container can always be run locally, on-premises, or on any other cloud.
These five trends are important things to consider as you compare the various serverless solutions in the market in the coming year. The best serverless solution will allow you to run a broad spectrum of apps, without language, networking or regional restrictions. It will also offer secure multi-tenancy, with an integrated secure software supply chain. And you’ll want to consider how the platform helps you keep costs in check, whether it provides an integrated DevOps experience, and ensures portability. Once you’ve answered these questions for yourself, we encourage you to try out Cloud Run, with these Quickstart guides.
Read More for the details.
Notified is a leading communications cloud for events, public relations, and investor relations to drive meaningful insights and outcomes. They provide communications solutions to effectively reach and engage customers, investors, employees, and the media.
One of Notified’s Public Relations solutions is the ‘Media Contact Database’ that allows customers to discover media and influencers in a unique media database powered by AI and human-curated research.
The goal of the initiative is to expand the scope of the AI-driven, dynamically discovered influencers, and analyze online news articles using AI/ML technologies to extract entities and classify content. The prior process to extract insights from news articles provided only 30-40% of the desired results, and there were accuracy and stability issues that resulted in a lot of manual intervention.
A key outcome of the AI driven process is to identify the ‘Journalist Beat’. A Journalist Beat essentially summarizes the individual’s area of focus such as a sports writer, financial journalist etc.
Three options were evaluated for the AI/ML process to generate the Journalist Beats:
Option 1: Topic ML
Unsupervised ML approach to determine the commonly used terms.
Pro: Common approach to grouping documents and determining similar text
Con: Unbounded list of text
Option 2: ML Classification
Build classification models (supervised) to map reference articles to ‘Beats’
Pro: Aligns to ‘Research Analytics’ existing processes
Con: Time to build and maintain ML models for hundreds of beats.
Option 3: GCP Context Classification
Leverage GCP’s Natural Language API for initial classification and as input to Notified single model
Pro: Aligns to ‘Research Analytics’ without building ML models.
Ultimately the GCP Natural Language API solution was chosen because of the speed of execution and the high level of accuracy of the pretrained models. The Notified team was able to launch the product feature within a few weeks, without needing to do extensive data collection or model training.
Here is the high level process that was implemented for Journalist Beats.
Since Notified supports curated media contacts globally, news articles were instantly translated to English using GCP Translation API. GCP Natural Language API’s solution to classify text was used to analyze the translated text and generate the list of content categories.
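A hedged sketch of that two-step flow, with the Translation and Natural Language API clients injected by the caller (both require GCP credentials), and an assumed confidence cutoff of 0.5 for keeping a category as a beat:

```python
# Sketch of the Journalist Beat pipeline step: translate, classify, filter.

def translate_to_english(translate_client, text: str) -> str:
    """Translate arbitrary-language article text to English
    (google-cloud-translate client, injected by the caller)."""
    return translate_client.translate(text, target_language="en")["translatedText"]

def classify_categories(language_client, english_text: str) -> list:
    """Classify text with the Natural Language API and return
    (category name, confidence) pairs."""
    from google.cloud import language_v1  # needs google-cloud-language installed
    document = language_v1.Document(
        content=english_text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = language_client.classify_text(document=document)
    return [(c.name, c.confidence) for c in response.categories]

def pick_beats(categories, min_confidence: float = 0.5) -> list:
    """Keep category names above the cutoff; the 0.5 threshold is an
    assumption for illustration, not Notified's production value."""
    return [name for name, conf in categories if conf >= min_confidence]
```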
Here is a sample solution architecture for the ‘Discovered Journalist’ process.
Three core principles guided the above architecture: serverless and fully managed services; scalability and elasticity for flexibility and cost optimization; and API-led real-time processing.
In addition to the GCP Natural Language API and Translation API, below are a few serverless GCP products that were part of the automated solution:
BigQuery is Google Cloud’s fully managed, petabyte-scale, and cost-effective analytics data warehouse that lets you run analytics over vast amounts of data in near real time.
Cloud Run is a fully managed serverless platform that can be used to develop and deploy highly scalable containerized applications.
Cloud Tasks is a fully managed service that allows you to manage the execution, dispatch, and delivery of a large number of distributed tasks.
The powerful pre-trained models of the Natural Language API provide a comprehensive set of features to apply natural language understanding to applications such as sentiment analysis, entity analysis, entity sentiment analysis, content classification, and syntax analysis.
In an effort to further improve its best-in-class ‘Media Contact Database’, Notified looks to scale the above AI-driven Influencer Discovery process to the order of 100+ million news articles per month. It plans to expand the scope of entities extracted from the news articles and provide a news exploration service for its customers by performing intelligent entity-based searches.
Acknowledgments
We’d like to thank our collaborators at Google and Notified for making this blog post possible. Thanks to Arpit Agrawal at MediaAgility for contributing to this blog post.
To learn more about how Google Cloud Natural Language AI can help your enterprise, try out an interactive demo, and to take the next step, visit the product overview page.
Read More for the details.
Not long ago, building AI into recommendation engines was a daunting, expensive task that could take years to get off the ground. But as Bazaarvoice has shown, with the help of cloud services, the time from AI investment to business outcomes is shorter than ever.
Bazaarvoice is the leading provider of product reviews and user-generated content (UGC) solutions that help brands and retailers understand and better serve customers. Its 2019 acquisition of Influenster.com, a community of consumer reviewers 6.5 million strong, expanded the Bazaarvoice portfolio with a platform where consumers can share their candid opinions — and share they have, over 54 million times.
After the acquisition, Bazaarvoice expanded the site’s product diversity by 53%, to more than 5.4 million unique products. To keep user engagement high, Influenster must be seen as both a source of trusted, transparent reviews and a place for customers to discover useful, relevant products for the first time. By introducing shoppers to new products Influenster not only provides value to customers but also helps brands collect consumer insights.
Influenster started out as a place where people gathered to share their honest thoughts on beauty products but quickly expanded to nearly every category, from Art to Wearables. Because of that much smaller initial scope, the site started and flourished under a rules-based recommendation engine. However, as Influenster expanded its scope under Bazaarvoice, a more robust recommendation system became necessary. In its earliest days, Influenster was successful because of the human perspective it offered: for every product there was a litany of reviews and images that made users feel as if they were getting an endorsement on a product from a friend.
The Bazaarvoice engineering team asked themselves how they could keep that same feeling of personalization with an ever-growing catalog of items and categories. They needed recommendations that could scale with the site, rather than requiring more rules to be constructed each time a new product category was introduced. They also needed to ensure the Influenster experience would remain performant even for unknown members.
Bazaarvoice tested out several recommendation engines, benchmarking each against their current rules-based system. In the end they decided on Google Cloud’s Recommendations AI because of its transparent billing, ease of integration and setup, and naturally, its proven results.
“Part of what the engineers loved was they knew exactly what it was going to cost as it scaled” says Nick Shiftan, SVP, Content Acquisition Services Product Unit for Influenster. The goal was to build once and innovate rather than leave a wake of technical debt only to be tackled when costs grew unexpectedly out of control. Google Cloud’s straightforward and pay-as-you-go billing allowed them to anticipate how costs would grow as user interactions did and plan accordingly.
“I’m positively surprised how Google packed such a complex system in a very easy-to-use API,” remarks Eralp Bayraktar, the Software Engineering team lead overseeing the project. Because the original team was made up of just one full-time engineer, the ease of integration became an even more critical feature. Not only does Recommendations AI pull from years of suggestion expertise in Google Search and YouTube, but in combination with Google Merchant Center, it also creates a streamlined process for importing product metadata. From there, creating a model becomes a matter of picking the preferred recommendation type and then the business objective to optimize for. Once the model is created and the API integrated into the website, the code is already deployed at global scale: there are no further architectural considerations to ensure recommendations are available to users worldwide. For Bazaarvoice, this meant going from ideation to production in one month.
“We have used it for product recommendations and off-loaded our DB-tiring business logic to Recommendations AI, which resulted in overall faster response times and much better recommendations as proven by our A/B tests,” Eralp continues.
Bazaarvoice began by A/B testing Recommendations AI against their rules-based system. Early on in the experimental phase they noticed a clear and consistent 60% increase in the click-through rate over their original recommendation system.
Even more impressive was the performance on Unknown Members. For every person that signs up for an account on Influenster.com there are many other visitors that come to the website and leave without fully registering. This is typically referred to as the “cold start” problem in the industry — how do you figure out what to recommend to those people without their history, behavior, or preferences? Recommendations AI gives you the option to input and train on unknown users, and by providing metadata on products, it can provide high-quality suggestions to registered members and first-time users alike.
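As an illustrative sketch of how an anonymous visitor can still feed the model, the payload below follows the general shape of a Recommendations AI user event, with a session-scoped id standing in for a user account (all field values are placeholders, not Bazaarvoice's actual integration):

```python
# Sketch: a user event for a visitor who never registered. Recommendations AI
# keys events on a visitorId, so a session id works where no account exists.

def detail_page_view_event(visitor_id: str, product_id: str) -> dict:
    """Build a detail-page-view user event for an unknown visitor."""
    return {
        "eventType": "detail-page-view",
        "visitorId": visitor_id,  # session id; no login required
        "productDetails": [{"product": {"id": product_id}}],
    }
```

Streaming events like this for every visitor, known or unknown, is what lets the model serve high-quality suggestions to first-time users alongside registered members.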
With a mind to the future, Eralp concludes his thoughts on Bazaarvoice’s experience: “It enables discovery by adding an adjustable percentage of cross-category products [for] healthier [traffic distribution] across all our catalog. We are investing in data science and having the Recommendations AI as the baseline is a good challenge for us to thrive.”
To learn more about Recommendations AI and how it can help your organization thrive, check out our recently published four-part guide, which kicks off with an overview on “How to get better retail recommendations with Recommendations AI.” The series also covers data ingestion, modeling, serving predictions, and evaluating Recommendations AI. You can also easily get started with our Quickstart Guide.
Read More for the details.
At Google Cloud, we are committed to supporting the next wave of growth for Europe’s businesses and organizations. Germany is one of the largest and most connected global economies, and it is undergoing digital transformation enabled by the use of cloud services. To further support that transformation, we announced plans to invest approximately EUR 1 billion in cloud infrastructure and green energy in Germany by 2030.
Organizations in Germany and across Europe need solutions that meet their requirements for security, privacy, and digital sovereignty, without compromising on functionality or innovation. To help meet these requirements, we launched ‘Cloud. On Europe’s Terms’ in September, and as part of that initiative, we entered into a strategic, long-term partnership with T-Systems to build a Sovereign Cloud offering in Germany for private and public sector organizations.
Building on our investments and partnership, we want to share the next steps in our plan to support sustainable digital transformation in Germany. Together with T-Systems, we will embark on an ambitious co-innovation program focused on developing new sovereign cloud and digital transformation solutions that promote the innovation and competitiveness of local cloud customers.
Munich is at the center of our plans. Google and Google Cloud have a long history in the city. Since the initial opening of our Munich office in 2006, it has grown to become one of Google’s main European engineering hubs. More than 1,500 Googlers are based in Munich-Arnulfpark, our largest office in Germany. It is home to our global Google Safety Engineering Center (GSEC), where robust privacy and security solutions for billions of Google users are being built. Munich was a natural choice for Google to locate this center given the security and privacy expertise in the region.
The Free State of Bavaria and its capital are among the most exciting places globally where the next decade of digitization is being shaped – it is home to a rich ecosystem of established and emerging industry leaders, a vibrant technology sector, and academic and research excellence.
This environment is the ideal place to establish our first European Google Cloud Co-Innovation Center, located in our offices on the Westhof side of the developing Arnulfpost Quarter. The Co-Innovation Center will open in the coming months and serve as a digital innovation stronghold in the heart of the Bavarian capital.
Dr. Markus Söder, Minister-President of Bavaria, commented: “Isar Valley meets Silicon Valley: the future lies in digitisation. Google Cloud and T-Systems launch a global partnership for cloud computing in Bavaria. Munich thus continues to grow as one of the leading IT locations worldwide. With the HighTech Agenda, Bavaria is promoting technology, digitalisation and Super-Tech with a total of 3.5 billion euros. We are creating 13,000 new university places and 1,000 professorships, 100 of which are for AI alone.”
Together with T-Systems, a company that has also championed innovation in Munich for a long time, we’ll leverage the new Co-Innovation Center to:
Serve our joint customers’ needs by collaborating on new solutions aligned with their sustainability and transformation goals
Create a space for attracting and developing top cloud engineering talent and expertise in and for Germany
Build the foundation for future sovereign solutions at scale that will strengthen competitiveness and overall digital transformation efforts in Germany.
The Co-Innovation Center will also offer programs that support the development of expertise among cloud customers and partners to further accelerate digital transformation efforts across the ecosystem.
Dr. Maximilian Ahrens, Chief Technology Officer at T-Systems, said: “We are very excited to see co-innovation as a central aspect of our partnership with Google Cloud come to life in Munich. Trust is a core strength for T-Systems and so is innovation for Google Cloud. Co-innovating along these values uniquely positions us to work together with our joint customers in Germany to address their most critical sovereign needs.”
As part of this effort, T-Systems and Google Cloud will create a number of highly qualified software engineering roles in Munich.
The official opening of our Co-Innovation Center will take place in mid-2022. Dr. Wieland Holfelder – Google Cloud’s long-standing Vice President, Engineering and site lead for Google Munich – will be leading the efforts for a new generation of sovereign cloud solutions.
Given the challenges and opportunities of our time, we believe it is better to build more bridges and fewer walls, and to team up to innovate and develop better technology for Germany, Europe and beyond. We look forward to welcoming our customers and partners to the new Center as we work to make Google Cloud the best possible place for sustainable, digital transformation.
Read More for the details.
Today, we announced the AWS Migration and Modernization Competency. These AWS Partners have deep domain expertise in offering software products that help customers migrate and modernize applications as they move to the cloud. AWS Migration and Modernization Competency Partners can help customers optimize cost and reduce TCO, modernize legacy applications and data, and reduce operational burden.
Read More for the details.