In a world in which we are increasingly bombarded with information, it is harder and more important than ever to get the right content to the right people.
Outbrain, one of the world’s leading content discovery platforms, prides itself on helping publishers and readers separate the signal from the noise and get to the articles they really want to read. Building on more than a decade of experience, Outbrain has honed its recommendation service, working with top publishers like CNN, Le Parisien, and The Guardian. Today, with native placements on many of the world’s premium publisher sites, it makes 275 billion highly targeted recommendations to over 600 million unique users every month.
A large part of the company’s reputation for success comes from the strength of its research and data science teams. Working on a separate cluster from the live recommendations platform, the researchers pore over Outbrain’s petabytes of data, constantly looking for ways to improve its recommendation algorithms. When the time came to upgrade the existing research cluster, Outbrain began looking for ways to radically improve its efficiency and take advantage of the power of the public cloud for its research platform. At the same time, the new cluster had to work hand in hand with the existing on-premises infrastructure. For Outbrain, the best solution was Google Cloud Platform (GCP).
“We were at a point where we needed to update our research cluster as the technology it was built on was becoming obsolete,” says Orit Yaron, VP of Cloud Platform at Outbrain. “At the same time, we saw an opportunity for Google Cloud Platform to help us use our clusters more efficiently than with our previous infrastructure.”
With more than a decade’s experience, Outbrain has a wealth of accumulated knowledge to work with, amounting to around 6 petabytes of data in its research cluster. The company’s existing infrastructure consisted of thousands of servers hosted across multiple data centers. While this solution works for Outbrain’s live recommendation service, which is constantly in use, its research projects have different requirements. Research cluster usage is much more elastic, so smaller projects can lead to idle servers and wasted costs when less compute power is needed.
By 2017, it was time for the company to upgrade its research clusters to use the latest open-source technologies, such as Apache Spark and ORC. Outbrain saw this as the perfect opportunity to redesign its research infrastructure with scalability and efficiency at its core.
The task came with some challenges, however. First, time was tight. Outbrain’s hosting agreement was up for renewal at the end of Q1 2018, effectively giving the company just four months to choose, trial, and fully migrate to a new solution. Second, Outbrain could not afford any disruption to its normal activities during the migration. Finally, the new research cluster, even though it was in the public cloud, would have to work seamlessly with the company’s live recommendation cluster, which remained on-premises. After a careful evaluation of costs and technology alignments, Outbrain chose to test a new research cluster built with GCP.
Teaming up with Israeli Google Premier Partner DoIT International, Outbrain began planning the migration in late 2017. “In DoIT International, we found a partner we can really consult with and come up with radical new ways of how to do things,” says Orit. “They have a deep understanding of the technologies involved. They’re not just a contractor that executes our plan.”
The first task was to complete a Proof of Concept (POC), during which Outbrain could test the viability of the technology, extrapolate costs, and begin training its researchers on how to use the new cluster. By January 2018, the company had a good idea of the specific solutions it needed and began migrating the research cluster in full. The migration team transferred the data to Cloud Storage over the physical backup network lines in its data centers, which enabled a quick transfer without disrupting normal operations.
Open source compatibility with Cloud Dataproc
Cloud Dataproc, with its integrated Hadoop features, formed the backbone of the new research cluster, seamlessly linking with the data in Cloud Storage. Outbrain took full advantage of Google’s innovative scaling options to wring as much efficiency as possible from the solution.
“With Cloud Dataproc, we implemented autoscaling which enabled us to increase or reduce the clusters easily depending on the size of the project,” says Orit. “We also used preemptible nodes for parts of the clusters, which helped us with efficiency of costs.”
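The combination Orit describes, autoscaling plus preemptible (secondary) workers, is typically expressed as a Dataproc autoscaling policy. The sketch below is illustrative only; the instance bounds, scaling factors, and timeouts are assumptions, not Outbrain’s actual settings:

```yaml
# Illustrative Dataproc autoscaling policy (all values are assumptions).
workerConfig:
  minInstances: 2          # primary (non-preemptible) workers
  maxInstances: 20
secondaryWorkerConfig:
  minInstances: 0          # secondary workers are preemptible by default
  maxInstances: 50
basicAlgorithm:
  cooldownPeriod: 2m
  yarnConfig:
    scaleUpFactor: 0.05    # fraction of pending YARN memory to add per cycle
    scaleDownFactor: 1.0   # release all idle capacity when demand falls
    gracefulDecommissionTimeout: 1h
```

A policy like this can be imported with `gcloud dataproc autoscaling-policies import` and attached to a cluster at creation time, so each research project’s cluster grows and shrinks with its YARN workload.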
In addition to the research cluster, Outbrain also found uses for GCP elsewhere in its operations. The Cloud Vision API, for instance, proved a useful tool for tagging images at scale. With BigQuery, the company could run analytics on its data centers and decide how to shift traffic between them to maximize performance.
“BigQuery gave us the kind of analytics that allowed us to move traffic around in a very granular way,” says Orit. “For example, if one country was having problems, we could now move all of that country’s traffic onto a more suitable data center.”
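The routing decision Orit describes can be sketched in a few lines. The metric names and figures below are hypothetical stand-ins for the kind of per-country, per-data-center aggregates a BigQuery query would return; this is not Outbrain’s actual logic:

```python
# Illustrative sketch: pick a data center per country from aggregated
# metrics (the shape of result a per-country BigQuery rollup might return).
# Metric names, data, and tie-breaking rules are hypothetical.

def best_datacenter(metrics):
    """Route each country to the data center with the lowest error rate,
    breaking ties by p95 latency."""
    routing = {}
    for (country, dc), m in metrics.items():
        candidate = (m["error_rate"], m["p95_latency_ms"], dc)
        current = routing.get(country)
        if current is None or candidate < current:
            routing[country] = candidate
    return {country: dc for country, (_, _, dc) in routing.items()}

# Hypothetical aggregates keyed by (country, data center).
metrics = {
    ("FR", "dc-eu"): {"error_rate": 0.001, "p95_latency_ms": 80},
    ("FR", "dc-us"): {"error_rate": 0.003, "p95_latency_ms": 140},
    ("US", "dc-eu"): {"error_rate": 0.002, "p95_latency_ms": 150},
    ("US", "dc-us"): {"error_rate": 0.001, "p95_latency_ms": 60},
}
```

With these sample numbers, French traffic stays on the European data center while US traffic is served from the US one, which is the granular, per-country shift the quote describes.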
Google Cloud Platform and DoIT International helped Outbrain successfully migrate its research cluster into the cloud in just four months with minimal disruption and improved capabilities. By reducing on-premises servers and embracing innovative features like autoscaling and preemptible nodes, Outbrain has saved around 40% in infrastructure costs compared with its previous research cluster.
As well as a reduction in costs, the switch to Google has given Outbrain a much better idea of what those costs are likely to be for each project, thanks to autoscaling and the ability to spin up and shut down clusters at will. Meanwhile, a serverless research cluster helps Outbrain get new projects off the ground with greater ease.
“Before, whenever anyone proposed a new project there was always a long lead time while we ordered hardware and provisioned new servers,” says Orit. “Now, as long as the cost is worth it, we don’t have to tell people no or find workarounds. We can get going straight away.”
Since the migration, Outbrain has also found that using the latest technology in a hybrid infrastructure has attracted a new caliber of developers who want to work with the company. “From an obsolete technology, we’re now on the cutting edge,” says Orit. “It puts us in a very competitive position from a talent acquisition perspective.”
With the new research cluster complete, Outbrain continues to look for new ways to optimize its service. After important or sudden news events, the company often experiences heavy spikes in traffic, which means either over-provisioning servers that sit idle much of the time or risking outages. To support these heavy loads, Outbrain is currently experimenting with Kubernetes clusters on Google Cloud, which can ramp up compute power in seconds and shut down when demand drops, improving efficiency and lowering costs even further. As impressive as all the technology is, for Outbrain, the real strength of Google was its attitude throughout this process.
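The burst-scaling behavior Outbrain is experimenting with is commonly configured through a Kubernetes HorizontalPodAutoscaler. The sketch below is a generic illustration; the deployment name, replica bounds, and CPU target are assumptions, not Outbrain’s configuration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: traffic-spike-autoscaler   # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: serving-workload         # hypothetical deployment
  minReplicas: 3                   # baseline capacity
  maxReplicas: 100                 # headroom for news-event spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Paired with a cluster autoscaler that adds and removes nodes to match pod demand, a setup like this avoids paying for idle over-provisioned capacity between spikes.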
“This was a real collaboration between Outbrain and Google,” says Orit. “We really feel like they’re looking out for us in the long term, whether it’s ongoing support or letting us experiment with new tools. It’s a lot of things done well.”