At Mercado Libre, we are obsessed with unlocking the power and potential of data. One of our key cultural principles is to have a Beta Mindset. This means that we operate in a “state of beta”, constantly asking new questions of our data, experimenting with technologies and iterating our business operations in service of creating the best experiences for our customers.
To provide context on the footprint of our organization: Mercado Libre has 30,000+ employees across six countries in LATAM, with multiple corporate offices and thousands of home offices. We serve more than 65 million customers, and last year alone we processed more than 75 million payments. Like many of our peers, we are learning to adapt to the new challenges of scaling our organization, training programs, and business to build the most inclusive and impactful company culture across many locations, with data at its core.
This is the first of a three-part blog series on continuous intelligence, cutting-edge technology, and data culture, in which we reveal some of the capabilities helping us win in e-commerce and become a fintech leader.
Continuous Intelligence – Building the Foundation
The foundation of our success is our continuous intelligence ecosystem: the systems and interfaces our analytics team has put in place to consistently serve the needs of our users at scale.
According to Gartner:
“Continuous intelligence is a design pattern in which real-time analytics are integrated into business operations, processing current and historical data to prescribe actions in response to business moments and other events.”
At Mercado Libre, while delivering analytics, we aim to build design patterns that allow us to programmatically consume data from Google BigQuery (BQ), our powerful cloud data warehouse. These design patterns are built using the universal semantic modeling layer of Looker. It is our goal that our continuous intelligence ecosystem both demonstrates the art of the possible with data and allows others to innovate and build upon existing work to meet their very specific needs at that moment in time.
Real-time analytics at scale with Google BigQuery
We make every effort to base our decisions on data. To infuse our decision making with data, we have found that data needs to be timely, credible, and available for analysis, no matter the source. This includes streaming, collecting, and presenting data from our whole ecosystem: external data, our internal management systems, web traffic from products like Google Analytics and App Annie, warehouse and network logs, cloud usage and costs, and, of course, all of our APIs.
We chose Google BigQuery as the primary query engine within our ecosystem, due to its resiliency and reliability working with our growing volumes and number of data sources. With Google’s serverless, auto-scaling data warehouse, we are able to process hundreds of terabytes of raw data and run hundreds of thousands of queries every day without sacrificing performance or additional management overhead. This combination of speed and scale on-demand has helped accelerate our journey towards real-time analytics to support improved decision making with the freshest insights and data available.
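As a simplified sketch of what programmatically consuming data from BigQuery can look like, a helper can build a SQL statement and submit it through the official google-cloud-bigquery client. The table and column names below are illustrative assumptions, not Mercado Libre's actual schema:

```python
# Sketch: building and running a BigQuery query programmatically.
# Table and column names are hypothetical, not a real schema.

def build_daily_payments_query(table: str, day: str) -> str:
    """Return a SQL statement counting payments created on one day."""
    return (
        "SELECT COUNT(*) AS payments "
        f"FROM `{table}` "
        f"WHERE DATE(created_at) = '{day}'"
    )

def run_query(sql: str):
    """Submit the query to BigQuery; requires GCP credentials."""
    from google.cloud import bigquery  # pip install google-cloud-bigquery
    client = bigquery.Client()         # uses application-default credentials
    return list(client.query(sql).result())

if __name__ == "__main__":
    sql = build_daily_payments_query("project.dataset.payments", "2022-03-28")
    print(sql)
    # run_query(sql)  # uncomment when valid credentials are available
```

In production one would prefer query parameters over string interpolation; the point here is simply that queries can be generated and executed as code, which is what makes analytics at this scale repeatable.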
Bridging the technical gap between data and users with Looker
Having raw data available through powerful query engines is a fundamental component of our continuous intelligence ecosystem, but it isn’t enough on its own. Even an army of SQL-savvy analysts ready to query data can produce inconsistent logic and ungoverned business rules.
To build trust in our continuous intelligence ecosystem at scale, we needed a data modeling and business intelligence tool that would enable us to consistently and programmatically consume data – and could grow at the speed of our business. Enter Looker, a modern business intelligence and data platform. Powered by its robust universal semantic modeling layer and numerous modern integrations, it allows us to easily infuse data into workflows.
At Looker’s core is a collaborative and version-controlled semantic model that’s created using LookML, a dependency language which is easy to learn and maintain for anyone familiar with SQL. We were able to create a single source of truth for our business by defining dimensions, metrics, and join logic centrally. For our analysts and business users, this defined logic is used to automatically generate consistent SQL statements on behalf of a user, which is where things start to become interesting: trustworthy self-service analytics at scale.
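To illustrate the idea in miniature (this is a toy analogy, not Looker's actual implementation, and the field and table names are hypothetical), a semantic layer defines dimensions and measures once and generates SQL from them, so every consumer gets identical logic:

```python
# Toy sketch of a semantic model: dimensions and measures are defined
# centrally, and SQL is generated from those definitions, so two users
# requesting the same fields always receive byte-identical queries.
# Names are illustrative, not a real LookML model.

DIMENSIONS = {"country": "orders.country", "order_date": "DATE(orders.created_at)"}
MEASURES = {"total_gmv": "SUM(orders.amount)", "order_count": "COUNT(*)"}

def generate_sql(dims, measures, table="orders"):
    select = [f"{DIMENSIONS[d]} AS {d}" for d in dims]
    select += [f"{MEASURES[m]} AS {m}" for m in measures]
    sql = f"SELECT {', '.join(select)} FROM {table}"
    if dims:
        sql += f" GROUP BY {', '.join(DIMENSIONS[d] for d in dims)}"
    return sql

# Every user asking for GMV by country gets the same governed SQL:
print(generate_sql(["country"], ["total_gmv"]))
```

The value is exactly what the paragraph above describes: the business rule (how GMV is computed, how rows are grouped) lives in one version-controlled place rather than being re-typed, slightly differently, in every analyst's query.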
BigQuery+Looker: Power to the People
This powerful data stack connects users to the power of cloud computing and analytics. Insights can be created and delivered in many different ways: viewing a classic dashboard, integrating with Slack, receiving email alerts, or embedding analytics. Data can also be accessed through other interfaces, including spreadsheets, a popular tool that many people are already comfortable with. This is why we are particularly excited about Looker’s new Google Sheets integration. As alpha testers for this integration, we have been able to provide even broader access to data through the spreadsheet interface that everyone is familiar with. Rather than asking people to learn yet another new tool, our users’ existing workflows aren’t dramatically changing; they’re continuously evolving.
Our Business and Customer Experience Have Improved
Here are some examples of outcomes from different points along the data lifecycle that are propelling our business forward and improving our customers’ experience. Once we were on BigQuery, we reached the 99% availability SLA required by the processes that feed the 30+ near-real-time monitors we developed in Looker, which our business, transport, and operations teams consume to make key decisions. With BigQuery and the ease of use of LookML and Looker, we gained agility and speed in creating and deploying new dashboards, adapting to changes in our incredibly competitive industries.
And in creating value for customers, we are able to monitor, in near real time, the delivery promise of our shipments and optimize scheduling based on the capacity of our aircraft, providing reliability to our stakeholders.
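As a simplified, hypothetical sketch of how an availability figure like the 99% SLA above can be computed, one approach is to take the fraction of monitor refresh checks whose data lag fell within a freshness threshold:

```python
# Hypothetical sketch: computing SLA availability for a near-real-time
# monitor as the share of refresh checks that met a freshness threshold.
# The threshold and sample data are illustrative assumptions.

def sla_availability(lags_seconds, threshold_seconds=300):
    """Fraction of checks whose data lag was within the threshold."""
    if not lags_seconds:
        return 0.0
    on_time = sum(1 for lag in lags_seconds if lag <= threshold_seconds)
    return on_time / len(lags_seconds)

# Example: 99 on-time checks and 1 late one -> 99% availability.
lags = [120] * 99 + [900]
print(f"{sla_availability(lags):.0%}")  # 99%
```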
Building Upon The Foundation – A Culture of learning and innovation
As an analytics team, we can’t possibly respond and scale as fast as the company grows. Knowing this, we’ve developed our continuous intelligence ecosystem as a foundational layer that can answer 80% of people’s data questions. The remaining 20% depends on our ability to empower people to combine their own business knowledge and expertise with the tools we provide, innovating on top of what we have built.
Ultimately, a true continuous intelligence ecosystem is bigger than technologies: it requires empathy for users, understanding how they want to use and consume data, and providing education. To realize our longer-term adoption and data literacy goals, we have built comprehensive programs that employees go through to instill a beta mindset and teach them how to use these tools to innovate and build on their own insights.
As our users’ data skills evolve in parallel with our data technologies, our vision is to reach a point where people can build their own models and decision frameworks on top of the continuous intelligence ecosystem we’ve built. This continuous iteration and innovation is what fuels our data-driven culture. How we teach data literacy and culture will be the topic of our third blog installment.
In our second blog installment, we will explore a Shipping use case of how our continuous intelligence ecosystem was leveraged and innovated upon by our users to deliver impactful outcomes.
Gartner IT Glossary, Continuous Intelligence, as of March 28, 2022.
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.