
AI-by-Design in Practice: Real Cases from OLX

5 min read · By Ioannis Zempekakis
AI by Design · Case Study · OLX · Product · Part 4


Part 4 of the AI-by-Design series. Start from Part 1.


To help teams adopt the AI-by-Design framework, here are real examples from our work at OLX Group that illustrate the importance of each step.

Step 1 — Discover: What Happens When You Solve the Wrong Problem?

We trained an AI algorithm to identify and remove inorganic text from images on our car platform. The solution worked technically — we could even recreate the missing parts of the image.

But our customers were less impressed than we were.

What we missed: In most cases, the inorganic text wasn't harmful or illegal. It included phone numbers of local car representatives. Our platform didn't support multiple phone numbers per region, so sellers were forced to use images to add contact details.

"Our AI solution solved one problem but ignored an even bigger customer problem."

The fix: We (1) updated the platform to allow multiple phone numbers, and (2) retrained the algorithm to automatically extract and fill in the missing numbers.

Learning: Start with identifying the actual customer need. Don't make assumptions.

Step 2 — Define: Customer Needs vs. AI Potential

When OLX wanted to improve trust in car selling, we conducted in-depth interviews with 15 car buyers and mapped their needs. We then used three lenses:

  1. Desirability — Quantitative survey to understand which need was most important
  2. Viability — Competitor mapping to find where OLX could bring unique value
  3. Feasibility — AI solutions mapped to understand if needs could be addressed with technology

Learning: Regardless of the approach, consider desirability, viability and feasibility in defining scope.
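
The three lenses can be combined into a simple prioritisation sketch. This is a minimal illustration, not OLX's actual method; the `Need` class, the scores, and the multiplicative ranking rule are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Need:
    name: str
    desirability: float  # from the quantitative survey (0-1)
    viability: float     # from competitor mapping (0-1)
    feasibility: float   # from mapping AI solutions to the need (0-1)

def rank_needs(needs):
    """Rank customer needs by the product of the three lens scores.

    Multiplying (rather than averaging) means a need that scores very
    low on any single lens drops out of contention.
    """
    return sorted(
        needs,
        key=lambda n: n.desirability * n.viability * n.feasibility,
        reverse=True,
    )

# Illustrative, made-up scores:
needs = [
    Need("verified vehicle history", 0.9, 0.7, 0.6),
    Need("instant price estimate", 0.8, 0.5, 0.9),
    Need("virtual showroom photos", 0.4, 0.6, 0.8),
]
for n in rank_needs(needs):
    print(n.name)
```

The multiplicative rule is one design choice among several; a weighted sum would instead let a strong lens compensate for a weak one.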

Step 3 — AI-by-Design Decision: Four Critical Questions

Before committing to AI, ask:

  1. Can a human make the call in a few seconds? Anything a human can decide in seconds can probably be automated with AI. Deciding whether a car has a scratch? Yes. Deciding whether the engine is in good condition? Much harder.

  2. Has anyone attempted something similar? Explore GitHub, look for parallel business cases. If you can't find similar use cases, you might be solving an unsolved problem that requires more time and resources.

  3. How often does the prediction need to be correct? If always — or when the cost of a wrong prediction is high — evaluate whether a simpler logic-driven approach should be used instead.

  4. Is the right data available? Having collected data for a long time doesn't mean it's the right data, or of good enough quality.

"Avoid using AI as a gimmick."
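
As a rough illustration, the four questions can be treated as a decision gate. The function name, the ordering of the checks, and the recommendation strings below are assumptions made for this sketch, not part of the framework itself:

```python
def ai_by_design_gate(human_decides_in_seconds: bool,
                      prior_art_exists: bool,
                      tolerates_errors: bool,
                      right_data_available: bool) -> str:
    """Turn the four critical questions into a rough recommendation."""
    if not right_data_available:
        return "gather or improve data first"
    if not tolerates_errors:
        # Cost of a wrong prediction is high: prefer deterministic rules.
        return "prefer a logic-driven approach"
    if not human_decides_in_seconds or not prior_art_exists:
        return "treat as research: budget extra time and resources"
    return "good AI candidate"

# The scratch-detection example from question 1:
print(ai_by_design_gate(
    human_decides_in_seconds=True,   # spotting a scratch in a photo
    prior_art_exists=True,           # image damage detection is well explored
    tolerates_errors=True,           # a missed scratch is recoverable
    right_data_available=True,
))  # good AI candidate
```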

Case: Building Datasets with Users

For our AI car inspection tool, we needed damage recognition data. Existing data was messy and would take months to label manually.

Our approach: Data scientists and designers brainstormed solutions that could both solve a customer problem and deliver the right data. The prerequisite: the tool should still offer value to the customer — it couldn't just be a data collection tool.

Learning: It is possible to design a tool that gathers data and improves the customer experience at the same time.

Step 4 — Develop: Designing for Trust

Many AI initiatives fail because of lack of trust. We focus on three pillars:

Explainable

Prefer solutions with predictions that can be explained. It's challenging to gain stakeholder trust if they can't understand how the algorithm makes predictions.

Unbiased

Explore whether your data is biased toward a particular population. The same bias in an AI system can have a much bigger impact than in human decision-making.

Transparent

Indicate to users the limitations of predictions, how data is gathered, and how their interaction shapes the solution. Use non-technical language.

Case: Car Price Prediction

We had two options: a complex tree-based algorithm and a simpler one. Both had the same error rate. We chose the simpler one.

Why? Higher adoption from sellers and inspection centres, driven by greater explainability and transparency. The simpler algorithm was easier for end-users to comprehend, and sellers found it easier to provide feedback that directly improved it.
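
The selection rule can be sketched as a tie-break: among models whose error rates are effectively equal, prefer the least complex one. The candidate names, error rates, and complexity scores below are hypothetical:

```python
# (name, error_rate, complexity) — lower complexity means easier to explain.
candidates = [
    ("gradient-boosted trees", 0.12, 3),
    ("single decision tree", 0.12, 1),
]

def pick_model(candidates, tolerance=0.005):
    """Among models within `tolerance` of the best error rate,
    pick the simplest (most explainable) one."""
    best_error = min(error for _, error, _ in candidates)
    finalists = [c for c in candidates if c[1] - best_error <= tolerance]
    return min(finalists, key=lambda c: c[2])

print(pick_model(candidates)[0])  # single decision tree
```

The `tolerance` parameter encodes the judgment call in prose form: how much accuracy you are willing to trade for explainability.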

Step 5 — Test: Always Test Before Building

Case: The Artificial Showroom

We discovered an AI tool that could place cars in a crisp, fake showroom. A data scientist estimated it would take just a couple of days to build. Why not develop it right away?

We showed it to car sellers first. They unanimously disliked it. Private sellers feared scammers could abuse it to pretend to be dealers. Dealers feared it would harm their reputation.

"The lesson became clear: always test your assumptions before you start building."

The tool could have jeopardised our goal of improving trust, producing the opposite of the intended effect.

Step 6 — Deliver & Evaluate: Feedback Loops

Case: Automatic Attribute Extraction

We developed algorithms to extract missing listing attributes (colours, brands, origin country) from descriptions, images, and videos.

In the first iteration, we intentionally asked sellers to accept or change the automatically extracted values. "Is your car blue?" We prioritised transparency and asked for direct feedback.

This feedback was used to:

  • Calculate live model accuracy
  • Capture better labels to improve the solution
  • Build trust with both buyers and sellers

Learning: Feedback loops empower end-users, build trust, and improve predictions over time.
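
A minimal sketch of such a feedback loop, assuming a hypothetical `FeedbackLoop` class: every accept-or-correct response is stored as a label, and live accuracy is the fraction of predictions the seller accepted:

```python
class FeedbackLoop:
    """Track seller responses to automatically extracted attributes.

    Each response doubles as a training label: an acceptance confirms
    the prediction, a correction supplies the right value.
    """

    def __init__(self):
        self.labels = []  # (predicted, confirmed) pairs for retraining

    def record(self, predicted, confirmed):
        self.labels.append((predicted, confirmed))

    def live_accuracy(self):
        """Share of predictions the seller accepted unchanged."""
        if not self.labels:
            return None
        correct = sum(1 for p, c in self.labels if p == c)
        return correct / len(self.labels)

loop = FeedbackLoop()
loop.record("blue", "blue")  # seller accepted "Is your car blue?"
loop.record("blue", "navy")  # seller corrected the colour
print(loop.live_accuracy())  # 0.5
```

The same `labels` list feeds the other two bullets: corrected pairs become higher-quality training labels, and surfacing the question itself is the transparency that builds trust.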


Next: Part 5 — Key Takeaways

Previous: Part 3 — The 6-Step Framework