I joined Mercadona’s first hackathon with my brother Laijie and his team. Mercadona’s IT organization powers the backbone of Spain’s largest supermarket chain, so the problems are very real and the constraints are practical.

The prompt was simple: use AI to improve the customer experience in Mercadona’s online store.

We built Mercadona Chef - a prototype assistant that takes a food photo, identifies the dish, and generates a structured recipe plus a shopping list you can add to cart.

User flow

  1. Upload a photo of a dish.
  2. The system proposes what the dish is.
  3. Generate a recipe (ingredients + steps).
  4. Produce a shopping list that maps onto the online store's checkout flow.

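Steps 3 and 4 hinge on the recipe being structured data rather than free text, so the ingredient list can feed a checkout flow directly. A minimal sketch of what that structure could look like (field names are illustrative, not our actual schema):

```python
# Hypothetical recipe data model; names are illustrative, not the
# hackathon code's actual schema.
from dataclasses import dataclass, field

@dataclass
class Ingredient:
    name: str
    quantity: str  # e.g. "400 g", "2 units"

@dataclass
class Recipe:
    dish: str
    ingredients: list[Ingredient] = field(default_factory=list)
    steps: list[str] = field(default_factory=list)

    def shopping_list(self) -> list[str]:
        # Flatten ingredients into the product names a cart flow needs.
        return [i.name for i in self.ingredients]

paella = Recipe(
    dish="Paella valenciana",
    ingredients=[Ingredient("bomba rice", "400 g"), Ingredient("chicken", "500 g")],
    steps=["Sauté the chicken.", "Add rice and stock; simmer."],
)
```

Keeping the recipe and the shopping list in one structure is what lets step 4 fall out of step 3 for free.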
System design

We split work across backend, frontend, and AI. I owned the AI pipeline.

Instead of pushing everything through a single monolithic model, we went with a three-stage pipeline so each model does what it is good at:

  1. Dish proposal (VLM) - a vision-language model suggests candidate dish names from the uploaded image.
  2. Ranking (CLIP) - CLIP selects the most plausible match among candidates.
  3. Recipe generation (LLM) - an LLM expands the selected dish into a structured recipe and ingredient list.

This modular approach improved accuracy and gave us better control over failure modes (for example, we could inspect and constrain intermediate outputs). The tradeoff is latency, since the three stages run sequentially.
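The ranking stage works by comparing the image's embedding against a text embedding for each candidate dish name and keeping the closest match. A toy sketch of that idea, with small hand-made vectors standing in for real CLIP embeddings:

```python
# Sketch of the CLIP-style ranking stage: score each candidate dish name
# by cosine similarity between the image embedding and the candidate's
# text embedding, then keep the best match. The 3-d vectors below are
# toy stand-ins for real CLIP embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rank_candidates(image_emb, candidates):
    # candidates: {dish name: text embedding}
    scored = sorted(
        candidates.items(),
        key=lambda kv: cosine(image_emb, kv[1]),
        reverse=True,
    )
    return scored[0][0], scored

image_emb = [0.9, 0.1, 0.2]
candidates = {
    "paella": [0.8, 0.2, 0.1],
    "risotto": [0.1, 0.9, 0.3],
    "fried rice": [0.3, 0.4, 0.8],
}
best, _ = rank_candidates(image_emb, candidates)  # → "paella"
```

Because this stage exposes per-candidate scores, it is also a natural place to inspect and constrain intermediate outputs (e.g. reject all candidates below a similarity threshold).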

Tech stack

  • Models: Hugging Face-hosted VLM + LLM
  • Ranking: CLIP
  • Backend: FastAPI
  • Frontend: Angular

Outcome

By the end of the hackathon we had a working prototype: drop in a photo, wait a few seconds, and get a recipe plus a ready-to-use shopping list.

It worked well enough that we won 1st place.

One thing I appreciated about the event: other teams brought strong, user-focused ideas too (including interfaces aimed at helping elderly users navigate the shopping experience).