
TRIZ review analysis for product improvement

An automated product analysis and development workflow that combines Amazon review scraping, AI-powered TRIZ analysis, and generative design to turn customer feedback into actionable product improvements.

Updated Jan 2026 by Shahzeb

Highlights

  • Automated review scraping and analysis per product category
  • TRIZ methodology to identify technical and design contradictions
  • AI-generated product images showing resolved issues

The problem

An Amazon client wanted to bridge product development with AI to automate their entire product analysis workflow. The challenge was:

  • Manually reading hundreds of reviews per category to identify product flaws was time-consuming and inconsistent
  • Technical vs. design problems were often mixed together, making prioritization difficult
  • Translating customer complaints into actionable product improvements required domain expertise
  • Visualizing potential solutions before prototyping was costly and slow

They needed a system that could automatically extract insights from reviews and generate concrete improvement recommendations.

What is TRIZ?

TRIZ (Theory of Inventive Problem Solving) is a systematic methodology for innovation that identifies contradictions in design and engineering problems.

Instead of just listing complaints, TRIZ frames problems as contradictions:

  • Technical contradictions: improving one parameter worsens another (e.g., "stronger material" vs. "lighter weight")
  • Physical contradictions: a system requires opposite properties (e.g., "should be large" and "should be small")

By framing customer feedback through TRIZ principles, we could systematically identify root problems and apply proven solution patterns.

Example: Reviews complaining "the vacuum is too heavy but lacks suction power" translate to a TRIZ contradiction: Weight vs. Power. This leads to specific solution patterns like segmentation or dynamic systems.

The pipeline: Reviews → Analysis → Solutions → Visuals

We built a multi-stage pipeline combining web scraping, local LLMs for efficiency, and paid LLMs for precision work:

Stage 1: Review Scraping

  • Scraped Amazon reviews per product category (kitchenware, electronics, home goods, etc.)
  • Extracted review text, ratings, verified purchase status, and helpful votes
  • Cleaned and deduplicated reviews to focus on substantive feedback
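A minimal sketch of the Stage 1 cleaning step, assuming reviews have already been scraped (e.g., with Playwright) into dicts. The field names and thresholds here are illustrative, not the production configuration:

```python
import re

def clean_reviews(raw_reviews, min_words=5):
    """Drop near-duplicate and low-substance reviews."""
    seen = set()
    cleaned = []
    for r in raw_reviews:
        text = re.sub(r"\s+", " ", r["text"]).strip()
        key = text.lower()
        if key in seen:                    # exact duplicate after normalization
            continue
        if len(text.split()) < min_words:  # too short to carry a real complaint
            continue
        seen.add(key)
        cleaned.append({**r, "text": text})
    return cleaned

reviews = [
    {"text": "Great  blender,   but it is too heavy.", "rating": 3, "verified": True},
    {"text": "great blender, but it is too heavy.", "rating": 3, "verified": False},
    {"text": "Meh.", "rating": 2, "verified": True},
]
print(len(clean_reviews(reviews)))  # → 1 (duplicate and one-word review removed)
```

Normalizing whitespace and case before hashing catches the copy-paste duplicates that are common in scraped review sets.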

Stage 2: Initial Analysis (Local LLM)

  • Used a local LLM to quickly categorize reviews into complaint themes
  • Identified recurring patterns across hundreds of reviews
  • Filtered out noise (shipping complaints, seller issues) to focus on product-specific problems
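The noise filter can run as cheap rules before the local LLM ever sees a review, so the model only categorizes product-relevant feedback. The keyword list and the prompt below are assumptions for illustration, not the client's exact setup:

```python
# Rule-based triage: route obvious non-product complaints away from the LLM.
NOISE_TERMS = ("shipping", "delivery", "seller", "refund", "arrived late")

def is_product_feedback(text: str) -> bool:
    lowered = text.lower()
    return not any(term in lowered for term in NOISE_TERMS)

def categorization_prompt(review_text: str) -> str:
    # Prompt handed to the local LLM (e.g., a Llama model behind an HTTP API).
    return (
        "Classify this product review into one complaint theme "
        "(durability, weight, usability, noise, cleaning, other):\n\n"
        f"Review: {review_text}\nTheme:"
    )

print(is_product_feedback("The seller shipped it late"))    # → False
print(is_product_feedback("The handle cracked in a week"))  # → True
```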

Stage 3: TRIZ Analysis (Paid LLM)

  • Fed categorized complaints to a paid LLM (GPT-4/Claude) with TRIZ framework prompts
  • Identified technical contradictions (e.g., "durable but too heavy")
  • Identified design contradictions (e.g., "needs to be compact but hold more")
  • Mapped contradictions to TRIZ solution principles
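One way to structure Stage 3's output: each categorized complaint becomes a contradiction record with candidate solution principles, which then seeds the paid-LLM prompt. The principle numbers follow the classical 40 TRIZ principles; the hand-picked mapping below is an example, not the full contradiction matrix:

```python
from dataclasses import dataclass, field

@dataclass
class Contradiction:
    improving: str   # parameter the customer wants improved
    worsening: str   # parameter that degrades as a result
    kind: str        # "technical" or "design"
    principles: list = field(default_factory=list)

vacuum = Contradiction(
    improving="suction power",
    worsening="weight of moving object",
    kind="technical",
    principles=["1. Segmentation", "15. Dynamics"],
)

def triz_prompt(c: Contradiction) -> str:
    # Prompt sent to the paid LLM for solution generation.
    return (
        f"Given the TRIZ {c.kind} contradiction 'improve {c.improving} "
        f"without worsening {c.worsening}', propose product changes using "
        f"these principles: {', '.join(c.principles)}."
    )

print(triz_prompt(vacuum))
```

Keeping the contradiction as structured data (rather than free text) is what makes the later problem/solution tracking in the database straightforward.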

Stage 4: Solution Generation & Visualization (Paid LLM)

  • Generated detailed product improvement recommendations based on TRIZ principles
  • Used image generation models (DALL-E/Midjourney API) to create visuals of improved products
  • Produced side-by-side comparisons: current product vs. proposed solution
[Figure: TRIZ analysis pipeline flow diagram]
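For Stage 4, the interesting part is how a TRIZ recommendation is turned into an image prompt; the actual API call (DALL-E 3 via the OpenAI SDK, or a Midjourney wrapper) is omitted here. A hedged sketch of the side-by-side prompt assembly:

```python
def comparison_prompt(product: str, problem: str, solution: str) -> str:
    # Builds the text prompt for the image model: current vs. improved version.
    return (
        f"Side-by-side product render of a {product}. "
        f"Left: current version with this issue: {problem}. "
        f"Right: improved version applying: {solution}. "
        "Clean studio lighting, white background, annotation arrows."
    )

prompt = comparison_prompt(
    product="upright vacuum cleaner",
    problem="bulky single-piece body, heavy to carry upstairs",
    solution="segmented, detachable motor unit (TRIZ: Segmentation)",
)
print(prompt)
```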

Separating technical vs. design problems

A key insight was that not all product problems require engineering changes — some are purely design/UX issues.

Technical problems:

  • Material strength, durability, performance specs
  • Battery life, motor power, thermal management
  • Require engineering resources and longer development cycles

Design problems:

  • Ergonomics, visual clarity, button placement
  • Packaging confusion, unclear instructions
  • Can often be fixed faster and cheaper

By segregating these using TRIZ analysis, the client could prioritize low-hanging fruit (design fixes) while planning longer-term technical improvements.

Real example: Reviews for a kitchen gadget mentioned "hard to clean" (design) and "motor burns out quickly" (technical). Design fixes were implemented in weeks; motor redesign was scheduled for the next product generation.
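In production this technical/design split came out of the TRIZ LLM analysis; the keyword heuristic below is only a stand-in to illustrate the routing logic, with made-up signal lists:

```python
TECHNICAL_SIGNALS = ("motor", "battery", "overheat", "burns out", "breaks")
DESIGN_SIGNALS = ("hard to clean", "confusing", "grip", "instructions", "button")

def classify_problem(complaint: str) -> str:
    lowered = complaint.lower()
    if any(s in lowered for s in TECHNICAL_SIGNALS):
        return "technical"    # needs engineering resources, longer cycle
    if any(s in lowered for s in DESIGN_SIGNALS):
        return "design"       # candidate quick win
    return "unclassified"     # escalate to the LLM pass

print(classify_problem("motor burns out quickly"))   # → technical
print(classify_problem("hard to clean the blades"))  # → design
```

Anything the rules can't place falls through to the LLM, keeping the cheap path cheap without losing edge cases.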

Tech stack

  • Scraping: Python (Playwright/Selenium) for Amazon review extraction
  • Local LLM: Llama 2 or similar for fast initial categorization
  • Paid LLMs: OpenAI GPT-4 / Anthropic Claude for TRIZ analysis and solution generation
  • Image generation: DALL-E 3 / Midjourney API for product mockups
  • Storage: PostgreSQL for structured problem/solution tracking
  • Orchestration: Python scripts + cron for scheduled runs per category
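The problem/solution tracking can be as simple as two tables. The schema below is illustrative (table and column names are assumptions), and SQLite stands in for PostgreSQL so the snippet runs without a server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE problems (
    id INTEGER PRIMARY KEY,
    category TEXT NOT NULL,       -- e.g. 'kitchenware'
    kind TEXT CHECK (kind IN ('technical', 'design')),
    contradiction TEXT NOT NULL   -- e.g. 'weight vs. power'
);
CREATE TABLE solutions (
    id INTEGER PRIMARY KEY,
    problem_id INTEGER REFERENCES problems(id),
    principle TEXT,               -- TRIZ principle applied
    recommendation TEXT,
    image_url TEXT                -- generated mockup
);
""")
conn.execute(
    "INSERT INTO problems (category, kind, contradiction) VALUES (?, ?, ?)",
    ("kitchenware", "technical", "weight vs. power"),
)
row = conn.execute("SELECT kind FROM problems").fetchone()
print(row[0])  # → technical
```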

Cost optimization: Local LLMs handled ~80% of the initial filtering work, reducing API costs significantly.

Outcome

The system automated what used to take the client's team weeks of manual work:

  • Before: Manually reading reviews, debating problems in meetings, guessing at solutions
  • After: Structured problem reports per category, TRIZ-backed recommendations, visual mockups ready for review

The client now runs this pipeline monthly per product line, feeding insights directly into their product development roadmap.

Key wins:

  • Reduced time from "complaint identified" to "solution drafted" from weeks to days
  • Clear separation of quick wins (design) vs. long-term investments (technical)
  • Visual mockups helped secure stakeholder buy-in faster

We continue to refine the TRIZ prompts and expand to new product categories as the client grows their Amazon catalog.

Want something like this?

Tell me your stack + what you want automated. I’ll reply with a simple plan.

WhatsApp