Case Study: Building a Mobile Islamic Assistant with a Custom LLM

StableWorks

A mobile Islamic assistant powered by a custom LLM that answers in the user’s language, cites trusted sources, acknowledges differing opinions, and favors humility, privacy, and safety.

Jul 23, 2025

We set out to build a different kind of AI experience: one that offers Islamic guidance with humility, clarity, and care. The assistant listens, responds in the user’s language, shares relevant references from the Qur’an and Hadith when appropriate, and acknowledges differences of opinion with respect. And when it isn’t sure, it says so — plainly and politely.

This article explains the experience in simple terms and why it matters.

The promise: guidance that feels human and trustworthy

From the first message, the assistant models good etiquette. It avoids speculation, steers clear of sectarian bias, and uses familiar phrases (like “Allah knows best”) where appropriate. Instead of delivering rigid answers, it explains the reasoning, offers context, and — when there are valid scholarly differences — notes that gently so users can make informed choices.

Most importantly, it replies in the same language the user speaks. English in? English out. Arabic in? Arabic out. That small detail makes the experience feel far more personal and accessible.
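As a rough illustration, language mirroring can start with something as simple as a script check. The heuristic below is a sketch only; a production system would use a proper language-identification model, and the function names are hypothetical:

```python
import unicodedata

def detect_script(text: str) -> str:
    """Very rough script detector: 'arabic' if the message is mostly
    Arabic-script characters, else 'latin'. Illustration only."""
    arabic = sum(1 for ch in text if "ARABIC" in unicodedata.name(ch, ""))
    letters = sum(1 for ch in text if ch.isalpha())
    if letters and arabic / letters > 0.5:
        return "arabic"
    return "latin"

def reply_language(user_message: str) -> str:
    # Mirror the user's script so the answer arrives in their language.
    return "ar" if detect_script(user_message) == "arabic" else "en"
```

In practice the detected language would also be passed to the model as an explicit instruction, so the reply stays in the user's language even for short or mixed-script messages.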

The experience: fast, friendly, and mobile-first

The interface is a clean chat on mobile. You ask a question; the assistant starts responding right away so you can read as it types. There’s no jumble of links — just a clear, well-structured answer that you can read at your own pace. If you want more detail, you ask a follow-up and the conversation flows.

Behind the scenes, conversations are saved securely so you can pick up where you left off. If a message ever takes too long, the assistant gracefully lets you know and invites you to try again — no dead ends, no confusion.
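The graceful-timeout behavior can be sketched in a few lines. Here `generate` stands in for whatever model call the service actually makes, and the timeout value and message wording are placeholders:

```python
import asyncio

FRIENDLY_TIMEOUT_MESSAGE = (
    "This is taking longer than usual. Please try asking again in a moment."
)

async def answer_with_timeout(generate, question, timeout_s=30.0):
    """Wrap a model call so a slow response ends in a polite invitation
    to retry instead of a dead end. `generate` is any coroutine that
    returns the assistant's reply text."""
    try:
        return await asyncio.wait_for(generate(question), timeout=timeout_s)
    except asyncio.TimeoutError:
        return FRIENDLY_TIMEOUT_MESSAGE
```

The key design choice is that the timeout path returns a normal message through the same channel as a real answer, so the client never has to handle a special error state.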

How it thinks (without the jargon)

Think of the assistant as a careful team leader coordinating a few reliable helpers:

  1. One helper focuses on understanding the question exactly as asked.

  2. Another checks trusted sources and “keeps facts fresh” so answers reflect current understanding.

  3. A third helps weigh context and different opinions fairly, then produces a clear, humble explanation.

  4. A final helper keeps things orderly — making sure messages are saved, language is consistent, and the conversation stays on track.
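The four helpers above map naturally onto a small pipeline. The sketch below is illustrative only, with stubbed steps standing in for the real understanding, retrieval, and composition stages:

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    question: str
    understanding: str = ""
    references: list = field(default_factory=list)
    answer: str = ""

def understand(turn: Turn) -> Turn:
    # Helper 1: capture the question exactly as asked.
    turn.understanding = turn.question.strip()
    return turn

def gather_sources(turn: Turn) -> Turn:
    # Helper 2: consult trusted sources (stubbed here).
    turn.references = ["<relevant citation>"]
    return turn

def compose(turn: Turn) -> Turn:
    # Helper 3: weigh context and produce a clear, humble explanation.
    turn.answer = (
        f"On '{turn.understanding}' ({len(turn.references)} reference(s)). "
        "And Allah knows best."
    )
    return turn

def finalize(turn: Turn) -> Turn:
    # Helper 4: save the message, keep language consistent, stay on track.
    return turn

def answer(question: str) -> Turn:
    turn = Turn(question)
    for step in (understand, gather_sources, compose, finalize):
        turn = step(turn)
    return turn
```

Because each helper takes and returns the same `Turn` object, steps can be tested, logged, and improved independently, which is where the consistency described above comes from.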

The result: fewer mistakes, fewer repeats, and answers that feel consistent from one day to the next.

Business impact (what teams actually feel)

  • Consistency at scale: The assistant gives steady, respectful answers across thousands of conversations, so users feel they’re getting equal treatment and clear guidance every time.

  • Content you can stand behind: Answers are framed with appropriate context and references where helpful, and the assistant admits uncertainty when needed. That builds trust, not just traffic.

  • Lower load on human teams: The assistant handles routine questions and follow-ups, freeing knowledgeable people to focus on complex cases and community engagement.

  • Inclusive by default: Responding in the user’s language lowers friction and expands reach without maintaining multiple content sets.

  • Streamlined support & review: Conversations are saved with a simple, human-readable timeline, so support teams can help users faster and reviewers can spot opportunities to improve the assistant’s behavior.

  • Fair access models: Usage limits and gentle upgrade prompts keep service sustainable without interrupting genuine learning moments.
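A fair-access limit of this kind reduces to a small per-user counter. The quota below is a made-up number and the upgrade wording is illustrative:

```python
from collections import defaultdict

DAILY_FREE_LIMIT = 20  # hypothetical quota

class UsageGate:
    """Sketch of a fair-access check: count messages per user and
    return a gentle upgrade nudge instead of a hard error."""

    def __init__(self, limit: int = DAILY_FREE_LIMIT):
        self.limit = limit
        self.counts = defaultdict(int)

    def allow(self, user_id: str) -> bool:
        # Record the attempt and report whether it fits today's quota.
        self.counts[user_id] += 1
        return self.counts[user_id] <= self.limit

    def message_for(self, user_id: str):
        # Only users over the limit see a nudge; everyone else sees nothing.
        if self.counts[user_id] <= self.limit:
            return None
        return ("You've reached today's free limit. "
                "Upgrade to continue, or come back tomorrow.")
```

A real service would reset the counters daily and persist them, but the shape stays the same: the limit produces a friendly message, never an abrupt failure.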

Trust and safety by design

We designed the assistant to choose caution over confidence. It won’t pretend to know what it doesn’t. It avoids divisive framing and treats all recognized schools of thought with respect. And it explains the difference between widely accepted guidance and areas where scholars disagree — in friendly, everyday language.

The service also keeps private details private. Sensitive keys are never exposed to the user’s device, and chats are stored securely. Threads belong to the user who started them; that’s an important boundary for dignity and safety.
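The ownership boundary is easy to state in code: every thread records who started it, and every read or write checks the requester against that owner. This is a simplified in-memory sketch; the real service presumably enforces the same rule at the storage layer:

```python
class ThreadStore:
    """Minimal thread store where each conversation belongs to exactly
    one user. Illustration only; not the production data layer."""

    def __init__(self):
        self._threads = {}  # thread_id -> (owner_id, list of messages)

    def create(self, thread_id: str, owner_id: str) -> None:
        self._threads[thread_id] = (owner_id, [])

    def _authorize(self, thread_id: str, requester_id: str) -> list:
        # Deny access to anyone who is not the thread's owner.
        owner, messages = self._threads[thread_id]
        if owner != requester_id:
            raise PermissionError("threads belong to the user who started them")
        return messages

    def append(self, thread_id: str, requester_id: str, message: str) -> None:
        self._authorize(thread_id, requester_id).append(message)

    def read(self, thread_id: str, requester_id: str) -> list:
        return list(self._authorize(thread_id, requester_id))
```

Putting the check inside the store, rather than in the UI, means there is no code path that can hand one user another user's conversation.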

What we learned

Small details matter. Answering in the user’s language, acknowledging differences with courtesy, gracefully handling delays, and remembering where a conversation left off — these design choices do more than tidy up an app. They create trust.

Why StableWorks

We build AI assistants that behave like great colleagues: they listen carefully, explain clearly, and follow the rules. For this project, we combined a mobile-first experience with a custom-tuned language model and practical guardrails. The outcome is an assistant that feels warm, helpful, and dependable — not just “smart.”

If you’re exploring how AI can deliver respectful, trustworthy guidance in your domain, we’d love to help you design it thoughtfully and ship it with confidence.

More Articles

Written by

Aaron W.

Nov 24, 2025

The Context Window Expansion and What It Means for Your Business

Context windows expanded 1,000x in five years, enabling AI to process entire contracts, codebases, and document libraries in one pass. Practical guide to capabilities, limitations, costs, and when to use long context versus RAG.

Written by

Aaron W.

Oct 24, 2025

The Real Business Impact of AI According to 2024-2025 Data

Research from 2024-2025 reveals that strategic AI implementation delivers 3-10x ROI while 95% of companies see zero returns, with success determined by investment levels, data infrastructure maturity, and treating AI as business transformation rather than technology adoption.

Written by

Aaron W.

Oct 17, 2025

When Uncertainty Becomes the Safety Signal: How AI Companies Are Deploying Precautionary Safeguards

Anthropic, OpenAI, and Google deployed their newest models with enhanced safety protections before proving they were necessary, implementing precautionary safeguards when evaluation uncertainty itself became the risk signal.

Written by

Aaron W.

Oct 13, 2025

Petri: Anthropic's Open-Source Framework for Automated AI Safety Auditing

Technical walkthrough of how Petri orchestrates automated model behavior audits, from seed instructions to evaluation pipelines.
