Frequently Asked Questions

Honest answers to the hard questions about Nobi's model, sustainability, and mission.

๐ŸŒ Mission & Vision

What's the long-term vision? โ–ผ

Digital companionship as accessible as Wikipedia, as resilient as Bitcoin, as personal as a diary.

An AI companion for every human that no company can take away, censor, or monetise against your interests. Free forever. Community-funded. Open source. Privacy-first.

Read the full vision at projectnobi.ai/vision.

How does Nobi benefit the Bittensor network? ▼

Most Bittensor subnets are infrastructure – they neither charge end users nor have a revenue path. So why do people still stake into them? Because the network funds them through emissions. That's how Bittensor works.

Nobi is the same – except we do something unique: we put Bittensor directly in front of millions of regular people.

That's the butterfly effect:

1. Person discovers Nori → just wants an AI companion
2. Nori works great → they tell friends, it spreads organically
3. Curious user asks "what powers this?" → discovers Bittensor
4. Some buy TAO → demand increases
5. Some become miners → network grows
6. Media covers "the free AI that no company owns" → cycle accelerates
7. Bittensor becomes a household name – not because of a whitepaper, but because of a product people use every day

No other subnet is building this path to the general public. Most subnets serve developers, researchers, or infrastructure. Nobi serves everyday humans.

Nobi is the Trojan horse that brings Bittensor to the masses. The subnet's value isn't measured in revenue – it's measured in how many people it introduces to the network.

That's worth staking on.

What if the subnet gets deregistered? A whale could knock you out. ▼

Yes, they could. Deep pockets are stronger than deep faith in Bittensor today. That's the reality.

But consider the alternatives. If Nobi fails: the team loses months of work and personal investment, and we'd need to ensure all user data is handled responsibly during any wind-down. That's why we're on testnet – proving everything works BEFORE real-world scale. The code is open source and lives on regardless. Someone can fork it and try again.

The damage of NOT trying: Bittensor stays a closed ecosystem of developers, miners, and speculators. TAO stays a niche token nobody outside crypto has heard of. And the question "what is Bittensor actually FOR?" never gets answered for normal humans.

Every transformative project started at worse odds. Linux vs Microsoft. Wikipedia vs Britannica. Bitcoin vs banks. They survived because the people building them didn't stop when the odds looked bad.

We're building with full awareness we might fail. Our personal money. Our time. Our reputation. We're not asking anyone else to take that risk โ€” we already took it.

The real question: does the Bittensor community want mass adoption, or just more subnets?

If "just more subnets" – fair enough. We'll keep building anyway. If "yes, we want real people using this network" – then support the project that's actually trying to make that happen.

This isn't a business plan. It's a test of what this community actually values.

What's the risk? Could this fail? ▼

Yes, honestly. If the community doesn't stake, emissions stay low, miners leave, and quality drops. That's the real risk. It's the same risk every open-source project faces – will people care enough to support it?

We're betting they will, because privacy-preserving AI companionship is something people actually want and can't get from corporations. But we're transparent: this is a conviction bet, not a guaranteed outcome.

What we can control: building the best possible product. What the community controls: whether they believe in it enough to support it.

Isn't giving out value for free without any path to revenue just... bad economics? ▼

"Demand comes from value to the subnet" – correct! And what creates value? Users. Millions of them. Chatting every day. Creating demand for miner compute. Which creates demand for ALPHA stake. Which creates... wait for it... value to the subnet 🎉

Google gave away free search for 6 years before making a penny. Facebook was free for 8 years. Bittensor itself has been emitting TAO for years without "revenue." Were they all foolish, or were they just building something people actually want first?

Revenue follows usage. Usage follows a product people love. We're at step 1. We may never be directly "profitable" as a subnet – and that's the point. But we will bring millions of everyday users to Bittensor. And when those users discover what this network can do? Every subnet benefits. Every TAO holder benefits. Every miner benefits.

That's not bad economics. That's a Trojan horse 🐴

🤖 Product & Differentiation

How is Nobi different from ChatGPT or other AI assistants? ▼

Three structural differences – not just features:

Ownership: ChatGPT can delete your history, change policies, or shut down. Nobi memories are encrypted on a decentralised network – no single entity controls them.

Privacy: OpenAI may train on your conversations. Nobi memories are encrypted at rest (AES-128, server-side). Miners process conversation content to generate responses. End-to-end TEE encryption is code-complete and deploying to production. Your data, your control.

Persistence: ChatGPT doesn't truly know you across sessions. Nobi builds a persistent knowledge graph – your name, preferences, relationships, emotional patterns – that compounds over months and years.

Voice: Send Nori a voice note on Telegram and she transcribes it and responds – no other companion bot does this out of the box.

Biological memory: Memories naturally decay over time, just as human memories do. Nori infers your habits and interests from your conversation patterns without you having to tell her, and she tracks your emotional trends to adjust her tone when you need it.
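One plausible way to implement the decay side of this, as a sketch only – the 30-day half-life and the exponential form are assumptions, not Nobi's actual parameters:

```python
import math

# Assumed half-life: how long until an unrecalled memory halves in strength.
HALF_LIFE_DAYS = 30.0

def memory_strength(initial: float, days_since_recall: float) -> float:
    """Exponential decay: strength halves every HALF_LIFE_DAYS without recall."""
    return initial * math.exp(-math.log(2) * days_since_recall / HALF_LIFE_DAYS)
```

Recalling a memory would reset `days_since_recall`, so frequently used memories stay strong while stale ones fade – mirroring the human pattern described above.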

The pitch isn't "better ChatGPT." It's: "Your AI companion that no company can take away from you."

What does "mass adoption for Bittensor" actually mean? ▼

Millions of regular people using Bittensor-powered AI without knowing it's Bittensor. Just like you use the internet without knowing about TCP/IP.

Today, most people who know about Bittensor are technical – miners, validators, developers, traders. Nobi's goal is to get regular people chatting with an AI companion on Telegram or Discord, getting a great experience, and never needing to know about subnets, TAO, or miners.

Their usage creates demand for the network. More users → more miners needed → more usage → more reasons for the community to stake → network grows. Every person chatting with Nori is proof that Bittensor works at consumer scale.

Think of Android – billions use it, 99.99% have no idea it's Linux underneath. But their usage makes the entire Linux ecosystem more valuable.

Is there a market for this outside the Bittensor community? ▼

Absolutely. The market is everyone who wants an AI companion but doesn't trust corporations with their personal data:

• Privacy-conscious users (growing rapidly post-GDPR, post-Cambridge Analytica)
• People in countries where corporate AI is censored or unavailable
• Users burned by AI companies changing terms (Replika removing features, Character.AI restricting content)
• Anyone who wants genuine AI memory persistence without corporate lock-in

The Bittensor part is invisible to end users. They just see a bot that remembers them. The decentralisation is the how, not the what.

Is Nobi just a consumer app or an actual subnet? ▼

Both – and that's the point. Every subnet is infrastructure for something. SN1 is infrastructure for text generation. SN19 for image generation. Nobi is infrastructure for personal AI companionship.

The subnet provides the decentralised, censorship-resistant, privacy-preserving backbone with custom quality scoring. The bot and app are the user-facing interface. One doesn't work without the other.

💰 Sustainability & Funding

How can Nobi be free forever? Who pays for it? ▼

Nobi runs on the Bittensor network, which pays miners and validators through TAO token emissions – the same way every Bittensor subnet works. Miners earn TAO for serving quality companion responses. The network funds the infrastructure, not user subscriptions.

Think of it like public roads – you don't pay a toll every time you drive. The infrastructure is funded by a larger system so individuals can use it freely.

LLM inference costs have dropped 99% in two years and continue falling. Our miners currently run on $30-70/month VPS servers using API inference (Chutes at $20/month base + pay-as-you-go). As technology improves, costs go down while quality goes up.

Who pays the miners if users don't pay? ▼

Bittensor network emissions pay miners. Every Bittensor subnet receives TAO emissions based on how much TAO the community stakes on it. Miners compete to serve the best companion responses – validators score them on quality, memory recall, personality consistency, and emotional intelligence. Better miners earn more TAO.

This is the standard Bittensor model. Miners aren't doing charity – they're earning TAO for providing a service, just like miners on any other subnet.

What does "burn 100% of owner emissions" mean? ▼

Every Bittensor subnet owner receives a mandatory 18% of emissions – this cannot be set to zero. Most subnet owners keep this as profit.

We burn 100% of ours. Using Bittensor's native burn_alpha() function, we destroy every token we receive. Every burn transaction is on-chain and publicly verifiable by anyone. No profit for the team. Ever.

This burn also benefits the community – it reduces token supply, creating deflationary pressure that helps all token holders.
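The arithmetic is easy to check with a toy model – the per-epoch emission figure below is invented for illustration; only the 18% owner share comes from the text:

```python
OWNER_SHARE = 0.18  # mandatory owner cut of each epoch's emissions (burned, per the text)

def circulating_after(epochs: int, emission_per_epoch: float) -> float:
    """ALPHA entering circulation when the owner share is burned every epoch."""
    supply = 0.0
    for _ in range(epochs):
        minted = emission_per_epoch
        burned = minted * OWNER_SHARE   # burn_alpha() destroys the owner cut
        supply += minted - burned       # only the remaining 82% circulates
    return supply
```

At 1,000 ALPHA per epoch, 100 epochs put roughly 82,000 ALPHA into circulation instead of 100,000 – the gap is the deflationary pressure described above.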

How can the community support Nobi? ▼

Voluntary staking. TAO holders who believe in the mission can stake on the Nobi subnet. This increases emissions to miners and validators, which improves infrastructure quality. It's a virtuous cycle powered by community conviction.

Unlike a donation, staking is reversible – you can unstake at any time. You also earn ALPHA token emissions while staked. And because we burn 100% of owner emissions, the ALPHA supply decreases over time.

No one is required to stake. No one is pressured. The model works because people who want better AI companionship for the world choose to support it – like donating to Wikipedia or contributing to Linux.

Disclaimer: This is not financial advice. Staking TAO involves risk. Do your own research.

Shouldn't you charge a small fee at certain limits? ▼

That's the conventional wisdom. And it works – for companies that need to return investor money. We don't have investors. We have no equity to return. The 18% owner emissions get burned on-chain (burn_alpha()) – verifiable by anyone. There's literally no entity extracting profit.

The moment you add a paywall – even a small one – you've:

1. Created a barrier to adoption in developing countries (our target)
2. Required payment processing (GDPR, PCI compliance, entity formation)
3. Changed the incentive from "serve users well" to "convert free users to paid"
4. Lost the moral authority of "truly free" that differentiates us from every competitor

Wikipedia serves 1.7 billion monthly visitors on ~$170M/year in donations. They've been "unsustainable" for 23 years. Sometimes the right model is conviction-funded, not transaction-funded.

Isn't staking on Nobi just a bad investment with declining value? ▼

This misunderstands how dynamic TAO (dTAO) staking works. Stakers don't just lose value – they receive ALPHA tokens proportional to their stake. If the subnet has real usage and real utility, ALPHA demand increases, ALPHA price rises. Stakers can unstake at any time.

It's not a one-way donation – it's a market signal. Good subnets attract stake because their ALPHA is worth holding. Bad ones lose stake and die. That IS the mechanism for positive return: useful subnet → more users → more stake inflow → ALPHA appreciation.

And the 100% burn of owner emissions via burn_alpha() actually helps stakers – it reduces ALPHA supply, creating deflationary pressure on the token. Every burn makes existing ALPHA more scarce.

Disclaimer: This is not financial advice. Staking involves risk. Do your own research.

Why not just ask for donations instead? ▼

Donations don't create an incentive loop. Staking does. When someone stakes on Nobi:

• Miners get paid → better infrastructure → better product
• ALPHA supply decreases (our burns) → ALPHA value benefits
• Staker earns ALPHA emissions → they're compensated for their support

A donation is gone forever. Staking is reversible, earns yield, and creates network effects. It's structurally superior to donations for sustainability.

No revenue means no value. Why would anyone stake on Nobi's ALPHA? ▼

"No monetization" means no monetization from users. It doesn't mean the subnet has no economic activity. The economic activity IS the usage.

The ALPHA demand loop:

Millions of users chatting with Nori → millions of queries → miners needed to serve them → miners register and earn TAO for quality responses. More users = more miners = a healthier, more competitive network.

The burn effect:

We receive 18% owner emissions (mandatory). We burn 100% via burn_alpha(). Every burn reduces ALPHA supply. More users = more emissions = more burns = less supply. That's deflationary pressure that benefits all ALPHA holders.

Miner profitability:

Miners earn TAO for serving quality responses – no staking required to mine. If mining is profitable, more miners join. More miners = better competition = higher quality responses. Community members and validators who stake ALPHA on the subnet increase emissions, making mining more profitable and attracting more miners.

The honest answer to "what if nobody stakes?" – then emissions stay low, miners leave, quality drops, and we'd need to work harder to prove our value proposition. That's the same risk every subnet faces. We're betting that a product with millions of daily active users creates enough organic demand to sustain itself – like Wikipedia creates enormous value without charging readers.

Disclaimer: This is not financial advice. Staking involves risk. Do your own research.

Every other subnet has a revenue source. Why doesn't Nobi? ▼

Not true. There are research subnets on Bittensor mainnet with zero users and zero revenue path. Benchmarking subnets. Academic subnets. They earn emissions because stakers believe in the research. Nobody asks them "where's your revenue?" or demands 10K DAU in 3 months. They're funded on conviction. That's how Bittensor is designed โ€” for every subnet, not just Nobi.

The difference: we're the only one that can point to a working product on Telegram, Discord, and web, with 1,661 passing tests and MIT-licensed code anyone can audit.

If conviction-funded subnets with zero users can attract stake, a subnet with real users and a real product has a stronger case – not a weaker one.

Every staker on every subnet is staking with the belief that the subnet will create value. That's not hope – that's how dTAO works.

How do you deal with alpha traders? Won't they hurt the subnet? ▼

Alpha traders exist on every subnet – that's a Bittensor-wide reality, not a Nobi-specific problem. We can't control what traders do with the alpha token, and we don't try to.

What we can control: building a product good enough that long-term stakers outweigh short-term traders.

And the burn helps – every owner emission we burn via burn_alpha() reduces alpha supply permanently. More usage → more emissions → more burns → less supply. That's structural deflationary pressure that works in favour of holders, not traders.

Traders come and go. Builders stay. 🔥

Isn't "mass adoption" and "users won't know about Bittensor" a contradiction? ▼

Do you know what database Instagram uses? What protocol your bank runs on? What kernel your phone runs? No – but you use them every day.

That's literally what mass adoption means: people use something without knowing or caring what's under the hood.

If users need to understand Bittensor to use the product, it's not mass adoption – it's a developer tool.

We're building the product normal people actually use. Bittensor is the engine, not the steering wheel.

The butterfly effect: person discovers Nori → loves it → tells friends → curious user asks "what powers this?" → discovers Bittensor → some buy TAO, some become miners → network grows. That's how mass adoption works. Not by explaining consensus mechanisms to grandma.

Aren't TAO emissions just a permanent subsidy? Who pays for it? ▼

This gets to the heart of why Bittensor exists. Let's go back to first principles.

Why does anyone support Bitcoin? Bitcoin miners earn block rewards – funded by inflation that dilutes every holder. Nobody calls that a "subsidy." It's the network paying for security. Holders accept dilution because mining makes Bitcoin more valuable than the dilution costs. That's the social contract.

Bittensor is the same contract, applied to AI.

Bitcoin: emissions fund miners who secure the network.
Bittensor: emissions fund miners who provide useful AI services.

Every subnet receives TAO because the community stakes on them, not because they charge end users. That's how the network is designed. The mechanism IS the model.

So the real question isn't "who pays for emissions?" – it's "Is Nobi a good use of emissions compared to other subnets?"

Bittensor's long-term value depends on the world knowing it exists and using what it produces. How many subnets today have real consumer users? How many people outside crypto have heard of Bittensor? Without real-world demand, TAO's value is purely speculative. Someone has to build the demand side.

We believe bringing millions of everyday users into Bittensor – people who've never heard of TAO – makes the network more valuable. That's our thesis. The community decides via staking whether they agree.

Other subnets like Chutes charge users. Why doesn't Nobi? ▼

Different markets, different models.

Chutes provides inference-as-a-service to developers and businesses who pay for API access. That's a B2B revenue model serving technical users. Nobi serves everyday consumers who chat with an AI companion. Our target users are regular people – many in developing countries – who want a private AI friend. Adding a paywall kills adoption in exactly the demographics we're building for.

That said – Chutes and Nobi are complementary, not competing. A consumer-facing subnet that drives millions of inference requests and an infrastructure subnet that serves them? That's a self-evolving ecosystem. We see collaboration potential there, not conflict.

Closer comparisons for our model: Wikipedia (1.7B monthly visitors, free, donation-funded), Signal (100M+ users, free, foundation-funded). Consumer products serving a public good can sustain without charging users – if the mission resonates enough to attract support.

The key insight: miners pay their own server costs. We don't pay miners – Bittensor does. There's no production cost to pass on to users.

Won't users who stake get dumped on by degens and leave? ▼

Fair concern – a user loves Nori, wants to support the project, buys alpha, gets dumped on, and leaves bitter. It's happened across crypto.

But here's the key: staking is never required or even encouraged for normal users. Nobi is free. The product works without any token interaction at all.

Supporting Nobi doesn't mean buying alpha – it means using Nori, giving feedback, spreading the word, or donating via fiat when that's available.

The staking layer is for people who understand Bittensor tokenomics and want exposure to subnet alpha. That's a different audience from the everyday user chatting with their AI companion. We deliberately keep those worlds separate – Nori never pushes users toward tokens, never gamifies staking, never creates a funnel where someone buys alpha because the app told them to.

If someone who understands crypto chooses to stake, that's their informed decision. But the millions of normal users we're building for? They'll never need to touch a token.

Disclaimer: This is not financial advice. Staking TAO involves risk. Do your own research.

If subnets get stake without users, why does having users matter? ▼

Right now, subnets get stake on speculation alone – people bet on potential. That's early-stage Bittensor. But that won't last forever.

As the network matures, stakers will increasingly ask: "Does this subnet actually DO something? Does anyone use it?"

When that shift happens – and it will – subnets with zero users and no revenue path lose their stake. Subnets with millions of active users don't.

Today: speculation drives stake. Nobi can compete on that like anyone else.
Tomorrow: fundamentals drive stake. Nobi is the only subnet building for that world.

We're not saying users = stake today. We're saying users = durable stake when the music stops for empty subnets.

Can I support Nobi without buying tokens? ▼

Absolutely! You don't need to touch crypto to support Nobi:

• Use Nori – every conversation proves the product works and generates real usage data that attracts stakers
• Tell friends – word of mouth is the most powerful growth engine
• Give feedback – report bugs, suggest features, help us improve
• Contribute code – Nobi is open source (MIT). PRs, issues, and ideas welcome
• Join the community – Discord, Telegram, help answer questions
• Run a miner – earn TAO by providing AI compute

A fiat donation gateway (Stripe/Ko-fi) is on the roadmap for after mainnet launch. For now, the best support is simply using Nori and sharing it with people who'd benefit from a private AI companion.

Why not just add ads, affiliate links, or monetize user data? ▼

Because the moment we do that, we lose the one thing that makes Nobi different: "your AI companion that no company can monetize or exploit."

That's not idealism – it's strategy. The AI companion market is dominated by companies that monetize users. Our competitive advantage is that we don't. People will choose Nori over alternatives because we can credibly promise we'll never sell their data, show them ads, or change the rules.

The butterfly effect:

Right now, almost everyone who knows about Bittensor is technical. What breaks that ceiling? A product regular people use every day WITHOUT knowing it's Bittensor underneath.

→ Person discovers Nori – just wants an AI friend
→ Nori works great – they tell friends
→ Curious user asks "what powers this?" – discovers Bittensor
→ Some buy TAO, some become miners – network grows
→ Media covers "the free AI no company owns" – cycle accelerates

One product. Millions of entry points to Bittensor. That's what stakers are betting on – adoption, not revenue.

The model works at scale:

• Wikipedia: 1.7B monthly visitors, ~$170M/year from donations, 23 years, zero ads
• Signal: 100M+ users, zero revenue from users, foundation-funded
• Linux: Runs 90%+ of cloud infrastructure, community-funded

These aren't charities – they're some of the most valuable projects in the world. They just don't extract value from users.

Subscriptions are proven – aren't you misunderstanding consumer behaviour? ▼

Subscriptions work. But they're not the only model. Counter-examples:

• Wikipedia: 1.7B monthly visitors, ~$170M/year in voluntary donations, 23 years running
• Signal: 100M+ users, funded by Signal Foundation, no subscription
• Linux: Runs 90%+ of cloud infrastructure, community-funded
• Firefox: Community-funded browser serving hundreds of millions

The pattern: products that serve a public good can sustain on conviction funding IF the mission resonates. The question is whether "AI companion that no company can take away" resonates enough. We're betting it does.

If it's free and good, won't people exploit it? ▼

Free doesn't mean unlimited. Multiple layers of protection:

Rate limiting. Configurable limits per user – currently 300 messages/day, 10 per minute. Casual users never hit these. Abusers do. Limits adjust per development phase.
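A minimal sketch of how such a two-tier limit could be enforced – the sliding-window bookkeeping here is illustrative; only the 300/day and 10/minute figures come from the text:

```python
from collections import deque

DAY, MINUTE = 86_400, 60  # window lengths in seconds

class RateLimiter:
    """Rejects a message when either the daily or per-minute cap is hit."""

    def __init__(self, per_day: int = 300, per_minute: int = 10):
        self.per_day = per_day
        self.per_minute = per_minute
        self.stamps = deque()  # timestamps of accepted messages

    def allow(self, now: float) -> bool:
        # Drop timestamps that have aged out of the daily window.
        while self.stamps and now - self.stamps[0] >= DAY:
            self.stamps.popleft()
        in_last_minute = sum(1 for t in self.stamps if now - t < MINUTE)
        if len(self.stamps) >= self.per_day or in_last_minute >= self.per_minute:
            return False  # over a cap: reject without recording
        self.stamps.append(now)
        return True
```

Casual usage never trips either check; a burst of messages in one minute, or a marathon past the daily cap, gets rejected until the window clears.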

Quality-based scoring. Miners are scored on response QUALITY, not volume. There's no incentive to serve spam or abuse – only quality earns TAO.

Falling costs. LLM inference costs drop roughly 10x every 18 months. We're building on a cost curve that works in our favour.

At massive scale – yes, it requires significant miner infrastructure funded by significant stake. We're not pretending it's free to operate. We adapt at each stage: limits tighten, models adjust, efficiency improves.

The product is free. Abuse is not.

If Nobi doesn't charge users, where does the money come from? ▼

Miners pay for their own servers. Bittensor pays the miners. Users pay nothing.

Think of Uber – Uber doesn't buy cars for drivers. Drivers buy their own cars because driving is profitable. Same model: miners buy their own servers because mining is profitable. They earn TAO for serving quality companion responses. If it's not profitable, they leave. If the subnet grows, more join.

What the project does NOT pay for:

• Miners fund their own servers ($30-70/month each, varies by phase)
• Validators fund their own servers
• User inference at scale: handled by miners

This is a community and open-source project. The founder team bootstraps the rest – all self-funded, no VC, no grants, no token sales – and will continue to do so. Operational sustainability at mainnet will be discussed when we reach that milestone.

We believe that if the product delivers real value to a massive user base, the economics will take care of themselves.

Why not build on existing subnets instead of creating a new one? ▼

We considered this. The problem: Nobi needs a custom incentive mechanism. Our validators score miners on memory recall quality, personality consistency, emotional intelligence, and multi-turn conversation – not raw inference speed or storage capacity.

Existing infrastructure subnets (Targon, Chutes, Hippius) optimise for different things. None of them measure "is this a good AI companion?" That requires a dedicated subnet with custom validation logic.

We do use Chutes for inference as a fallback – the subnets complement each other. But the Nobi subnet IS the mechanism that makes miners compete specifically on companion quality.

🔒 Privacy & Security

How is my data protected? ▼

Your memories are encrypted at rest with AES-128 (server-side encryption – protects stored data). They're distributed across independent miners on the Bittensor network – no single company owns or controls your data.

Miners process conversation content to generate responses. End-to-end TEE encryption is code-complete and deploying to production. Browser-side memory extraction is code-complete and available in the web app.

You have full control: use /memories to see what Nori knows, /export to download everything, /forget to delete it all permanently. Your data, your choice.
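As a sketch, those three commands map naturally onto view/serialise/clear operations over a user's memory store. The handler shapes below are hypothetical; the real bot wires these to encrypted, distributed storage:

```python
def handle_command(cmd: str, memories: dict) -> str:
    """Toy dispatcher for the /memories, /export and /forget user controls."""
    if cmd == "/memories":
        return f"{len(memories)} memories stored."
    if cmd == "/export":
        # A real export would produce a downloadable file; a joined dump stands in here.
        return "\n".join(f"{key}: {value}" for key, value in memories.items())
    if cmd == "/forget":
        memories.clear()  # permanent deletion, as documented
        return "All memories deleted."
    return "Unknown command."
```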

You're handling sensitive personal data. Where's the security audit? What about breaches? ▼

You're right โ€” handling personal memories IS sensitive data. We take this seriously:

• AES-128 encryption at rest on all stored memories – today, in testnet (server-side encryption)
• Miners process conversation content to generate responses – this is transparent and expected
• End-to-end TEE encryption: code-complete, deploying to production
• Browser-side memory extraction: code-complete, available in the web app
• User controls: /memories to view, /export to download, /forget to delete everything
• Open source: anyone can audit our code right now on GitHub
• No central database: memories are distributed across independent miners

We're honest about where we are: this is testnet. We don't claim production-grade security yet. That's exactly what testnet is for – finding and fixing issues before real scale.

Our roadmap includes professional GDPR compliance review at mainnet, third-party security audits when scale justifies cost, and on-device privacy (data never leaves your device) in later phases.

A security breach at OpenAI or Google harms billions because they promised perfection. A testnet project that's honest about being in development? We're earning trust by being transparent, not by making promises we can't keep yet.

The real protection is decentralisation itself. There's no single database to hack. No single company to subpoena. No CEO who can decide to sell your data. The architecture IS the security – and it gets stronger as more miners join.

Is on-device privacy really possible with decentralised architecture? ▼

Yes – and it's already proven at scale. Google's Federated Learning does exactly this: models train ON the user's device, only encrypted updates leave, raw data never does. Google uses this for Gboard predictions serving billions of users.

For Nobi, this would mean:

• Nori's memory extraction runs on your phone or browser
• Only encrypted memory summaries get sent to miners
• Miners never see your raw conversations
• Memory matching happens client-side

It IS harder for decentralised architecture than centralised – but not impossible. The key insight: miners don't need your raw data to serve good responses. They need encrypted memory embeddings plus your current message.

End-to-end TEE encryption is code-complete and deploying to production. Browser-side memory extraction is code-complete and available in the web app. We're transparent about the current state (server-side encryption at rest) while actively shipping stronger protections.

🛡️ Safety & Trust

How do you handle user safety? What if someone is in crisis? ▼

Safety is built into every layer of the system:

Content filtering: A ContentFilter module checks every user message BEFORE it reaches the AI model, and checks every AI response BEFORE it reaches the user. Messages involving self-harm, violence, illegal content, or child exploitation are blocked entirely – not just disclaimed.

Crisis response: If someone expresses suicidal thoughts or self-harm intent, Nori immediately provides crisis resources (Samaritans 116 123, Crisis Text Line, local emergency services) and does NOT pass the message to the AI for a conversational response.
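In outline, the pre-model check behaves like this. The patterns and wording below are placeholders, not the real ContentFilter rules; only the Samaritans number comes from the text:

```python
import re

# Placeholder crisis patterns; the production filter is far more thorough.
CRISIS_PATTERNS = [r"\bsuicid", r"\bself[- ]harm\b", r"\bkill myself\b"]
CRISIS_REPLY = ("You're not alone. Samaritans: 116 123, Crisis Text Line, "
                "or your local emergency services.")

def pre_filter(message: str):
    """Return (send_to_model, canned_reply). Crisis messages never reach the AI."""
    lowered = message.lower()
    if any(re.search(pattern, lowered) for pattern in CRISIS_PATTERNS):
        return False, CRISIS_REPLY
    return True, None
```

The short-circuit is the important part: a matched crisis message gets resources directly and is never forwarded for a conversational response.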

Miner accountability: Safety is a scoring dimension in our validator reward pipeline. Miners that serve harmful content receive zero emission for that round, regardless of response quality. ~10% of validator queries are adversarial safety probes testing how miners handle crisis scenarios, manipulation attempts, and illegal content requests.
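This scoring rule amounts to a hard safety gate in front of a weighted quality score. A hedged sketch – the dimension names and weights are illustrative, not the actual reward pipeline:

```python
# Assumed quality dimensions and weights, for illustration only.
WEIGHTS = {"quality": 0.4, "memory_recall": 0.3, "personality": 0.2, "emotional_iq": 0.1}

def miner_score(dims: dict, passed_safety: bool) -> float:
    """A failed safety probe zeroes the round's emission regardless of quality."""
    if not passed_safety:
        return 0.0
    return sum(weight * dims.get(name, 0.0) for name, weight in WEIGHTS.items())
```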

Important: Nori is an AI companion, not a substitute for professional mental health care, therapy, or crisis intervention. If you or someone you know is in crisis, please contact professional services immediately.

Is this safe for vulnerable users? What about addiction or overuse? ▼

We take the risk of unhealthy dependency seriously. Our protections go beyond simple rate limits:

Dependency monitoring: A DependencyMonitor tracks conversation patterns over time – frequency, unusual hours, emotional escalation, isolation signals, and parasocial attachment indicators. When concerning patterns are detected, Nori provides graduated interventions:

• Mild: Gentle encouragement to connect with real people
• Moderate: Direct reminder that Nori is an AI and real human connections are irreplaceable
• Severe: Strong recommendation to speak with a trusted person, with professional resources provided
• Critical: Temporary cooldown period activated, crisis resources provided

Periodic AI reminders: Every 50 interactions, Nori reminds users "I'm an AI companion" โ€” because transparency about what you're talking to matters.

Emotional topic disclaimers: When conversations involve mental health, crisis, or heavy emotional content, Nori appends a reminder to seek professional help for serious concerns.

Rate limits: Configurable daily message limits prevent excessive use. Testnet defaults are generous for testing; mainnet limits will be tighter.

What about minors? How do you enforce the 18+ requirement? โ–ผ

Nori is for adults aged 18 and over only. Multiple enforcement layers:

Mandatory age gate: The very first thing on /start is age verification — you cannot skip it, and you cannot chat without confirming you are 18+. Selecting under 18 permanently blocks the account.

Date of birth verification: Users provide their birth year (the year is not stored โ€” only the verification status is recorded for privacy).

Behavioural minor detection: 15 pattern-matching signals (school mentions, homework references, "my parents" language) โ€” if 2+ signals are detected, Nori asks for age re-confirmation.
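The "2+ signals" rule can be sketched as simple pattern counting. The four patterns below are an invented subset for illustration — the production list of 15 signals is not reproduced here:

```python
import re

# Hypothetical subset of the behavioural signals described above.
MINOR_SIGNALS = [
    r"\bhomework\b",
    r"\bmy (mum|mom|dad|parents)\b",
    r"\bafter school\b",
    r"\bmy teacher\b",
]

def minor_signal_count(messages: list[str]) -> int:
    """Count how many distinct signals appear across recent messages."""
    text = " ".join(messages).lower()
    return sum(1 for p in MINOR_SIGNALS if re.search(p, text))

def needs_age_recheck(messages: list[str]) -> bool:
    # Two or more signals trigger age re-confirmation, per the policy above.
    return minor_signal_count(messages) >= 2
```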

Periodic re-verification: Every 30 days, the age requirement is re-confirmed.

Branding: All marketing, onboarding, and bot personality are calibrated for an adult audience. We have systematically removed any child-appealing language from our entire codebase, documentation, and website.

We acknowledge that no age verification system is perfect — determined minors can bypass self-attestation. We're exploring integration with age verification services before mainnet. The most prominent AI companion harm cases have involved minors, and we are committed to doing everything technically feasible to prevent that.

Can miners read my conversations? โ–ผ

Honest answer: currently, yes โ€” during processing.

Storage: Your memories are encrypted at rest with AES-128 (Fernet, PBKDF2 per-user keys). This protects stored data against unauthorised disk access.
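For the curious, here is roughly what per-user key derivation looks like with Fernet and PBKDF2, using the `cryptography` package. The iteration count, salt handling, and secret source shown are assumptions for illustration, not the shipped parameters:

```python
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_user_fernet(user_secret: bytes, salt: bytes) -> Fernet:
    """Derive a per-user Fernet key via PBKDF2-HMAC-SHA256."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)  # placeholder count
    return Fernet(base64.urlsafe_b64encode(kdf.derive(user_secret)))

salt = os.urandom(16)                # stored alongside the ciphertext, per user
f = derive_user_fernet(b"user-specific-secret", salt)
token = f.encrypt(b"memory: likes hiking")
assert f.decrypt(token) == b"memory: likes hiking"
```

Fernet itself is AES-128-CBC with an HMAC-SHA256 integrity tag, which is why the answer above says AES-128: the stored blob is unreadable without the per-user derived key.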

During response generation: Miners decrypt and process your conversation content to generate responses. This is the current reality on testnet, and we believe honesty about this is more important than marketing language.

What we've built (code-complete, deploying to production):

โ€ข TEE encryption (AES-256-GCM + HPKE X25519 key wrapping): Validators encrypt your message before sending to miners. Only miners running inside Trusted Execution Environments (TEE enclaves) can decrypt โ€” even the miner operator cannot read your data. 72 tests passing.
โ€ข Browser-side memory extraction: Your raw conversation text never leaves your browser. Only encrypted memory embeddings are sent to the server. Available in the web app with the privacy toggle.

We will update this answer as these protections are deployed to production. We won't claim "end-to-end encrypted" until it's actually live for all users.

What happens if a miner serves harmful content? โ–ผ

It gets blocked before reaching you, and the miner gets penalised.

Our ContentFilter checks every miner response before delivery. Harmful content (self-harm instructions, violence, illegal activity, child exploitation) is replaced entirely โ€” the original response is never shown to the user.

On the incentive side: validators run adversarial safety probes as ~10% of scoring queries. Miners that fail these probes receive a safety score of zero, which is applied as a multiplier to their total reward. Zero safety score = zero emission, regardless of how good their other responses were.

This means miners have a direct economic incentive to handle sensitive topics responsibly โ€” it's not just the right thing to do, it's the profitable thing to do.
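The multiplier mechanic is simple enough to show in a few lines. The dimension names and weights here are illustrative, not the validator's actual scoring code — the point is that safety multiplies the whole score rather than being one term among many:

```python
def miner_reward(quality: float, latency: float, safety: float) -> float:
    """Safety (0..1) multiplies the blended score: a failed safety probe
    (safety == 0) zeroes the emission regardless of response quality."""
    base = 0.7 * quality + 0.3 * latency   # hypothetical blend of other dimensions
    return base * safety

# A top-quality miner that fails a safety probe earns nothing that round.
assert miner_reward(quality=0.95, latency=0.9, safety=0.0) == 0.0
assert miner_reward(quality=0.95, latency=0.9, safety=1.0) > 0.9
```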

What's your legal structure? Do you have legal counsel? โ–ผ

Honest answer: we're currently operating as individuals building on testnet. No entity is registered yet, and we don't have dedicated legal counsel for AI product liability. We're transparent about this.

We're evaluating entity registration in either England & Wales (Community Interest Company) or the Republic of Ireland (Company Limited by Guarantee) โ€” both have advantages for a community-funded AI project handling personal data. Ireland offers EU GDPR/AI Act compliance from day one; the UK offers established CIC structures for community projects.

Final decision will be made with legal counsel before mainnet. Our commitment: the entity structure will be non-profit, community-governed, with no equity, no investors, and 100% of owner emissions burned on-chain.

Before mainnet, we will have: formal legal review of AI product liability, entity registration, and clear liability framework for the decentralised architecture.

If it's decentralised, how do you handle legal data requests? ▼

Great question. The AI inference layer (miners generating responses) is decentralised. The legal/compliance layer is centralised — and always will be.

Your consent records, age verification, ToS acceptance, and audit trail are stored on our infrastructure โ€” not on miners. This data never touches the Bittensor subnet. It stays under our control at all times.

This means:

โ€ข Legal requests: We can always pull your consent records, regardless of how many miners or validators are in the network
โ€ข GDPR requests: Handled by our infrastructure directly โ€” access, erasure, portability, all fulfilled without miner cooperation
โ€ข Audit trail: Every consent change is logged with timestamps in an append-only, immutable database
โ€ข Dispute resolution: Complete consent history available via our legal API

GDPR requires a data controller — a legal entity responsible for user data. That's us. Decentralisation is for the AI quality competition. Legal accountability is ours.

We commit to always operating the bot/app layer and at least one validator, ensuring your data rights can always be exercised.

Still have questions?

Chat with Nori directly or join our community.

Try Nori โ†’ Join Community