Why I Canceled My App “mymasal”

Published on April 26, 2025

Tags: Article, Reflection, AI, AI Ethics, Personalization, Digital Parenting, Social Polarization, Behavioral Design

Artificial intelligence, personalization, recommendation algorithms, and digital content creation have become an inseparable part of daily life. As a developer, I strive to build valuable, useful, human-centered products with these technologies. Yet what seems technically perfect can sometimes turn out to be ethically and socially problematic.

In this post, I’ll explain why I decided to cancel the “mymasal” app project.

Why Did I Cancel It?

1. Risk of Deepening Social Polarization

While developing the project and analyzing its core functionality, I realized that my system, designed to deliver personalized content to children, had the potential to create a "digital echo chamber." Producing content filtered through parents' values, cultural perspectives, and worldviews could distance children from alternative viewpoints and pass social polarization on to the next generation.

2. Limiting Children’s Critical Thinking Skills

The app’s “protective” approach could prevent children from encountering different ideas. For healthy development, children need to engage with various perspectives, develop critical thinking skills, and form their own opinions. Over-personalized and filtered content risked hindering this natural developmental process.

3. Normalizing Divisive Parental Attitudes

The most disturbing realization was that the app’s “protect my child” narrative could provide technological legitimacy to parents’ divisive attitudes. This meant a well-intentioned parenting practice could, over time, turn into a tool reinforcing social division.

4. Social Responsibility of Technology

For developers, the social impact of what we build matters as much as its technical functionality. A personalization technology as powerful as "mymasal," if misused or if it evolved in unexpected directions, could cause irreparable damage to our social fabric.

Conclusion

This decision wasn’t easy; canceling a technically impressive and commercially promising project never is. But as a developer aware of technology’s power, I must weigh the societal impact of what I create against the kind of world I want to live in. As Gandhi said, I must be the change I wish to see in the world.

Perhaps in the future we’ll find ways to design such personalization technologies so that children can encounter perspectives that are both safe and diverse. Until then, I will focus on using current technologies more consciously and responsibly.

Note: Below, I share the original project proposal for the “mymasal” app I canceled. I share this as part of my personal development and as a reference for other developers who may face similar ethical dilemmas.


mymasal

Overview

mymasal is an AI-powered application that generates personalized tales and lullabies. The app produces content based on children’s personal traits, parental values, and cultural or geographic differences. Parents can quickly generate ready-made content using information they’ve previously provided, or they can fully customize tales and lullabies by specifying topics, subtexts, characters, and events. Narration is delivered via high-quality AI TTS, so both educational and entertaining content reaches children, while parents maintain control and guidance.

Core Features

ON-DEMAND Personalized Experience

Child Profile: During onboarding, details such as the child’s age, name, friends’ names, school, favorite fruits and animals, superhero choices, fears, and what makes them happiest are gathered.

Parent Profile: The parent is asked for information that will shape the tales and lullabies for the child—values, sociocultural identity, ethics, worldview, interests, priorities (tradition, progressiveness, sports, art, science, family ties, analytical thinking, etc.), and cultural-geographical background.

One-Time Registration: This information is collected only once at the beginning, stored on the user’s device, and referenced throughout the app’s use.
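The proposal doesn’t specify a data model for the one-time, on-device profile store, so here is a minimal sketch of what it might look like. All names (`ChildProfile`, `ParentProfile`, `PROFILE_PATH`, the field lists) are illustrative assumptions, not from the original spec:

```python
# Hypothetical sketch of the one-time, on-device profile store described above.
# Class and field names are illustrative; the proposal does not define a schema.
import json
from dataclasses import asdict, dataclass, field
from pathlib import Path

PROFILE_PATH = Path("profiles.json")  # stored locally on the user's device


@dataclass
class ChildProfile:
    name: str
    age: int
    friends: list[str] = field(default_factory=list)
    favorites: dict[str, str] = field(default_factory=dict)  # e.g. {"fruit": "apple"}
    fears: list[str] = field(default_factory=list)


@dataclass
class ParentProfile:
    values: list[str] = field(default_factory=list)  # e.g. ["science", "family ties"]
    cultural_background: str = ""
    priorities: list[str] = field(default_factory=list)


def save_profiles(child: ChildProfile, parent: ParentProfile,
                  path: Path = PROFILE_PATH) -> None:
    """Persist both profiles once at onboarding; later sessions only read."""
    path.write_text(json.dumps({"child": asdict(child), "parent": asdict(parent)}, indent=2))


def load_profiles(path: Path = PROFILE_PATH) -> dict:
    """Return the stored profiles as a plain dict for prompt construction."""
    return json.loads(path.read_text())
```

Keeping the data as a local JSON file matches the proposal’s claim that profiles stay on the user’s device rather than on a server.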

Tale & Lullaby Generation

  • Credit or Membership System: Each tale/lullaby is produced in exchange for credits. With a membership, users receive a daily quota of free content.
  • Fully Customizable Tales: Parents can manually define story topic, characters, plot, subtext, and messages. The general tone of the tale (e.g., thrilling, calming, educational) can be adjusted with a tone slider.
  • Readability Level Selection: Optionally, tales can be written in a “decodable” language suitable for the child’s age.
  • Premium Narration: Paid options include narration by a favorite character’s voice or the parent’s own voice.
  • Interactive Quiz Module: Some tales include quizzes at the end to enhance reading/listening comprehension.
  • Special Needs: For children with special needs (ADHD, dyslexia, PTSD, etc.), story content and narration are specially adapted.
  • Badge System: Children earn badges as they hit milestones in reading or listening, helping to track progress and boost motivation.

Adaptive Curriculums & Social Sharing

  • Adaptive Preset Curriculums: The app offers preset packages (e.g., potty training or 2nd-grade science) automatically tailored to the parent and child profile, turning each into educational, fun, and developmentally supportive tales and lullabies.
  • Explore & Share: Users can share stories with others in the “Share for Other Children” section. Parents can choose to disable this feature. Users earn free credits based on the number of likes their tales receive, boosting social interaction and quality content.

Lullaby Center

  • Ready & AI-Generated Lullabies: Both pre-made and dynamically generated lullabies with personalized lyrics are available.
  • Special Character Voices: Paid modules offer lullabies and tales narrated by favorite character voices.

Global and Local Support

  • Localization: The app initially targets the Turkish market and Turkish-language content, but aims for multilingual support and global distribution.
  • Cultural Adaptation: Built-in tales and curriculums consider geography, culture, and local values for meaningful content in every region.

Cost & Technical Details

  • TTS and API Costs: Producing a 10-minute tale uses ~20K tokens (~5 cents output, ~3 cents verification, ~2 cents input), about 10 cents for text. Multilingual TTS adds ~1.5 cents per minute. Ten minutes of content, narration, and visuals comes to about 30 cents; at 10 minutes per day, roughly $9/month.
  • Scalability: Per-unit costs may be high at first because development speed is prioritized, but more economical providers can be integrated later.
  • Text-Only Books: Parents can choose text-only content, read to their child, or download as a PDF.
  • Delayed Production Options: Lower-cost, delayed content production (2-6 hours) via batching can be selected, saving credits.
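The cost figures above can be sanity-checked with a quick back-of-the-envelope calculation. All prices are the proposal’s own estimates, not verified vendor rates, and the visuals line item is an assumption introduced here to bridge the gap between the 25-cent text-plus-TTS subtotal and the quoted ~30-cent total:

```python
# Back-of-the-envelope check of the cost figures quoted in the proposal.
# Prices are the proposal's estimates, not verified vendor rates.
text_cost = 0.05 + 0.03 + 0.02   # output + verification + input, per 10-minute tale
tts_cost = 0.015 * 10            # ~1.5 cents/min of multilingual TTS, 10 minutes
visuals_cost = 0.05              # assumed remainder to reach the quoted ~30 cents

per_tale = text_cost + tts_cost + visuals_cost  # one 10-minute tale
monthly = per_tale * 30                         # one tale per day for a month

print(f"per tale: ${per_tale:.2f}, monthly: ${monthly:.2f}")
```

Under these assumptions the numbers are internally consistent: ~30 cents per tale and ~$9 per month for a daily 10-minute habit, which matches the figures quoted above.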

User Experience

  • Onboarding: Detailed one-time data collection for both parent and child, used as the reference for all future content generation.
  • Content Generation: The app generates personalized stories as the child answers questions under parental guidance.
  • Listening Process: High-quality TTS narrates tales and lullabies, with parents able to intervene as needed.

Development Suggestions

  • Customized Story-Based (RPG) Games: Beyond tale/lullaby generation, interactive story-based RPG modules can be developed, letting children shape their own adventures.
  • Story Completion: Interactive sections let the child add their own ideas to complete the story, boosting creativity and thinking skills.
  • Multi-Character and Game Scenarios: Modules can allow different characters from tales to interact in new storylines.
  • Print Order: Users can order their stories as physical books.

User Use Case Scenarios

Scenario 1: Fast, Ready-Made Content

User: Parent
Summary: The parent uses previously saved profiles to generate a ready-made tale or lullaby with one tap. The child enjoys the AI-narrated content.
Features:

  • One-tap content generation
  • Preset curriculums and profile adaptation
  • High-quality TTS narration

Scenario 2: Highly Personalized Content Creation

User: Parent (with child)
Summary: The parent enters details like topic, subtext, characters, and events to create a fully personalized story, supported by interactive quiz sections.
Features:

  • Detailed customization
  • Extra interactive elements (quiz, info)
  • Premium narration and story flow

Scenario 3: Content for Situational & Emotional Needs

User: Parent
Summary: The parent observes the child’s emotional or social challenges (e.g., not sharing toys) and creates themed tales for that moment, including elements to support empathy and social skills.
Features:

  • Real-time, emotion/behavior-based content
  • Themed tale/lullaby options
  • Flexible use of profile data

Scenario 4: Ongoing Content via Packages

User: Parent
Summary: The parent creates content packages (e.g., for sleep problems due to bird phobia) to help the child build habits and overcome fears through regularly played tales/lullabies.
Features:

  • Packaged content generation
  • Scheduled content delivery
  • Special content modules for sleep, fears, etc.

Scenario 5: Adaptive Curriculum-Based Educational Content

User: Education-focused parent
Summary: The parent selects an adaptive curriculum package (e.g., potty training or 2nd-grade science), personalized based on age and interests, converting it into tale/lullaby content for educational, fun experiences.
Features:

  • Preset curriculum packages
  • Personalized educational content
  • Stepwise, structured format

mymasal Value Proposition

The Normalization of Divisiveness in the Digital Age: Reinforcing Social Polarization Through Parenting

In the 2020s, the extent of social polarization has become a defining feature of daily life. We experience deepening divisions at home, work, on social media, and even within families. The global shift toward the right of the socio-political spectrum is merely a symptom of broader societal breakdowns. Economic crises, geopolitical tensions, climate change, and political uncertainty increase anxiety, driving people to seek security within like-minded groups and distance themselves from “others.”

At this point, a subtle but powerful neuro-psychological and sociological mechanism kicks in: As people become more divided, they feel the need to rationalize and legitimize these exclusionary practices—reducing cognitive dissonance. In theory, most of us know discrimination and exclusion are morally problematic. But in times of heightened threat perception, in-group favoritism and out-group derogation dominate. This inner conflict pushes individuals toward a delicate psychological balance, triggering moral licensing mechanisms.

The app I proposed targets this fragile balance, mobilizing a nuanced psychological maneuver: By giving parents innovative control over the content their children consume, it indirectly legitimizes their divisive inclinations. The true success and danger of this app lies here: Rather than overtly promoting exclusion, it masterfully disguises this under universally virtuous aims like “protecting children from digital harm” and “supporting cognitive and moral development in a ‘safe’ environment.” (This can be read as a micro-level application of Foucault’s biopolitics: a technology of power operating through managing and shaping life—here, the child’s mental and emotional life.)

How does this mechanism work? Imagine, as a parent, you quietly prevent your child from exposure to content reflecting conflicting worldviews or alternative lifestyles. The most attractive function of the app isn’t to block the “undesirables” directly, but to actively produce, adapt, or prioritize content aligning with the parent’s world. This appears as constructive, positive, and proactive parenting on the surface. But a deeper analysis shows that this positive content creation inevitably serves the same exclusionary end. The parent internalizes the belief: “Alternatives to my values are harmful or unnecessary. Choosing and presenting the ‘best’ for my child is my core duty.” This experience offers a sophisticated technological legitimacy to divisive attitudes, reinforcing the parent’s habitus and constructing a symbolic universe that blocks meaningful encounters with alternatives—an effective form of symbolic violence. Sometimes, the strongest control isn’t about what’s forbidden, but about what is always presented as the only alternative.

Van der Toorn’s research¹ on system justification shows how people amplify and internalize even the slightest approval from their environment to manage moral uncertainty. In today’s hyper-polarized, distrustful climate, even the smallest technological or social cues affirming divisive attitudes bring comfort.

Our app subtly communicates: “See? Divisiveness isn’t that bad. It might even be necessary for your child’s development.” As Habermas² notes with his “colonization of the lifeworld” concept, technological rationality and algorithmic logic now permeate even the most intimate areas, such as parenting, further narrowing the public sphere.

Chua’s research³ at Stanford on “emotional technologies” shows that digital tools deeply affect users’ identity, emotional regulation, and moral positioning. Our app, while offering parents the affective reward of “being a good, conscious, protective parent,” also normalizes and reinforces their tendency for social categorization and exclusion.

Stern’s “moral self-licensing” theory⁴ explains how people, after virtuous acts, tend to shift toward more problematic moral behaviors. Similarly, by offering parents the “virtuous” act of educating/protecting their child, our app legitimizes broader divisive tendencies.

Williams’ concept⁵ of “epistemic bubbles” reveals how people become surrounded by content that confirms their worldview, leading to cognitive closure. Our app plants the seeds of division and a gap in understanding not just in the current generation, but across generations by bringing these filter bubbles into childhood itself. Ironically, this also means that the app may become popular precisely because it resonates with the current polarized mood.

From a neuropsychology perspective, Damasio’s⁶ work shows how emotional brain and somatic markers drive decision-making, with rationalization following later. The app, by activating the “protect the offspring” instinct, legitimizes divisive attitudes on a deep emotional level, bypassing rational scrutiny.

I believe this app would succeed commercially because it addresses one of humanity’s deepest psychological needs: cognitive consistency and being at peace with oneself. As Byung-Chul Han argues in his analysis of psychopolitics, individuals increasingly choose these control mechanisms of their own “free will.” In a fragmented, anxious world, people will readily cling to any tool that relieves the guilt or cognitive dissonance caused by their divisive attitudes.

Imagine: As a parent, you criticize people with different views in the morning news, belittle or mock outsiders on social media, then use an app in the evening that filters what your child consumes through your narrow worldview. The app whispers: “What you’re doing isn’t just normal—it’s responsible. It’s not divisiveness, it’s selectivity and protection. Many other ‘good parents’ are doing the same.” This small, continuous technological validation is enough to ease the tiny moral uneasiness people feel from their divisive attitudes.

Bloom’s “moral hedonism” concept⁷ at Yale explains how people develop unconscious strategies to preserve and reinforce their own moral positions. Our app embodies one of these strategies: offering the undisputed moral shield of “I’m doing this for my child’s good,” enabling reinforcement of social identity and normalization of divisive attitudes.

Prentice’s⁸ work on group dynamics and normative influence shows how people use “virtue signaling” to show loyalty and gain status within their group. Our app, under the guise of “conscious,” “digitally literate,” and “protective” parenting, offers a sophisticated virtue-signaling tool, reinforcing both group loyalty and distance from outsiders.

In today’s climate of trust erosion and social fragmentation, people seize any narrative or tool that legitimizes exclusion, suspicion, and seeing “others” as threats. In the complex geopolitical landscape of 2025, amid post-truth information and identity politics, individuals uncertain of their place in the world are drawn to tools that, under the mask of protecting their children, reinforce their tribalistic and divisive tendencies.

Finally, the most dangerous and potentially commercially successful feature of the app: It conveys that divisiveness and isolation are not just individual choices but a societal norm, through collective action. Each download, filter, and positive comment subtly communicates: “You’re not alone. Everyone’s protecting their ‘neighborhood,’ building their echo chamber. This is the new normal.” This tech-enabled norm shift helps people adopt and maintain divisive attitudes, narrowing the Overton Window of acceptable ideas.

Paradoxically, this is exactly why the app, despite its potential to further fragment society, may succeed in the market. As Greene’s⁹ work from Harvard shows, people resolve moral conflict by clinging to their “tribe.” Our app delivers this under the guise of “good parenting.”

Consider: Every parent wants to protect their child. But in the chaos of 2025, this protective instinct can be easily manipulated. By “protecting” your child from “harmful” ideas, you may unwittingly deprive them of the richness of the wider world and contribute to a cycle of renewed social division. The app’s insidious success—and its ethical danger—lies exactly here: the instrumentalization of good intentions.


¹ Van der Toorn, J. (2023). The Psychology of System Justification in Polarized Societies.
² Habermas, J. (1984). The Theory of Communicative Action.
³ Chua, S.M. (2024). Emotional Technologies and Identity Formation in Digital Age.
⁴ Stern, C. (2022). Moral Self-Licensing in Politically Charged Environments.
⁵ Williams, M. (2023). Epistemic Bubbles and the Erosion of Shared Reality.
⁶ Damasio, A. (2000). The Feeling of What Happens: Body and Emotion in the Making of Consciousness.
⁷ Bloom, P. (2021). Moral Hedonism and Social Division.
⁸ Prentice, D. (2023). Virtue Signaling as Group Dynamics.
⁹ Greene, J. (2024). Moral Tribalism in the Age of Digital Radicalization.