What You Need to Know about the Chat & Ask AI Data Breach

  • Published: Feb 11, 2026
  • Last Updated: Feb 11, 2026

Chat & Ask AI is a popular mobile application developed by Codeway, a Turkish technology company founded in Istanbul in 2020. With more than 50 million downloads across Google Play Store and Apple App Store, Chat & Ask AI has become one of the most popular AI chat applications in the world. 

The app functions as a wrapper service, providing a mobile gateway to large language models from major technology companies. Users can interact with OpenAI's ChatGPT, Anthropic's Claude, or Google's Gemini through a single interface.

In January 2026, an independent security researcher known as Harry discovered a catastrophic security vulnerability in Chat & Ask AI's backend infrastructure. The researcher found that the app's database was misconfigured, allowing anyone with basic technical knowledge to access its entire contents without any authentication. Through this vulnerability, Harry was able to access approximately 300 million messages from more than 25 million users. A detailed analysis of a sample containing about 60,000 users and over one million messages confirmed the massive scope of the exposure.

The exposed data included users' complete chat histories, the AI models they had used, custom chatbot names they had created, timestamps of their conversations, user settings and preferences, and other internal metadata. 

The researcher reported finding extremely sensitive content in the exposed messages, including discussions of illegal activities, requests for suicide assistance, mental health struggles, work secrets, and deeply personal relationship details. This type of content is particularly concerning because many users treat AI chatbots as private journals, therapists, or confidential brainstorming partners, sharing things they would never post publicly or even say out loud.

The vulnerability was not the result of a sophisticated hacking attack but rather a simple and preventable configuration error in how Codeway set up Google Firebase, the cloud-based backend service the company uses to store and manage app data. 

Harry disclosed the vulnerability to Codeway on January 20, 2026, following responsible disclosure practices. The company reportedly fixed the issue across all of its apps within hours of being notified. However, the breach also exposed data from users of other applications developed by Codeway, demonstrating that the misconfiguration affected the company's entire app ecosystem.

When Was the Chat & Ask AI Data Breach?

The exact timeframe during which the database was left exposed remains unclear. Security researchers documented that, as of January 18, 2026, Chat & Ask AI had exposed the complete chat histories of 18 million users, totaling 380 million messages. The database appears to have been misconfigured from the time the app was deployed, meaning that user data may have been accessible to anyone with the technical knowledge to find it for an extended period, potentially since the app's launch.

The vulnerability stemmed from Firebase Security Rules being left in a public state. Firebase is a Backend-as-a-Service platform provided by Google. By default, Firebase databases start secure, but developers must set rules to control access. In Chat & Ask AI's case, these rules were configured to allow public reads, meaning anyone could access the data. This well-known misconfiguration essentially left the database's front door wide open.
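To illustrate the class of mistake involved (a generic sketch of Firebase Realtime Database Security Rules, not Codeway's actual configuration, which has not been published), a rules file that allows public reads versus one that scopes access to the authenticated user looks roughly like this:

```json
// INSECURE (illustrative): any client, authenticated or not,
// can read the entire database through the public REST endpoint
{
  "rules": {
    ".read": true,
    ".write": false
  }
}

// SAFER (illustrative): each signed-in user can read and write
// only the subtree keyed by their own user ID
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

With public read rules in place, anyone who learns a project's database URL can fetch its contents over HTTPS without any credentials, which is why misconfigurations of this kind can be discovered and scraped at scale.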

Harry disclosed the vulnerability to Codeway on January 20, 2026, and the company reportedly resolved the issue across all of its applications within hours of notification. However, there is no way to determine whether other malicious actors discovered and exploited this vulnerability before Harry reported it, or how long any exposed data may have been copied, scraped, or distributed before the configuration was corrected. Once data has been exposed online, it can be copied and shared indefinitely, making it impossible to fully contain.

How to Check If Your Data Was Breached

If you have used Chat & Ask AI or any other applications developed by Codeway, your private conversations and personal data may have been exposed. Here are several ways to verify your exposure:

  • Visit the Firehound registry at firehound.covertlabs.io, a website created by security researcher Harry to help users identify apps affected by Firebase misconfigurations. The site lists hundreds of mobile applications with insecure data storage. While Codeway's apps have been removed from the list after the company fixed the vulnerability, you can check whether any other apps you use are currently exposed.
  • Review your Chat & Ask AI usage history and consider what types of conversations you had with the AI. If you discussed sensitive personal information, mental health issues, financial details, work secrets, or anything else you would not want publicly accessible, assume that information may have been exposed and take appropriate protective measures.
  • Check whether you use any other applications developed by Codeway, as the Firebase misconfiguration affected the company's entire app ecosystem. Look through your installed apps on both iOS and Android devices to identify any Codeway products beyond Chat & Ask AI.
  • Monitor your accounts and personal information for signs of misuse. While Codeway has stated it is GDPR compliant and uses enterprise-grade security, the breach contradicts these claims and raises questions about what other security measures may be inadequate.


What to Do If Your Data Was Breached

If you have used Chat & Ask AI or suspect your conversations may have been exposed, take these immediate protective steps:

  • Delete the Chat & Ask AI app from your devices immediately and stop using it until Codeway provides transparent information about the full extent of the breach, how long the data was exposed, and what additional security measures have been implemented beyond simply fixing the Firebase configuration.
  • Review everything you discussed with the AI chatbot and assess potential risks. If you shared information that could be used to identify you, compromise your security, or harm you professionally or personally, take appropriate precautions. This might include changing passwords if you discussed account credentials, alerting relevant parties if you shared confidential business information, or seeking support if you discussed mental health struggles that you would not want exposed.
  • Be extremely vigilant for phishing attempts and social engineering attacks. Criminals who obtained access to the exposed database now have detailed information about users' interests, concerns, problems, and personal situations. Be suspicious of any unexpected messages, emails, or phone calls, especially those that reference topics you discussed with the AI.
  • Contact Codeway directly to demand answers about the breach. Reach out via their support email at askaiweb@codeway.co to request information about how long your data was exposed, whether they have evidence of unauthorized access beyond Harry's disclosure, what steps they are taking to protect users going forward, and what notification or support they are providing to affected users. At the time of reporting, Codeway had not responded to media requests for comment about the incident.
  • Consider using data removal services to limit your digital footprint and reduce the amount of personal information available about you online. If your exposed AI conversations can be cross-referenced with other publicly available data about you, criminals could assemble comprehensive profiles for targeted attacks. Reducing your overall digital exposure makes these attacks more difficult.
  • Review your privacy settings and data-sharing practices with all AI tools you use. This incident demonstrates that even when the underlying AI models from companies like OpenAI or Google may be secure, the third-party wrapper apps used to access them can be serious security vulnerabilities. Always research an app's security practices and privacy policies before sharing sensitive information.

Are There Any Lawsuits Because of the Data Breach?

As of mid-February 2026, no class action lawsuits have been publicly filed against Codeway regarding the Chat & Ask AI data breach. However, the severity of the exposure makes litigation likely. The company claims GDPR compliance and enterprise-grade security, but the Firebase misconfiguration contradicts these representations.

Potential legal claims could include negligence in failing to implement basic security measures and violations of privacy regulations including GDPR and state privacy laws. The misconfiguration is particularly problematic legally because it is easily preventable. Security experts have extensively documented Firebase best practices, and Google provides clear guidance. Codeway's failure to follow basic security despite claims of enterprise-grade protection could be viewed as gross negligence.

If class action lawsuits are filed and successful, affected users could potentially recover compensation for various damages, including the disclosure of extremely sensitive personal information, mental distress caused by the exposure of private conversations, time and expense spent protecting themselves from potential identity theft or other harms, and any actual damages resulting from misuse of the exposed data.

Users interested in potential litigation should document their use of the app, save any communications from Codeway, and monitor announcements from law firms investigating the breach.

Can My Chat & Ask AI Information Be Used for Identity Theft?

Yes, absolutely. The complete chat histories exposed in this breach are potentially more dangerous than typical data breach information. Conversations users had with AI chatbots likely reveal unprecedented amounts of personal, psychological, and behavioral information. Many users treat AI chatbots as confidential sounding boards, discussing mental health struggles, suicidal thoughts, illegal activities, work secrets, financial problems, and deeply personal concerns.

Criminals can use exposed conversations to understand exactly what someone cares about, fears, and struggles with. For example, if your chat reveals worries about job security, a scammer could pose as a recruiter or claim to be from your bank with an urgent fraud alert. The attack would be precisely calibrated to your specific vulnerabilities.

The exposed data also creates blackmail and extortion risks. If you discussed anything embarrassing or professionally damaging, believing conversations were private, criminals could threaten exposure. For users who discussed work topics, the breach creates corporate espionage risks, potentially revealing valuable proprietary information or strategic plans.

What Can You Do to Protect Yourself Online?

The Chat & Ask AI breach highlights unique risks associated with AI chatbot applications. Here are specific steps you can take to protect yourself when using AI tools:

  • Use AI services that explicitly offer privacy-focused or incognito modes and guarantee they do not store your conversations or use them for training. Before using any AI chatbot app, carefully read its privacy policy and terms of service to understand exactly how your data will be stored, used, and protected. Look for apps that allow you to use the service without creating an account or that offer end-to-end encryption.
  • Never share real names, personal identifying information, or sensitive details when using AI chatbots. Treat every conversation as potentially public. Do not upload personal documents, discuss specific identifiable people or places, or share information that could be traced back to you. If you need to discuss sensitive topics, use hypotheticals and keep everything generic and impersonal.
  • Do not rely on AI chatbots for important life decisions, particularly mental health support or medical advice. While these tools can be helpful for brainstorming or general information, they have no genuine experience, empathy, or professional training. Discuss serious personal problems with qualified human professionals, not AI applications that may store your most vulnerable moments in insecure databases.
  • Never share AI conversations with others unless absolutely necessary, and be aware that in some cases shared conversations can become searchable online. Some AI services make shared conversations publicly accessible through search engines, creating an additional privacy risk beyond database breaches.
  • If using AI services from companies that also operate social media or account-based platforms, such as Meta AI, Grok, or Gemini, make sure you are not logged into your account on those platforms while using the AI. Your conversations could be linked to your profile, which often contains extensive personal information that could be cross-referenced with your AI conversations to build detailed profiles.
  • Research app developers before trusting them with sensitive information. Prefer using AI services directly from the companies that develop the underlying models like OpenAI, Anthropic, or Google, rather than third-party wrapper apps that may have inadequate security practices. Wrapper apps add an additional layer where your data can be compromised.
  • Check the Firehound registry regularly to see if any apps you use have been identified with Firebase misconfigurations or other security vulnerabilities. This tool has identified nearly 200 iOS apps with similar issues affecting hundreds of millions of users, demonstrating how widespread this problem has become.
  • Remember that AI technology is developing faster than security and privacy protections can keep pace. Always maintain healthy skepticism about privacy claims from AI services, especially newer apps from smaller companies that may prioritize speed to market over security. Even the best AI systems can hallucinate or provide incorrect information, so verify important details independently.

By following these practices and maintaining awareness of the risks, you can continue to benefit from AI tools while minimizing your exposure to data breaches and privacy violations like the Chat & Ask AI incident.
