CoinsValue.net logo CoinsValue.net logo
Cryptopolitan 2025-10-26 21:05:46

AI web browsers open the door to unseen security risks

AI browsers like Atlas from OpenAI and Comet from Perplexity promise convenience. But they come with major cybersecurity risks, forming a new playground for hackers. AI powered web browsers compete with traditional browsers like Google Chrome and Brave, aiming to attract billions of daily internet users. A few days ago, OpenAI released Atlas, while Perplexity’s Comet has been around for months. AI-powered browsers can type and click through pages. Users can tell it to book a flight, summarize emails, or even fill out a form. Basically, AI-powered browsers are designed to act as digital assistants and navigate the web autonomously. They are being hailed as the next big leap in online productivity. Security researchers flag AI browser flaws But most consumers are unaware of the security risks that come with the use of AI browsers. Such browsers are vulnerable to sophisticated hacks through a new phenomenon called prompt injection. Hackers can exploit AI web browsers, gain access to users’ logged-in sessions, and perform unauthorized actions. For example, hackers can access emails, social media accounts, or even view banking details and move funds. According to recent research by Brave, hackers can embed hidden instructions inside web pages or even images. When an AI agent analyzes this content and sees the hidden instructions, it can be tricked into executing them as if they were legitimate user commands. AI web browsers cannot tell the difference between genuine and fake user instructions. Brave engineers experimented with Perplexity’s Comet and tested its reaction to prompt injection. Comet was found to process invisible text hidden within screenshots. This approach enables attackers to control browsing tools and extract user data with ease. Brave’s engineers called these vulnerabilities a “systemic challenge facing the entire category of AI-powered browsers.” Prompt injection is hard to fix Security researchers and engineers say that prompt injection is difficult to fix. That’s because artificial intelligence models do not understand where instructions come from. They can’t differentiate between genuine and fake prompts. Traditional software can tell the difference between safe input and malicious code, but large language models (LLMs) struggle with that. LLMs process everything, including user requests, website text, and even hidden data, and treat it as one big conversation. That’s why prompt injection is dangerous. Hackers can easily hide fake instructions inside content that looks safe and steal sensitive information. AI companies admit prompt injection is a serious threat Perplexity stated that such attacks don’t rely on code or stolen passwords but instead manipulate the AI’s “thinking process.” The company built multiple defense layers around Comet to stop prompt injection attacks. It uses machine learning models that detect threats in real time and has integrated guardrail prompts that keep the AI focused on user intent. Moreover, the browser requires mandatory user confirmation for sensitive actions like sending an email or purchasing an item. Security researchers believe AI-powered browsers should not be trusted with sensitive accounts or personal data until major improvements are rolled out. Users can still utilize AI web browsers, but with no access to tools, disabled automated actions, and should avoid using them when logged in to banking accounts, emails, or healthcare apps. The Chief Information Security Officer (CISO) of OpenAI, Dane Stuckey, acknowledged the dangers of prompt injection and wrote on X, “One emerging risk we are very thoughtfully researching and mitigating is prompt injections, where attackers hide malicious instructions in websites, emails, or other sources to try to trick the agent into behaving in unintended ways.” He explained that OpenAI’s goal is to make people “trust ChatGPT agent[s] to use your browser, the same way you’d trust your most competent, trustworthy, and security-aware colleague or friend.” Stuckey said the team at OpenAI is “working hard to achieve that.” Get up to $30,050 in trading rewards when you join Bybit today

면책 조항 읽기 : 본 웹 사이트, 하이퍼 링크 사이트, 관련 응용 프로그램, 포럼, 블로그, 소셜 미디어 계정 및 기타 플랫폼 (이하 "사이트")에 제공된 모든 콘텐츠는 제 3 자 출처에서 구입 한 일반적인 정보 용입니다. 우리는 정확성과 업데이트 성을 포함하여 우리의 콘텐츠와 관련하여 어떠한 종류의 보증도하지 않습니다. 우리가 제공하는 컨텐츠의 어떤 부분도 금융 조언, 법률 자문 또는 기타 용도에 대한 귀하의 특정 신뢰를위한 다른 형태의 조언을 구성하지 않습니다. 당사 콘텐츠의 사용 또는 의존은 전적으로 귀하의 책임과 재량에 달려 있습니다. 당신은 그들에게 의존하기 전에 우리 자신의 연구를 수행하고, 검토하고, 분석하고, 검증해야합니다. 거래는 큰 손실로 이어질 수있는 매우 위험한 활동이므로 결정을 내리기 전에 재무 고문에게 문의하십시오. 본 사이트의 어떠한 콘텐츠도 모집 또는 제공을 목적으로하지 않습니다.