It is no secret that technology advances at a dizzying pace, bringing innovations that transform people's daily lives and the business environment. AI, in particular, has gone from being a promise to becoming an omnipresent tool (in both private and corporate environments). However, this rapid adoption also opens the door to new risks, ethical dilemmas, and information security challenges. How should companies position themselves? What is the responsibility of financial institutions in a landscape of increasingly sophisticated scams? And what dangers lurk in the new AI tools we use?

In episode #405 of the Segurança Legal podcast, Guilherme Goulart and Vinícius Serafim dive into these complex topics. With the clarity and depth that are the show's trademark, they analyze recent news ranging from password leaks and banking fraud to the emerging risks of AI-powered browsers and the consequences of careless use of technology by major corporations. This article summarizes the key points and insights from this essential conversation for professionals in law, IT, compliance, and security.

The AI dilemma in the corporate environment: between prohibition and "Shadow IT"

The episode begins by addressing a common pain point in many companies: the lack of clear guidelines on the use of AI tools, such as LLMs. It is a growing problem known as "Shadow AI," where, in the absence of rules, employees use whatever tools they see fit, often without proper security configurations. Vinícius Serafim highlights the two dangerous extremes that companies adopt:

"I have seen situations where the company simply says you can't use it [...] And there is also the other extreme, where people use whatever comes to mind. So there is no formal recommendation from the company."

The lack of a usage policy does not prevent the use of AI; it merely makes it invisible and uncontrolled, exposing the company to data leaks, intellectual property violations, and other legal and reputational risks.

Banking fraud and the duty of security: Brazil's STJ ruling

One of the central topics of the episode was the analysis of a recent ruling by Brazil's Superior Court of Justice (STJ) that expanded the liability of payment institutions in social engineering fraud cases. The ruling establishes that these institutions have a duty of security that goes beyond simply protecting against data breaches.

Guilherme Goulart explains that companies must analyze their customers' behavior to identify suspicious activities. In the case at hand, a customer who rarely used their account had 14 transactions carried out in a single day. The STJ found that this break in the usage pattern should have triggered an alert. According to Goulart, institutions must monitor:

  • Transactions that deviate from the customer's profile.
  • Time and location of operations.
  • Time interval and sequence of transactions.
  • Atypical loan applications.

This ruling consolidates the understanding that activity risk cannot be entirely transferred to the consumer, forcing financial institutions to invest in technologies, including AI, for the preventive detection of fraud.

The new threat: prompt injection in AI-powered browsers

The popularization of browsers with integrated AI, such as Atlas (from ChatGPT) and Comet (from Perplexity), ushers in a new era of convenience but also of vulnerabilities. Vinícius Serafim warns about the danger of prompt injection in these environments.

These browsers function as agents that can perform actions on behalf of the user, such as making purchases, sending emails, or interacting with social media. If a user accesses a page or opens a malicious file containing a prompt injection, the AI can be "convinced" to execute unauthorized commands.

"In essence, that is what these browsers can do. And then, obviously, prompt injection attacks against AI browsers started to appear. [...] The possibilities are theoretically infinite, because you have a tool that can interact with the internet on your behalf," warns Vinícius.

The risk is that the AI, interacting with the user's already logged-in accounts, can be used to steal data, make fraudulent purchases, or carry out other malicious actions without needing to steal passwords.

The "AI fiasco": when blind trust comes at a high cost

The episode closes with an emblematic case: consulting firm Deloitte had to refund 1.5 million Australian dollars to the government after delivering an AI-generated report riddled with "hallucinations," such as citations of academic articles and books that did not exist.

Guilherme Goulart uses the case to discuss the real consequences of careless AI use, connecting it to cases of lawyers in Brazil who were fined for bad-faith litigation after submitting petitions with AI-fabricated case law. This raises a fundamental question about AI-based fulfillment, that is, when and how artificial intelligence can be used in delivering a contracted service.

"We will have to start rethinking contractual arrangements to, while respecting objective good faith, make it clear and transparent when and how AI will be used," reflects Guilherme.

The lesson is clear: AI is a support tool, not a substitute for verification and professional responsibility. Lack of transparency and overconfidence can tarnish reputations and generate significant financial losses.

Conclusion

Episode #405 of Segurança Legal works as a full circle: it begins with the lack of internal AI governance and ends by showing the serious external consequences of that negligence. The discussions demonstrate that, in a technology-driven world, information security and legal compliance need to evolve constantly. Ignoring the new risks is not an option.

To deepen the discussion with practical examples and detailed analyses, listen to the full episode of Café Segurança Legal. And if you value independent, quality content about security, law, and technology, consider supporting the podcast through its crowdfunding campaign.


This post was summarized from the podcast audio using AI, with human review.