By Érica Kaori Akamine and Ana Flávia Vivarelli Molina
Finally, IARA has arrived at CARF. It is not a new member of the Council, but rather the long-awaited generative artificial intelligence established to support the adjudicatory function, announced last year.
The Administrative Council for Tax Appeals ("CARF") officially launched the initiative by publishing two ordinances: CARF Ordinance No. 142/2026, which sets out guidelines for developing and using generative AI solutions, and CARF Ordinance No. 854/2026, which establishes and approves version 1.0 of IARA (Artificial Intelligence in Administrative Appeals). In an official announcement, the Council presented the initiative as an institutional milestone in modernization, focused on the responsible, safe, and ethical use of technology in administrative tax disputes.
According to the agency, IARA was designed to help council members draft decisions, particularly by finding case law references that fit the specific cases under review. The tool is entering its final testing phase in a real-world environment, restricted to a pilot group of 24 council members for 30 days. It was developed by Serpro (Federal Data Processing Service) with technical oversight from FGV.
The initiative is, at first glance, positive: CARF did not merely formalize the use of AI in its workflow, but also surrounded the rollout with guidelines on human oversight, data protection, information security, and institutional responsibility. In its official announcement, the agency stresses that the tool aims to increase the efficiency, consistency, and technical quality of rulings without removing the responsibility of the competent authority.
This regulatory framework is key. The guidelines released by CARF highlight, among other points, a focus on the human element, the protection of personal and sensitive data, the reasonable duration of proceedings, and the need for effective, periodic, and proper human supervision throughout the AI solutions' lifecycle. In other words, the technology is presented as a support tool rather than a replacement for the judge's decision-making.
That said, this update should be approached with caution. In tax matters, especially complex ones, AI may find precedents that are semantically similar without necessarily capturing the specific legal reasoning of the dispute, the procedural context, or how well the precedent actually applies to the case at hand. At CARF, this caution is even more important because many disputes have a heavy factual component and depend on a detailed analysis of evidence and the specifics of each case; the debate is not limited to questions of law. This risk is consistent with the model CARF adopted, which requires human review of any result generated by the tool.
In practice, this means IARA can be quite useful for speeding up research and organizing case law references, but it does not eliminate the need for critical analysis. If the tool suggests grounds or precedents that do not quite fit the matter being judged, and that mismatch makes its way into the ruling's reasoning, it may invite further challenges and produce a "backfire" effect: litigation prolonged by procedural detours such as motions for clarification or interlocutory appeals, which can represent a step backward or, if those challenges are not admitted, leave fragile decisions standing.
This point deserves special attention, considering that CARF's Internal Regulations allow Motions for Clarification (Embargos de Declaração), to be filed within five days of notice of the decision, when a ruling contains obscurity, omission, or a contradiction between the decision and its grounds, or when a point that should have been addressed by the Panel was overlooked. Furthermore, CARF's own regulatory framework contemplates Motions for Clarification with modifying (infringing) effects, which reinforces that significant flaws in reasoning can, in theory, open the door to attempts to alter the judgment.
The same applies to Interlocutory Appeals (Agravos), which are available when a Special Appeal is denied or only partially admitted. These are often necessary precisely when a superficial analysis misses the specifics of the case reflected in the paradigm rulings: subtleties that IARA, despite bearing a person's name, may not yet have the "personality" to catch.
With these caveats in mind, the establishment of IARA is a positive and welcome step forward, mainly because it comes with a formal concern for governance, security, and human oversight. In complex topics, AI can be a great support tool, provided it doesn’t replace careful reading, contextual examination, and rigorous control over the legal reasoning.
This content is provided for informational purposes only and does not constitute legal advice. The application of this information depends on the analysis of each specific case.