
Exploring Efficient Truncation Strategies for Case Dismissal Prediction using Transformer Models

Publication Type : Conference Paper

Publisher : IEEE

Source : 2025 5th International Conference on Intelligent Technologies (CONIT)

Url : https://doi.org/10.1109/conit65521.2025.11167544

Campus : Coimbatore

School : School of Artificial Intelligence

Year : 2025

Abstract :

Recent advancements in Large Language Models (LLMs) and smaller Language Models (LMs), together with access to well-structured annotated datasets, have improved the performance of legal case outcome prediction models. The proposed model utilizes the PredEx dataset, one of the most extensively expert-annotated datasets, and extends prior work by applying a RoBERTa-Large model with a deliberate data-preprocessing strategy. Rather than feeding the full case text to the transformer as 512-token chunks, the proposed method retains only 50% of each document, truncates the retained text into 512-token chunks, and still achieves a test accuracy of 78%, highlighting the effectiveness of the proposed methodology. By utilizing only 50% of each document, the proposed system shows that the full document context is not required for case dismissal classification. Beyond contributing a new perspective on efficient input handling, this method also reduces computation time and memory usage without compromising model performance. The proposed system additionally implements various traditional machine learning models and establishes a new benchmark baseline result.
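The truncation strategy described in the abstract can be sketched as follows. This is a minimal illustration only: it uses whitespace splitting as a stand-in for the RoBERTa subword tokenizer, and the function name, the 50% retention point, and the chunking details are assumptions for illustration; the paper's actual preprocessing pipeline may differ.

```python
def truncate_and_chunk(text, keep_fraction=0.5, chunk_size=512):
    """Keep the first keep_fraction of a document's tokens,
    then split the retained tokens into fixed-size chunks.

    Whitespace splitting stands in for subword tokenization here.
    """
    tokens = text.split()
    kept = tokens[: int(len(tokens) * keep_fraction)]
    return [kept[i : i + chunk_size] for i in range(0, len(kept), chunk_size)]


# Example: a 2048-token document -> retain 1024 tokens -> two 512-token chunks
doc = " ".join(f"tok{i}" for i in range(2048))
chunks = truncate_and_chunk(doc)
print(len(chunks), [len(c) for c in chunks])
```

Each resulting chunk would then be encoded separately and passed to the classifier, so halving the retained text also halves the number of 512-token forward passes per document.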

Cite this Research Publication : G Jagadeesh, T Guhan Kumar, S Dharshan Kumaar, Neethu Mohan, S Sachin Kumar, "Exploring Efficient Truncation Strategies for Case Dismissal Prediction using Transformer Models," 2025 5th International Conference on Intelligent Technologies (CONIT), IEEE, 2025. https://doi.org/10.1109/conit65521.2025.11167544
