Description
Client support ticket data captured in issue tracking systems (ITS) is inherently sparse, repetitive, and laden with domain-specific language. However, ITS are both necessary and invaluable business tools for monitoring and managing client satisfaction. This project explored several approaches to applying BERT transfer learning to client support ticket data in order to predict the criticality of a newly created support ticket, without requiring expensive pre-training of BERT to capture domain-specific knowledge.
Outcome
Since its introduction in 2018, BERT has proved itself to be one of the most significant innovations in natural language processing. While many papers have documented BERT's success in fine-tuning tasks across various domains, the project's findings revealed that fine-tuning support ticket data on the pre-trained BERT-base-uncased model yielded unsatisfactory results. This is primarily due to the unique attributes of support ticket datasets, namely the fact that (i) the ticket text is riddled with abbreviations, shorthand, domain-specific jargon, and fragmented phrases rather than proper full-sentence prose, and (ii) the dataset is highly imbalanced by class.
These two attributes are inherently misaligned with the underlying BERT base models, which are pre-trained on a large text corpus of proper natural language prose. The results suggest that bespoke business datasets like client support tickets require an investment in pre-training to fully benefit from BERT's capabilities. Nevertheless, good results were achieved by using models composed of layered neural networks over frozen BERT weights, without pre-training.
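The frozen-weights approach can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the project's actual code: the encoder here is a small stand-in module so the example is self-contained, whereas the project would use the pre-trained BERT-base-uncased encoder (e.g. via HuggingFace's `transformers.BertModel`) with its parameters frozen in the same way. The hidden size of 768 matches BERT-base; the number of criticality classes is an assumption.

```python
import torch
import torch.nn as nn

HIDDEN = 768        # BERT-base hidden size
NUM_CLASSES = 4     # assumed number of ticket criticality classes

class StandInEncoder(nn.Module):
    """Placeholder for the pre-trained BERT encoder.

    Emits a pooled [batch, HIDDEN] representation; in the project this
    role would be played by transformers.BertModel.from_pretrained(
    "bert-base-uncased").
    """
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(HIDDEN, HIDDEN)

    def forward(self, x):
        return torch.tanh(self.proj(x))

class TicketClassifier(nn.Module):
    """Trainable layered head over a frozen encoder."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
        # Freeze the encoder: its weights receive no gradient updates,
        # so no expensive pre-training or fine-tuning of BERT is needed.
        for p in self.encoder.parameters():
            p.requires_grad = False
        # Only these layers are trained on the ticket data.
        self.head = nn.Sequential(
            nn.Linear(HIDDEN, 256),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(256, NUM_CLASSES),
        )

    def forward(self, x):
        with torch.no_grad():   # encoder output is treated as fixed features
            pooled = self.encoder(x)
        return self.head(pooled)

model = TicketClassifier(StandInEncoder())
logits = model(torch.randn(2, HIDDEN))  # shape: (2, NUM_CLASSES)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
```

Because the encoder is frozen, training only updates the small head, which keeps the cost low and avoids disturbing BERT's general-language representations even though the ticket text itself is far from natural prose.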
More Information
More information can be found at the following links:
GitHub Repository: https://github.com/nsylva/w266_final_project