Lloyd Montgomery
MSc
  • Renovating Requirements Engineering: First Thoughts to Shape Requirements Engineering as a Profession
    Y Pham, L Montgomery, W Maalej
    3rd International Workshop on Learning from other Disciplines for RE (D4RE)
    Jeju Island, South Korea
    Workshop Paper
    DOI
    Legacy software systems typically include vital data for organizations that use them and should thus be regularly maintained. Ideally, organizations should rely on Requirements Engineers to understand and manage changes of stakeholder needs and system constraints. However, due to time and cost pressure, and with a heavy focus on implementation, organizations often choose to forgo Requirements Engineers and rather focus on ad-hoc bug fixing and maintenance. This position paper discusses what Requirements Engineers could possibly learn from other similar roles to become crucial for the evolution of legacy systems. Particularly, we compare the roles of Requirements Engineers (according to IREB), Building Architects (according to the German regulations), and Product Owners (according to "The Scrum Guide"). We discuss overlaps along four dimensions: liability, self-portrayal, core activities, and artifacts. Finally, we draw insights from these related fields to foster the concept of a Requirements Engineer as a distinguished profession.
  • Research on NLP for RE at the University of Hamburg: A Report
    D Fucci, C Stanik, L Montgomery, Z Kurtanovic, T Johann, W Maalej
    1st Workshop on Natural Language Processing for Requirements Engineering (NLP4RE)
    Utrecht, Netherlands
    Workshop Paper
    PDF
    The Mobile Applied Software Technology (MAST) group at the University of Hamburg focuses its research on context-aware adaptive systems and the social side of software engineering. In the context of natural language processing for requirements engineering, the group has mostly focused on mining app store reviews. Currently, the group is involved in the OpenReq project where natural language processing is being used to recommend requirements from diverse sources (e.g., social media, issue trackers), and to improve the structural quality of existing requirements.
  • Customer Support Ticket Escalation Prediction using Feature Engineering
    L Montgomery, D Damian, T Bulmer, S Quader
    Springer Requirements Engineering Journal (REJ)
    Journal Paper
    Understanding and keeping the customer happy is a central tenet of requirements engineering. Strategies to gather, analyze, and negotiate requirements are complemented by efforts to manage customer input after products have been deployed. For the latter, support tickets are key in allowing customers to submit their issues, bug reports, and feature requests. If insufficient attention is given to support issues, however, their escalation to management becomes time-consuming and expensive, especially for large organizations managing hundreds of customers and thousands of support tickets. Our work provides a step toward simplifying the job of support analysts and managers, particularly in predicting the risk of escalating support tickets. In a field study at our large industrial partner, IBM, we used a design science research methodology to characterize the support process and data available to IBM analysts in managing escalations. Within this methodology, we used feature engineering to translate our understanding of support analysts’ expert knowledge of their customers into features of a support ticket model. We then implemented these features into a machine learning model to predict support ticket escalations. We trained and evaluated our machine learning model on over 2.5 million support tickets and 10,000 escalations, obtaining a recall of 87.36% and an 88.23% reduction in the workload for support analysts looking to identify support tickets at risk of escalation. Further on-site evaluations, through a prototype tool we developed to implement our machine learning techniques in practice, showed more efficient weekly support ticket management meetings. Finally, in addition to these research evaluation activities, we compared the performance of our support ticket model with that of a model developed with no feature engineering; the support ticket model features outperformed the non-engineered model. The artifacts created in this research are designed to serve as a starting place for organizations interested in predicting support ticket escalations, and for future researchers to build on to advance research in escalation prediction.
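    The escalation-prediction setup above can be illustrated with a small, hypothetical sketch: engineered per-ticket features feed a standard classifier, and recall plus the share of unflagged tickets approximate the two reported measures. The file name, column names, and feature set below are illustrative assumptions, not the actual Support Ticket Model.

      # Minimal sketch (assumed schema): predict escalations from engineered features.
      import pandas as pd
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import recall_score
      from sklearn.model_selection import train_test_split

      # Hypothetical engineered features capturing ticket and customer context.
      FEATURES = ["days_open", "customer_entries", "support_entries",
                  "customer_open_tickets", "customer_past_escalations"]

      tickets = pd.read_csv("support_tickets.csv")      # assumed file and columns
      X, y = tickets[FEATURES], tickets["escalated"]    # y: 1 = escalated, 0 = not

      X_train, X_test, y_train, y_test = train_test_split(
          X, y, test_size=0.2, stratify=y, random_state=0)

      model = RandomForestClassifier(class_weight="balanced", random_state=0)
      model.fit(X_train, y_train)
      pred = model.predict(X_test)

      print("recall:", recall_score(y_test, pred))       # share of escalations caught
      print("workload reduction:", 1 - pred.mean())      # share of tickets left unflagged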
  • Predicting Developers' IDE Commands with Machine Learning
    T Bulmer, L Montgomery, D Damian
    ACM 15th Mining Software Repositories (MSR)
    Gothenburg, Sweden
    Research Paper
    When a developer is writing code they are usually focused and in a state of mind which some refer to as flow. Breaking out of this flow can cause the developer to lose their train of thought and have to start their thought process from the beginning. This loss of thought can be caused by interruptions and sometimes slow IDE interactions. Predictive functionality has been harnessed in user applications to speed up load times, such as in Google Chrome's browser, which has a feature called "Predicting Network Actions" that pre-loads the web pages the user is most likely to click through, mitigating the interruption that load times can introduce. In this paper we seek to make the first step towards predicting user commands in the IDE. Using the MSR 2018 Challenge Data of over 3,000 developer sessions and over 10 million recorded events, we analyze and cleanse the data, parse it into event series, and use these to train a variety of machine learning models, including a neural network, to predict user-induced commands. Our highest-performing model obtains a five-fold cross-validation prediction accuracy of 64%.
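    As a rough illustration of the prediction task described above, the sketch below frames "predict the next IDE command" as classification over the preceding events in a session. The event names, window size, and the loader mentioned in the comments are hypothetical; the paper's models and feature encoding differ.

      # Sketch: turn ordered IDE event streams into (context, next-command) pairs.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline

      def build_examples(sessions, window=3):
          """Each example: the `window` preceding events, labelled with the next command."""
          contexts, commands = [], []
          for events in sessions:
              for i in range(window, len(events)):
                  if events[i].startswith("Command"):     # predict user-induced commands only
                      contexts.append(" ".join(events[i - window:i]))
                      commands.append(events[i])
          return contexts, commands

      # With real session data loaded (hypothetical loader for the challenge data):
      #   sessions = load_msr2018_sessions()
      #   contexts, commands = build_examples(sessions)
      #   clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
      #   print(cross_val_score(clf, contexts, commands, cv=5).mean())   # 5-fold CV accuracy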
  • How Angry are Your Customers? Sentiment Analysis of Support Tickets that Escalate
    C Werner, G Tapuc, L Montgomery, D Sharma, S Dodos, D Damian
    1st International Workshop on Affective Computing for Requirements Engineering (AffectRE)
    Banff, Canada
    Workshop Paper
    DOI
    Software support ticket escalations can be an extremely costly burden for software organizations all over the world. Consequently, there exists an interest in researching how to better enable support analysts to handle such escalations. In order to do so, we need to develop tools to reliably predict if, and when, a support ticket becomes a candidate for escalation. This paper explores the use of sentiment analysis tools on customer-support analyst conversations to find indicators of when a particular support ticket may be escalated. The results of this research indicate a considerable difference in the sentiment between escalated support tickets and non-escalated support tickets. Thus, this preliminary research provides us with the necessary information to further investigate how we can reliably predict support ticket escalations, and subsequently to provide insight to support analysts to better enable them to handle support tickets that may be escalated.
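    A small sketch of the kind of analysis described above: score each conversation entry with an off-the-shelf sentiment tool and compare averages between escalated and non-escalated tickets. VADER is used here only as one example of such a tool, and the sample tickets are made up.

      # Sketch: average sentiment of a ticket's customer-analyst conversation.
      import nltk
      from nltk.sentiment import SentimentIntensityAnalyzer

      nltk.download("vader_lexicon", quiet=True)
      sia = SentimentIntensityAnalyzer()

      def ticket_sentiment(entries):
          """Mean VADER compound score over a ticket's conversation entries."""
          scores = [sia.polarity_scores(text)["compound"] for text in entries]
          return sum(scores) / len(scores) if scores else 0.0

      # Hypothetical tickets: (conversation entries, escalated flag).
      tickets = [
          (["The workaround failed again.", "We need this fixed now!"], True),
          (["Thanks, the patch resolved our issue."], False),
      ]
      for entries, escalated in tickets:
          print(escalated, round(ticket_sentiment(entries), 3))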
  • A Simple NLP-Based Approach to Support Onboarding and Retention in Open Source Communities
    C Stanik, L Montgomery, D Martens, D Fucci, W Maalej
    IEEE 34th International Conference on Software Maintenance and Evolution (ICSME)
    Madrid, Spain
    Workshop Paper
    Successful open source communities are constantly looking for new members and helping them become active developers. A common approach for developer onboarding in open source projects is to let newcomers focus on relevant yet easy-to-solve issues to familiarize themselves with the code and the community. The goal of this research is twofold. First, we aim to automatically identify issues that newcomers can resolve, by analyzing the history of resolved issues using only their titles and descriptions. Second, we aim to automatically identify issues that can be resolved by newcomers who later become active developers. We mined the issue trackers of three large open source projects and extracted natural language features from the titles and descriptions of resolved issues. In a series of experiments, we optimized and compared the accuracy of four supervised classifiers to address our research goals. Random Forest achieved up to 91% precision (F1-score 72%) for the first goal, while Decision Tree achieved 92% precision (F1-score 91%) for the second. A qualitative evaluation gave insights into what information in the issue description is helpful for newcomers. Our approach can be used to automatically identify, label, and recommend issues for newcomers in open source software projects based only on the text of the issues.
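    The text-classification setup described above can be sketched roughly as follows: concatenate an issue's title and description, vectorize with TF-IDF, and train a Random Forest to flag newcomer-resolvable issues. The file name, column names, and label below are hypothetical; the paper's feature extraction and tuning differ.

      # Sketch (assumed CSV schema): classify issues by title + description text.
      import pandas as pd
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics import f1_score, precision_score
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline

      issues = pd.read_csv("resolved_issues.csv")
      text = issues["title"] + " " + issues["description"]
      labels = issues["newcomer_resolvable"]              # 1 = resolvable by a newcomer

      X_train, X_test, y_train, y_test = train_test_split(
          text, labels, test_size=0.2, stratify=labels, random_state=0)

      clf = make_pipeline(TfidfVectorizer(stop_words="english"),
                          RandomForestClassifier(n_estimators=200, random_state=0))
      clf.fit(X_train, y_train)
      pred = clf.predict(X_test)
      print("precision:", precision_score(y_test, pred))
      print("F1:", f1_score(y_test, pred))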
  • Escalation prediction using feature engineering: addressing support ticket escalations within IBM’s ecosystem
    L Montgomery
    University of Victoria (UVic)
    Thesis
    Large software organizations handle many customer support issues every day in the form of bug reports, feature requests, and general misunderstandings as submitted by customers. Strategies to gather, analyze, and negotiate requirements are complemented by efforts to manage customer input after products have been deployed. For the latter, support tickets are key in allowing customers to submit their issues, bug reports, and feature requests. Whenever insufficient attention is given to support issues, there is a chance customers will escalate their issues, and escalation to management is time-consuming and expensive, especially for large organizations managing hundreds of customers and thousands of support tickets. This thesis provides a step towards simplifying the job for support analysts and managers, particularly in predicting the risk of escalating support tickets. In a field study at our large industrial partner, IBM, a design science methodology was employed to characterize the support process and data available to IBM analysts in managing escalations. Through iterative cycles of design and evaluation, support analysts’ expert knowledge about their customers was translated into features of a support ticket model to be implemented into a Machine Learning model to predict support ticket escalations. The Machine Learning model was trained and evaluated on over 2.5 million support tickets and 10,000 escalations, obtaining a recall of 79.9% and an 80.8% reduction in the workload for support analysts looking to identify support tickets at risk of escalation. Further onsite evaluations were conducted through a tool developed to implement the Machine Learning techniques in industry, deployed during weekly support-ticket-management meetings. The features developed in the Support Ticket Model are designed to serve as a starting place for organizations interested in implementing the model to predict support ticket escalations, and for future researchers to build on to advance research in Escalation Prediction.
  • What do Support Analysts Know About Their Customers? On the Study and Prediction of Support Ticket Escalations in Large Software Organizations
    Best Paper Award
    L Montgomery, D Damian
    IEEE 25th International Requirements Engineering Conference (RE)
    Lisbon, Portugal
    Research Paper
    Understanding and keeping the customer happy is a central tenet of requirements engineering. Strategies to gather, analyze, and negotiate requirements are complemented by efforts to manage customer input after products have been deployed. For the latter, support tickets are key in allowing customers to submit their issues, bug reports, and feature requests. Whenever insufficient attention is given to support issues, however, their escalation to management is time-consuming and expensive, especially for large organizations managing hundreds of customers and thousands of support tickets. Our work provides a step towards simplifying the job of support analysts and managers, particularly in predicting the risk of escalating support tickets. In a field study at our large industrial partner, IBM, we used a design science methodology to characterize the support process and data available to IBM analysts in managing escalations. Through iterative cycles of design and evaluation, we translated our understanding of support analysts’ expert knowledge of their customers into features of a support ticket model to be implemented into a Machine Learning model to predict support ticket escalations. We trained and evaluated our Machine Learning model on over 2.5 million support tickets and 10,000 escalations, obtaining a recall of 79.9% and an 80.8% reduction in the workload for support analysts looking to identify support tickets at risk of escalation. Further on-site evaluations, through a prototype tool we developed to implement our Machine Learning techniques in practice, showed more efficient weekly support-ticket-management meetings. The features we developed in the Support Ticket Model are designed to serve as a starting place for organizations interested in implementing our model to predict support ticket escalations, and for future researchers to build on to advance research in escalation prediction.
  • ECrits - Visualizing Support Ticket Escalation Risk
    E Reading, L Montgomery, D Damian
    IEEE 25th International Requirements Engineering Conference (RE)
    Lisbon, Portugal
    Tool Paper
    Managing support tickets in large, multi-product organizations is difficult. Failure to meet the expectations of customers can lead to the escalation of support tickets, which is costly for IBM in terms of customer relationships and resources spent addressing the escalation. Keeping the customer happy is an important task in requirements engineering, which often comes in the form of handling their problems brought forth in support tickets. Proper attention to customers, their issues, and the bottom-up requirements that surface through bug reports can be difficult when the support process involves spending a lot of time managing customers to prevent escalations. For any given support analyst, understanding the customer is achievable through time spent looking through past and present support tickets within their organization; however, this solution does not scale up to account for all support tickets across all product teams. ECrits is a tool developed to help mitigate information overload by selectively mining customer information from support ticket repositories, displaying that data to support analysts, and doing predictive modelling on that data to suggest which support tickets are likely to escalate.
  • Sentimental ECrits: Modelling Customer Emotions to Predict Critical Situations
    G Tapuc, T Bulmer, L Montgomery, D Damian
    26th International Conference on Computer Science and Software Engineering (CASCON)
    Toronto, Canada
    Poster
  • ECrits - Modelling Escalation Risk in Problem Management Records (PMRs)
    L Montgomery, D Damian
    25th International Conference on Computer Science and Software Engineering (CASCON)
    Toronto, Canada
    Poster
  • Towards a Live Anonymous Question Queue to Address Student Apprehension
    L Montgomery, G Evans, F Harrison, D Damian
    ACM 20th Western Canadian Conference on Computing Education (WCCCE)
    Victoria, Canada
    Research Paper
    PDF
    In today’s university climate, many first- and second-year classes have over a hundred students. Large classrooms make some students apprehensive about asking questions. An anonymous method of submitting questions to an instructor would allow students to ask their questions without feeling apprehensive. In this paper we propose a Live Anonymous Question Queue (LAQQ), a system that facilitates anonymous question submissions in real time to mitigate student apprehension, increase student participation, and provide real-time feedback to the instructor. To study the necessary features of an LAQQ, we conducted a study of the system that best approached our concept of an LAQQ, namely Google Moderator. We deployed Google Moderator in large lectures and studied its support of a number of features that we envisioned for an LAQQ. Through our class observations, interviews with instructors, and surveys of the students, our results suggest that an LAQQ system must provide support for: notification of question submission to provide awareness for the instructor, and context for questions to allow an instructor to easily answer a question. Additionally, our results suggest that an LAQQ system must be accessible and usable on multiple platforms. Finally, our results suggest that, in order to be successful in the classroom, an LAQQ system must be fully adopted by the instructor and the classroom organizational structure must change to accommodate the use of the LAQQ.