publications
Please refer to my Google Scholar profile https://scholar.google.com/citations?user=M6iVZjwAAAAJ for an up-to-date publication list.
journal articles
2021
-
CookingQA: Answering Questions and Recommending Recipes Based on Ingredients. Khilji, Abdullah Faiz Ur Rahman; Manna, Riyanka; Laskar, Sahinur Rahman; Pakray, Partha; Das, Dipankar; Bandyopadhyay, Sivaji; and Gelbukh, Alexander. Arabian Journal for Science and Engineering. [Abstract] [BibTeX] [HTML] [PDF]
In today’s world, where individuals are becoming busier and more independent, the use of recommendation-based systems is steadily increasing, making it necessary to bring professional knowledge to the common man quickly. The aim of our recipe recommendation system is to recommend recipes to users based on their questions. To keep the recommendations relevant and meaningful, only those recommendations that are likely to fit the asked question should be displayed. We address this challenge with a threshold parameter generated from the recommendation engine. In addition, we include a question classification (QC) task together with a question answering (QA) module; the QA module extracts the requisite answer from the recommended recipe based on the class label obtained from QC. The main contributions of this work are a robust recommendation approach, an analysis of threshold estimation, and a suitable dataset. The final output of the recommendation system obtains benchmark results on the human evaluation (HE) metric. Our code, pretrained models, and the dataset will be made publicly available.
@article{khilji2021cookingqa,
  author  = {Khilji, Abdullah Faiz Ur Rahman and Manna, Riyanka and Laskar, Sahinur Rahman and Pakray, Partha and Das, Dipankar and Bandyopadhyay, Sivaji and Gelbukh, Alexander},
  title   = {CookingQA: Answering Questions and Recommending Recipes Based on Ingredients},
  journal = {Arabian Journal for Science and Engineering},
  year    = {2021},
  month   = jan,
  day     = {07},
  issn    = {2191-4281},
  doi     = {10.1007/s13369-020-05236-5},
  pdflink = {https://rdcu.be/cdfp3},
  html    = {https://link.springer.com/article/10.1007%2Fs13369-020-05236-5}
}
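A minimal sketch of the threshold-gated filtering idea described in the abstract above, using TF-IDF cosine similarity as a stand-in for the paper's recommendation engine. The recipes, question, and threshold value here are illustrative only and are not taken from the paper; scikit-learn is assumed.

# Illustrative sketch of threshold-gated recipe recommendation (not the paper's model).
# The recipes, question, and THRESHOLD below are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

recipes = [
    "pasta with tomato, garlic and basil",
    "chicken curry with onion, ginger and turmeric",
    "pancakes with flour, milk and eggs",
]
question = "what can I cook with tomato and garlic?"

vectorizer = TfidfVectorizer()
recipe_vectors = vectorizer.fit_transform(recipes)
question_vector = vectorizer.transform([question])

scores = cosine_similarity(question_vector, recipe_vectors)[0]

THRESHOLD = 0.3  # hypothetical cut-off; the paper estimates its threshold from the recommendation engine
recommendations = [
    (recipes[i], float(scores[i]))
    for i in scores.argsort()[::-1]
    if scores[i] >= THRESHOLD
]
print(recommendations)  # only the tomato-garlic recipe should clear this cut-off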
2020
-
Question Classification and Answer Extraction for Developing a Cooking QA System. Khilji, Abdullah Faiz Ur Rahman; Manna, Riyanka; Laskar, Sahinur Rahman; Pakray, Partha; Das, Dipankar; Bandyopadhyay, Sivaji; and Gelbukh, Alexander. Computación y Sistemas. [Abstract] [BibTeX] [HTML] [PDF]
In an automated Question Answering (QA) system, Question Classification (QC) is an essential module. The aim of QC is to identify the type of a question and classify it based on the expected answer type. Although the machine-learning approach overcomes the limitations of hand-written rules used in conventional rule-based approaches, it is restricted to a predefined set of question classes, and existing approaches are too specific to particular users. To address this challenge, we have developed a cooking QA system in which a recipe question is contextually classified into a particular category using deep learning techniques. The question class is then used to extract the requisite details from the recipe, obtained via a rule-based approach, to provide a precise answer. The main contribution of this paper is the description of the QC module of the cooking QA system. The intermediate classification accuracy on unseen data is 90%, and the human evaluation accuracy of the final system output is 39.33%.
@article{khilji2020question,
  title   = {Question Classification and Answer Extraction for Developing a Cooking QA System},
  author  = {Khilji, Abdullah Faiz Ur Rahman and Manna, Riyanka and Laskar, Sahinur Rahman and Pakray, Partha and Das, Dipankar and Bandyopadhyay, Sivaji and Gelbukh, Alexander},
  journal = {Computaci{\'o}n y Sistemas},
  volume  = {24},
  number  = {2},
  year    = {2020},
  html    = {https://www.cys.cic.ipn.mx/ojs/index.php/CyS/article/view/3445},
  pdf     = {3445-7264-1-PB.pdf}
}
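For readers unfamiliar with question classification, the following is a deliberately simplified sketch of the idea: map a cooking question to an answer-type label. The paper uses deep learning; this illustration substitutes a linear scikit-learn model, and the example questions and class labels are hypothetical.

# Minimal question-classification sketch (linear model stand-in for the paper's deep model).
# Training questions and class labels are made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_questions = [
    "how long should I bake the cake",
    "how many minutes do I boil the rice",
    "what ingredients do I need for an omelette",
    "which spices go into the curry",
]
train_labels = ["time", "time", "ingredient", "ingredient"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(train_questions, train_labels)

# The predicted class label would then steer answer extraction from the recommended recipe.
print(classifier.predict(["how long do I fry the onions"]))  # expected to print ['time']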
-
Predictive approaches for the UNIX command line: curating and exploiting domain knowledge in semantics deficit data. Singh, Thoudam Doren; Khilji, Abdullah Faiz Ur Rahman; Divyansha; Singh, Apoorva Vikram; Thokchom, Surmila; and Bandyopadhyay, Sivaji. Multimedia Tools and Applications. [Abstract] [BibTeX] [HTML] [Code] [PDF]
The command line has always been the most efficient way to interact with UNIX-flavored systems, offering the flexibility and efficiency preferred by professionals. Such a system relies on manually typed commands that instruct the machine to carry out tasks, and this human-computer interface is quite tedious, especially for beginners; hence, the command line has not garnered an overwhelming reception from new users. To improve user-friendliness and take a step towards a more intuitive command line, we propose two deep learning based predictive approaches that can be integrated into the command line interface and benefit all kinds of users, especially novices. The first approach is based on a sequence to sequence (Seq2seq) model with joint learning, leveraging continuous representations of a self-curated, exhaustive knowledge base (KB) of command descriptions to enhance the embeddings used in the model. The second is based on the attention-based transformer architecture with a pretrained model, which allows the model to evolve over time and adapt to different circumstances by learning as the system is used. To reinforce our idea, we have experimented with our models on three major publicly available UNIX command line datasets and achieved benchmark results using GloVe and Word2Vec embeddings. We find that the transformer-based framework performs better on two of the three datasets in a semantics-deficit scenario such as UNIX command line prediction, whereas the Seq2seq-based model outperforms the bidirectional encoder representations from transformers (BERT) based model on the larger dataset.
@article{singh2020predictive,
  author  = {Singh, Thoudam Doren and Khilji, Abdullah Faiz Ur Rahman and {Divyansha} and Singh, Apoorva Vikram and Thokchom, Surmila and Bandyopadhyay, Sivaji},
  title   = {Predictive approaches for the UNIX command line: curating and exploiting domain knowledge in semantics deficit data},
  journal = {Multimedia Tools and Applications},
  year    = {2020},
  month   = nov,
  day     = {09},
  issn    = {1573-7721},
  doi     = {10.1007/s11042-020-10109-y},
  url     = {https://doi.org/10.1007/s11042-020-10109-y},
  pdflink = {https://rdcu.be/b942t},
  html    = {https://link.springer.com/article/10.1007%2Fs11042-020-10109-y},
  code    = {https://github.com/abdullahkhilji/UNIX-Command-Line-Prediction}
}
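One way to read the KB-enhanced embedding idea in the abstract above is to blend a command's own vector with the vector of its knowledge-base description. The sketch below illustrates only that step (not the Seq2seq model or joint training); it assumes gensim and NumPy, and the command histories, KB entries, and blending weight are hypothetical.

# Sketch of enriching a command's embedding with its knowledge-base description.
# The histories, KB entries, and alpha below are hypothetical; the paper's joint
# learning and Seq2seq/transformer models are not reproduced here.
import numpy as np
from gensim.models import Word2Vec

command_histories = [["ls", "cd", "grep", "cat"], ["cd", "ls", "tar", "grep"]]
knowledge_base = {
    "ls": "list directory contents",
    "grep": "print lines matching a pattern",
    "tar": "archive files",
}

# Train tiny embeddings over command sequences plus KB descriptions.
sentences = command_histories + [desc.split() for desc in knowledge_base.values()]
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

def enriched_embedding(command, alpha=0.5):
    """Blend the command's own vector with the mean vector of its KB description."""
    base = model.wv[command]
    desc_words = knowledge_base.get(command, "").split()
    if not desc_words:
        return base
    desc = np.mean([model.wv[w] for w in desc_words], axis=0)
    return alpha * base + (1 - alpha) * desc

print(enriched_embedding("grep").shape)  # (50,)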
book chapters
2020
-
HealFavor: A Chatbot Application in Healthcare. Khilji, Abdullah Faiz Ur Rahman; Laskar, Sahinur Rahman; Pakray, Partha; Kadir, Rabiah Abdul; Lydia, Maya Silvi; and Bandyopadhyay, Sivaji. Analysis of Medical Modalities for Improved Diagnosis in Modern Healthcare (accepted, publication due). [BibTeX]
@article{khilji2020bookchapter,
  title   = {HealFavor: A Chatbot Application in Healthcare},
  author  = {Khilji, Abdullah Faiz Ur Rahman and Laskar, Sahinur Rahman and Pakray, Partha and Kadir, Rabiah Abdul and Lydia, Maya Silvi and Bandyopadhyay, Sivaji},
  journal = {Analysis of Medical Modalities for Improved Diagnosis in Modern Healthcare},
  year    = {2020},
  note    = {(in press)}
}
conferences
2021
-
Abstractive Text Summarization Approaches with Analysis of Evaluation Techniques. Khilji, Abdullah Faiz Ur Rahman; Sinha, Utkarsh; Singh, Pintu; Ali, Adnan; and Pakray, Partha. In Computational Intelligence in Communications and Business Analytics. [Abstract] [BibTeX]
In today’s world, where all information is available at our fingertips, it is becoming more and more difficult to retrieve vital information from large documents without reading the whole text. Large textual documents require a great deal of time and energy to understand and to extract their key points, so summarized versions provide a great deal of flexibility in understanding the context and important points of the text. In our work, we have prepared a baseline machine learning model to summarize textual documents and have experimented with various methodologies. The summarization system takes raw text as input and produces a predicted summary as output. Both extractive and abstractive text summarization are described and experimented with, and we verify this baseline system on three different evaluation metrics: BLEU, ROUGE, and a textual entailment method. We also give an in-depth discussion of the three evaluation techniques and systematically show the advantages of using a semantics-based evaluation technique to calculate the overall summarization score of a text document.
@inproceedings{khiljiabstractive,
  author    = {Khilji, Abdullah Faiz Ur Rahman and Sinha, Utkarsh and Singh, Pintu and Ali, Adnan and Pakray, Partha},
  editor    = {Dutta, Paramartha and Mandal, Jyotsna K. and Mukhopadhyay, Somnath},
  title     = {Abstractive Text Summarization Approaches with Analysis of Evaluation Techniques},
  booktitle = {Computational Intelligence in Communications and Business Analytics},
  year      = {2021},
  publisher = {Springer International Publishing},
  address   = {Cham},
  pages     = {243--258},
  isbn      = {978-3-030-75529-4}
}
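As a small illustration of two of the evaluation metrics named in the abstract above (BLEU and ROUGE), the sketch below scores a toy prediction against a toy reference. It assumes the nltk and rouge-score packages; the sentences are made up, and the textual-entailment metric discussed in the paper is not shown.

# Sketch of scoring a predicted summary with BLEU and ROUGE (toy strings, not paper data).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

reference = "the committee approved the new budget on friday"
prediction = "the committee approved the budget friday"

# Sentence-level BLEU over tokenized text; smoothing avoids zero scores on short sentences.
bleu = sentence_bleu(
    [reference.split()], prediction.split(),
    smoothing_function=SmoothingFunction().method1,
)

# ROUGE-1 and ROUGE-L F-measures.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
rouge = scorer.score(reference, prediction)

print(f"BLEU: {bleu:.3f}")
print(f"ROUGE-1 F: {rouge['rouge1'].fmeasure:.3f}, ROUGE-L F: {rouge['rougeL'].fmeasure:.3f}")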
-
Human Behavior Assessment using Ensemble Models. Khilji, Abdullah Faiz Ur Rahman; Khaund, Rituparna; and Sinha, Utkarsh. In Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association (accepted, publication due). [BibTeX]
@inproceedings{khilji2021human,
  title     = {Human Behavior Assessment using Ensemble Models},
  author    = {Khilji, Abdullah Faiz Ur Rahman and Khaund, Rituparna and Sinha, Utkarsh},
  booktitle = {Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association},
  month     = jan,
  year      = {2021},
  address   = {Australia},
  publisher = {Association for Computational Linguistics},
  note      = {(in press)}
}
-
Deep Neural Network for Musical Instrument Recognition Using MFCCs. Mahanta, Saranga Kingkor; Khilji, Abdullah Faiz Ur Rahman; and Pakray, Partha. Computación y Sistemas. [BibTeX]
@article{mahanta2021deep,
  title   = {Deep Neural Network for Musical Instrument Recognition Using MFCCs},
  author  = {Mahanta, Saranga Kingkor and Khilji, Abdullah Faiz Ur Rahman and Pakray, Partha},
  journal = {Computaci{\'o}n y Sistemas},
  volume  = {25},
  number  = {2},
  year    = {2021}
}
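The title above refers to MFCC features. As a rough illustration of that feature-extraction step only (not the paper's pipeline or network), the sketch below computes MFCCs with librosa on a synthetic tone; a real system would load recorded instrument audio instead.

# MFCC feature-extraction sketch on a synthetic 440 Hz tone (stand-in for instrument audio).
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 440.0 * t)  # 1-second tone

# 13 MFCCs per frame; averaging over frames gives one fixed-length vector per clip,
# which could then be fed to a feed-forward classifier.
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
clip_features = mfcc.mean(axis=1)

print(mfcc.shape)           # (13, number_of_frames)
print(clip_features.shape)  # (13,)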
-
CNLP-NITS @ LongSumm 2021: TextRank Variant for Generating Long Summaries. Kaushik, Darsh; Khilji, Abdullah Faiz Ur Rahman; Sinha, Utkarsh; and Pakray, Partha. In Proceedings of the Second Workshop on Scholarly Document Processing. [Abstract] [BibTeX]
The huge influx of published papers in the field of machine learning makes summarization of scholarly documents vital, not just to eliminate redundancy but also to provide a complete and satisfying crux of the content. We participated in LongSumm 2021, the 2nd Shared Task on Generating Long Summaries for scientific documents, where the task is to generate long summaries for scientific papers provided by the organizers. This paper discusses our extractive summarization approach to the task. We used the TextRank algorithm with the BM25 score as the similarity function. Despite being a graph-based ranking algorithm that requires no learning, TextRank produced decent results with minimal compute power and time. We attained 3rd rank according to ROUGE-1 scores (0.5131 F-measure and 0.5271 recall) and performed decently as shown by the ROUGE-2 scores.
@inproceedings{kaushik-etal-2021-cnlp,
  title     = {{CNLP}-{NITS} @ {L}ong{S}umm 2021: {T}ext{R}ank Variant for Generating Long Summaries},
  author    = {Kaushik, Darsh and Khilji, Abdullah Faiz Ur Rahman and Sinha, Utkarsh and Pakray, Partha},
  booktitle = {Proceedings of the Second Workshop on Scholarly Document Processing},
  month     = jun,
  year      = {2021},
  address   = {Online},
  publisher = {Association for Computational Linguistics},
  url       = {https://www.aclweb.org/anthology/2021.sdp-1.13},
  doi       = {10.18653/v1/2021.sdp-1.13},
  pages     = {103--109}
}
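The TextRank-with-BM25 idea in the abstract above can be sketched compactly: sentences are graph nodes, pairwise BM25 scores give edge weights, and PageRank ranks the sentences. The sketch below assumes the rank_bm25 and networkx packages; the toy document, tokenization, and summary length are illustrative, not the paper's setup.

# TextRank-style extractive summarization sketch with BM25 as the similarity function.
# Toy sentences; not the shared-task data or the paper's exact configuration.
import re
import numpy as np
import networkx as nx
from rank_bm25 import BM25Okapi

sentences = [
    "Neural models dominate machine translation benchmarks.",
    "Transformer architectures rely on self-attention.",
    "Self-attention lets models weigh all tokens in a sequence.",
    "The weather was pleasant during the conference.",
]
tokenized = [re.findall(r"[a-z0-9]+", s.lower()) for s in sentences]

bm25 = BM25Okapi(tokenized)
# Pairwise similarity: BM25 score of every sentence against sentence i used as a query.
similarity = np.array([bm25.get_scores(query) for query in tokenized])
np.fill_diagonal(similarity, 0.0)
similarity = (similarity + similarity.T) / 2.0  # symmetrize, since BM25 is asymmetric

graph = nx.from_numpy_array(similarity)
ranks = nx.pagerank(graph, weight="weight")

summary_size = 2
top = sorted(ranks, key=ranks.get, reverse=True)[:summary_size]
print([sentences[i] for i in sorted(top)])  # top-ranked sentences in document order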
2020
-
HealFavor: Dataset and A Prototype System for Healthcare ChatBot. Khilji, Abdullah Faiz Ur Rahman; Laskar, Sahinur Rahman; Pakray, Partha; Kadir, Rabiah Abdul; Lydia, Maya Silvi; and Bandyopadhyay, Sivaji. In 2020 International Conference on Data Science, Artificial Intelligence, and Business Analytics (DATABIA). [Abstract] [BibTeX] [HTML] [Dataset]
A chatbot is a software application aimed at simulating real-time conversation. Chatbots have been designed for a plethora of domains, where they have proved worthy of complementing, and in some areas replacing, human-based information acquisition. Although domains such as travel and food have advanced with growing consumer demand, healthcare-based systems still require significant advancement to address the issue of medical accessibility. This work provides a suitable dataset and proposes a prototype system architecture. The prototype system, together with the self-created dataset, is then analyzed on different parameters by numerous experts.
@inproceedings{khiljihealfavor,
  title     = {HealFavor: Dataset and A Prototype System for Healthcare ChatBot},
  author    = {Khilji, Abdullah Faiz Ur Rahman and Laskar, Sahinur Rahman and Pakray, Partha and Kadir, Rabiah Abdul and Lydia, Maya Silvi and Bandyopadhyay, Sivaji},
  booktitle = {2020 International Conference on Data Science, Artificial Intelligence, and Business Analytics (DATABIA)},
  year      = {2020},
  month     = jul,
  pages     = {1-4},
  doi       = {10.1109/DATABIA50434.2020.9190281},
  html      = {https://ieeexplore.ieee.org/document/9190281},
  dataset   = {https://github.com/cnlp-nits/HealFavor}
}
-
Hindi-Marathi Cross Lingual Model. Laskar, Sahinur Rahman; Khilji, Abdullah Faiz Ur Rahman; Pakray, Partha; and Bandyopadhyay, Sivaji. In Proceedings of the Fifth Conference on Machine Translation. [Abstract] [BibTeX] [HTML] [PDF]
Machine Translation (MT) is a vital tool for aiding communication between linguistically separate groups of people. Neural machine translation (NMT) based approaches have gained widespread acceptance because of their outstanding performance. We participated in the WMT20 shared task on similar language translation for the Hindi-Marathi pair. The main challenge of this task is to utilize monolingual data and the similarity features of the language pair to overcome the limited availability of parallel data. In this work, we implemented an NMT-based model that simultaneously learns bilingual embeddings from both the source and target languages. For Hindi to Marathi, our model achieved a bilingual evaluation understudy (BLEU) score of 11.59, a rank-based intuitive bilingual evaluation score (RIBES) of 57.76, and a translation edit rate (TER) of 79.07; for Marathi to Hindi, it achieved a BLEU score of 15.44, a RIBES score of 61.13, and a TER of 75.96.
@inproceedings{laskar-EtAl:2020:WMT,
  author    = {Laskar, Sahinur Rahman and Khilji, Abdullah Faiz Ur Rahman and Pakray, Partha and Bandyopadhyay, Sivaji},
  title     = {Hindi-Marathi Cross Lingual Model},
  booktitle = {Proceedings of the Fifth Conference on Machine Translation},
  month     = nov,
  year      = {2020},
  address   = {Online},
  publisher = {Association for Computational Linguistics},
  pages     = {394--399},
  html      = {https://www.aclweb.org/anthology/2020.wmt-1.45},
  pdflink   = {http://www.statmt.org/wmt20/pdf/2020.wmt-1.45.pdf}
}
-
EnAsCorp1.0: English-Assamese Corpus. Laskar, Sahinur Rahman; Khilji, Abdullah Faiz Ur Rahman; Pakray, Partha; and Bandyopadhyay, Sivaji. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages. [Abstract] [BibTeX] [HTML] [PDF]
Corpus preparation is one of the most important and challenging tasks in machine translation, especially in low resource language scenarios. In a country like India, where multiple languages exist, machine translation attempts to minimize the communication gap among people with different linguistic backgrounds. Although Google Translate covers automatic translation of various languages all over the world, it lags for some languages, including Assamese. In this paper, we develop EnAsCorp1.0, a corpus for the low resource English-Assamese pair, with parallel and monolingual data collected from various online sources. We have also implemented baseline systems with statistical machine translation and neural machine translation approaches for the same corpus.
@inproceedings{laskar-etal-2020-enascorp1,
  title     = {{E}n{A}s{C}orp1.0: {E}nglish-{A}ssamese Corpus},
  author    = {Laskar, Sahinur Rahman and Khilji, Abdullah Faiz Ur Rahman and Pakray, Partha and Bandyopadhyay, Sivaji},
  booktitle = {Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages},
  month     = dec,
  year      = {2020},
  address   = {Suzhou, China},
  publisher = {Association for Computational Linguistics},
  url       = {https://www.aclweb.org/anthology/2020.loresmt-1.9},
  pages     = {62--68},
  pdflink   = {https://www.aclweb.org/anthology/2020.loresmt-1.9.pdf},
  html      = {https://www.aclweb.org/anthology/2020.loresmt-1.9/}
}
-
Multimodal Neural Machine Translation for English to Hindi. Laskar, Sahinur Rahman; Khilji, Abdullah Faiz Ur Rahman; Pakray, Partha; and Bandyopadhyay, Sivaji. In Proceedings of the 7th Workshop on Asian Translation. [Abstract] [BibTeX] [HTML] [PDF]
Machine translation (MT) focuses on the automatic translation of text from one natural language to another. Neural machine translation (NMT) achieves state-of-the-art results in this task by utilizing advanced deep learning techniques and by handling issues like long-term dependency and context analysis. Nevertheless, NMT still suffers from low translation quality for low resource languages. The multimodal concept addresses this challenge by combining textual and visual features to improve translation quality for low resource languages; moreover, utilizing monolingual data in a pre-training step can further improve performance for low resource language translation. The Workshop on Asian Translation 2020 (WAT2020) organized a multimodal translation task for English to Hindi. We participated, as team CNLP-NITS, with two-track submissions: text-only and multimodal translation. The evaluated results declared at the WAT2020 translation task report that our multimodal NMT system attained higher scores than our text-only NMT on both the challenge and evaluation test sets. For the challenge test data, our multimodal neural machine translation system achieves a Bilingual Evaluation Understudy (BLEU) score of 33.57, a Rank-based Intuitive Bilingual Evaluation Score (RIBES) of 0.754141, and an Adequacy-Fluency Metrics (AMFM) score of 0.787320; for the evaluation test data, it achieves BLEU, RIBES, and AMFM scores of 40.51, 0.803208, and 0.820980, respectively, for English to Hindi translation.
@inproceedings{laskar-etal-2020-multimodal,
  title     = {Multimodal Neural Machine Translation for {E}nglish to {H}indi},
  author    = {Laskar, Sahinur Rahman and Khilji, Abdullah Faiz Ur Rahman and Pakray, Partha and Bandyopadhyay, Sivaji},
  booktitle = {Proceedings of the 7th Workshop on Asian Translation},
  month     = dec,
  year      = {2020},
  address   = {Suzhou, China},
  publisher = {Association for Computational Linguistics},
  url       = {https://www.aclweb.org/anthology/2020.wat-1.11},
  pages     = {109--113},
  pdflink   = {https://www.aclweb.org/anthology/2020.wat-1.11.pdf},
  html      = {https://www.aclweb.org/anthology/2020.wat-1.11/}
}
-
Zero-Shot Neural Machine Translation: Russian-Hindi @LoResMT 2020. Laskar, Sahinur Rahman; Khilji, Abdullah Faiz Ur Rahman; Pakray, Partha; and Bandyopadhyay, Sivaji. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages. [Abstract] [BibTeX] [HTML] [PDF]
Neural machine translation (NMT), translating from one natural language to another, is a widely accepted approach in the machine translation (MT) community. Although NMT shows remarkable performance on both high and low resource languages, it needs a sufficient training corpus, and the availability of a parallel corpus for low resource language pairs is one of the main challenges in MT. To mitigate this issue, NMT attempts to utilize monolingual corpora to translate low resource language pairs better. The Workshop on Technologies for MT of Low Resource Languages (LoResMT 2020) organized shared tasks on low resource language pair translation using zero-shot NMT, where no parallel corpus is used and only monolingual corpora are allowed. We participated in this shared task, as team CNLP-NITS, for the Russian-Hindi language pair. We used masked sequence to sequence pre-training for language generation (MASS) with only monolingual corpora, following the unsupervised NMT architecture. The evaluated results declared at the LoResMT 2020 shared task report that our system achieves a bilingual evaluation understudy (BLEU) score of 0.59, a precision of 3.43, a recall of 5.48, an F-measure of 4.22, and a rank-based intuitive bilingual evaluation score (RIBES) of 0.180147 for Russian to Hindi translation; for Hindi to Russian translation, we achieve BLEU, precision, recall, F-measure, and RIBES scores of 1.11, 4.72, 4.41, 4.56, and 0.026842, respectively.
@inproceedings{laskar-etal-2020-zero,
  title     = {Zero-Shot Neural Machine Translation: {R}ussian-{H}indi @{L}o{R}es{MT} 2020},
  author    = {Laskar, Sahinur Rahman and Khilji, Abdullah Faiz Ur Rahman and Pakray, Partha and Bandyopadhyay, Sivaji},
  booktitle = {Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages},
  month     = dec,
  year      = {2020},
  address   = {Suzhou, China},
  publisher = {Association for Computational Linguistics},
  url       = {https://www.aclweb.org/anthology/2020.loresmt-1.5},
  pages     = {38--42},
  pdflink   = {https://www.aclweb.org/anthology/2020.loresmt-1.5.pdf},
  html      = {https://www.aclweb.org/anthology/2020.loresmt-1.5/}
}
-
A Hybrid Classification Approach using Topic Modeling and Graph Convolution Networks. Singh, Thoudam Doren; Divyansha; Singh, Apoorva Vikram; and Khilji, Abdullah Faiz Ur Rahman. In 2020 International Conference on Computational Performance Evaluation (ComPE). [Abstract] [BibTeX] [HTML]
Text classification has become a key operation in various natural language processing tasks, and the efficiency of most classification algorithms depends predominantly on the quality of the input features. In this work, we propose a novel multi-class text classification technique that harvests features from two distinct feature extraction methods. First, a structured heterogeneous text graph, built from document-word relations and word co-occurrences, is leveraged using a Graph Convolution Network (GCN). Second, the documents are topic modeled, and the document-topic scores are used as features in the classification model. The graph is constructed using Point-Wise Mutual Information (PMI) between pairs of co-occurring words for word-word edges and the Term Frequency-Inverse Document Frequency (TF-IDF) score for document-word edges. Experimentation reveals that our text classification model outperforms existing techniques on five benchmark text classification data sets.
@inproceedings{singhhybrid,
  author    = {Singh, Thoudam Doren and Divyansha and Singh, Apoorva Vikram and Khilji, Abdullah Faiz Ur Rahman},
  booktitle = {2020 International Conference on Computational Performance Evaluation (ComPE)},
  title     = {A Hybrid Classification Approach using Topic Modeling and Graph Convolution Networks},
  year      = {2020},
  month     = jul,
  pages     = {285-289},
  doi       = {10.1109/ComPE49325.2020.9200037},
  html      = {https://ieeexplore.ieee.org/document/9200037}
}
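The abstract above describes a heterogeneous text graph with TF-IDF document-word edges and PMI word-word edges, in the spirit of TextGCN. The sketch below assembles only those edge weights on a toy corpus, assuming scikit-learn; the GCN, the topic-model features, and any windowed co-occurrence counting used in the paper are omitted.

# Sketch of the edge weights for a heterogeneous text graph: TF-IDF for document-word
# edges, PMI for word-word edges. Toy corpus; not the paper's data or full pipeline.
from collections import Counter
from itertools import combinations
from math import log

from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "graph convolution networks classify documents",
    "topic models describe documents with topics",
    "graph models and topic models can be combined",
]
tokenized = [d.split() for d in docs]
n_docs = len(docs)

# Document-word edge weights: TF-IDF scores.
vectorizer = TfidfVectorizer()
doc_word = vectorizer.fit_transform(docs).toarray()

# Word-word edge weights: PMI from per-document co-occurrence (a sliding window is also common).
word_counts = Counter(w for doc in tokenized for w in set(doc))
pair_counts = Counter(frozenset(p) for doc in tokenized for p in combinations(set(doc), 2))

def pmi(w1, w2):
    joint = pair_counts[frozenset((w1, w2))] / n_docs
    if joint == 0:
        return 0.0
    marginal = (word_counts[w1] / n_docs) * (word_counts[w2] / n_docs)
    return max(log(joint / marginal), 0.0)  # keep only positive PMI, as in TextGCN-style graphs

print(doc_word.shape)                          # (number of documents, vocabulary size)
print(round(pmi("graph", "convolution"), 3))   # positive PMI (about 0.405), so this word pair gets an edge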
-
Urdu Fake News Detection using Generalized Autoregressors. Khilji, Abdullah Faiz Ur Rahman; Laskar, Sahinur Rahman; Pakray, Partha; and Bandyopadhyay, Sivaji. In The 2020 Fake News Detection in the Urdu Language Task, Forum for Information Retrieval Evaluation 2020 (accepted, publication due). [BibTeX]
@inproceedings{khiljiurdu,
  title     = {Urdu Fake News Detection using Generalized Autoregressors},
  author    = {Khilji, Abdullah Faiz Ur Rahman and Laskar, Sahinur Rahman and Pakray, Partha and Bandyopadhyay, Sivaji},
  booktitle = {The 2020 Fake News Detection in the Urdu Language Task, Forum for Information Retrieval Evaluation 2020},
  year      = {2020},
  note      = {(in press)}
}
-
Debunking Fake News by Leveraging Speaker Credibility and BERT Based Model. Singh, Thoudam Doren; Divyansha; Singh, Apoorva Vikram; Sachan, Anubahv; and Khilji, Abdullah Faiz Ur Rahman. In IEEE/WIC/ACM International Joint Conference On Web Intelligence And Intelligent Agent Technology (WI-IAT ’20) (accepted, publication due). [BibTeX]
@inproceedings{singhdebunking,
  author    = {Singh, Thoudam Doren and Divyansha and Singh, Apoorva Vikram and Sachan, Anubahv and Khilji, Abdullah Faiz Ur Rahman},
  booktitle = {IEEE/WIC/ACM International Joint Conference On Web Intelligence And Intelligent Agent Technology, (WI-IAT '20)},
  title     = {Debunking Fake News by Leveraging Speaker Credibility and BERT Based Model},
  year      = {2020},
  note      = {(in press)}
}
preprints
2020
-
Seq2Seq and Joint Learning Based Unix Command Line Prediction System. Singh, Thoudam Doren; Khilji, Abdullah Faiz Ur Rahman; Divyansha; Singh, Apoorva Vikram; Thokchom, Surmila; and Bandyopadhyay, Sivaji. arXiv:2006.11558. [BibTeX]
@misc{singh2020seq2seq,
  title         = {Seq2Seq and Joint Learning Based Unix Command Line Prediction System},
  author        = {Singh, Thoudam Doren and Khilji, Abdullah Faiz Ur Rahman and Divyansha and Singh, Apoorva Vikram and Thokchom, Surmila and Bandyopadhyay, Sivaji},
  year          = {2020},
  eprint        = {2006.11558},
  archiveprefix = {arXiv},
  primaryclass  = {cs.CL}
}