Publications & Reports

Academic Research

2019-2020

During my academic years (2019-2020), I published peer-reviewed research in Natural Language Processing and data mining. This work demonstrates my ability to tackle complex technical problems, conduct rigorous analysis, and communicate findings clearly; these skills directly inform my approach to backend architecture and system design today.

Published Research

1. Deep Learning Based Question Answering System in Bengali
Journal of Information and Telecommunication, Volume 5, Issue 2 (2021), Taylor & Francis
DOI: 10.1080/24751839.2020.1833136
Published: November 23, 2020

2. Soil Analysis and Unconfined Compression Test Study Using Data Mining Techniques
International Conference on Computational Collective Intelligence (ICCCI 2020), Springer CCIS Series
DOI: 10.1007/978-3-030-63119-2_4
Published: November 19, 2020

3. Analysis of Soil and Various Geo-technical Properties using Data Mining Techniques
2020 IEEE 10th International Conference on Intelligent Systems (IS)
DOI: 10.1109/IS48319.2020.9199941
Published: September 18, 2020

Technical Reports & Research Projects

1. Visualizing Bangla Word Embeddings using BERT
Abstract

BERT (Bidirectional Encoder Representations from Transformers) is a masked language model that has had a significant impact on the Machine Learning community by achieving state-of-the-art results across a wide range of NLP problems, including Question Answering (SQuAD v1.1) and Natural Language Inference (MNLI). In this project, we use BERT to extract features and word embedding vectors from Bangla text input.
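Feature extraction from an encoder like BERT typically ends with pooling per-token hidden states into one fixed-size vector per sentence. A minimal sketch of that mean-pooling step, using toy numbers in place of real model outputs (the actual project's pooling strategy and dimensions are not shown here):

```python
import numpy as np

def mean_pool(hidden_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average the token vectors of real (non-padding) tokens.

    hidden_states: (seq_len, hidden_dim) token embeddings from the encoder.
    attention_mask: (seq_len,) 1 for real tokens, 0 for padding.
    """
    mask = attention_mask.astype(float)[:, None]   # (seq_len, 1)
    summed = (hidden_states * mask).sum(axis=0)    # sum over real tokens only
    counts = mask.sum()                            # number of real tokens
    return summed / counts

# Toy example: 4 tokens, hidden size 3, last token is padding.
states = np.array([[1.0, 2.0, 3.0],
                   [3.0, 2.0, 1.0],
                   [2.0, 2.0, 2.0],
                   [9.0, 9.0, 9.0]])  # padding row, must be ignored
mask = np.array([1, 1, 1, 0])
print(mean_pool(states, mask))  # [2. 2. 2.]
```

Masking before averaging matters: without it, padding rows would pull the sentence vector toward meaningless values.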

2. Text Summarization on COVID-19 Articles
Abstract

Summarization has long been a challenging area in Natural Language Processing, largely because it has traditionally required human intervention. In this study, we use a denoising autoencoder, BART, to carry out abstractive summarization on a dataset of COVID-19 articles, using ROUGE scores as the evaluation metric. The primary focus is medical articles on COVID-19, with the aim of supporting research during the pandemic.
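ROUGE, the evaluation metric mentioned above, measures n-gram overlap between a generated summary and a reference. A minimal sketch of ROUGE-1 F1 (unigram variant only; the study's actual evaluation likely used a full ROUGE implementation):

```python
from collections import Counter

def rouge_1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: unigram overlap between a candidate summary and a reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge_1("the vaccine reduced severe cases",
                "the vaccine greatly reduced severe covid cases")
print(round(score, 3))  # 0.833
```

Precision rewards summaries that say nothing extraneous, recall rewards covering the reference; F1 balances the two.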

3. Analyzing Automatic Text Summarization: Where are We and What's Next
Abstract

Summarization is the task of condensing a source document into a shorter version that retains its principal information. Significant progress has been achieved recently using NLP to produce high-quality summaries. This paper provides an extensive analysis of trends in text summarization, focusing on its two main kinds: extractive and abstractive summarization. While extractive summarization has seen significant progress with established evaluation metrics like ROUGE, achieving truly high-quality summarization requires advancing abstractive approaches closer to human-like summarization.

4. Techniques Applied to Topic Segmentation
Abstract

Topic segmentation is an important initial step in many natural language processing tasks. It aims to determine the boundaries between topic blocks in a text, aiding semantic analysis. This technique improves information access by detecting topically coherent segments within a document. Once segments are determined, they can be used to create structured representations and improve downstream NLP tasks.
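The boundary-detection idea can be sketched with a simple lexical-cohesion heuristic in the spirit of TextTiling: compare the word overlap of adjacent sentences and place a boundary where similarity drops. This is an illustrative toy (the sentences and threshold are made up), not the method surveyed in the report:

```python
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = sum(v * v for v in a.values()) ** 0.5
    nb = sum(v * v for v in b.values()) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def segment(sentences, threshold=0.3):
    """Return indices i where a topic boundary falls after sentence i."""
    bags = [Counter(s.lower().split()) for s in sentences]
    return [i for i in range(len(bags) - 1)
            if cosine(bags[i], bags[i + 1]) < threshold]

sents = [
    "the soil sample showed high clay content",
    "clay content affects the soil compression strength",
    "the bengali language has rich morphology",
    "morphology makes bengali question answering hard",
]
print(segment(sents))  # [1]
```

Here the lexical overlap collapses between sentences 1 and 2, so the heuristic places the single topic boundary there; real systems smooth similarities over windows rather than comparing single sentences.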


You can find my Resume here!
