Self-Attention
From intuition to Scaled Dot-Product Attention. Part of the Transformers & Attention series, this article explores the core idea behind modern Transformer architectures: self-attention.
Demystifying AI and Machine Learning, One Concept at a Time.
Practical insights into classical machine learning and modern AI, covering NLP & LLMs, Computer Vision, MLOps/LLMOps, and AWS deployment.
This article kicks off the Transformers & Attention series by exploring why attention emerged and how it became the foundation of modern Transformer-based models like BERT and GPT.
From encoders to masked attention and decoders, this article breaks down the key building blocks of the Transformer architecture, the foundation of modern NLP and today's large language models.
Computer vision pipelines for segmentation using modern deep learning architectures.
Forecasting fundamentals and exploratory analysis using classic time-series datasets.
Core topics that power ML: linear algebra, probability & statistics, calculus, numerical methods, and optimization.
Understand the Normal Distribution through intuition and mathematics, and see why it plays a central role in machine learning.
Introduces random variables through simple experiments like coin tosses and dice rolls, building a clear foundation for understanding discrete and continuous random variables.
This article builds intuition for continuous probability distributions by contrasting them with discrete cases and introducing probability density functions. It explains how probabilities over intervals emerge naturally from real-world measurements and variability.
Welcome, everyone!
I am Ramendra, and I love being called Rami.
I am a passionate learner and educator, and I created this blog to share knowledge across Mathematics, Artificial Intelligence, and Machine Learning. The content here spans a broad spectrum, from mathematical and statistical foundations to modern AI, Machine Learning, and Data Science applications.
In today's world, intelligent systems with heavy mathematics under the hood quietly power much of what we interact with every day. I believe that understanding these foundations, whether you come from a technical or non-technical background, empowers you to better navigate and shape the future.
Through this blog, I aim to demystify AI and Machine Learning, dispelling the notion that they are complex or accessible only to a select few. With curiosity, consistency, and the right mindset, anyone can begin this journey.
The real magic happens when mathematics meets data, algorithms, and computational power, from personal machines to scalable cloud computing platforms. This is where ideas evolve into meaningful, real-world impact.
I am a Data Scientist currently working at Volkswagen (Scania), focused on real-world AI solutions, from classical ML to transformer-based NLP (BERT/GPT variants), RAG, and production MLOps/LLMOps on AWS.
Multi-modal RAG pipelines, AWS Bedrock agents, and document processing with OCR & multilingual NLP.
Machine Learning, MLOps, Deep Learning, CV, NLP (BERT, GPT, LLMs) | Python: NumPy, Pandas, Matplotlib, Scikit-Learn, TensorFlow/Keras, PyTorch | AWS: SageMaker, Bedrock, EC2, ECR-ECS-Fargate
Delhi NCR, India
karna.ramenk@gmail.com