Welcome to Pods and Prompts
Hi everyone 👋,

Welcome to my blog, Pods and Prompts! This is the space where I’ll be sharing my thoughts and experiences on Generative AI, Large Language Models (LLMs), and the infrastructure that powers AI applications. Everything I write here reflects my personal views and is in no way related to my past or present employers.

A little about me: I’m Nithin Anil, an AI/ML engineer with over 13 years in the software industry. Over the years, I’ve worked with AWS, Azure, and GCP, built and managed large-scale distributed big data systems, and designed microservice-based architectures.

For the past couple of years, my focus has been on Generative AI applications, and one of my strengths is managing infrastructure for highly available, low-latency AI systems. I'm also deeply experienced in Kubernetes and Infrastructure as Code, and designing systems for millisecond or even microsecond latency is one of my favourite challenges.

Beyond work, I'm passionate about teaching and mentoring, and I enjoy giving talks and sessions on emerging technologies to students and software engineers. Fun fact: early in my career, I was an educator at the Infosys Global Education Centre in Mysore. Someday, I'd love to pivot fully into teaching. If you'd like me to deliver a technical session on any of my areas of expertise, feel free to reach out!

On the personal side, I’m a big fan of TV shows and movies, especially the feel-good kind. The Big Bang Theory and The Office are all-time favourites, and recently, I finished watching Foundation on Apple TV. Lately, I’ve also gotten into vampire stories, thanks to Lokah: Chapter 1 - Chandra. Right now, I’m reading The Vampire Lestat by Anne Rice. If you have recommendations in this space (or just good feel-good shows), I’d love to hear them!

I also keep a close eye on technology and AI news, so expect me to sprinkle some of that here, too.

Thanks for stopping by, and I hope you'll enjoy reading my posts as much as I'll enjoy writing them.
