# 09-job-board
Posting a referral for a Machine Learning Engineer candidate (Umar Jamil) looking for roles in Europe or remote. Feel free to email him directly. Here is a bit about him:
I am an Italian software engineer specializing in machine learning, with over a decade of work experience, both as an engineer and as a manager.
Throughout my career, I have co-founded the mobile app development company SUAPP Studio, with 3 apps published, and was part of the founding team of Shapps, a fintech startup with approx. 120k USD in seed funding. In 2023, I grew my YouTube channel to 10,000+ subscribers by teaching machine learning and implementing deep learning models from scratch using only Python and PyTorch. I have published open-source implementations of deep learning models such as LLaMA, Stable Diffusion, the Transformer model, and BERT, explaining their inner workings on my channel.
I started my career in 2009 in Italy as a software engineer for web (ASP.NET) and mobile applications (iOS, Xamarin). Since 2018, I have been managing teams of software engineers in China, building solutions for industrial automation and robotics using technologies like .NET, C#, SQL Server, and Oracle.
## Open Source Contributions
• Running an online community on YouTube with 10k+ subscribers by teaching deep learning and machine learning concepts. My videos have more than 250,000 views, and my repositories have hundreds of forks and stars on GitHub. My slides are used by universities around the world for teaching.
• Made videos about the architecture of the Transformer model, Variational Autoencoder, Diffusion Models, Segment Anything, Stable Diffusion, LongNet, LoRA, LLaMA, BERT, Retrieval Augmented Generation, Vector DBs, and more.
• Implemented Large Language Models like LLaMA 2 from scratch using only Python and PyTorch, while teaching concepts like the KV-Cache, Rotary Positional Encoding, and Grouped Query Attention.
• Implemented Stable Diffusion from scratch using only PyTorch, while teaching the DDPM formulation, text-to-image, image-to-image and inpainting, Classifier-Free Guidance, and noise schedulers.
• Implemented a Transformer model for language translation using only PyTorch, while teaching the most common inference methods: greedy decoding, beam search, top-k, and top-p sampling. The model was also trained on a cluster of GPUs using Distributed Data Parallel training with PyTorch.
• Implemented model quantization techniques, both Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT), using PyTorch.
• Developed a pipeline to build datasets for training lip-reading models using Whisper and WhisperX, testing it on a Transformer-based and a Bi-GRU model with CTC loss.