Software engineer specialising in computer vision, machine learning & optimisation. Hands-on experience with robot hardware & software development. Develops ML algorithms as tools at scale & for real-time applications. Has undertaken projects ranging from robotics at CERN to deep neural networks deployed on Android & the web. Published & presented a paper on a GPU-based ML algorithm (ADMM) for resource-allocation problems at ACC19. Actively contributes to open-source AI/ML tools & frameworks.
Perception engineer developing deep learning models for problems ranging from monocular depth estimation to object recognition (cameras & lidar), deployable as distributed tools & on real-time automotive hardware. Actively reads & evaluates state-of-the-art research papers to implement improvements to the algorithms currently in use.
Worked under Prof. Andrea Vedaldi on the challenging problem of estimating depth from a single camera with deep convolutional neural networks (supervised/unsupervised structure from motion (SfM) using deep encoder-decoder architectures)
Responsible for devising a software platform for use throughout JLR (from design to functional safety) to aid in deciding the required sensor set and sensor positions on different vehicles
Worked with a team of engineers from Oxford and CERN to address high-power beam losses in the LHC caused by interactions with unknown particles, a problem costing significant money and research time. Drawing on large amounts of data from previous experiments, developed and modelled a solution that was presented at CERN to a panel of scientists and engineers; recommendations were considered for further development work
Specialised in optimisation, control, computer vision and machine learning.
• Investigated parallel programming on NVIDIA GPUs (CUDA C/C++) with Model Predictive Control and machine learning techniques (ADMM) to optimise energy usage in HEVs. Explored the possibility of using V2X communication (traffic data, etc.) to learn from past data and mitigate uncertainty using chance-constrained optimisation.
• Developed a novel, robust GPU-based ADMM algorithm able to handle large uncertainties; results showed a speed-up of over 18× on GPUs vs CPUs, making a previously impractical scenario-based approach to learning from large amounts of past data implementable. Academic paper discussing the algorithm and results accepted at ACC19.
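The bullets above describe a GPU-based ADMM solver; as a minimal illustration of why ADMM maps well to GPUs, the sketch below solves a toy lasso problem. This is not the thesis algorithm (that was a CUDA variant for MPC energy management), just the generic ADMM structure: a fixed linear solve plus an elementwise proximal step, both of which parallelise well. The problem data and parameters here are made-up example values.

```python
# Minimal ADMM sketch for the lasso problem:
#   minimize 0.5*||A x - b||^2 + lam*||x||_1
# Illustrative only -- a NumPy stand-in for the GPU (CUDA) version.
import numpy as np

def soft_threshold(v, k):
    # Elementwise proximal operator of the l1 norm (trivially parallel).
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    # The x-update solves the same linear system every iteration,
    # so the matrix can be formed (or factored) once up front.
    M = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))
        z = soft_threshold(x + u, lam / rho)
        u = u + x - z
    return z

# Tiny synthetic example: true solution is approximately [1, 0].
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 0.0, 1.0])
x_hat = admm_lasso(A, b)
```

On a GPU, the elementwise z- and u-updates and the batched linear solves are what deliver the kind of speed-up reported above when many scenarios are solved in parallel.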
• Utilised Google’s MobileNet to build an image classifier for the Nexar challenge, trained on images (NEXET dataset) of different vehicle types (cars, buses, etc.). The model was deployed on an Android phone using TensorFlow.
Image classifier | MobileNet model used to create an image classifier for the Nexar challenge, trained on rear-view images of different vehicle types (cars, buses, etc.)
Depth | Monocular depth estimation from a single image using deep encoder-decoder models, projected as a point cloud with three.js
Style | Real-time image stylisation using optimised deep style transfer models
Profile | Create your own stylised profile picture using ML! (deep semantic segmentation/style transfer models)
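The Depth project above projects a predicted depth map into a point cloud for a three.js viewer. A minimal sketch of that back-projection step, assuming a simple pinhole camera model, is shown below; the intrinsics (fx, fy, cx, cy) and the dummy depth map are illustrative values, not those of any particular camera or model.

```python
# Back-project a dense depth map to 3D points via the pinhole model:
#   X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth(u, v)
# The resulting (N, 3) array can be loaded into a three.js BufferGeometry.
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    # depth: (H, W) array of metric depths; returns (H*W, 3) XYZ points.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.ones((4, 4))  # dummy 4x4 depth map, 1 m everywhere
pts = depth_to_pointcloud(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```

The pixel at the principal point (cx, cy) lands on the optical axis (X = Y = 0), which is a quick sanity check when wiring this up to a viewer.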