Welcome to Kaytek's Easy AI (Artificial Intelligence) Landing Page
Artificial Intelligence is nothing but Augmented Intelligence
Published Articles on AI / ML (Machine Learning) / DL (Deep Learning)
Understanding the Resnet (Residual Network) Block in Convolutional Neural Networks (CNN) for AI - Computer Vision
Convolutional Neural Networks (CNNs) have long been the popular choice for solving Artificial Intelligence (AI) Computer Vision problems. Within CNNs, a breakthrough came a few years back when the Resnet (Residual Network) Block was introduced as an architectural innovation to reduce the problems caused by adding extra layers to the network architecture.
Unfortunately, the original Resnet technical paper did not contain an easy explanation of the terminology used in its Resnet Block diagram (Figure 2). To help ease understanding, an article Resnet Block Explanation with a Terminology Deep Dive, along with an accompanying presentation, has been released on Medium. - 23rd October 2018.
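The core idea of the Resnet Block can be sketched in a few lines: the block computes some function F(x) of its input and then adds the original input x back in via a skip connection, so extra layers can at worst learn to do nothing. A minimal NumPy sketch, using plain linear layers as a stand-in for the paper's convolutional layers (all weight names here are illustrative, not from the paper):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def residual_block(x, W1, b1, W2, b2):
    # F(x): two layers with a ReLU in between (stand-in for the conv layers)
    out = relu(x @ W1 + b1)
    out = out @ W2 + b2
    # The skip connection: add the original input back before the final ReLU
    return relu(out + x)

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 4))
W1 = rng.standard_normal((4, 4)); b1 = np.zeros(4)
W2 = rng.standard_normal((4, 4)); b2 = np.zeros(4)

y = residual_block(x, W1, b1, W2, b2)
```

Note the key property: if the weighted layers contribute nothing (e.g. W2 and b2 are all zeros), the block simply passes relu(x) through unchanged, which is why stacking more such blocks does not degrade what earlier layers have learned.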
Neural Networks are the secret sauce of Artificial Intelligence (AI) - Article (Yet Another) Neural Network Terminology Upto WX + B Stage - 27th September 2018.
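The WX + B stage mentioned in the article title is the pre-activation computation of a single neural network layer: multiply the input X by a weight matrix W and add a bias vector B. A tiny NumPy illustration (the specific numbers are made up for the example):

```python
import numpy as np

# One dense layer up to the WX + B stage (before any activation function).
# W has one row per output neuron and one column per input feature.
W = np.array([[0.5, -1.0],
              [2.0,  0.0]])   # 2 output neurons, 2 input features
x = np.array([3.0, 1.0])      # input vector X
b = np.array([0.1, -0.2])     # bias vector B

z = W @ x + b                 # the WX + B pre-activation
# z = [0.5*3 - 1.0*1 + 0.1,  2.0*3 + 0.0*1 - 0.2] = [0.6, 5.8]
```

An activation function such as ReLU or sigmoid would then be applied to z to produce the layer's output.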
Artificial Intelligence (AI) Maths captures real-world knowledge - Article Entity Embeddings package real world knowledge for Artificial Intelligence (AI) algorithms - 1st August 2018.
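At heart, an entity embedding is just a lookup table that maps each categorical value to a small dense vector which the model learns during training. A minimal sketch of the lookup step, with a made-up category list and randomly initialised (untrained) vectors purely for illustration:

```python
import numpy as np

# Toy entity-embedding table: each category maps to a dense vector.
# In a real model these vectors are learned; here they are random.
categories = ["monday", "tuesday", "saturday"]
cat_to_idx = {c: i for i, c in enumerate(categories)}

embedding_dim = 3
rng = np.random.default_rng(42)
embedding_table = rng.standard_normal((len(categories), embedding_dim))

def embed(category):
    # Lookup: replaces a sparse one-hot encoding with a small dense vector
    return embedding_table[cat_to_idx[category]]

v = embed("saturday")
```

After training, categories that behave similarly (e.g. weekend days in a sales dataset) end up with nearby vectors, which is how the embedding packages real-world knowledge for the algorithm.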
Whatsapp Meets Google Artificial Intelligence (AI) - Article You are obsessed with Whatsapp - analysing more than 700,000 lines of Whatsapp chats using Google AI's Natural Language Cloud Services - 27th March 2018.
Approaches to AI / ML / DL Education - Specific Technology versus Generic Learnings
One of the valid concerns expressed by experts is the choice of technologies to focus on for people entering this field. In one specific podcast, the discussion was on Pytorch versus Tensorflow.
There is already an abundance of AI / ML / DL educational offerings and technologies out there. The pace of innovation will not slow down.
The learning approach should always be to extract generic lessons from specific AI / ML / DL technologies, so that future learning and re-learning is easier and faster when the next AI / ML / DL tool or technology arrives.
It may be much tougher and take much longer, but the results in terms of long-term conceptual learning will be worth the effort. DL giants such as Geoffrey Hinton spent years toiling away before getting meaningful results. The impatience and haste shown by many beginners to the field reminds one of the kindergarten story of The 3 Little Pigs.
Thoughts on FastAI Courses - The 'Top Down' Versus 'Bottoms Up' Approach
Jeremy Howard, the founder of FastAI, with over 30 years of ML and coding experience, is obviously well qualified. He has positioned the course as different from all the other courses in the market via a 'Top Down' approach, which is code-heavy and digs into the Maths only when required, on a need-to-know basis. However, 'being different' need not mean 'being easier to understand'.
For beginners, the 'Top Down' approach parachutes learners straight to the peak of Mount Everest, without their having struggled to the top. From the peak, we get a great view, standing on top of the mountain of FastAI and Pytorch built on his three-plus decades of experience. The results of FastAI, as demonstrated by consistently world-class contemporary benchmarks, are indeed remarkable, and they give us an immediate starting point for using this world-class library.
We are dazzled by a 'Shock & Awe' feeling. However, we have not really climbed our way to the top: we still need to do a detailed deep dive into the code. This means that a 'Bottoms Up' approach will help people who are currently struggling to get their basics and foundations right. The hierarchy of code understanding, on a bottoms-up basis, should follow this sequence: Python - Numpy - Pytorch - FastAI.
It is interesting that Jeremy Howard has planned the next version of fast.ai part 2 to be a 'Bottoms Up' approach.
Last updated on 24th October 2018.
Created on 17th October 2018.