Why Tesla replaced Python with C++ for its Autopilot Software

In 2020, Elon Musk tweeted: “Our NN (Neural Net) is written in Python for rapid iteration, then converted to C++/C/raw CUDA for speed (inference).”

In this tweet, Elon was saying that the neural network at the heart of Tesla Autopilot's AI is largely written in Python for rapid iteration, meaning rapid development. This again shows that Python is very fast to develop in, thanks to its simplicity and its AI ecosystem. The inference part of the Autopilot software, however, has been converted to C++.

So what is inference? During training, an AI system learns patterns and steadily strengthens its model.

But every AI system also has to put that learning into execution, running the model in "live mode." That live execution is inference.

This live mode requires a lot of processing speed, because the learned model must be applied instantly. For this engine, Tesla uses C++, because C++ is extremely fast in execution.
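To make the speed argument concrete, here is a minimal sketch (not Tesla's actual code; all names and values are made up for illustration) of the kind of work an inference engine does: one fully connected neural-network layer with a ReLU activation, written as a tight compiled loop.

```cpp
#include <cstddef>
#include <vector>

// Illustrative only: a single fully connected layer followed by ReLU,
// the kind of tight numeric loop a compiled inference engine executes.
std::vector<float> dense_relu(const std::vector<std::vector<float>>& weights,
                              const std::vector<float>& bias,
                              const std::vector<float>& input) {
    std::vector<float> out(weights.size(), 0.0f);
    for (std::size_t i = 0; i < weights.size(); ++i) {
        float acc = bias[i];
        for (std::size_t j = 0; j < input.size(); ++j) {
            acc += weights[i][j] * input[j];  // multiply-accumulate
        }
        out[i] = acc > 0.0f ? acc : 0.0f;     // ReLU clamps negatives to zero
    }
    return out;
}
```

Compiled ahead of time, a loop like this runs with no interpreter overhead and can be vectorized by the compiler, which is the core reason inference engines are written in C++/C/CUDA rather than Python.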

Python, on the other hand, is mostly interpreted, which makes it a poor choice for inference.

Training and the other parts are still in Python: training can afford to be slower without much impact, and Python's AI ecosystem is one of the largest in the world, which makes it a natural choice for those parts.

Where CUDA is used: CUDA is NVIDIA's platform for programming the GPU directly.

Tesla builds its own systems, whether that is its Python AI tooling or its C++ code.

Tesla Autopilot is often a topic of curiosity among programmers, and this tweet gave some insight into which languages are involved in its development.

Tesla introduced Autopilot in 2014, but it reached the market only in 2015.

One of Tesla's leading developers, Andrej Karpathy, was responsible for replacing critical parts of the Autopilot inference stack.

