Tesla’s much-debated vision-based full self-driving approach is finally yielding results.

Like many things about Elon Musk, Tesla’s approach to achieving autonomous driving is polarizing. Bucking the map-based trend set by industry veterans such as Waymo, Tesla opted to dedicate its resources to a vision-based approach instead. This involves a lot of hard, tedious work on Tesla’s part, but today, there are indications that the company’s controversial strategy is finally paying off.

In a recent talk, Tesla AI Director Andrej Karpathy discussed the key differences between Waymo’s map-based approach and Tesla’s camera-based strategy. According to Karpathy, Waymo’s use of pre-mapped data and LiDAR makes scaling difficult, since each vehicle’s autonomous capabilities are practically tied to a geofenced area. Tesla’s vision-based approach, which relies on cameras and artificial intelligence, carries no such restriction. This means that Autopilot and FSD improvements can be rolled out to the entire fleet, and they would function anywhere.

This rather ambitious plan for Tesla’s full self-driving system has drawn a lot of skepticism in the past, with critics arguing that map-based FSD is the way to go. Tesla, in response, dug in its heels and doubled down on its vision-based initiative. As a result, Autopilot improvements and the rollout of FSD features have taken a lot of time, particularly since training the neural networks that recognize objects and driving behavior on the road requires massive amounts of real-world data.

An autonomous Tesla Model 3. (Credit: Tesla)

Perhaps this is the reason why Elon Musk’s estimates for a “feature complete” version of Full Self-Driving have been moved to this year, or why Smart Summon was delayed for months before its release. Yet recent software updates and features from Tesla suggest that the pace of improvement for Autopilot and Full Self-Driving is accelerating.

Take Traffic Light and Stop Sign Control, for example. The feature made its debut this April, and despite travel still being somewhat restricted due to the pandemic, Tesla was able to gather enough data from its fleet that the company is now looking to remove green light confirmation for the feature. Less than two months separated the feature’s initial release from this upcoming update, which is significant considering that it involves inner-city driving, one of the more challenging aspects of FSD.

Part of this may be due to the fact that Tesla’s fleet is simply far larger now. With Model 3 production in full swing in both the United States and China, and with the Model Y now ramping, Tesla’s high-volume vehicles are reaching more and more customers. Each of these cars helps gather training data for the company’s neural networks, which, in turn, aids in refining features like Traffic Light and Stop Sign Control. What’s interesting to note is that Tesla’s fleet is not even gathering data at its full potential, since several areas across the globe are still recovering from the effects of the coronavirus.

Tesla’s Traffic Light and Stop Sign Control in action. (Credit: YouTube | Dirty Tesla)

Once Tesla rolls out a version of Traffic Light and Stop Sign Control without green light confirmation, the company would be just one step away from a feature-complete FSD suite. The company’s US vehicle configurator reflects this, with the FSD page listing “Autosteer on city streets” as the lone Full Self-Driving feature still dubbed “upcoming.” Considering the pace at which improvements are being rolled out to Autopilot and FSD, it would not be surprising if Autosteer on city streets is released soon as well.

It should be noted that every Autopilot improvement and FSD feature is the result of an enormous amount of neural network training. Neural nets require massive amounts of data to learn driving behavior well, and it appears that Tesla has now reached a point where its fleet can provide ample training data within a relatively short timeframe. Tesla seems to be quite optimistic about this too, as hinted at by the company’s recent update, which included the option to activate the Model 3 and Model Y’s cabin camera.

Elon Musk has stated in the past that the Model 3 and Model Y’s cabin camera will be used primarily when the Robotaxi Network goes online. Together with the quick improvement of features like Traffic Light and Stop Sign Control, it definitely appears that Tesla’s data-intensive, controversial, vision-based approach to full self-driving is finally coming together. And just like everything else the company has achieved so far, it is the result of very hard, very tedious work.
