Elon has tricked himself into thinking the automated statistics machine is capable of human level cognition. He thinks cars will only need eyeballs like humans have and that things like directly measuring what's physically in front of you and comparing it to a 3D point cloud scan is useless.
Welp, he's wrong. He won't admit it. More people will have to die and/or Tesla will have to face bankruptcy before they fire him and start adding lidar (etc) back in.
Real sad because by then they probably won't have the cash to pay for the insane upfront investment that Google has been plowing into this for 16 years now.
Back when they started, lidar cost a lot of money. They could not have equipped all cars with it.
The issue came when he promised every car would become a robotaxi. This means he either has to retrofit them all with lidar, or solve it with the current sensor set. It might be ego as well, but adding lidar will also expose them to class action suits.
The promise that contributed to the soaring valuation, now looks like a curse that stops him from changing anything. It feels a bit poetic.
> Back when they started, lidar cost a lot of money. They could not have equipped all cars with it.
But radar and ultrasound did not cost a lot and he got rid of those too, suggesting it was more than cost that made him go vision only.
Heck, they even use vision for rain sensing instead of the cheap and more effective sensor everyone else uses (just some infrared LEDs and photodiodes that measure the change in internal reflection at the outer surface of the windshield when it gets wet and the critical angle changes).
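For the curious, that rain sensor is just total internal reflection and Snell's law. A minimal sketch of the idea - the refractive indices and the 45° emitter angle are assumed typical values, not any supplier's spec:

    import math

    # Total internal reflection at the windshield's outer surface: an IR beam
    # inside the glass is reflected back to the photodiode only if it hits the
    # surface at more than the critical angle for that interface.
    def critical_angle_deg(n_glass: float, n_outside: float) -> float:
        return math.degrees(math.asin(n_outside / n_glass))

    N_GLASS, N_AIR, N_WATER = 1.52, 1.00, 1.33   # assumed typical indices

    dry = critical_angle_deg(N_GLASS, N_AIR)     # ~41 degrees
    wet = critical_angle_deg(N_GLASS, N_WATER)   # ~61 degrees

    beam = 45.0  # assumed emitter angle, chosen between the two critical angles
    print(f"dry: critical angle {dry:.1f} deg, beam reflected: {beam > dry}")
    print(f"wet: critical angle {wet:.1f} deg, beam reflected: {beam > wet}")
    # Dry glass: 45 > 41, total internal reflection, the photodiode sees the beam.
    # Wet glass: 45 < 61, light escapes into the water film, the signal drops -> rain.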
I don't want to defend Tesla, but ... the problem with LIDAR is a human problem. The real issue is that LIDAR has fundamentally different limitations than human senses do, and this makes any decision based on it extremely unpredictable ... and humans act on predictions.
A LIDAR can get near-exact distances between objects, with error margins of something like 0.2% even 100 m away. It takes an absolute expert for a human to accurately judge the distance to an object even 5 meters away. You can see this in the YouTube videos of the "Tesla beep". It used to be the case that if the Tesla autopilot judged a collision between two objects inevitable, it emitted a characteristic beep.
The result was that this beep would go off ... the humans in the car know it means a crash is imminent, but can't tell what's going on or where the crash is going to happen. Then for 2 seconds "nothing" happens, and then cars crash, usually 20-30 meters in front of the Tesla. Usually the car then stops safely. Humans report that this is somewhere between creepy and a horror-like experience.
But worse yet is when the reverse happens. Distance judgement is the strength of LIDARs, but they have weaknesses that humans don't have: angular resolution, especially in 3D. Unlike human eyes, a LIDAR sees nothing in between its pixels, and because the 3D world is so big, even 2 meters away the distance between pixels is already in the multiple-centimeter range. Think of a lidar as a ball with infinitely thin laser beams coming out of it; the pixels give you the distance at which each beam hits something. Because of how waves work, that means any object that is IN ONE PLANE smaller than 5 centimeters is totally invisible to lidar at 2 meters distance. At 10 meters the gap is already over 25 cm. You know what object is smaller than 25 cm in one plane? A human standing up or walking, never mind a child. If you look at the sensor data you see them appear and disappear, exactly the way you'd expect sensor noise to behave.
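To put rough numbers on that gap - a back-of-the-envelope sketch, where the ~1.4° beam spacing is an assumption picked to match the figures above, not any particular unit's spec:

    import math

    def beam_gap_m(distance_m: float, angular_spacing_deg: float) -> float:
        """Approximate gap between adjacent lidar returns at a given range."""
        theta = math.radians(angular_spacing_deg)
        return 2 * distance_m * math.tan(theta / 2)  # ~ distance * theta for small angles

    SPACING_DEG = 1.4  # assumed angular spacing between adjacent beams
    for d in (2, 10, 20):
        print(f"{d:>3} m: gap between beams ~ {beam_gap_m(d, SPACING_DEG) * 100:.0f} cm")
    # ~5 cm at 2 m, ~24 cm at 10 m, ~49 cm at 20 m: anything thinner than the
    # gap (in that plane) can fall between beams on a given scan and simply not return.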
You can disguise this limitation by purposefully putting your lidar at an angle, but that angle can't be very big.
The net effect of this limitation is that a LIDAR doesn't miss a small dog at 20 meters, but fails to see a child (or anything roughly pole-shaped, like a traffic sign) at 3 to 5 meters. The same goes for things composed of thin beams without a big reflective surface somewhere ... like a bike. A bike at 5 meters is totally invisible to a LIDAR. Oh, and perhaps even worse, a LIDAR just doesn't see cliffs. It doesn't see staircases going down, or that the surface you're on ends somewhere in front of you. It's strange: a LIDAR that can perfectly track every bird, even at a kilometer's distance, cannot see a child at 5 meters. And walking robots that rely on LIDAR have a very peculiar behavior: they walk into an open door, rather than through it, about 10% of the time. It makes perfect sense if you look at the LIDAR data they see, but it's very weird when you watch it happen.
Worse yet is how humans respond to this. We all know it: how does a human react when they're in a queue and the person in front of them (or the car in front of their car) stops ... and they cannot tell why? We all know what follows is an immediate and very aggressive reaction. Well, you cannot predict what a lidar sees, so robots with lidars constantly end up in that situation. Or, if a lidar robot is attempting to go through a door, you predict it will avoid running into anything. Then the robot hits the wood ... and you hit the robot ... and the person behind you hits you.
Humans and lidars don't work well together.
Wasn't the angular resolution solved by having spinning lidars?
> It takes an absolute expert for a human to accurately judge the distance to an object even 5 meters away.
Huh? The most basic skill of any driver is the ability to see whether you're on a collision course with any other vehicle. I can accurately judge this at distances of at least 50 meters, and I'm likely vastly underestimating that. It is very apparent when it is the case. I can't tell whether the distance between us is 45 or 51 meters, but that is information with zero relevance to anything.
> The result was that this beep would go off ... the humans in the car know it means a crash is imminent, but can't tell what's going on or where the crash is going to happen. Then for 2 seconds "nothing" happens, and then cars crash, usually 20-30 meters in front of the Tesla. Usually the car then stops safely. Humans report that this is somewhere between creepy and a horror-like experience.
This is a non-issue and certainly not horror-like. All you have to do is train yourself to slow down or brake when you hear the beep. And you're trying to paint this extremely useful safety feature as something bad?
> Worse yet is how humans respond to this. We all know it: how does a human react when they're in a queue and the person in front of them (or the car in front of their car) stops ... and they cannot tell why? We all know what follows is an immediate and very aggressive reaction.
What are you trying to say here? If the car in front of me brakes, I brake too. I do not need to know why it braked; I simply brake too, because I have to. It works out fine every time, because I have to drive in such a way that I can stop in time if the car in front of me applies 100% braking at any moment. Basic driving.
Generally, what you're describing as predicting is more accurately called assuming: assuming that things will go the way one wants them to go. I call that sort of driving optimistic: optimistically assuming that the car in front of me will keep going forward, and that there is nothing behind the huge truck blocking my view of the upcoming intersection, so I can freely gas it through.
That mindset is of course wrong; we must drive pessimistically, assuming that any car may apply maximum braking at any time, and that if any part of our line of sight is obstructed, the worst-case scenario is happening behind it: a high-speed object on a collision course that will reveal itself from behind the obstruction at the last second. Therefore, we must slow down when coming around a line-of-sight obstruction.
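That "be able to stop in time" rule is easy to put numbers on. A quick sketch, with assumed (typical, not exact) reaction time and braking deceleration:

    def stopping_distance_m(speed_kmh: float, reaction_s: float = 1.0,
                            decel_ms2: float = 7.0) -> float:
        """Reaction distance plus braking distance for a full stop."""
        v = speed_kmh / 3.6                      # km/h -> m/s
        return v * reaction_s + v * v / (2 * decel_ms2)

    for kmh in (30, 50, 100):
        print(f"{kmh:>3} km/h -> ~{stopping_distance_m(kmh):.0f} m to stop")
    # ~13 m at 30 km/h, ~28 m at 50 km/h, ~83 m at 100 km/h: the gap you keep
    # (or your speed past a blind spot) has to cover at least this distance.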
One would've thought that unproven and potentially dangerous technology like this--self-driving cars--would've required many years of testing before being allowed on public roads.
And yet here we are where the testing grounds are our public roadways and we, the public, are the guinea pigs.
Nothing new under the sun.
https://thevictoriancyclist.wordpress.com/2015/06/21/cycling...
I cut Elon a tiny bit of slack, because I remember ten years ago when a lot of us stupidly believed that deep learning just needed to be scaled up and self-driving was literally only 5 years away. Elon's problem was that he bet the farm on that assumption and has buried himself so deep in promises that he seemingly has no choice but to double down at every opportunity.
I never believed that; I said the opposite - these cars will never drive themselves. Elon has caused an unknown but not small number of deaths through his misleading marketing. I cut him no slack.
A lot of Elon's bets were protests against Google's hegemony, which seems to have worked out - robotaxi has started and OpenAI is crushing Google search.
Someone in his position cannot afford fallacious thinking like that. Or so one would think.
Ten years ago I used to tell the fanboys: "Automated driving is like making children. Trying is much more fun than succeeding." But building a golem _was_ exciting, to be honest.
> .. the issue of door handles. On Teslas, they retract into the doors while the cars are being driven. The system depends on battery power.
I will never understand this horrible decision. It isn't good design if it kills people. I wonder why this isn't regulated. They could at least implement a "push to pop up" mechanism that works without battery power, or have a narrow slot under the handle.
Tesla seems to be a status symbol. I've taken Ubers a few times in different Tesla models. I'm sorry to say, they were all a piece of crap: everything rattled like crazy, they were super loud while driving, and the door handles are not intuitive. Not sure why anyone would buy one other than as a status symbol.
Why's it flagged?
Pretty much every post that even hints at Tesla, Musk, or SpaceX gets flagged on here.
Because a bunch of people have, for inexplicable reasons, tied meaningful parts of their self-identity to Elon and his grievance-filled, simplistic worldview. They get very, very upset when flaws in it are pointed out, and they respond by trying to make sure nobody can publicly criticise it.
> “Autopilot appears to automatically disengage a fraction of a second before the impact as the crash becomes inevitable,”
This is probably core to their legal strategy. No matter how much data the cars collect, they can always safely destroy most of it, because this lets them pretend the autonomous driving systems weren't involved in the crash.
At this point it’s beyond me why people still trust the brand and the system. Musk really only disrupted the “fake it” part of “fake it till you make it”.
I'll worry about that possible subterfuge if it actually happens a single time ever.
It's something to keep in mind, but it's not an issue in itself.
Then make sure you don't read to the end of the article, where this behavior is documented. Maybe it is just a coincidence that Teslas always record data except when there's a suspicion they caused the crash, and then the data was lost, didn't upload, was irrelevant, or self-driving wasn't involved.
> The YouTuber Mark Rober, a former engineer at Nasa, replicated this behaviour in an experiment on 15 March 2025. He simulated a range of hazardous situations, in which the Model Y performed significantly worse than a competing vehicle. The Tesla repeatedly ran over a crash-test dummy without braking. The video went viral, amassing more than 14m views within a few days.
> The real surprise came after the experiment. Fred Lambert, who writes for the blog Electrek, pointed out the same autopilot disengagement that the NHTSA had documented. “Autopilot appears to automatically disengage a fraction of a second before the impact as the crash becomes inevitable,” Lambert noted.
In my previous comment I was wondering why anyone would still trust Tesla's claims and not realistically assume the worst. It's because plenty of people will only worry about it when it happens to them. It's not an issue in itself until after you're burned to a crisp in your car.
No, turning off autopilot during a crash isn't subterfuge. The subterfuge would be using that to lie about autopilot's involvement. I'm pretty sure that has never happened, and their past data has included anyone using autopilot in the vicinity of a crash, covering much more than one second beforehand.
The article cites an example of a Tesla engineer dying in a crash where witnesses (including a survivor) say he had FSD turned on. Elon claimed the witnesses were wrong.
Turning off the system just before an unavoidable crash allows them to say "the system wasn't active when the crash occurred" and implicitly label a lot of data "irrelevant". Which, according to the article, they do a lot, without providing any of that data. That's beyond subterfuge. They don't just kill people, they destroy evidence of their guilt and shift the blame onto the victim. How much stock does one need to own to pretend they don't understand this?
Tesla bragged about the cars collecting a ton of data, and showed it off when it suited the company and was good for its image. But every time something was controversial, like an unexplainable accident potentially caused by the car itself, the data was somehow not transmitted, or lost, or irrelevant.
I'm not sure why you have such a hard time understanding the issue, or why you insist on what you're "pretty sure" about when all the evidence points to the contrary (the article cites the NHTSA, experiments conducted privately by a former NASA engineer, and the string of coincidental data unavailability in controversial accidents). The article provides evidence and discussion on all these points. Nonetheless you ignore all that and stick to your "I'm pretty sure" with fanboy abandon. That sets a really low bar for future conversations.
I've seen so many Teslas do so many stupid things on motorways that I do everything I can not to be behind, in front of, or beside one. Can't imagine why anyone would get inside one.
Pretty sure if firefighters got there in time they could break the glass, unless they meant the battery fire was so fierce they couldn’t approach the vehicle.
Window glass in most modern vehicles is laminated rather than a simple tempered pane - that makes it less likely to shatter in a rollover, and thereby eject occupants, but harder to break through in an emergency.
TBH I see this more as a "firefighters aren't being given the right tools" issue, as this is far from unique to Tesla, and the tools have existed since laminated side glass became a requirement - they just don't seem to be part of standard issue or training yet.
https://www.firehouse.com/rescue/vehicle-extrication/product...
It's ridiculous that Tesla can beta-test their shitty software in public and I have to be subjected to it.
I grew up in an era of lawn darts, leaded gasoline, and Oxycontin. The end user is a test subject of all products.
Lawn darts were fun, though.
Let me guess, you always write perfect code? Maybe it's just HTML, but it's perfect, right?
Who cares if your social media toy has bugs in production? These are multi-ton metal things moving fast among humans, with high-energy batteries that like to explode. They can't have bugs in production.
https://en.m.wikipedia.org/wiki/Social_media%27s_role_in_the...
This is true for most software nowadays
Sure, but I'm not directly affected by someone's buggy phone software. If a self driving Tesla crashes into me, that does affect me.
My self-driving Tesla does better than most TikTok-brained drivers. I know because I'm watching, just like the car. Two is better than one, and I enjoy it thoroughly.
I find it a bit disappointing that you even need to restate this. People here should know better.
I'm 100% sure that buggy phones have killed more people than Teslas.
"hard left"? In what universe are you? A hard left newspaper would call for making all companies worker-owned, and I don't see anything like that from the Guardian.
I’m sure the child who gets obliterated by a Tesla cares about the distinction.
The thing that should not be taken seriously is Tesla's cars. FSD and Autopilot are marketing terms for the same underlying piece-of-crap technology.
Do you happen to own TSLA?
I love how the Guardian has the ability to make anything sound like vapid nonsense.
What would be good is if the Guardian talked to domain experts about the sensor suite and why it is not suitable for "self-driving", or even pointed out that the self-driving isn't certified for Level 3 autonomy.
The other thing that's deeply annoying is that of course everything is recorded, because that's how they build the dataset. Crucially, it will have the disengage command recorded, at what time and with a specific reason.
Why? Because that is a really high-quality signal that something is wrong, and it can be fed into the dataset as a negative example.
Now, if they do disengage before crashes, there will be a paper trail and testing to make that work, and probably a whole bunch of simulation work as well.
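If that is how it works, mining those events would be trivial. A purely hypothetical sketch - the record layout, field names, and reason codes here are invented for illustration and are not Tesla's actual telemetry:

    from dataclasses import dataclass

    @dataclass
    class DisengageEvent:              # hypothetical record, not Tesla's real schema
        timestamp_s: float             # seconds into the drive
        reason: str                    # e.g. "driver_override", "imminent_collision"
        speed_kmh: float

    def negative_examples(events: list[DisengageEvent]) -> list[DisengageEvent]:
        """Keep only the disengagements that signal the system bailing out,
        i.e. the 'something went wrong' cases worth labeling for training."""
        interesting = {"imminent_collision", "sensor_degraded", "planner_fault"}
        return [e for e in events if e.reason in interesting]

    log = [
        DisengageEvent(812.4, "driver_override", 52.0),
        DisengageEvent(1290.1, "imminent_collision", 87.5),
    ]
    print(negative_examples(log))      # only the second event survives the filter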
But the Gruan, as ever, offers only skin-deep analysis.
It’s a book excerpt
https://archive.is/jqbM2