Author Topic:   Self-Driving Cars
Percy
Member
Posts: 22392
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 68 of 142 (829998)
03-19-2018 10:12 PM


Driverless Car Causes First Fatality
From today’s Washington Post: Self-driving Uber vehicle strikes and kills pedestrian
I purchased a slightly self-driving car about a year ago. In cruise control mode it maintains a safe distance from the car in front of you. If they stop, you stop. Usually. The technology is not perfect. Do not let up your guard.
What will a self-driving car do when it encounters a policeman directing traffic?
Percy

Replies to this message:
 Message 70 by Stile, posted 03-20-2018 11:31 AM Percy has replied

  
Percy


Message 71 of 142 (830050)
03-20-2018 1:04 PM
Reply to: Message 70 by Stile
03-20-2018 11:31 AM


Re: Driverless Car Causes First Fatality
Here's an article that says the woman appeared suddenly, Uber ‘likely’ not at fault in deadly self-driving car crash, police chief says:
quote:
Police have viewed footage from two of the vehicle’s cameras, one facing forward toward the street, and the other inside the car facing the driver. Based on the footage, Moir said that the driver had little time to react. "The driver said it was like a flash, the person walked out in front of them," she said. His first alert to the collision was the sound of the collision.
She added, "It’s very clear it would have been difficult to avoid this collision in any kind of mode [autonomous or human-driven] based on how she came from the shadows right into the roadway."
Here's an older article with less detail, Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam:
quote:
The Uber car, a Volvo XC90 sport utility vehicle outfitted with the company’s sensing system, was in autonomous mode with a human safety driver at the wheel but carrying no passengers when it struck Elaine Herzberg, a 49-year-old woman, on Sunday around 10 p.m.
Sgt. Ronald Elcock, a Tempe police spokesman, said during a news conference that a preliminary investigation showed that the vehicle was moving around 40 miles per hour when it struck Ms. Herzberg, who was walking with her bicycle on the street. He said it did not appear as though the car had slowed down before impact and that the Uber safety driver had shown no signs of impairment. The weather was clear and dry.
--Percy

This message is a reply to:
 Message 70 by Stile, posted 03-20-2018 11:31 AM Stile has seen this message but not replied

Replies to this message:
 Message 72 by ringo, posted 03-20-2018 1:17 PM Percy has seen this message but not replied

  
Percy


Message 77 of 142 (830129)
03-21-2018 10:09 PM


Video of the Pedestrian Collision
This article contains the video of the pedestrian collision: Tempe Police release footage of fatal crash from inside self-driving Uber
It looks to me like the vehicle failed to detect the pedestrian.
Percy

Replies to this message:
 Message 78 by Stile, posted 03-22-2018 10:17 AM Percy has replied
 Message 84 by Minnemooseus, posted 03-24-2018 8:10 PM Percy has replied

  
Percy


Message 79 of 142 (830140)
03-22-2018 11:47 AM
Reply to: Message 78 by Stile
03-22-2018 10:17 AM


Re: Video of the Pedestrian Collision
My slightly self-driving car (cruise control distance maintenance) does not detect anything moving across its path. If a car runs a stop sign my car will not brake. The Tesla accident in Florida where a man was killed also involved a vehicle moving across the car's path.
My car also will not brake for stopped cars. If I'm traveling down the highway on cruise control with no cars in front of me and then chance upon a traffic jam that isn't moving, I have to brake myself - the car will not brake. If I don't brake then at the last minute it does flash up a big red "BRAKE" visual warning on the dashboard and sounds an audio warning chime.
In the same situation on the highway with no cars in front of me, if I catch up to moving cars, even cars moving very slowly, my car will brake. It is only stationary cars it won't brake for.
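Here's my guess at why it behaves this way - a little sketch of the logic, assuming (and this is only my assumption) that the radar filters targets on their relative speed:

    # Hypothetical sketch (not my car's actual firmware): with Doppler
    # radar, a return closing at exactly my own speed could be a stopped
    # car - or just as easily a sign, bridge, or guardrail. Dropping
    # zero-ground-speed targets avoids constant false alarms, at the
    # cost of never braking for genuinely stopped traffic.
    def should_track(closing_speed_mph, my_speed_mph, threshold_mph=2.0):
        ground_speed = my_speed_mph - closing_speed_mph
        # A target with ~0 ground speed looks like roadside clutter.
        return abs(ground_speed) > threshold_mph

    print(should_track(30, 65))  # slower car ahead: True, track it and brake
    print(should_track(65, 65))  # stationary car (or sign): False, ignored

That would explain why it brakes for slowly moving cars but not stationary ones.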
My car also gets "confused" when one lane becomes two and the car that was in front of me chooses the other lane. My car seems slow to detect the new car in front and begins accelerating toward it.
From watching ads on TV I know other cars have better crash avoidance systems, and in fairness to my car what it has is not a crash avoidance system - it's a cruise control distance maintenance system.
My opinion is beginning to lean toward not having fully autonomous vehicles. Crash avoidance systems alone should greatly reduce vehicle fatalities.
I wonder if an autonomous vehicle can pull into a parking place at the supermarket or pull into my garage?
The systems used by Uber and Waymo and so forth use cameras, radar and lidar. From what I've read they would have had no trouble detecting that pedestrian, so software was likely responsible.
--Percy

This message is a reply to:
 Message 78 by Stile, posted 03-22-2018 10:17 AM Stile has seen this message but not replied

  
Percy


Message 82 of 142 (830183)
03-24-2018 12:23 PM
Reply to: Message 81 by Stile
03-22-2018 3:27 PM


Re: Video of the Pedestrian Collision
Adding to what Stile says, the car's computer has to build a model of the world around it, as shown in this TED talk by Waymo's former director. I've positioned this to begin at the right spot. If you watch a minute or two you'll get the idea:
How often does the computer have to update the model? I'll make some basic assumptions. The fastest closing speed the system has to deal with is 200 mph, and the model must be updated often enough that nothing moves more than an inch between one frame of the model and the next. A simple calculation tells us that the model must be updated around 3500 times per second, or roughly once every 0.3 milliseconds.
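The arithmetic, for anyone who wants to check me:

    # Update rate so that nothing moves more than 1 inch between frames,
    # assuming a 200 mph worst-case closing speed.
    inches_per_second = 200 * 5280 * 12 / 3600   # 3520 inches/second
    updates_per_second = inches_per_second / 1.0 # 1 inch per frame
    print(updates_per_second)                    # 3520
    print(1000 / updates_per_second)             # ~0.28 ms between updates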
3D graphical modelling is a solved problem, but as every gamer knows, if you're particular about the quality of your game simulations then you pay big bucks for some serious graphical power. A standard PC can't smoothly handle the graphical requirements without a dedicated graphics processor (GPU), the part of your computer made by the likes of NVidia, Gigabyte, etc.
Could a GPU deliver an updated view of the world every 0.3 milliseconds? I guess so, because NVidia has a webpage about their hardware: NVIDIA DRIVE: Scalable AI platform for Autonomous Driving
And here's a page about NVidia's Xavier GPU: Introducing Xavier, the NVIDIA AI Supercomputer for the Future of Autonomous Transportation
These webpages make references to AI and deep learning. The term "AI" resurfaces every few years as a marketing/promotional tool, even though nothing approaching true AI has been achieved. That being said, the NVidia pages are referring to learning through neural nets, a way of configuring autonomous car behavior by driving the car around. Feedback from the driver when he takes over would be especially valuable.
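As I understand it (and this is only a sketch of the general idea, not NVidia's or anyone's actual pipeline), the training amounts to imitating logged human driving, with takeover moments weighted extra heavily since they mark the software's mistakes:

    # Illustrative behavior-cloning sketch; all names here are mine.
    from dataclasses import dataclass

    @dataclass
    class Frame:
        sensors: list          # whatever features the car logged
        human_steering: float  # what the human driver did
        human_took_over: bool  # did the safety driver intervene?

    def training_weight(frame):
        # Takeover moments are the most informative training examples.
        return 10.0 if frame.human_took_over else 1.0

    log = [Frame([0.1, 0.9], -0.2, False), Frame([0.8, 0.3], 0.5, True)]
    print([(f.human_steering, training_weight(f)) for f in log])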
One article I read about Uber's autonomous efforts said that situations they found particularly difficult were construction zones and being adjacent to tall vehicles. I can imagine the difficulty of construction zones. Just yesterday I came across one of the most common construction situations in my neck of the woods: a utility vehicle was taking up half the road, cones were out, and two men were stationed at opposite ends of the vehicle with "Slow/Stop" signs so that traffic could alternately proceed in the single remaining lane. I believe it will be many years before autonomous vehicles can handle this situation.
And how well will autonomous vehicles follow the dictum, "Don't hit the damn pothole that will throw my front end out of alignment"?
I don't know why Uber has trouble with tall vehicles. I imagine that's the case where you're next to a tractor trailer. Maybe they can't tell the difference between that and being inside a tunnel or driving along a tall hedgerow (think Britain) or along a tall wall. Can these GPUs tell the difference between an adjacent truck and an adjacent wall?
--Percy

This message is a reply to:
 Message 81 by Stile, posted 03-22-2018 3:27 PM Stile has seen this message but not replied

  
Percy


Message 83 of 142 (830190)
03-24-2018 2:04 PM


The Downside of Human Instructors
Waymo and Uber and GM and all the rest have human drivers busily tooling about the country's roads, logging hundreds of thousands of miles in an effort to fine-tune the programming of their autonomous vehicles. It may be an exercise doomed to failure. More serious than the mere technical obstacles may be the human one: humans just aren't reliable instructors.
Waymo vehicles require human intervention on average only once every 5000 miles. During the long stretches when intervention isn't required, which is almost all the time, safety drivers are required to sit watchful, attentive and ready to intervene at an instant's notice.
But research shows that, on average, people can only maintain vigilance at a monotonous task for less than half an hour. After the first half hour an autonomous vehicle's safety driver becomes of less and less utility. He is likely letting situations go by where intervention would have improved the software.
Source: The Unavoidable Folly of Making Humans Train Self-Driving Cars
In the full repertoire of situations a vehicle might encounter, if even one is unmanageable for autonomous vehicles then they are not autonomous, since they will require a safety driver. If all autonomous vehicles require safety drivers, then how are they any different from normal Ubers, Lyfts, Curbs and taxis? I think it will be many, many years before Waymo and the rest conquer the human-directing-traffic problem.
Yet despite this, many companies seem to have targeted Level 5, which means full autonomy. Why? Because of the handoff problem. This is the interval between when an autonomous vehicle decides it requires human intervention and when the human is actually ready to take over. Accepting control of a formerly autonomous vehicle is harder than it seems at first glance. Say you're fully absorbed in your iPad game, helping your hero maneuver along a cliff face while battling a dragon, when the alert buzzer goes off. How long before you even become aware of the buzzer? And when you finally do, how long before you register what it means, set the iPad down, look out at the road, put your hands on the wheel, position your foot on the pedals, and assess what is happening?
Or even worse, say you're wearing your virtual reality helmet?
And so because of the handoff problem, many autonomous vehicle companies are pursuing Level 5 autonomy. I think this is a mistake and can see two possible outcomes in the near term (the next few years):
  1. This one I think unlikely: after the expenditure of many billions of dollars, the autonomous car industry belatedly realizes that Level 5 autonomy is not achievable for at least the next 20 or 30 years and backs off to Level 2 autonomy. That's basically lane maintenance, stopping distance maintenance, and crash avoidance. Vehicular deaths will plummet.
  2. This is the one I think likely: because billions of dollars are involved, government will be helpless to prevent the industry from moving ahead with Level 5 autonomy. Vehicular deaths will plummet, but construction zones and humans-directing-traffic will be an ongoing problem that will take many years to solve.
Some companies like Audi think they can solve the handoff problem and are pursuing Level 3 autonomy.
Sources:
In my opinion there will never be any such thing as a car with no steering wheel or pedals, because without them there would be no way to do simple maneuvers, like moving the car in the driveway that is blocking Uncle Fred.
I have read that Waymo has some driverless cars out there. This could only be possible on roads mapped down to the finest detail on which they know there is no construction or accidents. As soon as there's construction or an accident then driverless cars must reroute or pull to the side and park. But how fast can Waymo know that there's been an accident and that a passing motorist is already directing traffic?
Here's a video that says Waymo has solved the problems of construction zones and emergency vehicles. I'm skeptical myself:
--Percy
Edited by Percy, : Typo.

Replies to this message:
 Message 85 by NoNukes, posted 03-24-2018 8:36 PM Percy has replied
 Message 89 by Stile, posted 03-26-2018 12:36 PM Percy has replied

  
Percy


Message 86 of 142 (830204)
03-24-2018 8:53 PM
Reply to: Message 84 by Minnemooseus
03-24-2018 8:10 PM


Re: Video of the Pedestrian Collision
Minnemooseus writes:
My impression: At least what the camera captured, the headlights didn't seem to be covering the area very well, especially to the left side where the pedestrian was.
Yeah, weird. Seems like the headlights on the vehicle had a very short range. But Lidar all by itself should have detected the pedestrian. I saw some comments by industry experts who seemed to agree that it looked more like a software than a sensor problem.
It also looks like the pedestrian failed to detect the vehicle. Why would someone walk out in front of a moving vehicle like that??? Busy texting???
We still have so little information on the pedestrian beyond her name. I saw some speculation that she was homeless. Maybe more information will become available soon.
--Percy

This message is a reply to:
 Message 84 by Minnemooseus, posted 03-24-2018 8:10 PM Minnemooseus has seen this message but not replied

Replies to this message:
 Message 90 by Stile, posted 03-26-2018 12:44 PM Percy has seen this message but not replied

  
Percy


Message 87 of 142 (830205)
03-24-2018 9:08 PM
Reply to: Message 85 by NoNukes
03-24-2018 8:36 PM


Re: The Downside of Human Instructors
NoNukes writes:
But research shows that, on average, people can only maintain vigilance at a monotonous task for less than half an hour. After the first half hour an autonomous vehicle's safety driver becomes of less and less utility. He is likely letting situations go by where intervention would have improved the software.
The solution for this seems rather obvious. Put several folks in the car and allow them to switch off as needed. This problem is not of a nature that makes solving the problem impossible; just a little more expensive. If needed, you can enforce the limitation in software by preventing the machine from learning lessons when the supervisor is not at his best.
I've read that safety drivers are supposed to keep their hands within an inch of the steering wheel, their foot next to the brake, and their eyes on the road. I can see driver rotation alleviating attention fatigue the first week or two on the job, but might not the sheer passivity and monotony of the exercise for week after week make the effective attention period for each shift shorter and shorter?
And so because of the handoff problem, many autonomous vehicle companies are pursuing Level 5 autonomy. I think this is a mistake and can see two possible outcomes in the near term (the next few years):
I think unfortunate rather than mistaken fits better here. I don't see a viable alternative.
To pursuing Level 5? Wouldn't Level 2 be as effective in saving lives?
In my opinion there will never be any such thing as a car with no steering wheel or pedals, because without them there would be no way to do simple maneuvers, like moving the car in the driveway that is blocking Uncle Fred.
I agree. But if that were the only limitation, then wow, what a fantastic change that could be for all of us! All of that driving time spent in productive activities while still retaining the point to point convenience of operating your own vehicle.
It'd be wonderful, but I wonder if the problem isn't more like AI than chess in difficulty.
--Percy

This message is a reply to:
 Message 85 by NoNukes, posted 03-24-2018 8:36 PM NoNukes has replied

Replies to this message:
 Message 95 by NoNukes, posted 03-26-2018 5:25 PM Percy has replied

  
Percy


Message 92 of 142 (830308)
03-26-2018 3:16 PM
Reply to: Message 89 by Stile
03-26-2018 12:36 PM


Re: The Downside of Human Instructors
The approach where when the car doesn't know what to do it pulls over and stops so the human can take over would be Level 3.
--Percy

This message is a reply to:
 Message 89 by Stile, posted 03-26-2018 12:36 PM Stile has replied

Replies to this message:
 Message 93 by Stile, posted 03-26-2018 3:30 PM Percy has seen this message but not replied

  
Percy


Message 96 of 142 (830319)
03-26-2018 9:41 PM
Reply to: Message 95 by NoNukes
03-26-2018 5:25 PM


Re: The Downside of Human Instructors
NoNukes writes:
I can see driver rotation alleviating attention fatigue the first week or two on the job, but might not the sheer passivity and monotony of the exercise for week after week make the effective attention period for each shift shorter and shorter?
Even if that were true, it is not an insurmountable obstacle.
Insurmountable wasn't how I was thinking of it - like many problems, it is solvable if sufficient resources are dedicated to it. But that's the issue: will sufficient resources be brought to bear? There's a broader question of to what degree safety will be sacrificed to the bottom line. The history of car manufacturers is that they didn't focus on safety until government regulation forced them to.
During my career in software development I observed that cost cutting forced by economic downturns always focused on areas that would not directly impact revenue. Quality assurance was one of those areas. Autonomous driving is mostly a software problem.
Car manufacturers have a long and continuous record of poor performance when it comes to safety. Ford just recalled 1.4 million cars where the steering wheel can come loose. We're still working our way through the Takata airbag recall. Volkswagen engaged in deception with emission controls. Toyota recalled 9 million cars for faulty accelerator pedals in 2010. And the 2010 Toyota Prius had a little-known software recall involving interaction between the braking system (which is a combination of normal and regenerative braking) and uneven surfaces.
I don't think the performance of car manufacturers will be any better with autonomous driving capabilities. There will be recalls of both software and hardware. On the design side, when the autonomous driving software is dropped into a new model with newer or different versions of the radar, lidar and optical cameras, how much testing will they do? Will the same software work for a low-slung sports car as for a high-sitting SUV? Car manufacturers will want to incorporate the sensors currently in the box on the roof into the car body - how much of an effect will that have?
Here's a scenario: during the 2025 model year Ford discovers it can save money by swapping out the NVidia GPU for a cheaper one from another vendor. How much testing will they do?
To pursuing Level 5? Wouldn't Level 2 be as effective in saving lives?
I don't believe it would be as effective in saving lives. With minor, but meaningful exceptions, computers are already better drivers than are humans. So the issue of having humans have to step in just is not going to go away.
Could you rephrase this? I wasn't sure what you meant: that Level 5 (no human intervention) would save more lives, yet there would always be a need for human intervention.
About "With minor, but meaningful exceptions, computers are already better drivers than are humans," I don't know what a "minor but meaningful exception" is, but I do think there are many cases where computers are not better drivers than humans. Obviously computers excel at some tasks, and arguably they will someday excel at all tasks, but can they handle snow? Heavy rain? Mud splashed up on the sensors? Does lidar work through mud? Can the system detect the traffic light hanging over the intersection that a strong wind has twisted to face the wrong way? Will it "see" the stop sign on the country road that vegetation is hiding?
By the way, I figured out that it should be no big deal to move the Level 5 autonomous vehicle with no steering wheel or pedals that is blocking Uncle Fred. The cameras provide a 360 degree view of the surroundings, so on the display you simply drag/drop the outline of the vehicle to where (including orientation) you want it to be.
Computers can do amazing things these days, and that can cause us to forget just how stupid they can be. I was witness to some of the early days of speech recognition research back in the mid-1970s. People on the project were sure that conversations with computers were just a decade away (delusions about AI were also rampant at the time), yet here we are more than forty years later and I can only give Siri the most rudimentary of instructions. Turns out a video of that effort is still on the Internet. The opening narrator doesn't introduce himself, so I'll mention that he's Professor Raj Reddy - he must be long retired by now:
--Percy

This message is a reply to:
 Message 95 by NoNukes, posted 03-26-2018 5:25 PM NoNukes has replied

Replies to this message:
 Message 97 by NoNukes, posted 03-26-2018 10:06 PM Percy has seen this message but not replied

  
Percy


Message 98 of 142 (830322)
03-26-2018 10:58 PM


Uber’s no good very bad week

Replies to this message:
 Message 99 by jar, posted 03-27-2018 6:59 AM Percy has seen this message but not replied

  
Percy


Message 101 of 142 (830350)
03-27-2018 2:39 PM
Reply to: Message 100 by Stile
03-27-2018 9:15 AM


Re: Video of the Pedestrian Collision
Stile writes:
But I am saying that, if everything worked as intended, it should have identified "an object" and began to take countermeasures (swerving? braking?).
Whether or not it could do everything fast enough to avoid the accident entirely... I'm not sure.
But it definitely should be good enough to detect the object and start doing something.
I believe the news report is that the vehicle did not react or even slow down at all before the collision.
I looked a bit into how far autonomous vehicles can detect objects. I started by looking at braking distance because that would tell me the distance available to stop. Obviously it would not be acceptable for autonomous vehicles to use maximum braking force in normal situations like red lights, stop signs, slowing traffic, etc. In my part of the country we have 50 mph roads with traffic lights, and naturally cars actually travel at 55-60 mph on them. A comfortable stopping distance at 60 mph is 350 feet (Braking distance, friction and behavior). This distance must be increased if the road has a downgrade or is curved or both, so let's call it 500 feet. Therefore an autonomous vehicle must be able to see traffic lights, stop signs, stopped traffic, construction, pedestrians, etc., from a distance of at least 500 feet.
But cars travel faster than 60 mph, so let's bump that distance up to 1000 feet.
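For the curious, the back-of-the-envelope version (the ~0.35 g comfortable deceleration is my assumption, not a figure from the linked page):

    # Braking distance d = v^2 / (2*a) at a comfortable deceleration.
    G = 32.2  # gravitational acceleration, ft/s^2

    def braking_distance_ft(mph, decel_g=0.35):
        fps = mph * 5280 / 3600
        return fps ** 2 / (2 * decel_g * G)

    print(round(braking_distance_ft(60)))  # ~344 ft, close to the 350 ft figure
    print(round(braking_distance_ft(80)))  # ~611 ft, so 1000 ft isn't extravagant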
My slightly self-driving car, when cruise controlling at 80 mph and approaching slower traffic, begins to slow down at a distance of what feels like around 400-600 feet (the range is broad because naturally I can only estimate). My car uses millimeter-wave radar and can "see" vehicles as small as motorcycles. It can probably see further than 400-600 feet, but there's no way I would know, since the car would not see the need to slow down at greater distances.
Radar obviously has the distance, but radar cannot recognize a traffic light or a stop sign.
Regarding radar and pedestrians, radar obviously reflects best off metal objects, and Radar Cross Section for Pedestrian in 76GHz Band says that a pedestrian's reflection is 15-20 dB down from the rear of a vehicle (76 GHz is a wavelength of 3.9 mm). 15-20 dB down is a lot - only about a thirtieth to a hundredth as strong. Radar is not the best way to see a pedestrian.
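The conversion, since decibels trip everybody up:

    # Decibels to power ratio: ratio = 10^(-dB/10) for a signal dB down.
    for db in (15, 20):
        print(db, "dB down =", 10 ** (-db / 10), "of the reference power")
    # 15 dB down is ~1/32 the power; 20 dB down is 1/100.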
So what about lidar? Police use lidar speed guns that can see over a thousand feet. Autonomous vehicles use lidar to create a "point cloud" from which the GPU builds a schematic view of the vehicle's surroundings. Since the volume of space the model must cover grows with the cube of the range, lidar can only look out so far, otherwise the amount of detail would overwhelm the GPU and any computational analysis of the scene.
I couldn't find anything definitive about lidar distance, but How Driverless Cars See the World Around Them says, "It [lidar] provides information only about objects that are relatively close." That tells me that seeing a traffic light or stop sign at 1000 feet might be asking a lot, but the GPU can probably be programmed to look further forward than it does off to the sides and rear, and the GPU and computer can be programmed to place additional emphasis on areas of the scene where traffic lights and stop signs are normally found. I'll guess that lidar can see traffic lights and stop signs and pedestrians at distances up to a thousand feet. But of course lidar can't tell which colored light of a traffic light is on, or read a stop sign (though it can see the shape, which is probably all it needs).
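One way to see why distance is hard for lidar: with a fixed angular resolution, the number of beams that actually land on a pedestrian-sized target falls off roughly as the square of the range. The 0.1 degree beam spacing below is just my assumed figure; real sensors vary:

    import math

    # Beams landing on a roughly 2 ft x 6 ft pedestrian at a given range.
    def returns_on_target(range_ft, width_ft=2, height_ft=6, res_deg=0.1):
        spacing_ft = range_ft * math.radians(res_deg)  # gap between beams
        return int(width_ft / spacing_ft) * int(height_ft / spacing_ft)

    for r in (100, 500, 1000):
        print(r, "ft:", returns_on_target(r), "returns")
    # 100 ft: ~374 returns; 500 ft: ~12; 1000 ft: ~3

So at 1000 feet a pedestrian may be only a handful of points in the cloud - detectable, but not much to classify with.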
How Driverless Cars See the World Around Them also says, "And when multiple autonomous vehicles drive the same road, their lidar signals can interfere with one another." Well, that's very disturbing. If true then that problem has to be fixed before autonomous vehicles can become a reality.
Lidar also has difficulty telling snowflakes and raindrops from real objects, though Ford claims a solution: Driverless cars have a new way to navigate in rain or snow.
The lidar in autonomous vehicles uses infrared frequencies for its lasers. I think water, slush, mud and snow can strongly attenuate lidar. Anyone who has ever had a satellite dish in snow country knows the annoyance of having to sweep the snow out of the dish periodically during a snowstorm. And in any part of the country heavy rainstorms can also block the signal. Though the frequencies differ, I think lidar suffers from analogous issues.
That leaves visual cameras for telling the color of a traffic light. Once lidar has identified an object as a traffic light, the visual cameras should have little difficulty discerning the color that is on.
Here's a paper from Google on Traffic Light Mapping and Detection.
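Once the geometry is known, the color check itself is simple. A toy version (purely illustrative, nothing to do with Google's actual method):

    # Given the average brightness of each lamp's pixel region in the
    # cropped camera image, report which lamp is lit.
    def light_state(red_px, yellow_px, green_px):
        states = {"red": red_px, "yellow": yellow_px, "green": green_px}
        return max(states, key=states.get)

    print(light_state(240, 30, 25))  # "red"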
Naturally rain and mud and snow also interfere with visual cameras. If mud or slush or snow splashes up your visual camera, what then?
Anyway, bottom line about pedestrians, there seems little reason lidar wouldn't be able to detect a pedestrian on a clear night as much as 1000 feet away, particularly the one struck by the Uber vehicle. It was a software problem, possibly in the GPU, but most likely in the computer software dedicated to scene analysis.
--Percy

This message is a reply to:
 Message 100 by Stile, posted 03-27-2018 9:15 AM Stile has replied

Replies to this message:
 Message 102 by Stile, posted 03-27-2018 3:03 PM Percy has seen this message but not replied
 Message 109 by Percy, posted 05-07-2018 10:09 PM Percy has seen this message but not replied

  
Percy


Message 106 of 142 (830436)
03-29-2018 10:05 AM
Reply to: Message 104 by Phat
03-29-2018 9:02 AM


Re: Uber’s no good very bad week
Phat writes:
...which tasks should we entrust to the robotic computers and which should we keep for ourselves...
My own opinion: Level 2 is as far as we should go. To me that means safe distance maintenance and crash avoidance. No wheel control.
...that annoying Siri.
To me the most annoying thing about Siri was that in 2011 (when she was released as a native part of the iPhone) she was just starting 1st grade, and now 7 years later she's only halfway through 1st grade.
--Percy

This message is a reply to:
 Message 104 by Phat, posted 03-29-2018 9:02 AM Phat has seen this message but not replied

  
Percy


Message 109 of 142 (832678)
05-07-2018 10:09 PM
Reply to: Message 101 by Percy
03-27-2018 2:39 PM


Re: Video of the Pedestrian Collision
Percy writes:
It was a software problem, possibly in the GPU, but most likely in the computer software dedicated to scene analysis.
Did I call it or what: Uber vehicle reportedly saw but ignored woman it struck:
quote:
The only possibilities that made sense were:
A: Fault in the object recognition system, which may have failed to classify Herzberg and her bike as a pedestrian. This seems unlikely since bikes and people are among the things the system should be most competent at identifying.
B: Fault in the car’s higher logic, which makes decisions like which objects to pay attention to and what to do about them. No need to slow down for a parked bike at the side of the road, for instance, but one swerving into the lane in front of the car is cause for immediate action. This mimics human attention and decision making and prevents the car from panicking at every new object detected.
The sources cited by The Information say that Uber has determined B was the problem. Specifically, it was that the system was set up to ignore objects that it should have attended to; Herzberg seems to have been detected but considered a false positive.
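To make option B concrete, here's a cartoon of the kind of gating logic the article describes (entirely my own reconstruction, not Uber's code):

    # Every detection gets a confidence score, and low-confidence
    # detections are dropped so the car doesn't brake for every plastic
    # bag. Set the threshold too high and a real pedestrian gets
    # dropped as a "false positive" too.
    ATTEND_THRESHOLD = 0.8  # invented value

    def should_react(confidence, in_path):
        return in_path and confidence >= ATTEND_THRESHOLD

    print(should_react(0.75, in_path=True))  # False: detected but ignored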
Percy

This message is a reply to:
 Message 101 by Percy, posted 03-27-2018 2:39 PM Percy has seen this message but not replied

Replies to this message:
 Message 110 by Stile, posted 05-09-2018 11:15 AM Percy has replied

  
Percy


Message 111 of 142 (832779)
05-10-2018 9:55 AM
Reply to: Message 110 by Stile
05-09-2018 11:15 AM


Re: Video of the Pedestrian Collision
If you're breaking things down generally then yeah, sure, an accident with a self-driving car could be hardware or software or a combination.
I should mention one little wrinkle about the hardware. While there is at some level a clear and explicit line of demarcation between hardware and software, that line is probably not one that Uber programmers have any control over or visibility into. I'm not speaking from direct knowledge about the lidar and radar and camera systems that the self-driving car companies employ, but in all likelihood their software doesn't interact directly with the perception systems. Rather, those systems probably have their own APIs, and Uber links those APIs into its own software and makes calls to them. This raises questions. How well are those APIs documented (i.e., how easy would it be for programmers to misunderstand what an API routine is doing)? What is the quality level of those API routines?
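Schematically, with names I've invented for illustration:

    # The perception hardware ships with its own API; the self-driving
    # stack calls it through a thin adapter. A misread API contract
    # (units? coordinate frame? timestamp base?) becomes a bug in the
    # glue layer, invisible to both vendors.
    class LidarVendorAPI:                 # the vendor's black box
        def get_point_cloud(self):
            return [(1.0, 2.0, 0.5)]      # meters? feet? the docs had better say

    class PerceptionAdapter:
        M_TO_FT = 3.28084
        def __init__(self, lidar):
            self.lidar = lidar
        def points_in_feet(self):         # glue code: unit conversion
            return [tuple(c * self.M_TO_FT for c in p)
                    for p in self.lidar.get_point_cloud()]

    print(PerceptionAdapter(LidarVendorAPI()).points_in_feet())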
And concerning Uber specifically, their self-driving cars happened so suddenly that I bet Uber did not write the scene composition software. They likely bought it from someone. In fact, they likely bought a lot of their software from other sources, then attempted to integrate it. That's the way large software systems happen these days. They aren't written from scratch - they're amalgamations of software from many different sources.
In some ways this is the realization of a dream. When I started programming nearly a half century ago it was already understood how low productivity was when every new software system began from scratch, and there was already the hope of drawing upon preprogrammed modules. C was an early example - the basic language had no I/O; that was added with an included header file providing the interface API and a linked-in module implementing the API's routines.
Over time this dream has become a reality, but all dreams have the possibility of becoming nightmares, and that is the case here, at least in part. Huge software systems can be built almost overnight simply by combining software garnered from many sources, but at the cost of a loss of control. You can't enforce the quality of the acquired software. When there's a new release of the acquired software, will it be the same quality as the previous version? Have new bugs been introduced? (Undoubtedly.)
And the acquired software will likely do most of what you need it to do, but not all, and it will not do it in ways that you would prefer. The data provided by one set of software will often not be precisely what is required by other sets of software. In the end a great deal of glue software must be written. There's ample opportunity for mistakes of all types.
Programmers are never given enough time to do a good job on the software, and software testing is almost always inadequate because QA departments are not profit centers, and so when layoff time rolls around the departments that get hit the hardest are personnel, finance, program management and QA. Programmers frequently get stuck testing their own software, which is a major no-no because the person who programmed it has major blind spots about where the weaknesses in his software lie. Also, QA is its own specialty, and just because you're a crack programmer doesn't mean you're any good at QA.
Much software is released prematurely, meaning that the customers become a reluctant adjunct to the software company's QA efforts. The standard estimate is that a bug is ten times more costly to fix when detected in the field (i.e., by a customer) than when detected before release, but this fact is rarely heeded. Companies get burned, fix their policies, then over time they pick away at those policies to speed up release cycles, and pretty soon they're back where they started.
If the software in question is something like a schedule calendar or photo album then the consequences of bugs are minor, but if the software is for a nuclear power plant or a space shuttle or a self-driving car then the consequences of bugs can be deadly.
Programmers have little leverage. Vaguely expressed concerns about possible remaining problems in the software will always go unheeded, because the programmer can't know the specific consequences - like that under certain circumstances the software could fail to properly classify an object as something to be avoided and plow right into it, and if that object is a person it could kill them. Rather, all he can say is, "If we don't delay the release x weeks (thereby delaying revenue) and spend y dollars on more testing (thereby increasing the cost center's debits and making managers look bad) then something bad might happen." And managers will comfort themselves that there's always a driver in the car monitoring things and ready to take over in a split second, though obviously they should know that's just overoptimistic bullshit.
Completely self-driving cars are a utopian dream for the foreseeable future. What they can already do is amazing, but what they can't do is formidable and frightening. Google and Tesla and Uber and all the rest can do all the development and testing they want, but for a long time people are still going to have to drive their own cars. But just crash avoidance systems alone will significantly reduce injuries and deaths due to accidents.
--Percy

This message is a reply to:
 Message 110 by Stile, posted 05-09-2018 11:15 AM Stile has seen this message but not replied

Replies to this message:
 Message 114 by Phat, posted 12-12-2018 9:47 AM Percy has seen this message but not replied
 Message 117 by Phat, posted 12-12-2018 10:09 AM Percy has seen this message but not replied

  