Author Topic:   Self-Driving Cars
Percy
Member
Posts: 22392
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 112 of 142 (833695)
05-25-2018 1:18 PM


Blowing own horn again...
I already practically broke my arm patting myself on the back in Message 109:
Percy in Message 109 writes:
Percy writes:
It was a software problem, possibly in the GPU, but most likely in the computer software dedicated to scene analysis.
Did I call it or what? See Uber vehicle reportedly saw but ignored woman it struck.
But the NTSB report about the Uber crash just came out (at bottom of NTSB: Uber Self-Driving Car Had Disabled Emergency Brake System Before Fatal Crash) and it says:
quote:
According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2).
So the problems with scene analysis were far worse than I could ever have imagined, and the analysis itself far slower, too. The system detected an unidentified object 6 seconds before impact, yet didn't decide that emergency braking was needed until 1.3 seconds before impact. What was it doing for 4.7 seconds?
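To put some numbers on how much margin was thrown away, here's a quick back-of-envelope sketch using the figures from the NTSB quote above. The hard-braking deceleration of about 7 m/s² is my assumption, not a number from the report:

```swift
import Foundation

// Figures from the NTSB preliminary report quoted above; the deceleration
// is an assumed dry-pavement hard-braking value, NOT from the report.
let speedMps = 43.0 * 0.44704                  // 43 mph ≈ 19.2 m/s
let distanceAtDetection = speedMps * 6.0       // ≈ 115 m out at first detection
let distanceAtDecision  = speedMps * 1.3       // ≈ 25 m out at the braking decision
let assumedDecel = 7.0                         // m/s², assumed hard braking
let stoppingDistance = speedMps * speedMps / (2 * assumedDecel)   // ≈ 26 m

print(String(format: "First detection:  %.0f m", distanceAtDetection))
print(String(format: "Braking decision: %.0f m", distanceAtDecision))
print(String(format: "Needed to stop:   %.0f m", stoppingDistance))
```

In other words, the car first saw her at roughly 115 meters but needed only about 26 meters to stop; by the time it decided to brake, it had about 25 meters left.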
But it gets worse, though this next part has nothing to do with scene analysis:
quote:
According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
The automatic emergency braking was disabled, so the driver had to do the braking. But as the video shows, the woman was practically invisible until the car was on top of her. The system wasn't designed to alert the driver to brake, but even if it had been, the driver would have had only 1.3 seconds to hit the brakes and turn the wheel.
So Uber's system was bad in many ways.
But Tesla isn't covering itself in glory, either. Did everyone hear about the Tesla that killed its driver (who wasn't paying attention, a major no-no, but still) by colliding with an already-collapsed crash cushion? From Tesla says Autopilot was on during deadly California crash:
quote:
There was a major announcement from Tesla Friday evening about last week's crash in Mountain View, California, that killed an engineer from Apple. The company confirms the autopilot "was" engaged when the Model X slammed into a collapsed safety barrier.
Thirty-eight-year-old Apple engineer and father of two, Walter Huang, died one week ago on his way to work when his Tesla Model X slammed into a crash cushion that had collapsed in an accident eleven days before -- basically like hitting a brick wall, the experts say.
...
Friday morning, a science director at an environmental startup took the ABC7 News I-Team's Dan Noyes in his Model X on the same route Huang drove last week to Apple. He was heading to the 85 carpool lane off 101 in Mountain View. "I see what the issue is," said Sean Price. "That line in the pavement could potentially be a problem," he said, pointing out a break between the asphalt and concrete and two white lines.
So possibly the Tesla thought the joint between pavement sections was the lane divider line and followed what it thought was the lane right into the collapsed crash cushion. Not only does this seem likely to me, I'm certain of it. My minimally self-driving car (cruise control with auto-distance maintenance) also has "crossing out of lane without signaling" detection (it beeps). This lane-departure detection regularly goes off when the car moves across pavement joints. All the pothole patching crews are out right now, so it also regularly goes off these days as I cross actual dividing lines to go around repair crews.
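To make that failure mode concrete, here's a toy sketch - entirely my own construction, not anything from Tesla's actual stack - of how a lane detector that scores candidate lines mostly on edge contrast could prefer a crisp pavement joint over a worn painted line:

```swift
// Toy model: score candidate line markings by edge strength and paint color.
// The 0.8/0.2 weighting is a made-up illustration of underweighting color.
struct LineCandidate {
    let name: String
    let edgeContrast: Double   // 0...1, gradient strength along the line
    let whiteness: Double      // 0...1, how paint-like the pixels look
}

func score(_ c: LineCandidate) -> Double {
    0.8 * c.edgeContrast + 0.2 * c.whiteness
}

let candidates = [
    LineCandidate(name: "worn painted lane line", edgeContrast: 0.35, whiteness: 0.60),
    LineCandidate(name: "asphalt/concrete joint", edgeContrast: 0.90, whiteness: 0.20),
]

// The sharp joint out-scores the faded paint, so the lane keeper follows it.
let chosen = candidates.max { score($0) < score($1) }!
print("Following: \(chosen.name)")
```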
Speaking of those repair crews: they have men stationed at each end of the repair area with "Stop/Slow" signs, and I'm very doubtful these self-driving cars can handle them properly. And a policeman with his hand up? Not a prayer.
Then a week or two ago there was the Tesla crash at 60 mph into the rear of a firetruck stopped at a red light. From Tesla in Autopilot mode sped up before crashing into stopped fire truck, police report says:
quote:
Data from the Model S electric sedan show the car picked up speed for 3.5 seconds before crashing into the fire truck in suburban Salt Lake City, the report said. The driver manually hit the brakes a fraction of a second before impact.
Police suggested that the car was following another vehicle and dropped its speed to 55 mph to match the leading vehicle. They say the leading vehicle then probably changed lanes and the Tesla automatically sped up to its preset speed of 60 mph without noticing the stopped cars ahead.
I've experienced the same thing in my minimally self-driving car. The cruise control feature is only to be used on the highway, but I use it everywhere. You can't trust it when the car in front of you moves out of your lane and the car tries to sync up with the next car in front. A sudden acceleration is common. Sometimes it detects the next car up and slows down again, sometimes not. And if the next car up is stopped at a light? Nothing. No braking. Sounds a lot like that Tesla that hit the rear of the firetruck. In my previous post I wrote about software systems today being amalgamations of software pieces from a variety of vendors. It isn't impossible that the bit of software responsible for that firetruck crash is the same as in my own car.
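Here's a minimal sketch of the behavior I'm describing, assuming the common radar-based design in which stationary returns are filtered out as clutter (overpasses, signs, parked cars). This is my guess at the logic, not any vendor's actual code:

```swift
struct RadarTarget {
    let rangeMeters: Double
    let speedMps: Double    // absolute speed; 0 means stationary
}

// Pick the cruise control's target speed from the radar picture.
func accTargetSpeed(setSpeed: Double, targets: [RadarTarget]) -> Double {
    // Assumed clutter filter: discard stationary returns entirely.
    let moving = targets.filter { $0.speedMps > 1.0 }
    guard let lead = moving.min(by: { $0.rangeMeters < $1.rangeMeters }) else {
        return setSpeed                    // no moving lead car: resume set speed
    }
    return min(setSpeed, lead.speedMps)    // otherwise match the lead car
}

let setSpeed = 26.8   // 60 mph in m/s

// Lead car doing 55 mph ahead: follow at 55.
print(accTargetSpeed(setSpeed: setSpeed,
                     targets: [RadarTarget(rangeMeters: 40, speedMps: 24.6)]))

// Lead car changes lanes, leaving only a stopped fire truck: the stationary
// return is filtered out, so the car accelerates back to its 60 mph set speed.
print(accTargetSpeed(setSpeed: setSpeed,
                     targets: [RadarTarget(rangeMeters: 40, speedMps: 0.0)]))
```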
--Percy

Replies to this message:
 Message 113 by Percy, posted 12-12-2018 9:10 AM Percy has seen this message but not replied

  
Percy
Member
Posts: 22392
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 113 of 142 (845100)
12-12-2018 9:10 AM
Reply to: Message 112 by Percy
05-25-2018 1:18 PM


Re: Blowing own horn again...
Nobody listens to me. There was no reply to my last post detailing problems in autonomous vehicles. And there was no reply to the comment I posted a few days ago on the Washington Post article Waymo launches nation’s first commercial self-driving taxi service in Arizona, where I said this:
quote:
I was pleasantly surprised at all the skeptical comments - I was expecting a lot of people would drink the Kool-Aid. Completely autonomous vehicles are in our future the same way as rocket packs and flying cars. The technology is amazing, but a technology that can only handle 99.9% of all situations still means thousands of accidents a day across a country the size of the US, and they're nowhere near 99.9% yet. It's easy to list situations these cars can't handle:
- Policeman directing traffic
- Construction site where the guy with the "Slow/Stop" sign gets it backwards
- Traffic lights are out
- Fog
- Heavy rain
- Snow
- Snow partially or completely obscuring all road lines or even where the edges of the road are
- Stuck in snow (can these cars rock themselves out?)
- Stop sign obscured by vegetation
- Weather worn or handwritten detour sign
- Bad or old data, such as having a one-way street going in the wrong direction, or having the wrong speed limit, or roads or interchanges that have been redesigned.
Vehicles will become completely autonomous about the same time you can have an intelligent conversation with your phone - in other words, no time soon. Companies like Waymo are fooling themselves that the technology is just a few tweaks away from going mainstream, and their unjustified optimism appears in all their public communications.
Very effective crash avoidance systems already exist and are all we really need to achieve most of the safety goals of completely autonomous vehicles. They'll substantially reduce the vehicle fatality rate and that will be a wonderful thing.
But being able to watch a video while your car drives you to work is just not in the cards anytime soon.
All I got was crickets.
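To put rough numbers on the 99.9% claim in that comment, here's a back-of-envelope sketch. Every input is an assumption (the US daily vehicle-miles figure is approximately right; the other inputs are illustrative guesses):

```swift
// Back-of-envelope only; all inputs are assumptions, not measurements.
let milesPerDayUS = 8.8e9            // ~3.2 trillion vehicle-miles/year / 365
let situationsPerMile = 1.0          // assume one decision-worthy situation per mile
let fractionMishandled = 0.001       // the hypothetical 0.1% the system can't handle
let accidentsPerMishandling = 0.001  // assume most mishandlings are recovered harmlessly

let accidentsPerDay = milesPerDayUS * situationsPerMile *
                      fractionMishandled * accidentsPerMishandling
print("Rough accidents per day: \(Int(accidentsPerDay))")   // ≈ 8,800
```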
I think we're mostly safe from the efforts by companies like Google and Apple and Tesla and Uber and the biggies (GM, Ford, Chrysler, etc.) to introduce fully autonomous vehicles. Their semi-autonomous vehicles are quietly driving on our roads with backup drivers behind the wheel, and they're not causing too many accidents. Within a couple of years these companies will realize that the technology is still far in the future, probably 20-30 years off.
Back in the 1960's Richard Greenblatt wrote a chess-playing program that everyone knew as the Greenblatt program, though apparently its true name was Mac Hack because he wrote it while working at MIT on Project MAC ("Multiple Access Computer" or "Machine-Aided Cognition"). The Greenblatt program played pretty good chess, and it once beat a human player in a tournament. I played tournament chess in high school and achieved a 1346 rating from the USCF (United States Chess Federation) - strictly pedestrian; you're not someone interesting until you reach around 1700.
When I reached college I encountered the Greenblatt program and played it several times, never winning even though the program's rating was only 1243 (of course, by that time I was no longer taking chess seriously and had forgotten a lot). It was unerring in finding the little two- and three-move combinations that were fatal. The Greenblatt program was a great emissary for computer chess, and its capabilities convinced people that computers would be beating humans within a few years.
But the years turned into decades, and while many chess programs (chess systems, actually - many took the approach of adding hardware move-generation add-ons) could beat almost all humans, none could compete at the grandmaster level. That is, none until IBM's Deep Blue came along. In its first encounter with Garry Kasparov, then world champion, in 1996, it took the first game, at which point Kasparov said words to the effect of "Oh, I see what it's doing," and then won the match handily 4-2.
But Kasparov played a rematch with a beefed-up Deep Blue the following year, and Deep Blue won 3½-2½. IBM then retired Deep Blue to focus its efforts on Watson, and there hasn't been a high-profile chess match between computer and human since.
But the important point is how long it took between the first demonstration of a capable chess-playing program (the Greenblatt program in 1967) and the emergence of a world-class one (Deep Blue in 1997): thirty years.
It's the same with autonomous vehicles. The ones being tested today are merely demonstrating the early promise. Truly autonomous vehicles are likely 20-30 years off.

A digression: While fact-checking the above I learned that Alan Kotok, who also attended MIT, had worked a bit with Richard Greenblatt on his chess program, and that was when I discovered that Kotok died back in 2006. Few are likely to have heard his name (unless you've read the book Hackers by Steven Levy), but he was a genius.
When I was just out of school I worked on the same team with Alan Kotok, and he was instrumental in helping my career take off. My project team (DEC would give a lot of responsibility to very young people) wrote a timing analysis program for use by the team developing the next-generation DECSystem 20. It delivered its results textually and was barely used. Alan suggested using the graphical capabilities of the newly available VT120 terminal to present the results graphically, and instantly the program was the toast of the town. I would walk up and down the aisles of the DECSystem 20 team's cubicle farm and see the program's graphical display on terminal after terminal.
A couple years later, during a presentation about a different project to a skeptical senior advisory group, I was floundering, and Kotok spoke up and offered a defense I hadn't even considered. The project went forward. A couple years after that I left DEC and never saw Alan Kotok again.
I have never forgotten Alan Kotok, and it is very sad that he is gone. For every Steve Jobs and Bill Gates there are legions of genius-level individuals working forgotten in the background, and Alan Kotok was one of them.
--Percy

This message is a reply to:
 Message 112 by Percy, posted 05-25-2018 1:18 PM Percy has seen this message but not replied

Replies to this message:
 Message 118 by kjsimons, posted 12-12-2018 10:14 AM Percy has replied
 Message 119 by NosyNed, posted 12-12-2018 10:36 AM Percy has replied
 Message 121 by PaulK, posted 12-12-2018 11:43 AM Percy has seen this message but not replied
 Message 122 by Stile, posted 12-12-2018 12:14 PM Percy has seen this message but not replied

  
Percy
Member
Posts: 22392
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 124 of 142 (845151)
12-12-2018 3:13 PM
Reply to: Message 116 by Diomedes
12-12-2018 10:09 AM


Re: Part Of The Problem
Diomedes writes:
One great example I always think about when it comes to bad predictions is the movie 2001: A Space Odyssey. It came out in 1968. It depicted a future (in 2001) with space habitats, a presence on the moon, routine commercial space travel, etc. Well, here we are in 2018 and we have hardly any of that, with the exception of the International Space Station, which is nowhere near as advanced as what was depicted in the movie.
In the winter of early 1968 my sophomore high school class bussed into New York City to see 2001: A Space Odyssey in Cinerama, a surround-screen and surround-sound experience. The shuttle's approach to the space station to the music of Strauss's Blue Danube was spectacular and unforgettable.
I was so stunned I bought the movie soundtrack (though I had no record player) and bought and learned the piano music to the Blue Danube.
The movie's visionaries conceived of a shuttle vehicle nearly identical to the eventual space shuttle, not launched until 13 years later, but we've never seen anything like the space station - not in the year 2001, not now, and not for the foreseeable future.
Diomedes writes:
But the really interesting portion was the Hal 9000 computer. Now in 2018, our computer technology is impressive. We have the World Wide Web. We have computers in our pockets in the form of cell phones. Near-instant communication with anyone. But when it comes to AI, or Artificial Intelligence, we don't have anything remotely close to what the Hal 9000 was. That was a fully sentient artificial intelligence. As Percy mentioned above, the only thing that has some commonality with Hal is Watson. Yet it is clearly not self-aware. It is basically a very advanced big-data mechanism.
When I was at Carnegie Mellon in the mid-1970's there was a project called Hearsay whose goal was to understand human speech. While I was there they mastered the small chess vocabulary, as in "Pawn to queen four" and "Bishop takes knight." Project leads spoke of the promise of speech recognition, saying it was already on the horizon, just a few more years away. Now, forty years later, we're just getting there. And that's just speech recognition - speech comprehension is a much bigger task.
I played around with Apple's speech recognition using Swift while I was writing the RideGuru app (see Calling All Rideshare Fans), and it was really good at recognizing addresses like "95 East Main Street, Springfield, Massachusetts," but really bad at addresses with place names with odd spellings and/or silent letters, so I left speech recognition out of the app.
But playing with Siri just now I can see that Apple's speech recognition is still pretty powerful. You might imagine that "17 Gloucester Road, Worcester, Massachusetts" would give it trouble because Gloucester is pronounced "Gloster" and Worcester is pronounced "Wooster", but Siri has no trouble with it ("Hey Siri, show 17 Gloucester Road, Worcester, Massachusetts").
But try "Hey Siri, show 17 Cowesett Road, Warwick, RI" and it will get it wrong time after time, no matter how carefully you pronounce "Cowesett." If you've got Android, give it a try and see if it does any better. We've got an Amazon Echo Dot that we use a little, and Alexa is much smarter than Siri at answering questions - for one example, "How many Jews were killed during World War II?" But I don't know whether it's also better at speech recognition.
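One workaround that could have helped in RideGuru (sketched below; the Soundex coding is the standard algorithm, but the street list and function names are hypothetical, not anything from Apple's API) is to fuzzy-match whatever the recognizer returns against the app's own list of known street names using a phonetic key, so "Cowesett" still matches even when the recognizer mangles it:

```swift
import Foundation

// Classic Soundex: keep the first letter, encode later consonants as digits,
// collapse adjacent duplicate codes, pad/truncate to four characters.
// (In Soundex, h and w don't separate duplicate codes, but vowels do.)
func soundex(_ word: String) -> String {
    let codes: [Character: Character] = [
        "b": "1", "f": "1", "p": "1", "v": "1",
        "c": "2", "g": "2", "j": "2", "k": "2", "q": "2",
        "s": "2", "x": "2", "z": "2",
        "d": "3", "t": "3", "l": "4", "m": "5", "n": "5", "r": "6",
    ]
    let letters = word.lowercased().filter { $0.isLetter }
    guard let first = letters.first else { return "" }
    var result = String(first).uppercased()
    var lastCode = codes[first]
    for ch in letters.dropFirst() {
        if let code = codes[ch], code != lastCode {
            result.append(code)
        }
        if ch != "h" && ch != "w" { lastCode = codes[ch] }
    }
    return String((result + "000").prefix(4))
}

// Hypothetical list of street names the app already knows about.
let knownStreets = ["Cowesett Road", "Main Street", "Gloucester Road"]

// Match the recognizer's text against the list by phonetic key.
func bestMatch(forHeard heard: String) -> String? {
    let key = soundex(heard)
    return knownStreets.first { soundex($0) == key }
}

print(bestMatch(forHeard: "Cow Set Road") ?? "no match")   // "Cowesett Road"
```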
Diomedes writes:
This is actually an issue that is becoming more prevalent in software. For Gen Xers like myself, many of us were hobbyists who put computers together ourselves, before companies like Dell existed. And we oftentimes had to hand-code software without the benefit of more adept development environments like Microsoft Visual Studio or Eclipse. These dev environments expedite coding and make things easier, but they often obfuscate a lot of the particulars of the low-level code itself. Millennials, having grown up in an environment where the low-level code is done for them, are often ill-equipped to handle certain types of problems.
Give me that old time religion (meaning assembler code).
--Percy

This message is a reply to:
 Message 116 by Diomedes, posted 12-12-2018 10:09 AM Diomedes has replied

Replies to this message:
 Message 125 by Tangle, posted 12-12-2018 3:32 PM Percy has seen this message but not replied
 Message 126 by Diomedes, posted 12-12-2018 3:34 PM Percy has seen this message but not replied

  
Percy
Member
Posts: 22392
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 128 of 142 (845159)
12-12-2018 4:23 PM
Reply to: Message 118 by kjsimons
12-12-2018 10:14 AM


Re: Blowing own horn again...
kjsimons writes:
I'm also a computer guy from way back (also wrote code on PDP-11s, DEC and Data Generals early on)...
I worked for DEC but knew people from DG and Prime - all their company headquarters were in the Boston suburbs. When I joined DEC in 1977, the departure of Ed de Castro (DG's founder) was still a recent memory. The story as I heard it was that around the mid-1960's DEC started two competing projects to design the PDP-8 successor. The team that designed the PDP-11 was selected, and the team that designed the Nova, led by Ed de Castro, went off and founded DG. DEC would have done well to do whatever it took to retain Ed de Castro, because DG was a tough competitor and a thorn in DEC's side for years.
If you still remember DG's Nova and SuperNova machines fondly and haven't yet read Tracy Kidder's The Soul of a New Machine, it is well worth reading. I met a couple of the principals, but it was years ago and I no longer remember their names. The one name I do remember from the book is Tom West, the project lead. The character I best remember, even though he played a minor role, was the technician who would destroy tools once he felt they were "used up." The vignette I best remember was when one of the hardware designers (salaried, i.e., no overtime) found a technician's (hourly, i.e., overtime) paystub in a wastebasket and discovered the technician was making more than he did (remember paper paychecks?).
--Percy

This message is a reply to:
 Message 118 by kjsimons, posted 12-12-2018 10:14 AM kjsimons has not replied

  
Percy
Member
Posts: 22392
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 129 of 142 (845160)
12-12-2018 4:59 PM
Reply to: Message 119 by NosyNed
12-12-2018 10:36 AM


Re: sneaking up on us
I think crash avoidance systems are already much better than people. And on a calm clear day, on a smooth road with lines that aren't worn away and no construction, current autonomous capabilities should outperform a human driver.
What often happens with new technologies is that high initial expectations decline and existing capabilities improve. At some point diminishing expectations meet improving capabilities, and then the new technology takes off.
But take the simple case of a policeman with his hand up indicating stop. It will be a long time before autonomous vehicles recognize this situation, and without that capability these cars should not be permitted on the road without a backup driver. Google claims to have solved this problem, but I don't believe them. I don't believe Tesla's stats, either.
Today another driver and I yielded to each other. The other driver signaled me to go and we made eye contact, so I went. I think it will be a long while before LIDAR plus cameras can handle that.
Two cars pull up at the same time at a 4-way stop at right angles to each other. The car to the right doesn't go (doesn't matter why). What does the car to the left do?
Four cars pull up to a 4-way stop at the same time. Reminds me of the joke about the donkey midway between two haystacks. It's the vehicular equivalent of a deadly embrace - or, since it's all just software, it *is* the deadly embrace. This one has simple solutions; I mention it only because it seems humorous.
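In deadlock terms: if every car's rule is "yield to the car on your right," the wait-for graph is a four-node cycle - the textbook circular wait. Here's a toy sketch (my own construction, nothing from a real AV stack) of one simple fix, a randomized creep-forward timeout that breaks the symmetry:

```swift
// Toy four-way stop: under a pure yield-to-the-right rule, car i waits on
// car (i + 1) % 4, a circular wait, so nobody ever moves.
struct Car {
    let name: String
    let creepTimeout: Double   // randomized patience in seconds (the assumed fix)
}

let cars = (0..<4).map { Car(name: "car\($0)", creepTimeout: .random(in: 1.0...3.0)) }

for i in 0..<4 {
    print("\(cars[i].name) waits for \(cars[(i + 1) % 4].name)")   // the deadlock
}

// Symmetry-breaking fix: whoever's timer expires first creeps forward and
// claims the intersection, which breaks the cycle for everyone else.
let first = cars.min { $0.creepTimeout < $1.creepTimeout }!
print("\(first.name) times out first and proceeds")
```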
--Percy

This message is a reply to:
 Message 119 by NosyNed, posted 12-12-2018 10:36 AM NosyNed has not replied

Replies to this message:
 Message 130 by Tangle, posted 12-12-2018 5:21 PM Percy has seen this message but not replied

  
Percy
Member
Posts: 22392
From: New Hampshire
Joined: 12-23-2000
Member Rating: 5.2


Message 135 of 142 (845205)
12-13-2018 10:38 AM
Reply to: Message 134 by AZPaul3
12-13-2018 9:35 AM


AZPaul3 writes:
Hope it doesn't happen in any self driven car I'm a passenger in...
It will. Hopefully not you, but people are going to get hurt, people are going to die, from these things. Tort and insurance legislation will be used to absorb that risk temporarily while the smart guys figure out how to make these things safer.
You're absolutely right - people are going to die because of software bugs and glitches, hardware failures, unanticipated situations, etc. But car accidents should decline, and the severity of those that do happen should also decline. The death and injury rate should drop precipitously. We currently have around 35,000 vehicle-related deaths per year in the US, and that should fall below 10,000, probably way below.
But how are people going to feel about the possibility of dying because of some car failure instead of their own mistake? The people penalized the most will be those who today are the safest and most careful drivers: those who don't tailgate, always look both ways, drive at reasonable speeds, etc. Their chances of vehicular death will decline, too, but not as much as those of your more, uh, enthusiastic drivers.
But whether we're talking about safe or enthusiastic drivers, how will they feel about having their degree of safety taken out of their hands?
--Percy

This message is a reply to:
 Message 134 by AZPaul3, posted 12-13-2018 9:35 AM AZPaul3 has replied

Replies to this message:
 Message 136 by NosyNed, posted 12-13-2018 10:45 AM Percy has seen this message but not replied
 Message 140 by AZPaul3, posted 12-13-2018 2:08 PM Percy has seen this message but not replied
 Message 141 by Pressie, posted 12-14-2018 5:34 AM Percy has seen this message but not replied

  