Topic: Self-Driving Cars
Percy | Member | Posts: 22479 | From: New Hampshire | Rating: 4.7
I already practically broke my arm patting myself on the back in Message 109:
Percy in Message 109 writes: It was a software problem, possibly in the GPU, but most likely in the computer software dedicated to scene analysis.

Did I call it or what: Uber vehicle reportedly saw but ignored woman it struck. But the NTSB report about the Uber crash just came out (at the bottom of NTSB: Uber Self-Driving Car Had Disabled Emergency Brake System Before Fatal Crash), and it says:
quote:

So the problems with scene analysis were far worse than I could ever have imagined, and far slower, too. The system detected an unidentified object 6 seconds before impact but didn't figure out it was a bicycle until 1.3 seconds before impact. What was it doing for those 4.7 seconds? But it gets worse, though this next part has nothing to do with scene analysis:
quote:

The automatic emergency braking was disabled, so the driver had to do the braking. But as the video shows, the cyclist was practically invisible until the car was on top of her. The system wasn't designed to alert the driver to brake, but even if it had been, the driver would have had only 1.3 seconds to hit the brakes and turn the wheel. So Uber's system was bad in many ways.

But Tesla isn't covering itself with glory, either. Did everyone hear about the Tesla that killed its driver (who wasn't paying attention, a major no-no, but still) by colliding with an already-collapsed crash cushion? From Tesla says Autopilot was on during deadly California crash:
quote:

So possibly the Tesla thought the joint between pavement sections was the lane divider line and followed what it thought was the lane right into a collapsed crash cushion. This not only seems likely to me, I'm certain of it. My minimally self-driving car (cruise control with auto-distance maintenance) also has "crossing out of lane without signaling" detection (it beeps). This lane-departure detection regularly goes off when the car moves across pavement joints. All the pothole patching crews are out right now, so it also regularly goes off these days as I cross actual dividing lines to go around repair crews. These repair crews station men at each end of the repair area holding "Stop/Slow" signs, and I'm still very doubtful these self-driving cars can properly handle them. And a policeman with his hand up? Not a prayer.

Then a week or two ago there was the Tesla crash at 60 mph into the rear of a firetruck stopped at a red light. From Tesla in Autopilot mode sped up before crashing into stopped fire truck, police report says:
quote:

I've experienced the same thing in my minimally self-driving car. The cruise control feature is only supposed to be used on the highway, but I use it everywhere. You can't trust it when the car in front of you moves out of your lane and your car tries to sync up with the next car ahead. A sudden acceleration is common. Sometimes it detects the next car up and slows down again, sometimes not. And if the next car up is stopped at a light? Nothing. No braking. Sounds a lot like that Tesla that hit the rear of the firetruck.

In my previous post I wrote about software systems today being amalgamations of software pieces from a variety of vendors. It isn't impossible that the bit of software responsible for that firetruck crash is the same as the one in my own car.

--Percy
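The firetruck scenario is consistent with how simple follow-the-leader cruise logic behaves when stationary radar returns are filtered out. Here is a toy sketch in Python, my own illustration and not any vendor's actual algorithm (the 0.5 m/s threshold and the speeds are made up):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Track:
    distance_m: float   # range to the tracked object
    speed_mps: float    # absolute speed of the tracked object

def acc_target_speed(set_speed: float, lead: Optional[Track]) -> float:
    """Toy adaptive-cruise policy. Many radar-based systems discard
    stationary returns (hard to distinguish from roadside clutter),
    so a stopped vehicle revealed by a lane change is not a target."""
    if lead is None or lead.speed_mps < 0.5:   # stationary => ignored
        return set_speed                       # resume set speed, no braking
    return min(set_speed, lead.speed_mps)      # otherwise match the lead car

# The car ahead changes lanes, revealing a stopped firetruck 40 m out:
print(acc_target_speed(27.0, Track(distance_m=40.0, speed_mps=0.0)))  # 27.0
```

Note how the stopped truck and an empty lane produce identical behavior: the controller happily resumes its set speed.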
Percy | Member | Posts: 22479 | From: New Hampshire | Rating: 4.7
Nobody listens to me. There was no reply to my last post detailing problems in autonomous vehicles. And there was no reply to the comment I posted a few days ago to the Washington Post article Waymo launches nation’s first commercial self-driving taxi service in Arizona where I said this:
quote:

All I got was crickets.

I think we're mostly safe from the efforts by companies like Google and Apple and Tesla and Uber and the biggies (GM, Ford, Chrysler, etc.) to introduce fully autonomous vehicles. Their semi-autonomous vehicles are quietly driving on our roads with backup drivers behind the wheel, and they're not causing too many accidents. In a little more time, probably a couple of years, these companies will realize that the technology is far in the future, probably 20-30 years.

Back in the 1960's Richard Greenblatt wrote a chess-playing program that everyone knew as the Greenblatt program, though apparently its true name was Mac Hack because he wrote it while working at MIT on Project MAC ("Multiple Access Computer" or "Machine-Aided Cognition"). The Greenblatt program played pretty good chess, and it once beat a human player in a tournament. I played tournament chess in high school and achieved a 1346 rating (strictly pedestrian - you're not someone interesting until you reach around 1700) from the USCF (United States Chess Federation). When I reached college I encountered the Greenblatt program and played it several times, never winning, even though the Greenblatt program's rating was only 1243 (of course, by that time I was no longer taking chess seriously and had forgotten a lot). It was unerring in finding the little two and three move combinations that were fatal.

The Greenblatt program was a great emissary for computer chess, and its capabilities convinced people that computers would be beating humans within a few years. But the years turned into decades, and while many chess programs (chess systems, actually - many took the approach of adding hardware move-generation add-ons) could beat almost all humans, none could compete at the grandmaster level. That is, none until IBM's Deep Blue came along.
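A side note on those ratings: the Elo model underlying chess ratings converts a rating gap into an expected score with a standard logistic formula. A quick sketch (the formula is the standard one; the commentary is mine):

```python
def elo_expected(rating_a: float, rating_b: float) -> float:
    """Expected score (win=1, draw=0.5) for player A against player B
    under the Elo model: a 400-point gap is roughly 10:1 odds."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# A 1346 player against the 1243-rated program: about a 0.64 expected
# score per game, so losing every game says more about rust than ratings.
print(round(elo_expected(1346, 1243), 2))  # 0.64
```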
In its first encounter with Garry Kasparov, then world champion, in 1996, Deep Blue took the first game, at which point Kasparov said words to the effect of, "Oh, I see what it's doing," then won the match handily 4-2. But Kasparov played a rematch with a beefed-up Deep Blue the following year, and Deep Blue won 3½-2½. IBM then retired Deep Blue to focus their efforts on Watson, and there hasn't been a high-profile chess match between computer and human since. But the important point is how long it took between the first demonstration of a capable chess-playing program (the Greenblatt program in 1967) and the emergence of a world-class one (Deep Blue in 1997): thirty years.

It's the same with autonomous vehicles. The ones being tested today are merely demonstrating the early promise. Truly autonomous vehicles are likely 20-30 years off.

A digression: While fact-checking the above I learned that Alan Kotok, who also attended MIT, had worked a bit with Richard Greenblatt on his chess program, and that was when I discovered that Kotok died back in 2006. Few are likely to have ever heard his name (unless you've read the book Hackers by Steven Levy), but he was a genius. When I was just out of school I worked on the same team with Alan Kotok, and he was instrumental in helping my career take off. My project team (DEC would give a lot of responsibility to very young people) wrote a timing analysis program for use by the project team developing the next generation DECSystem 20. It delivered its results textually and was barely used. Alan suggested using the graphical capabilities of the newly available VT120 terminal to present results graphically, and instantly the program was the toast of the town. I would walk up and down the aisles of the cubicle farm of the DECSystem 20 team and see the program's graphical display on terminal after terminal.
A couple years later, during a presentation about a different project to a skeptical senior advisory group, I was floundering, and Kotok spoke up and offered a defense I hadn't even considered. The project went forward. A couple years after that I left DEC and never saw Alan Kotok again. I have never forgotten him, and it is very sad that he is gone. For every Steve Jobs and Bill Gates, we forget that there are legions of genius-level individuals working in the background, and Alan Kotok was one of them.

--Percy
Percy | Member | Posts: 22479 | From: New Hampshire | Rating: 4.7
Diomedes writes: One great example I always think about when it comes to bad predictions is the movie 2001: A Space Odyssey. It came out in 1968. It depicted a future (in 2001) with space habitats, a presence on the moon, routine commercial space travel, etc. Well, here we are in 2018 and we have hardly any of that, with the exception of the International Space Station, which is nowhere near as advanced as what was depicted in the movie.

In the winter of early 1968 my sophomore high school class bussed into New York City to see 2001: A Space Odyssey in Cinerama, a surround-screen and surround-sound experience. The shuttle approach to the space station to the music of Strauss's Blue Danube was spectacular and unforgettable:
I was so stunned I bought the movie soundtrack (though I had no record player) and bought and learned the piano music to the Blue Danube. The movie's visionaries conceived of a shuttle vehicle nearly identical to the eventual space shuttle not launched until 14 years later, but we've never seen anything like the space station, not in the year 2001 and not now and not for the foreseeable future.
But the really interesting portion was the HAL 9000 computer. Now in 2018, our computer technology is impressive. We have the World Wide Web. We have computers in our pockets in the form of cell phones. Near-instant communication with anyone. But when it comes to AI, or Artificial Intelligence, we don't have anything remotely close to what the HAL 9000 was. That was a fully sentient artificial intelligence. As Percy mentioned above, the only thing that has some commonality with HAL is Watson. Yet it is clearly not self-aware. It is basically a very advanced big-data mechanism.

When I was at Carnegie Mellon in the mid-1970's there was a project called Hearsay whose goal was to understand human speech. While I was there they mastered the small chess vocabulary, as in "Pawn to queen four" and "Bishop takes knight." Project leads spoke of the promise of speech recognition, saying it was already on the horizon, just a few more years off. Now, forty years later, we're just getting there. And that's just speech recognition - speech comprehension is a much bigger task.

I played around with Apple's speech recognition using Swift while I was writing the RideGuru app (see Calling All Rideshare Fans), and it was really good at recognizing addresses like "95 East Main Street, Springfield, Massachusetts," but really bad at addresses with place names with odd spellings and/or silent letters, so I left speech recognition out of the app. But playing with Siri just now I can see that Apple's speech recognition is still pretty powerful. You might imagine that "17 Gloucester Road, Worcester, Massachusetts" would give it trouble because Gloucester is pronounced "Gloster" and Worcester is pronounced "Wooster", but Siri has no trouble with it ("Hey Siri, show 17 Gloucester Road, Worcester, Massachusetts"). But try "Hey Siri, show 17 Cowesett Road, Warwick, RI" and it will get it wrong time after time no matter how carefully you pronounce "Cowesett."
If you've got Android, give it a try and see if it does any better. We've got an Echo Dot that we use a little, and Alexa is much smarter than Siri at answering questions, for example, "How many Jews were killed during World War II?" But I don't know whether it's also better at speech recognition.
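One plausible explanation for the "Cowesett" failures (my speculation, not anything Apple documents): recognizers lean on a pronunciation lexicon, and names missing from it fall back on error-prone letter-to-sound rules. A toy Python illustration, with a made-up mini-lexicon and phone spellings:

```python
# Toy pronunciation lexicon; the entries and phone spellings are
# illustrative, not from any real recognizer's dictionary.
LEXICON = {
    "gloucester": "G L AO S T ER",   # irregular: sounds like "Gloster"
    "worcester": "W UH S T ER",      # irregular: sounds like "Wooster"
}

def pronounce(word: str) -> tuple:
    """Return (phones, from_lexicon). Out-of-vocabulary words get a
    naive letter-by-letter guess, which is where odd names go wrong."""
    w = word.lower()
    if w in LEXICON:
        return LEXICON[w], True
    return " ".join(w.upper()), False  # crude letter-to-sound fallback

print(pronounce("Gloucester"))  # ('G L AO S T ER', True)
print(pronounce("Cowesett"))    # ('C O W E S E T T', False) -- a bad guess
```

A word in the lexicon wins no matter how irregular its spelling; a word outside it loses no matter how carefully you say it, which matches Percy's experience.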
This is actually an issue that is becoming more prevalent in software. Gen Xers like myself were often hobbyists who put computers together ourselves, before companies like Dell existed. And we often had to hand-code software without the benefit of more adept development environments like Microsoft Visual Studio or Eclipse for Java. These dev environments expedite coding and make things easier, but they often obfuscate a lot of the particulars of the low-level code itself. Millennials, having grown up in an environment where the low-level code is done for them, are often ill-equipped to handle certain types of problems. Give me that old time religion (meaning assembler code).

--Percy
Percy | Member | Posts: 22479 | From: New Hampshire | Rating: 4.7
kjsimons writes: I'm also a computer guy from way back (also wrote code on PDP-11s, DEC and Data Generals early on)...

I worked for DEC but knew people from DG and Prime - all their company headquarters were in the Boston suburbs. When I joined DEC in 1977, Ed de Castro's (DG's founder) departure was still a recent memory. The story as I heard it was that around the mid-1960's DEC started two competing projects to design the PDP-8 successor. The project team that designed the PDP-11 was selected, and the project team that designed the Nova, led by Ed de Castro, went off and founded DG. DEC would have done well to do whatever it took to retain Ed de Castro, because DG was a tough competitor and a thorn in DEC's side for years.

If you still remember DG's Nova and SuperNova machines fondly and haven't yet read Tracy Kidder's The Soul of a New Machine, it is well worth reading. I met a couple of the principals, but it was years ago and I no longer remember their names. The one name I do remember from the book is Tom West, the project lead. The character I best remember, even though he played a minor role, was the technician who would destroy tools once he felt they were "used up." The vignette I best remember was when one of the hardware designers (salaried, i.e., no overtime) found a technician's (hourly, i.e., overtime) paystub in a wastebasket and discovered the technician was making more than he did (remember paper paychecks?).

--Percy
Percy | Member | Posts: 22479 | From: New Hampshire | Rating: 4.7
I think crash-avoidance systems are much better than people. And on a calm, clear day on a smooth road with lines that aren't worn away and no construction, current autonomous capabilities should outperform a human driver.
What often happens with new technologies is that high initial expectations decline while existing capabilities improve. At some point diminishing expectations meet improving capabilities, and then the new technology takes off.

But take the simple case of a policeman with his hand up indicating stop. It will be a long time before autonomous vehicles recognize this situation, and without that capability these cars should not be permitted on the road without a backup driver. Google claims to have solved this problem, but I don't believe them. I don't believe Tesla's stats, either.

Today another driver and I yielded to each other. The driver in the other car signaled me to go and we made eye contact, so I went. I think it will be a long while before LIDAR plus cameras can handle that.

Two cars pull up at the same time at a 4-way stop at right angles to each other. The car to the right doesn't go (doesn't matter why). What does the car to the left do? Four cars pull up to a 4-way stop at the same time. Reminds me of the joke about the donkey midway between two haystacks. It's the vehicular equivalent of a deadly embrace, or, since it's all just software, it *is* the deadly embrace. This one has simple solutions; I just mentioned it because it seems humorous.

--Percy
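The "deadly embrace" quip can be made literal in a few lines. A toy sketch (entirely my own illustration) of the naive "yield to the car on your right" rule deadlocking when four cars arrive at once, plus one simple fix: breaking symmetry with a total order, the same trick lock hierarchies use to prevent deadlock in concurrent code.

```python
def can_go(car, waiting):
    """Naive 4-way-stop rule: you may go only if the car to your
    right is not also waiting. Cars 0-3 sit clockwise around the
    intersection, so the car to the right of `car` is (car + 1) % 4."""
    return (car + 1) % 4 not in waiting

waiting = {0, 1, 2, 3}
print([c for c in sorted(waiting) if can_go(c, waiting)])  # [] -- deadlock

# Break the symmetry with a total order: lowest id goes first.
print(min(waiting))  # 0
```

With all four cars waiting, every car sees a waiting car on its right, so the rule lets nobody move; any agreed-upon ordering dissolves the standoff.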
Percy | Member | Posts: 22479 | From: New Hampshire | Rating: 4.7
AZPaul3 writes: Hope it doesn't happen in any self driven car I'm a passenger in... It will. Hopefully not you, but people are going to get hurt, people are going to die, from these things. Tort and insurance legislation will be used to absorb that risk temporarily while the smart guys figure out how to make these things safer.

You're absolutely right - people are going to die because of software bugs and glitches, hardware failures, unanticipated situations, etc. But car accidents should decline, and the severity of the accidents that do happen should decline, too. The death and injury rate should drop precipitously. We currently have around 35,000 vehicle-related deaths per year, and that should fall below 10,000, probably way below.

But how are people going to feel about the possibility of dying because of some car failure instead of their own mistake? The people penalized the most will be those who today are the safest and most careful drivers: those who don't tailgate, always look both ways, drive at reasonable speeds, etc. Their chances of vehicular death will decline too, but not as much as your more, uh, enthusiastic drivers'. But whether we're talking about safe or enthusiastic drivers, how will they feel about having their degree of safety taken out of their hands?

--Percy
Copyright 2001-2023 by EvC Forum, All Rights Reserved