EvC Forum: Understanding through Discussion


Thread  Details

Author Topic:   Self-Driving Cars
Percy
Member
Posts: 18249
From: New Hampshire
Joined: 12-23-2000
Member Rating: 4.0


(2)
Message 106 of 142 (830436)
03-29-2018 10:05 AM
Reply to: Message 104 by Phat
03-29-2018 9:02 AM


Re: Uber's no good very bad week
Phat writes:

...which tasks should we entrust to the robotic computers and which should we keep for ourselves...

My own opinion: Level 2 is as far as we should go. To me that means safe distance maintenance and crash avoidance. No wheel control.

...that annoying Siri.

To me the most annoying thing about Siri was that in 2011 (when she was released as a native part of the iPhone) she was just starting 1st grade, and now 7 years later she's only halfway through 1st grade.

--Percy


This message is a reply to:
 Message 104 by Phat, posted 03-29-2018 9:02 AM Phat has acknowledged this reply

    
Phat
Member
Posts: 12033
From: Denver,Colorado USA
Joined: 12-30-2003
Member Rating: 1.4


Message 107 of 142 (830439)
03-29-2018 10:33 AM
Reply to: Message 105 by jar
03-29-2018 9:15 AM


Re: Uber's no good very bad week
Perhaps they could someday have a lane on major highways dedicated solely to driverless vehicles.

Chance as a real force is a myth. It has no basis in reality and no place in scientific inquiry. For science and philosophy to continue to advance in knowledge, chance must be demythologized once and for all. RC Sproul
"A lie can travel halfway around the world while the truth is putting on its shoes." ~Mark Twain
~"If that's not sufficient for you go soak your head."~Faith
Paul was probably SO soaked in prayer nobody else has ever equaled him.~Faith :)

This message is a reply to:
 Message 105 by jar, posted 03-29-2018 9:15 AM jar has responded

Replies to this message:
 Message 108 by jar, posted 03-29-2018 10:57 AM Phat has acknowledged this reply

  
jar
Member
Posts: 30934
From: Texas!!
Joined: 04-20-2004


Message 108 of 142 (830441)
03-29-2018 10:57 AM
Reply to: Message 107 by Phat
03-29-2018 10:33 AM


Re: Uber's no good very bad week
Phat writes:

Perhaps they could someday have a lane on major highways dedicated solely to driverless vehicles.

Better yet, have a separate and walled off lane for the non-driverless cars.


My Sister's Website: Rose Hill Studios | My Website

This message is a reply to:
 Message 107 by Phat, posted 03-29-2018 10:33 AM Phat has acknowledged this reply

  
Percy
Member
Posts: 18249
From: New Hampshire
Joined: 12-23-2000
Member Rating: 4.0


(1)
Message 109 of 142 (832678)
05-07-2018 10:09 PM
Reply to: Message 101 by Percy
03-27-2018 2:39 PM


Re: Video of the Pedestrian Collision
Percy writes:

It was a software problem, possibly in the GPU, but most likely in the computer software dedicated to scene analysis.

Did I call it or what: Uber vehicle reportedly saw but ignored woman it struck:

quote:
The only possibilities that made sense were:

A: Fault in the object recognition system, which may have failed to classify Herzberg and her bike as a pedestrian. This seems unlikely since bikes and people are among the things the system should be most competent at identifying.

B: Fault in the car's higher logic, which makes decisions like which objects to pay attention to and what to do about them. No need to slow down for a parked bike at the side of the road, for instance, but one swerving into the lane in front of the car is cause for immediate action. This mimics human attention and decision making and prevents the car from panicking at every new object detected.

The sources cited by The Information say that Uber has determined B was the problem. Specifically, it was that the system was set up to ignore objects that it should have attended to; Herzberg seems to have been detected but considered a false positive.
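That failure mode - a real object detected but dismissed as a false positive - is easy to illustrate with a toy filter. This is purely a hypothetical sketch; none of the names or thresholds come from Uber's actual system:

```python
# Hypothetical sketch: a tracker that discards detections whose class
# keeps changing, treating them as false positives. All names invented.

def should_brake(track_history, min_stable_frames=3):
    """Return True only if the latest detections agree on a class.

    track_history: list of class labels for one tracked object,
    newest last, e.g. ["unknown", "vehicle", "bicycle"].
    """
    if len(track_history) < min_stable_frames:
        return False  # not enough evidence yet
    recent = track_history[-min_stable_frames:]
    # If the classifier keeps flip-flopping, the object is dismissed
    # as noise -- exactly the failure mode described above.
    return len(set(recent)) == 1

print(should_brake(["unknown", "vehicle", "bicycle"]))  # flip-flopping: ignored
print(should_brake(["bicycle", "bicycle", "bicycle"]))  # stable: act on it
```

A rule like this is reasonable for discarding sensor noise, but it means a real pedestrian whose classification keeps changing gets filtered out along with the noise.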


Percy


This message is a reply to:
 Message 101 by Percy, posted 03-27-2018 2:39 PM Percy has acknowledged this reply

Replies to this message:
 Message 110 by Stile, posted 05-09-2018 11:15 AM Percy has responded

    
Stile
Member
Posts: 3367
From: Ontario, Canada
Joined: 12-02-2004
Member Rating: 4.3


Message 110 of 142 (832746)
05-09-2018 11:15 AM
Reply to: Message 109 by Percy
05-07-2018 10:09 PM


Re: Video of the Pedestrian Collision
The plot thickens!

Percy's Article writes:

The sources cited by The Information say that Uber has determined B was the problem. Specifically, it was that the system was set up to ignore objects that it should have attended to; Herzberg seems to have been detected but considered a false positive.

I wonder where the issue lies here.

Options I can think of (can certainly be more than 1 going on at a time):

1 - Hardware (radar, lidar, vision systems, sensors...) was not purchased at a level it should have been for the application.
That is, the company saved money on "cheaper equipment" that could only be right most-of-the-time instead of all-the-time for this scenario.
-fault is on designers

2 - Programmers were not very good. Bad programmers = bad programming = they simply "didn't think" that this scenario would come up.
-fault is on programmers

3 - Programmers were good, but not given enough time to setup the system to the levels the equipment is capable of... they were pushed to get something out that was "good enough" even though it could have been better given more time/money.
-fault is on leaders (owners/managers...)

Taking the quote literally "the system was set up to ignore objects that it should have attended to" implies to me that it's more on the programmers and/or leaders. But it's possible this wording is not meant to be taken that literally and it's still a design problem.


This message is a reply to:
 Message 109 by Percy, posted 05-07-2018 10:09 PM Percy has responded

Replies to this message:
 Message 111 by Percy, posted 05-10-2018 9:55 AM Stile has acknowledged this reply
 Message 115 by Phat, posted 12-12-2018 9:53 AM Stile has acknowledged this reply

    
Percy
Member
Posts: 18249
From: New Hampshire
Joined: 12-23-2000
Member Rating: 4.0


(2)
Message 111 of 142 (832779)
05-10-2018 9:55 AM
Reply to: Message 110 by Stile
05-09-2018 11:15 AM


Re: Video of the Pedestrian Collision
If you're breaking things down generally then yeah, sure, an accident with a self-driving car could be hardware or software or a combination.

I should mention one little wrinkle about the hardware. While there is at some level a clear and explicit line of demarcation between hardware and software, that line is probably not one that Uber programmers have any control over or visibility into. I'm not speaking from direct knowledge about the lidar and radar and camera systems that the self-driving car companies employ, but in all likelihood their software doesn't interact directly with the perception systems. Rather, those systems probably have their own APIs, and Uber links those APIs into its own software and makes calls to them. This raises questions. How well are those APIs documented (i.e., how easy would it be for programmers to misunderstand what an API routine is doing)? What is the quality level of those API routines?
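To make the layering concrete, here's a hypothetical sketch of what calling a vendor perception API through an integration layer might look like. Every name here is invented; it only illustrates where a documentation misunderstanding could hide:

```python
# Hypothetical sketch of the layering described above: application code
# never touches the lidar directly; it calls a vendor API whose internal
# behavior (and quality) it cannot see. All names are invented.

class VendorLidarAPI:
    """Stand-in for a third-party perception library."""
    def get_objects(self):
        # Opaque to the integrator: returns whatever the vendor's
        # firmware and drivers decided to report.
        return [{"class": "bicycle", "range_m": 52.0}]

class PerceptionAdapter:
    """Integrator-side glue layer wrapping the vendor API."""
    def __init__(self, lidar):
        self.lidar = lidar

    def nearby_obstacles(self, max_range_m=60.0):
        # A misreading of the vendor docs here (units? coordinate
        # frame?) silently corrupts everything downstream.
        return [o for o in self.lidar.get_objects()
                if o["range_m"] <= max_range_m]

adapter = PerceptionAdapter(VendorLidarAPI())
print(adapter.nearby_obstacles())
```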

And concerning Uber specifically, their self-driving cars happened so suddenly that I bet Uber did not write the scene composition software. They likely bought it from someone. In fact, they likely bought a lot of their software from other sources, then attempted to integrate it. That's the way large software systems happen these days. They aren't written from scratch - they're amalgamations of software from many different sources.

In some ways this is the realization of a dream. When I started programming nearly a half century ago it was already understood how low productivity was when every new software system began from scratch, and there was already the hope of drawing upon preprogrammed modules. C was an early example: the base language had no I/O, which was added via an included header file providing the interface API and a linked-in module implementing the API's routines.

Over time this dream has become a reality, but all dreams have the possibility of becoming nightmares, and that is the case here, at least in part. Huge software systems can be built almost overnight simply by combining software garnered from many sources, but at the cost of a loss of control. You can't enforce the quality of the acquired software. When there's a new release of the acquired software, will it be the same quality as the previous version? Have new bugs been introduced? (Undoubtedly.)

And the acquired software will likely do most of what you need it to do, but not all, and it will not do it in ways that you would prefer. The data provided by one set of software will often not be precisely what is required by other sets of software. In the end a great deal of glue software must be written. There's ample opportunity for mistakes of all types.
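A hypothetical example of that glue: one module reports distances in feet under one field name, the next expects meters under another. The record format and field names below are invented for illustration:

```python
# Hypothetical sketch of "glue" code: module A reports ranges in feet
# with one field name, module B expects meters with another. Every such
# adapter is a place for mistakes. Names and formats are invented.

FT_PER_M = 3.28084

def adapt_detection(raw):
    """Convert a tracker-style record into what the planner expects."""
    return {
        "label": raw["obj_class"],
        # Unit conversion: a missed or doubled conversion here is the
        # classic integration bug.
        "distance_m": raw["dist_ft"] / FT_PER_M,
    }

print(adapt_detection({"obj_class": "pedestrian", "dist_ft": 32.8084}))
```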

Programmers are never given enough time to do a good job on the software, and software testing is almost always inadequate because QA departments are not profit centers, and so when layoff time rolls around the departments that get hit the hardest are personnel, finance, program management and QA. Programmers frequently get stuck testing their own software, which is a major no-no because the guy who programmed it has major blind spots about where the weaknesses in his software lie. Also, QA is its own specialty, and just because you're a crack programmer doesn't mean you're any good at QA.

Much software is released prematurely, meaning that the customers become a reluctant adjunct to the software company's QA efforts. The standard estimate is that a bug is ten times more costly to fix when detected in the field (i.e., by a customer) than when detected before release, but this fact is rarely heeded. Companies get burned, fix their policies, then over time they pick away at those policies to speed up release cycles, and pretty soon they're back where they started.
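The arithmetic behind that standard estimate is sobering even with made-up numbers (the dollar figures below are purely illustrative):

```python
# Back-of-envelope illustration of the "10x" rule of thumb quoted
# above. Dollar figures are invented for illustration only.

cost_pre_release = 1_000        # cost to fix a bug found by QA
field_multiplier = 10           # standard estimate cited above
bugs_escaping_to_field = 25

qa_cost = bugs_escaping_to_field * cost_pre_release
field_cost = bugs_escaping_to_field * cost_pre_release * field_multiplier
print(f"caught in QA: ${qa_cost:,}  vs  caught in the field: ${field_cost:,}")
```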

If the software in question is something like a schedule calendar or photo album then the consequences of bugs are minor, but if the software is for a nuclear power plant or a space shuttle or a self-driving car then the consequences of bugs can be deadly.

Programmers have little leverage. Vaguely expressed concerns about possible remaining problems in the software will always go unheeded, because the programmer can't know the specific consequences, like that under certain circumstances it could fail to properly classify an object as something to be avoided and will plow right into it, and if that object is a person then it could kill them. Rather, all he can say is, "If we don't delay the release x weeks (thereby delaying revenue) and spend y dollars on more testing (thereby increasing the cost center's debits and making managers look bad) then something bad might happen." And managers will comfort themselves that there's always a driver in the car monitoring things and ready to take over in a split second, though obviously they should know that's just overoptimistic bullshit.

Completely self-driving cars are a utopian dream for the foreseeable future. What they can already do is amazing, but what they can't do is formidable and frightening. Google and Tesla and Uber and all the rest can do all the development and testing they want, but for a long time people are still going to have to drive their own cars. But just crash avoidance systems alone will significantly reduce injuries and deaths due to accidents.

--Percy


This message is a reply to:
 Message 110 by Stile, posted 05-09-2018 11:15 AM Stile has acknowledged this reply

Replies to this message:
 Message 114 by Phat, posted 12-12-2018 9:47 AM Percy has acknowledged this reply
 Message 117 by Phat, posted 12-12-2018 10:09 AM Percy has acknowledged this reply

    
Percy
Member
Posts: 18249
From: New Hampshire
Joined: 12-23-2000
Member Rating: 4.0


Message 112 of 142 (833695)
05-25-2018 1:18 PM


Blowing own horn again...
I already practically broke my arm patting myself on the back in Message 109:

Percy in Message 109 writes:

Percy writes:

It was a software problem, possibly in the GPU, but most likely in the computer software dedicated to scene analysis.

Did I call it or what: Uber vehicle reportedly saw but ignored woman it struck:

But the NTSB report about the Uber crash just came out (at bottom of NTSB: Uber Self-Driving Car Had Disabled Emergency Brake System Before Fatal Crash) and it says:

quote:
According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision (see figure 2).

So the problems with scene analysis were far worse than I could ever have imagined, and far slower, too. It detected an unidentified object 6 seconds before impact, but didn't figure out it was a bicycle until 1.3 seconds before impact. What was it doing for 4.7 seconds?
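Running the NTSB's numbers through simple kinematics shows just how much road those 4.7 seconds represent (assuming a constant 43 mph, per the report):

```python
# Working through the NTSB figures quoted above: how far the car
# traveled during each phase, assuming a constant 43 mph.

mph_to_mps = 0.44704
v = 43 * mph_to_mps                 # ~19.2 m/s

detection_window = 6.0              # first radar/lidar return, seconds before impact
decision_point = 1.3                # emergency braking deemed necessary

dist_at_detection = v * detection_window   # ~115 m from the pedestrian
dist_at_decision = v * decision_point      # ~25 m -- far too late
wasted = dist_at_detection - dist_at_decision

print(f"detected at {dist_at_detection:.0f} m, decided at {dist_at_decision:.0f} m, "
      f"{wasted:.0f} m spent reclassifying")
```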

But it gets worse, though this next part has nothing to do with scene analysis:

quote:
According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.

The automatic emergency braking was disabled, so the driver had to do the braking. But as the video shows, the pedestrian was practically invisible until the car was on top of her. The system wasn't designed to alert the driver to brake, but even if it had been, the driver would have had only 1.3 seconds to hit the brakes and turn the wheel.

So Uber's system was bad in many ways.

But Tesla isn't covering themselves with glory, either. Did everyone hear about the Tesla that killed its driver (who wasn't paying attention, a major no-no, but still) by colliding with an already-collapsed crash cushion? From Tesla says Autopilot was on during deadly California crash:

quote:
There was a major announcement from Tesla Friday evening about last week's crash in Mountain View, California, that killed an engineer from Apple. The company confirms the autopilot "was" engaged when the Model X slammed into a collapsed safety barrier.

Thirty-eight-year-old Apple engineer and father of two, Walter Huang, died one week ago on his way to work when his Tesla Model X slammed into a crash cushion that had collapsed in an accident eleven days before -- basically like hitting a brick wall, the experts say.
...
Friday morning, a science director at an environmental startup took the ABC7 News I-Team's Dan Noyes in his Model X on the same route Huang drove last week to Apple. He was heading to the 85 carpool lane off 101 in Mountain View. "I see what the issue is," said Sean Price. "That line in the pavement could potentially be a problem," he said, pointing out a break between the asphalt and concrete and two white lines.


So possibly the Tesla thought the joint between pavement sections was the lane divider line and followed what it thought was the lane right into a collapsed crash cushion. This not only seems likely to me; I'm certain of it. My minimally self-driving car (cruise control with auto-distance maintenance) also has "crossing out of lane without signaling" detection (it beeps). This lane-maintenance detection regularly goes off when the car moves across pavement joints. All the pothole patching crews are out right now, so it also regularly goes off these days as I cross actual dividing lines to go around repair crews.

These repair crews have men stationed at each end of the repair area with "Stop/Slow" signs that I'm still very doubtful these self-driving cars can properly handle. And a policeman with his hand up? Not a prayer.

Then a week or two ago there was the Tesla crash at 60 mph into the rear of a firetruck stopped at a red light. From Tesla in Autopilot mode sped up before crashing into stopped fire truck, police report says:

quote:
Data from the Model S electric sedan show the car picked up speed for 3.5 seconds before crashing into the fire truck in suburban Salt Lake City, the report said. The driver manually hit the brakes a fraction of a second before impact.

Police suggested that the car was following another vehicle and dropped its speed to 55 mph to match the leading vehicle. They say the leading vehicle then probably changed lanes and the Tesla automatically sped up to its preset speed of 60 mph without noticing the stopped cars ahead.


I've experienced the same thing in my minimally self-driving car. The cruise control feature is only to be used on the highway, but I use it everywhere. You can't trust it when the car in front of you moves out of your lane and the car tries to sync up with the next car in front. A sudden acceleration is common. Sometimes it detects the next car up and slows down again, sometimes not. And if the next car up is stopped at a light? Nothing. No braking. Sounds a lot like that Tesla that hit the rear of the firetruck. In my previous post I wrote about software systems today being amalgamations of software pieces from a variety of vendors. It isn't impossible that the bit of software responsible for that firetruck crash is the same as in my own car.
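The resume-to-set-speed behavior described in the police report can be sketched as a toy control rule. This is an invented simplification, not any vendor's actual logic:

```python
# Hypothetical sketch of the adaptive-cruise behavior described above:
# when the lead car leaves the lane, the controller resumes its set
# speed unless it has detected the NEXT obstacle ahead. Logic invented.

def target_speed(set_speed, lead_speed, lead_detected):
    """Pick the cruise target (mph) each control cycle."""
    if lead_detected:
        # Follow the slower of: our set speed, the lead vehicle.
        return min(set_speed, lead_speed)
    # No lead detected -> resume set speed. If a stopped fire truck
    # sits ahead but was never classified as a lead vehicle, the car
    # accelerates toward it, as in the Salt Lake City crash.
    return set_speed

print(target_speed(60, 55, True))   # following at 55
print(target_speed(60, 55, False))  # lead changes lanes: back to 60
```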

--Percy


Replies to this message:
 Message 113 by Percy, posted 12-12-2018 9:10 AM Percy has acknowledged this reply

    
Percy
Member
Posts: 18249
From: New Hampshire
Joined: 12-23-2000
Member Rating: 4.0


(1)
Message 113 of 142 (845100)
12-12-2018 9:10 AM
Reply to: Message 112 by Percy
05-25-2018 1:18 PM


Re: Blowing own horn again...
Nobody listens to me. There was no reply to my last post detailing problems in autonomous vehicles. And there was no reply to the comment I posted a few days ago to the Washington Post article Waymo launches nation's first commercial self-driving taxi service in Arizona, where I said this:

quote:
I was pleasantly surprised at all the skeptical comments - I was expecting a lot of people would drink the Kool-Aid. Completely autonomous vehicles are in our future the same way as rocket packs and flying cars. The technology is amazing, but a technology that can only handle 99.9% of all situations still means thousands of accidents a day across a country the size of the US, and they're nowhere near 99.9% yet. It's easy to list situations these cars can't handle:

- Policeman directing traffic
- Construction site where the guy with the "Slow/Stop" sign gets it backwards
- Traffic lights are out
- Fog
- Heavy rain
- Snow
- Snow partially or completely obscuring all road lines or even where the edges of the road are
- Stuck in snow (can these cars rock themselves out?)
- Stop sign obscured by vegetation
- Weather worn or handwritten detour sign
- Bad or old data, such as having a one-way street going in the wrong direction, or having the wrong speed limit, or roads or interchanges that have been redesigned.

Vehicles will become completely autonomous about the same time you can have an intelligent conversation with your phone - in other words, no time soon. Companies like Waymo are fooling themselves that the technology is just a few tweaks away from going mainstream, and their unjustified optimism appears in all their public communications.

Very effective crash avoidance systems already exist and are all we really need to achieve most of the safety goals of completely autonomous vehicles. They'll substantially reduce the vehicle fatality rate and that will be a wonderful thing.

But being able to watch a video while your car drives you to work is just not in the cards anytime soon.


All I got was crickets.

I think we're mostly safe from the efforts by companies like Google and Apple and Tesla and Uber and the biggies (GM, Ford, Chrysler, etc.) to introduce fully autonomous vehicles. Their semi-autonomous vehicles are quietly driving on our roads with backup drivers behind the wheels, and they're not causing too many accidents. In a little more time, probably around a couple years, these companies will realize that the technology is far in the future, probably 20-30 years.

Back in the 1960's Richard Greenblatt wrote a chess-playing program that everyone knew as the Greenblatt program, though apparently its true name was Mac Hack because he wrote it while working at MIT on Project MAC ("Multiple Access Computer" or "Machine-Aided Cognition"). The Greenblatt program played pretty good chess, and it once beat a human player in a tournament. I played tournament chess in high school and achieved a 1346 rating (strictly pedestrian - you're not someone interesting until you reach around 1700) from the USCF (United States Chess Federation).

When I reached college I encountered the Greenblatt program and played it several times, never winning even though the Greenblatt program's rating was only 1243 (of course by that time I was no longer taking chess seriously and had forgotten a lot). It was unerring in finding little two and three move combinations that were fatal. The Greenblatt program was a great emissary for computer chess, and its capabilities convinced people that computers would be beating humans within a few years.
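For anyone curious, those USCF numbers can be plugged into the standard Elo expected-score formula:

```python
# The ratings mentioned above, fed into the standard Elo expected-score
# formula, show what a 1346 player "should" score against a 1243 program.

def elo_expected(r_a, r_b):
    """Expected score (0..1) for player A against player B."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

e = elo_expected(1346, 1243)
print(f"expected score: {e:.2f}")  # roughly 0.64 -- yet the program kept winning
```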

But the years turned into decades, and while many chess programs (chess systems, actually - many took the approach of adding hardware move-generation add-ons) could beat almost all humans, none could compete at the grandmaster level. That is, none until IBM's Deep Blue came along. In its first encounter in 1996 with Garry Kasparov, then world champion, it took the first game, at which point Kasparov said words to the effect of, "Oh, I see what it's doing," and then won the match handily 4-2.

But Kasparov played a rematch with a beefed up Deep Blue the following year, and Deep Blue won 3½-2½. IBM then retired Deep Blue to focus their efforts on Watson, and there hasn't been a high profile chess match between computer and human since.

But the important point was how long it took between the first demonstration of a capable chess playing program, the Greenblatt program in 1967, and the emergence of a world-class chess playing program in 1997, Deep Blue thirty years later.

It's the same with autonomous vehicles. The ones being tested today are merely demonstrating the early promise. Truly autonomous vehicles are likely 20-30 years off.


A digression: While fact checking the above I learned that Alan Kotok, who also attended MIT, had worked with Richard Greenblatt a bit on his chess program, and that was when I discovered that Kotok died back in 2006. No one is likely to have ever heard his name (unless you've read the book Hackers by Steven Levy), but he was a genius.

When I was just out of school I worked on the same team with Alan Kotok, and he was instrumental in helping my career take off. My project team (DEC would give a lot of responsibility to very young people) wrote a timing analysis program for use by the project team developing the next generation DECSystem 20. It delivered its results textually and was barely used. Alan suggested using the graphical capabilities of the newly available VT120 terminal to present results graphically and instantly the program was the toast of the town. I would walk up and down the aisles of the cubicle farm of the DECSystem 20 team and see the program's graphical display on terminal after terminal.

A couple years later during a presentation about a different project to a skeptical senior advisory group I was floundering, and Kotok spoke up and offered a defense I hadn't even considered. The project went forward. A couple of years later I left DEC and never saw Alan Kotok again.

I have never forgotten Alan Kotok and it is very sad that he is gone. For every Steve Jobs and Bill Gates, we forget that there are legions of genius-level, top-notch individuals working in the background, and Alan Kotok was one of them.

--Percy


This message is a reply to:
 Message 112 by Percy, posted 05-25-2018 1:18 PM Percy has acknowledged this reply

Replies to this message:
 Message 118 by kjsimons, posted 12-12-2018 10:14 AM Percy has responded
 Message 119 by NosyNed, posted 12-12-2018 10:36 AM Percy has responded
 Message 121 by PaulK, posted 12-12-2018 11:43 AM Percy has acknowledged this reply
 Message 122 by Stile, posted 12-12-2018 12:14 PM Percy has acknowledged this reply

    
Phat
Member
Posts: 12033
From: Denver,Colorado USA
Joined: 12-30-2003
Member Rating: 1.4


Message 114 of 142 (845103)
12-12-2018 9:47 AM
Reply to: Message 111 by Percy
05-10-2018 9:55 AM


Part Of The Problem
I took a minute to read your last two posts. I didn't know that you had a semi-autonomous car...I guess that those are common these days. Your estimation of the technology being 20-30 years off sounds quite logical--and coming from the field of study that you were in, you're well equipped to analyse these news stories and provide reasonable critique. Here is my 2 cents:

Many of the younger Millennials who are in the industry that designs and builds this technology are, in my opinion, far removed from the nuts-and-bolts technology of an actual car, not to mention the early technology involved in computer design. They have grown up around virtual reality gaming and how computers work in that environment, yet are not as aware of the real-world limitations of such technology. Driver reaction time is one primary example. In an RPG, the "driver" is the player and is totally immersed in the game. In the example of a self-driving car, there seems to be a fantasy of...as you say...watching videos and texting on your phone while the car gets you across town. The problem is, among other things, that the real world of the town that must be navigated is not designed and built within the confines of the "game". One may well build a SIMS car for their SIMS characters in a game, having designed and built the game from the ground up. One cannot as easily integrate the dynamic demands of an onboard computer navigating an actual city. Do you understand what I'm attempting to say?


Chance as a real force is a myth. It has no basis in reality and no place in scientific inquiry. For science and philosophy to continue to advance in knowledge, chance must be demythologized once and for all. RC Sproul
"A lie can travel halfway around the world while the truth is putting on its shoes." ~Mark Twain
~"If that's not sufficient for you go soak your head."~Faith

You can "get answers" by watching the ducks. That doesn't mean the answers are coming from them.~Ringo

Subjectivism may very well undermine Christianity.
In the same way that "allowing people to choose what they want to be when they grow up" undermines communism.
~Stile


This message is a reply to:
 Message 111 by Percy, posted 05-10-2018 9:55 AM Percy has acknowledged this reply

Replies to this message:
 Message 116 by Diomedes, posted 12-12-2018 10:09 AM Phat has not yet responded

  
Phat
Member
Posts: 12033
From: Denver,Colorado USA
Joined: 12-30-2003
Member Rating: 1.4


Message 115 of 142 (845104)
12-12-2018 9:53 AM
Reply to: Message 110 by Stile
05-09-2018 11:15 AM


Re: Video of the Pedestrian Collision
Stile writes:

I wonder where the issue lies here.

Options I can think of (can certainly be more than 1 going on at a time):

1 - Hardware (radar, lidar, vision systems, sensors...) was not purchased at a level it should have been for the application.
That is, the company saved money on "cheaper equipment" that could only be right most-of-the-time instead of all-the-time for this scenario.
-fault is on designers

2 - Programmers were not very good. Bad programmers = bad programming = they simply "didn't think" that this scenario would come up.
-fault is on programmers

3 - Programmers were good, but not given enough time to setup the system to the levels the equipment is capable of... they were pushed to get something out that was "good enough" even though it could have been better given more time/money.
-fault is on leaders (owners/managers...)

Taking the quote literally "the system was set up to ignore objects that it should have attended to" implies to me that it's more on the programmers and/or leaders. But it's possible this wording is not meant to be taken that literally and it's still a design problem.

I agree. They wanted to design a system as cost-effectively as possible. It would be like building an RPG game under pressure to be competitive in the market. The glitch in the "system" is human as much as it is programming. The technology may or may not exist to build the right vehicles, but if they are under pressure to get it marketable and "just good enough," they have the wrong approach. And again, the way I see it, the other glitch is designing the technology to coexist with the real world, not an artificial simulation of the real world.



This message is a reply to:
 Message 110 by Stile, posted 05-09-2018 11:15 AM Stile has acknowledged this reply

  
Diomedes
Member
Posts: 799
From: Central Florida, USA
Joined: 09-13-2013
Member Rating: 3.5


(1)
Message 116 of 142 (845105)
12-12-2018 10:09 AM
Reply to: Message 114 by Phat
12-12-2018 9:47 AM


Re: Part Of The Problem
Your estimation of the technology being 20-30 years off sounds quite logical--and coming from the field of study that you were in, you're well equipped to analyse these news stories and provide reasonable critique.

I work in the software industry and I concur. We are a long way off when it comes to fully autonomous vehicles. What I see likely occurring in the near to mid term is that cars/trucks will have some autopilot capabilities built into them. This will actually be beneficial for long-haul drives. But they won't have the necessary AI to perform complex tasks yet. We will get there eventually, but there are a myriad of problems to overcome.

Oftentimes people get drawn into the hype when charismatic tech leaders like Musk tout the futuristic capabilities of their technology. But a lot of that is just sales pitches. Pie-in-the-sky boasts are part and parcel of getting visibility for your brand.

One great example I always think about when it comes to bad predictions is the movie 2001: A Space Odyssey. It came out in 1968. It depicted a future (in 2001) with space habitats, a presence on the moon, routine commercial space travel, etc. Well, here we are in 2018 and we have hardly any of that, with the exception of the International Space Station, which is nowhere near as advanced as what was depicted in the movie.

But the really interesting portion was the Hal 9000 computer. Now in 2018, our computer technology is impressive. We have the World Wide Web. We have computers in our pockets in the form of cell phones. Near-instant communication with anyone. But when it comes to AI, or Artificial Intelligence, we don't have anything remotely close to what the Hal 9000 was: a fully sentient artificial intelligence. As Percy mentioned above, the only thing that has some commonality with Hal is Watson, yet it is clearly not self-aware. It is basically a very advanced big-data mechanism.

So yes, we have made progress. But we have a long way to go.

Many of the younger Millennials who are in the industry that designs and builds this technology are, in my opinion, far removed from the nuts and bolts technology of an actual car, not to mention the early technology involved in computer design.

This is actually an issue that is becoming more prevalent in software. Many Gen Xers like myself were hobbyists who put computers together ourselves, before companies like Dell existed, and we often had to hand-code software without the benefit of more adept development environments like Microsoft Visual Studio or Java Eclipse. These dev environments expedite coding and make things easier, but they often obfuscate a lot of the particulars of the low-level code itself. Millennials, having grown up in an environment where the low-level code is done for them, are often ill-equipped to handle certain types of problems.


This message is a reply to:
 Message 114 by Phat, posted 12-12-2018 9:47 AM Phat has not yet responded

Replies to this message:
 Message 124 by Percy, posted 12-12-2018 3:13 PM Diomedes has responded

  
Phat
Member
Posts: 12033
From: Denver,Colorado USA
Joined: 12-30-2003
Member Rating: 1.4


Message 117 of 142 (845107)
12-12-2018 10:09 AM
Reply to: Message 111 by Percy
05-10-2018 9:55 AM


Self Driven Humans
If the software in question is something like a schedule calendar or photo album then the consequences of bugs are minor, but if the software is for a nuclear power plant or a space shuttle or a self-driving car then the consequences of bugs can be deadly.
You are saying it better than I can. Technology is great, but humans need to figure out what they want it to do and what they want to be relieved of the responsibility of doing. Anyone who would buy a self-driving car needs to ask themselves what stakes they are taking on.

Completely self-driving cars are a utopian dream for the foreseeable future. What they can already do is amazing, but what they can't do is formidable and frightening. Google and Tesla and Uber and all the rest can do all the development and testing they want, but for a long time, people are still going to have to drive their own cars. But just crash avoidance systems alone will significantly reduce injuries and deaths due to accidents.
Always the optimist! The goal is to forge ahead with the technology with blinders off. We can't be in a hurry to adopt the technology.



This message is a reply to:
 Message 111 by Percy, posted 05-10-2018 9:55 AM Percy has acknowledged this reply

  
kjsimons
Member
Posts: 663
From: Orlando,FL
Joined: 06-17-2003


(1)
Message 118 of 142 (845108)
12-12-2018 10:14 AM
Reply to: Message 113 by Percy
12-12-2018 9:10 AM


Re: Blowing own horn again...
I'm also a computer guy from way back (also wrote code on PDP-11s, DEC and Data Generals early on) and I'm very skeptical about self-driving cars. They are still having what I consider major issues with just the auto-braking systems of cars, even on the high-end Teslas. Car and Driver just did a comparison of several and the results were not impressive. Some have many false alarms (i.e., brake for no reason) and none work at highway speeds (which is why Teslas keep crashing into stopped vehicles/barriers at high speeds). Now in their defense, these systems aren't designed to work at higher speeds, but this is partly due to the increased cost, as they would need more computing power to figure out the real-world situation in time to respond.
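As a rough illustration of why highway speeds are so much harder for these systems, here is a back-of-the-envelope stopping-distance sketch. The reaction delay and friction coefficient below are my own assumed figures, not from Car and Driver or any manufacturer; the point is only that braking distance grows with the square of speed, so sensing range that suffices in town falls badly short on the highway:

```python
# Total stopping distance = reaction distance + braking distance.
# Assumptions (illustrative only): 1.5 s for the system to sense and
# classify the obstacle, friction coefficient 0.7 (dry asphalt).

G = 9.81          # gravity, m/s^2
MU = 0.7          # assumed tire-road friction coefficient
REACTION_S = 1.5  # assumed sense-and-classify delay, seconds

def stopping_distance_m(speed_kmh: float) -> float:
    """Reaction distance plus braking distance, in metres."""
    v = speed_kmh / 3.6                 # km/h -> m/s
    reaction = v * REACTION_S           # distance covered before braking starts
    braking = v * v / (2 * MU * G)      # kinematics: v^2 / (2 * mu * g)
    return reaction + braking

for kmh in (50, 100, 130):
    print(f"{kmh} km/h -> {stopping_distance_m(kmh):.0f} m")
```

Under these assumptions the totals come out around 35 m at 50 km/h, 98 m at 100 km/h, and 149 m at 130 km/h, so doubling the speed nearly triples the distance within which the system must detect and react.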
This message is a reply to:
 Message 113 by Percy, posted 12-12-2018 9:10 AM Percy has responded

Replies to this message:
 Message 128 by Percy, posted 12-12-2018 4:23 PM kjsimons has not yet responded

  
NosyNed
Member
Posts: 8829
From: Canada
Joined: 04-04-2003
Member Rating: 4.2


(2)
Message 119 of 142 (845112)
12-12-2018 10:36 AM
Reply to: Message 113 by Percy
12-12-2018 9:10 AM


sneaking up on us
I don't know enough to be sure of any guesses about the future, but what I think is happening is that the automated driving systems are gradually moving up. At first they are better than the inattentive driver who isn't looking ahead, and they autobrake. This happens today. Are they successful every time? No. Are they successful more often than the average driver? Ah, I don't think we have the statistics on that yet, but if they are not better now, they will be within very few years.

The systems will not be perfect for a long, long time, I agree. But they have a pretty low bar to leap. Can they get better than the average driver in the majority of scenarios? Not such a difficult task.

https://www.driving.co.uk/...aught-asleep-behind-wheel-70mph
In that one case the car was much better than the driver. Score a point for "AI".

Are we beginning to get the needed overall statistics?
https://www.insurancejournal.com/...al/2018/10/08/503583.htm
The source is Tesla and I'd rather see insurance company statistics since they have a rather different view but:

Tesla said it recorded one accident for every 3.34 million miles driven when the autopilot was engaged. That is a vastly better record than the one compiled by humans.

and
In Tesla cars that do not have the autopilot engaged, the company said it recorded one accident or crash-like event every 1.92 million miles.

Perhaps that is a whole bunch more points for the "AI" or maybe having autopilot on actually makes drivers more attentive which is the opposite of what I would expect.

Additionally:

The most recent National Highway Traffic Safety Administration data shows one auto crash for every 492,000 miles driven in the U.S. without an autonomous assist.

Are autopilots already 2 to 10 times safer than the average human? There are already hints.
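Taking the figures quoted above at face value (Tesla's own press numbers plus the NHTSA average, so the usual caveat applies that Autopilot miles are disproportionately easy highway miles), the implied ratios are simple to check:

```python
# Miles per reported crash, from the figures quoted in the post above.
miles_per_crash = {
    "Tesla, Autopilot engaged": 3_340_000,
    "Tesla, Autopilot off":     1_920_000,
    "US average (NHTSA)":         492_000,
}

# Express each as a multiple of the overall US average.
baseline = miles_per_crash["US average (NHTSA)"]
for label, miles in miles_per_crash.items():
    print(f"{label}: {miles / baseline:.1f}x the US average")
```

That works out to roughly 6.8x the US average with Autopilot engaged and 3.9x without it, which is where the "2 to 10 times" range above comes from.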


This message is a reply to:
 Message 113 by Percy, posted 12-12-2018 9:10 AM Percy has responded

Replies to this message:
 Message 129 by Percy, posted 12-12-2018 4:59 PM NosyNed has not yet responded

  
Tangle
Member
Posts: 6608
From: UK
Joined: 10-07-2011
Member Rating: 3.9


Message 120 of 142 (845118)
12-12-2018 11:23 AM


I'd have to spend more time than I want to showing this, but new technologies always take an awful lot longer than you'd expect to be adopted. If you're interested, it's covered by diffusion of innovation theory.

I can't remember the exact figure now, but it was an extraordinarily long time after colour television was invented before it reached mass-market penetration. And we already had b&w TVs.

But there's a lot of evidence showing that the innovation curve is getting faster and, of course, there's vast quantities of investment and effort going into both AI and AI cars now which tends to speed things up.
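For anyone curious, diffusion-of-innovation adoption is usually modelled as an S-shaped logistic curve: slow uptake among early adopters, then rapid growth, then saturation. A minimal sketch, where the midpoint year and growth rate are made-up parameters purely for illustration, not a forecast:

```python
import math

def adoption_share(year: float, midpoint: float = 2035.0, rate: float = 0.4) -> float:
    """Fraction of the market that has adopted by `year` (logistic model).

    `midpoint` is the year of 50% adoption; `rate` controls how steep
    the S-curve is. Both are illustrative assumptions.
    """
    return 1.0 / (1.0 + math.exp(-rate * (year - midpoint)))

for y in (2020, 2030, 2035, 2040, 2050):
    print(f"{y}: {adoption_share(y):.0%}")
```

The heavy investment mentioned above would show up in this model as a larger `rate` (a steeper curve) or an earlier `midpoint`, which is exactly the "innovation curve getting faster" effect.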


Je suis Charlie. Je suis Ahmed. Je suis Juif. Je suis Parisien. I am Mancunian. I am Brum. I am London. I am Finland. Soy Barcelona

"Life, don't talk to me about life" - Marvin the Paranoid Android

"Science adjusts it's views based on what's observed.
Faith is the denial of observation so that Belief can be preserved."
- Tim Minchin, in his beat poem, Storm.


Replies to this message:
 Message 123 by NosyNed, posted 12-12-2018 12:33 PM Tangle has not yet responded

  


Copyright 2001-2018 by EvC Forum, All Rights Reserved

™ Version 4.0 Beta
Innovative software from Qwixotic © 2019