Author Topic:   Self-Driving Cars
ringo
Member (Idle past 432 days)
Posts: 20940
From: frozen wasteland
Joined: 03-23-2005


Message 91 of 142 (830293)
03-26-2018 1:04 PM
Reply to: Message 90 by Stile
03-26-2018 12:44 PM


Re: Video of the Pedestrian Collision
Stile writes:
... but it does seem that "something" isn't right with this pedestrian. Perhaps a mental issue? Perhaps headphones? Perhaps an indignant "vehicles stop for me!" attitude? Perhaps a family emergency or other super-important issue is on this lady's mind at the moment? Something else?
Suicide by self-driving car? An ill-conceived bet that, "I can fool that thing."?

An honest discussion is more of a peer review than a pep rally. My toughest critics here are the people who agree with me. -- ringo

This message is a reply to:
 Message 90 by Stile, posted 03-26-2018 12:44 PM Stile has seen this message but not replied

  
Percy
Member
Posts: 22479
From: New Hampshire
Joined: 12-23-2000
Member Rating: 4.7


Message 92 of 142 (830308)
03-26-2018 3:16 PM
Reply to: Message 89 by Stile
03-26-2018 12:36 PM


Re: The Downside of Human Instructors
The approach where, when the car doesn't know what to do, it pulls over and stops so the human can take over would be Level 3.
--Percy

This message is a reply to:
 Message 89 by Stile, posted 03-26-2018 12:36 PM Stile has replied

Replies to this message:
 Message 93 by Stile, posted 03-26-2018 3:30 PM Percy has seen this message but not replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


Message 93 of 142 (830310)
03-26-2018 3:30 PM
Reply to: Message 92 by Percy
03-26-2018 3:16 PM


Re: The Downside of Human Instructors
Percy writes:
The approach where when the car doesn't know what to do it pulls over and stops so the human can take over would be Level 3.
That seems to be a rather simple solution to "the handover problem," no?
That way you don't have to worry about forcing the handover to happen while the car is moving. *If* the driver takes over while the car is slowing down and pulling over... then all the more efficient for the driver. If the driver doesn't react fast enough... then the vehicle just pulls over and stops.
Or maybe I'm missing something that makes this problem harder?
The worst issue with this is that if there are a lot of problems up ahead, then a lot of vehicles could "slow down and stop and wait for the driver to take over" - which could possibly increase traffic issues.
However, I think the benefit of vehicles being automated would alleviate many other traffic issues. I think it would be a pretty decent trade-off.
As well, each time the vehicle shuts down and pulls over... the data causing it to do so could be sent back to head-office so it could be analyzed for possible situational improvement of the vehicle-automation-system. No more waiting for tired people to "teach" the vehicle.
I suppose, though, that the beginning would be the worst.
And who wants to be the first company to have everyone swearing at them: "Damnit, another one of those shitty [insert company name here] cars slowing everything down!!"
Perhaps the industry is simply waiting for some company to feel risky enough to take that step.
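The pull-over-and-stop fallback discussed above can be sketched as a small state machine. This is a purely illustrative sketch - the mode names, the timeout, and the transition rules are all my assumptions, not any vendor's actual logic:

```python
# Illustrative Level-3-style fallback: when automation confidence drops,
# request a handover; if the driver doesn't respond within a timeout,
# pull over and stop. All names and thresholds here are assumptions.
from enum import Enum, auto

class Mode(Enum):
    AUTOMATED = auto()           # system is driving
    HANDOVER_REQUESTED = auto()  # waiting for the human to take over
    MANUAL = auto()              # human is driving
    PULLING_OVER = auto()        # no response in time: slow down, pull over
    STOPPED = auto()             # parked on the shoulder, awaiting the human

class FallbackController:
    def __init__(self, handover_timeout_s=10.0):
        self.mode = Mode.AUTOMATED
        self.timer_s = 0.0
        self.timeout_s = handover_timeout_s

    def step(self, dt_s, confidence_ok, driver_took_over):
        if self.mode is Mode.AUTOMATED and not confidence_ok:
            self.mode = Mode.HANDOVER_REQUESTED
            self.timer_s = 0.0
        elif self.mode is Mode.HANDOVER_REQUESTED:
            if driver_took_over:
                self.mode = Mode.MANUAL
            else:
                self.timer_s += dt_s
                if self.timer_s >= self.timeout_s:
                    self.mode = Mode.PULLING_OVER
        elif self.mode is Mode.PULLING_OVER:
            # (a real controller would decelerate over many steps first)
            self.mode = Mode.STOPPED
        return self.mode
```

A driver who responds in time ends up in MANUAL; one who never responds ends up STOPPED on the shoulder, which is the "slow down, pull over, wait for the driver" behavior described above.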

This message is a reply to:
 Message 92 by Percy, posted 03-26-2018 3:16 PM Percy has seen this message but not replied

  
caffeine
Member (Idle past 1045 days)
Posts: 1800
From: Prague, Czech Republic
Joined: 10-22-2008


Message 94 of 142 (830314)
03-26-2018 4:34 PM
Reply to: Message 81 by Stile
03-22-2018 3:27 PM


Re: Video of the Pedestrian Collision
Stile writes:
Surprisingly little.
The bulk of your computer's processing is running stuff it doesn't really need to run... just stuff you want it to be running so that it's ready for your 'day-to-day' stuffs at any time.
Strip all that away, design and program it specifically for the task at hand and nothing more, and things get much faster. A lot faster.
The typical 40 robot, 300 motion side-rail example I mentioned above would run off a program taking up less than 1 MB of file size.
The maximum storage of the entire industrial computer would even be in the 2-to-4 MB range.
It's not a very "powerful" computer, it's simply a specifically, efficiently designed one used for the tasks it's programmed for and nothing else.
Thanks for the feedback.
Been thinking about this a bit - two thoughts come to mind.
First is that any consumer smart car would be full of bloatware - since we would also expect it to surf the internet, play music, etc. etc. But this could be avoided by having two separate systems - one for the user interface and one to actually drive the car.
However, my second question is whether industrial computers are ever required to do tasks of this complexity. You talked about your experience with factory robots; but these are operating in very controlled conditions - they have to react to things moving quickly within a fairly narrowly defined set of parameters (the production line); plus presumably some mechanism to spot when something is going beyond these parameters (such as things falling off the conveyor belt).
It strikes me that navigating a city while reacting to the natural flow of the world around you is a computational task several orders of magnitude more intensive. Or am I overestimating this or underestimating what industrial robots do?

This message is a reply to:
 Message 81 by Stile, posted 03-22-2018 3:27 PM Stile has replied

Replies to this message:
 Message 100 by Stile, posted 03-27-2018 9:15 AM caffeine has not replied

  
NoNukes
Inactive Member


Message 95 of 142 (830316)
03-26-2018 5:25 PM
Reply to: Message 87 by Percy
03-24-2018 9:08 PM


Re: The Downside of Human Instructors
Percy writes:
I can see driver rotation alleviating attention fatigue the first week or two on the job, but might not the sheer passivity and monotony of the exercise for week after week make the effective attention period for each shift shorter and shorter?
Even if that were true, it is not an insurmountable obstacle. After all, we do allow folks to do complex tasks in shifts for periods of time extending months and years. There probably is a burnout point, but is there any reason to believe it is something we cannot handle with some feasible, but maybe costly, rotation? I would not think so.
To pursuing Level 5? Wouldn't Level 2 be as effective in saving lives?
I don't believe it would be as effective in saving lives. With minor, but meaningful exceptions, computers are already better drivers than are humans. So the issue of having humans have to step in just is not going to go away.

Under a government which imprisons any unjustly, the true place for a just man is also in prison. Thoreau: Civil Disobedience (1846)
"Give me your tired, your poor, your huddled masses yearning to breathe free, the wretched refuse of your teeming shore. Send these, the homeless, tempest-tossed to me, I lift my lamp beside the golden door!"
We got a thousand points of light for the homeless man. We've got a kinder, gentler, machine gun hand. Neil Young, Rockin' in the Free World.
Worrying about the "browning of America" is not racism. -- Faith
I hate you all, you hate me -- Faith

This message is a reply to:
 Message 87 by Percy, posted 03-24-2018 9:08 PM Percy has replied

Replies to this message:
 Message 96 by Percy, posted 03-26-2018 9:41 PM NoNukes has replied

  
Percy
Member
Posts: 22479
From: New Hampshire
Joined: 12-23-2000
Member Rating: 4.7


Message 96 of 142 (830319)
03-26-2018 9:41 PM
Reply to: Message 95 by NoNukes
03-26-2018 5:25 PM


Re: The Downside of Human Instructors
NoNukes writes:
I can see driver rotation alleviating attention fatigue the first week or two on the job, but might not the sheer passivity and monotony of the exercise for week after week make the effective attention period for each shift shorter and shorter?
Even if that were true, it is not an insurmountable obstacle.
Insurmountable wasn't how I was thinking of it - like many problems it is solvable if sufficient resources are dedicated to it. But that's the issue - will sufficient resources be brought to bear? There's a broader question of to what degree will safety be sacrificed to the bottom line. The history of car manufacturers is that they didn't focus on safety until government regulation forced them to.
During my career in software development I observed that cost cutting forced by economic downturns always focused on areas that would not directly impact revenue. Quality assurance was one of those areas. Autonomous driving is mostly a software problem.
Car manufacturers have a long and continuous record of poor performance when it comes to safety. Ford just recalled 1.4 million cars where the steering wheel can come loose. We're still working our way through the Takata airbag recall. Volkswagen engaged in deception with emission controls. Toyota recalled 9 million cars for faulty accelerator pedals in 2010. And the 2009 Toyota Prius had a little known software recall involving interaction between the braking system (which is a combination of normal and regenerative braking) and uneven surfaces.
I don't think the performance of car manufacturers will be any better with autonomous driving capabilities. There will be recalls of both software and hardware. On the design side, when the autonomous driving software is dropped into a new model that has the newer or different versions of the radar and the lidar and the optical cameras, how much testing will they do? Will the same software work for a low-slung sports car as for a high sitting SUV? Car manufacturers will want to incorporate the sensors currently in the box on the roof into the car body - how much of an effect will that have?
Here's a scenario: During the 2025 model year Ford discovers they can save money by swapping out the NVidia GPU for the Gigabyte GPU. How much testing will they do?
To pursuing Level 5? Wouldn't Level 2 be as effective in saving lives?
I don't believe it would be as effective in saving lives. With minor, but meaningful exceptions, computers are already better drivers than are humans. So the issue of having humans have to step in just is not going to go away.
Could you rephrase this - wasn't sure what you meant concerning Level 5 (no human intervention) saving more lives but there always being a need for human intervention.
About "With minor, but meaningful exceptions, computers are already better drivers than are humans," I don't know what a "minor but meaningful exception" is, but I do think there are many cases where computers are not better drivers than humans. Obviously computers excel at some tasks, and arguably they will someday excel at all tasks, but can they handle snow? Heavy rain? Mud splashed up on the sensors? Does Lidar work through mud? Can the system detect the traffic light hanging over the intersection that a strong wind has twisted to face the wrong way? Will it "see" the stop sign on the country road that vegetation is hiding?
By the way, I figured out that it should be no big deal to move the Level 5 autonomous vehicle with no steering wheel or pedals that is blocking Uncle Fred. The cameras provide a 360 degree view of the surroundings, so on the display you simply drag/drop the outline of the vehicle to where (including orientation) you want it to be.
Computers can do amazing things these days, and that can cause us to forget just how stupid they can be. I was witness to some of the early days of speech recognition research back in the mid-1970s. People on the project were sure that conversations with computers were just a decade away (delusions about AI were also rampant at the time), yet here we are more than forty years later and I can only give Siri the most rudimentary of instructions. Turns out a video of that effort is still on the Internet. The opening narrator doesn't introduce himself, so I'll mention that he's Professor Raj Reddy - he must be long retired by now.
--Percy

This message is a reply to:
 Message 95 by NoNukes, posted 03-26-2018 5:25 PM NoNukes has replied

Replies to this message:
 Message 97 by NoNukes, posted 03-26-2018 10:06 PM Percy has seen this message but not replied

  
NoNukes
Inactive Member


Message 97 of 142 (830321)
03-26-2018 10:06 PM
Reply to: Message 96 by Percy
03-26-2018 9:41 PM


Re: The Downside of Human Instructors
The history of car manufacturers is that they didn't focus on safety until government regulation forced them to.
I don't think there is much danger of safety getting overlooked here (where "here" means the process of getting the first approvals for automated cars). I am glad that we agree that the problems are not insurmountable. In this case, though, I think that the problems with alertness during the learning process border on being trivial to solve.
All of the levels below Level 5 require humans to take over, and humans make errors. Every level of automation invites or encourages some level of driver inattention, but only the higher levels provide enough assistance to deal with inattention. I am not against lower levels of automation being placed in cars, but I don't see them as a huge leap from a safety standpoint.


This message is a reply to:
 Message 96 by Percy, posted 03-26-2018 9:41 PM Percy has seen this message but not replied

  
Percy
Member
Posts: 22479
From: New Hampshire
Joined: 12-23-2000
Member Rating: 4.7


Message 98 of 142 (830322)
03-26-2018 10:58 PM


Uber’s no good very bad week

Replies to this message:
 Message 99 by jar, posted 03-27-2018 6:59 AM Percy has seen this message but not replied

  
jar
Member (Idle past 414 days)
Posts: 34026
From: Texas!!
Joined: 04-20-2004


Message 99 of 142 (830328)
03-27-2018 6:59 AM
Reply to: Message 98 by Percy
03-26-2018 10:58 PM


Re: Uber’s no good very bad week
From the link you provided...
quote:
The car, a white Toyota Camry, got stuck at about 1:23 p.m. local time, according to the San Francisco Police Department. The car was being driven by a human at the time of the accident, according to an employee at Safeway.
And the picture shows a car without the additional pods seen on the Uber self-driving cars, so it is very likely that it was just a normal car.

My Sister's Website: Rose Hill Studios | My Website

This message is a reply to:
 Message 98 by Percy, posted 03-26-2018 10:58 PM Percy has seen this message but not replied

Replies to this message:
 Message 104 by Phat, posted 03-29-2018 9:02 AM jar has replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


Message 100 of 142 (830333)
03-27-2018 9:15 AM
Reply to: Message 94 by caffeine
03-26-2018 4:34 PM


Re: Video of the Pedestrian Collision
caffeine writes:
But this could be avoided by having two separate systems - one for the user interface and one to actually drive the car.
Exactly.
The requirement for this would be considered by those who create the system (I hope).
If one computer can do both while keeping all required reaction times within acceptable parameters... then they'll use only one.
If two are needed, then they'll separate them and use two.
Or at least, I hope they would.
It strikes me that navigating a city while reacting to the natural flow of the world around you is a computational task several orders of magnitude more intensive (than industrial robots in a controlled environment). Or am I overestimating this or underestimating what industrial robots do?
You're absolutely right.
Part of my post was dedicated to showing that equipment exists that *is* "orders of magnitude" faster/stronger than the computers used to run robots in controlled environments.
I've never used it much (I tend to work with systems that need to react on the order of seconds... not microseconds or nanoseconds). So my experience there is lacking.
But I have witnessed it, and I know it exists.
You could google things such as "high speed manufacturing" for some examples.
Things like modern bottling plants - bottles whip by at a pace of 50 or 60 per second near the end, by the packaging portion. Each one has its picture taken by a camera that doesn't care about the orientation of the bottle (because it could be spun in any direction), but it does monitor all of it and check the label for accuracy or any deficiencies.
-This would not be the "fastest of the fastest" of computing power; that would likely be in very niche businesses.
-However, this is the basics of the "high-speed capabilities" of computers.
-I wouldn't be surprised if certain computers could go another order of magnitude above such things (high-speed bottling plants have been around for 20-30 years).
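As a quick sanity check on those line speeds, the per-bottle time budget is easy to work out. The 50-60 bottles-per-second rates come from the post above; everything else is just arithmetic:

```python
# Per-bottle time budget at the quoted line speeds: the inspection
# system gets the reciprocal of the rate to image and check each bottle.
def per_item_budget_ms(items_per_second):
    return 1000.0 / items_per_second

for rate in (50, 60):
    print(f"{rate} bottles/s -> {per_item_budget_ms(rate):.1f} ms per bottle")
# 50 bottles/s -> 20.0 ms per bottle
# 60 bottles/s -> 16.7 ms per bottle
```

So the camera, the label check, and the pass/fail decision all have to fit inside roughly 17-20 milliseconds per bottle.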
Again, I'm not saying I know that the vehicle could have totally avoided this accident.
But I am saying that, if everything worked as intended, it should have identified "an object" and begun to take countermeasures (swerving? braking?).
Whether or not it could do everything fast enough to avoid the accident entirely... I'm not sure.
But it definitely should be good enough to detect the object and start doing something.
I believe the news report is that the vehicle did not react or even slow down at all before the collision.
This leads me to believe that something didn't work as intended.
-maybe object was not detected - if so, obviously need some better work on detecting pedestrians
-maybe decision making was flawed - if so, obviously need better programmers
-maybe reaction did not function as desired - if so, might need better hardware and physical equipment

This message is a reply to:
 Message 94 by caffeine, posted 03-26-2018 4:34 PM caffeine has not replied

Replies to this message:
 Message 101 by Percy, posted 03-27-2018 2:39 PM Stile has replied

  
Percy
Member
Posts: 22479
From: New Hampshire
Joined: 12-23-2000
Member Rating: 4.7


Message 101 of 142 (830350)
03-27-2018 2:39 PM
Reply to: Message 100 by Stile
03-27-2018 9:15 AM


Re: Video of the Pedestrian Collision
Stile writes:
But I am saying that, if everything worked as intended, it should have identified "an object" and begun to take countermeasures (swerving? braking?).
Whether or not it could do everything fast enough to avoid the accident entirely... I'm not sure.
But it definitely should be good enough to detect the object and start doing something.
I believe the news report is that the vehicle did not react or even slow down at all before the collision.
I looked a bit into how far autonomous vehicles can detect objects. I started by looking at braking distance because that would tell me the distance available to stop. Obviously it would not be acceptable for autonomous vehicles to use maximum braking force for normal situations like red lights, stop signs, slowing traffic, etc. In my part of the country we do have 50 mph roads with traffic lights. Obviously cars actually travel at 55-60 mph on these roads. A comfortable stopping distance at 60 mph is 350 feet (Braking distance, friction and behavior). This distance must be increased if the road has a downgrade or is curved or both, so let's call it 500 feet. Therefore an autonomous vehicle must be able to see traffic lights, stop signs, stopped traffic, construction, pedestrians, etc., from a distance of at least 500 feet.
But cars travel faster than 60 mph, so let's bump that distance up to 1000 feet.
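The distances above follow from basic kinematics, d = v²/(2a). In this sketch the deceleration (3.4 m/s²) is an assumed "comfortable" value, chosen because it roughly reproduces the ~350-foot figure for 60 mph cited above; real comfortable braking varies with road, grade, and passengers:

```python
# Stopping distance from basic kinematics: d = v^2 / (2*a).
# The deceleration (3.4 m/s^2) is an assumed "comfortable" value chosen
# to roughly reproduce the ~350 ft figure for 60 mph quoted above.
MPH_TO_MPS = 0.44704   # miles per hour -> meters per second
M_TO_FT = 3.28084      # meters -> feet

def stopping_distance_ft(speed_mph, decel_mps2=3.4):
    v = speed_mph * MPH_TO_MPS
    return v * v / (2.0 * decel_mps2) * M_TO_FT

for mph in (60, 80):
    print(f"{mph} mph -> {stopping_distance_ft(mph):.0f} ft")
# 60 mph -> 347 ft; 80 mph -> 617 ft
```

At 80 mph that comes to roughly 620 feet before adding any reaction time, downgrade, or curve allowance, which is consistent with rounding the required sight distance up toward 1000 feet.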
My slightly self-driving car, when cruise controlling at 80 mph and approaching slower traffic, begins to slow down at a distance of what feels like around 400-600 feet (the range is broad because naturally I can only estimate). My car uses millimeter-wave radar and can "see" vehicles as small as motorcycles. It can probably see further than 400-600 feet, but there's no way I would know, since the car would not see the need to slow down at greater distances.
Radar obviously has the distance, but radar cannot recognize a traffic light or a stop sign.
Regarding radar and pedestrians, radar obviously reflects best off metal objects, and Radar Cross Section for Pedestrian in 76GHz Band says that a pedestrian's reflection is 15-20 dB down from the rear of a vehicle (76 GHz is a wavelength of 3.9 mm). 15-20 dB down is a lot - only about a thirtieth to a hundredth as strong. Radar is not the best way to see a pedestrian.
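For reference, decibels convert to power ratios as ratio = 10^(dB/10), which is what makes a 15-20 dB loss so punishing. This is plain math, with no assumptions beyond the figures quoted above:

```python
# dB to power ratio: a -15 dB return is ~1/32 of the reference power,
# and -20 dB is exactly 1/100.
def db_to_power_ratio(db):
    return 10.0 ** (db / 10.0)

for db in (-15, -20):
    print(f"{db} dB -> {db_to_power_ratio(db):.4f} x reference power")
# -15 dB -> 0.0316 x reference power
# -20 dB -> 0.0100 x reference power
```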
So what about lidar? Police use lidar speed guns that can see over a thousand feet. Autonomous vehicles use lidar to create a "point cloud" from which the GPU builds a schematic view of the vehicle's surroundings. Since the amount of detail increases with the cube of the distance, lidar can only look out so far, otherwise the amount of detail would overwhelm the GPU and any computational analysis of the scene.
I couldn't find anything definitive about lidar distance, but How Driverless Cars See the World Around Them says, "It [lidar] provides information only about objects that are relatively close." That tells me that seeing a traffic light or stop sign at 1000 feet might be asking a lot, but the GPU can probably be programmed to look further forward than it does off to the sides and rear, and the GPU and computer can be programmed to place additional emphasis on areas of the scene where traffic lights and stop signs are normally found. I'll guess that lidar can see traffic lights and stop signs and pedestrians at distances up to a thousand feet. But of course lidar can't tell which colored light of a traffic light is on, or read a stop sign (though it can see the shape, which is probably all it needs).
How Driverless Cars See the World Around Them also says, "And when multiple autonomous vehicles drive the same road, their lidar signals can interfere with one another." Well, that's very disturbing. If true then that problem has to be fixed before autonomous vehicles can become a reality.
Lidar also has difficulty telling snowflakes and raindrops from real objects, though Ford claims a solution: Driverless cars have a new way to navigate in rain or snow.
The lidar in autonomous vehicles uses infrared frequencies for its lasers. I think water, slush, mud and snow can strongly attenuate lidar. Anyone who has ever had a satellite dish in snow country knows the annoyance of having to sweep the snow out of the dish periodically during a snowstorm. And in any part of the country heavy rainstorms can also block the signal. Though the frequencies differ, I think lidar suffers from analogous issues.
That leaves visual cameras for telling the color of a traffic light. Once lidar has identified an object as a traffic light, the visual cameras should have little difficulty discerning the color that is on.
Here's a paper from Google on Traffic Light Mapping and Detection.
Naturally rain and mud and snow also interfere with visual cameras. If mud or slush or snow splashes up your visual camera, what then?
Anyway, bottom line about pedestrians, there seems little reason lidar wouldn't be able to detect a pedestrian on a clear night as much as 1000 feet away, particularly the one struck by the Uber vehicle. It was a software problem, possibly in the GPU, but most likely in the computer software dedicated to scene analysis.
--Percy

This message is a reply to:
 Message 100 by Stile, posted 03-27-2018 9:15 AM Stile has replied

Replies to this message:
 Message 102 by Stile, posted 03-27-2018 3:03 PM Percy has seen this message but not replied
 Message 109 by Percy, posted 05-07-2018 10:09 PM Percy has seen this message but not replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


Message 102 of 142 (830351)
03-27-2018 3:03 PM
Reply to: Message 101 by Percy
03-27-2018 2:39 PM


Re: Video of the Pedestrian Collision
Wow. Thanks for all the cool information.
"And when multiple autonomous vehicles drive the same road, their lidar signals can interfere with one another." Well, that's very disturbing. If true then that problem has to be fixed before autonomous vehicles can become a reality.
Seems rather important.
Naturally rain and mud and snow also interfere with visual cameras. If mud or slush or snow splashes up your visual camera, what then?
Concerns like this go away if a company decides to stick at a Level 3-ish area.
What then? Swap to manual. Slow down, pull over. Same conclusion for every detection of "not operating within valid parameters."
Then the data can be analyzed back at the shop under completely safe VM environmental conditions, and maybe the problems will get solved.
Once they're all solved (or some threshold of reduced problems over time is reached) - move to Level 5-ish.
If they never get solved, then I agree that Level 5-ish isn't something to aim for with so many questions still floating about.
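The "threshold of reduced problems over time" idea could be made concrete as a fleet metric - for instance, the monthly disengagement rate staying below a target for several consecutive months. The target, the window, and the function names here are entirely hypothetical; this is just one way to operationalize the idea:

```python
# Hypothetical "graduation" criterion for the threshold idea above: the
# fleet's monthly disengagement rate (pull-overs per mile) must stay
# below a target for N consecutive months. Target and window are made up.
def ready_to_advance(monthly_rates, target=1e-4, months_required=6):
    recent = monthly_rates[-months_required:]
    return len(recent) == months_required and all(r < target for r in recent)

# A fleet whose rate falls steadily and holds under 1 per 10,000 miles:
improving = [5e-4, 3e-4, 9e-5, 8e-5, 7e-5, 6e-5, 5e-5, 4e-5]
print(ready_to_advance(improving))  # True
```

The same data sent back to "head office," as suggested earlier in the thread, would be what feeds a metric like this.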
Anyway, bottom line about pedestrians, there seems little reason lidar wouldn't be able to detect a pedestrian on a clear night as much as 1000 feet away, particularly the one struck by the Uber vehicle. It was a software problem, possibly in the GPU, but most likely in the computer software dedicated to scene analysis.
If anyone's wondering, a quick google-search tells me that lidar works just as well at night as it does during the day. Night or day makes no difference for lidar - it only needs physical things it can bounce off of and return to the emitter.
Concluding a software problem seems like a decent assumption to me.
Which also assumes that the lidar system was working properly itself and didn't get damaged 10 sec. before the accident...
Or, again, perhaps the software worked correctly but the brake line was cut so the system wanted to hit the brakes, but they just didn't work.
I agree, however, that "damage to the input (lidar/other) or output (brakes/other) systems" is less likely than the idea that the programming just messed up (or wasn't built with this idea in mind... or something like that).
My point is that "most likely" a software/decision-making issue doesn't mean "definitely" a software/decision-making issue.
Without further information, anyway.

This message is a reply to:
 Message 101 by Percy, posted 03-27-2018 2:39 PM Percy has seen this message but not replied

  
PaulK
Member
Posts: 17825
Joined: 01-10-2003
Member Rating: 2.2


Message 103 of 142 (830427)
03-29-2018 2:45 AM


Interesting developments
From The Register - not a top source, but usually not too bad.
Apparently the Uber car had only a single top-mounted LIDAR sensor, which creates a blind spot low and close to the vehicle. Most self-driving vehicles have 5 or 6 LIDAR sensors, including side-mounted ones.
A spokesman for Aptiv, which supplies other sensors and software, claimed that their systems were disabled on the car in question.
Looks like a case of Uber being Uber and cutting too many corners.

  
Phat
Member
Posts: 18298
From: Denver,Colorado USA
Joined: 12-30-2003
Member Rating: 1.1


Message 104 of 142 (830428)
03-29-2018 9:02 AM
Reply to: Message 99 by jar
03-27-2018 6:59 AM


Re: Uber’s no good very bad week
jar writes:
quote:
The car, a white Toyota Camry, got stuck at about 1:23 p.m. local time, according to the San Francisco Police Department. The car was being driven by a human at the time of the accident, according to an employee at Safeway.
And the picture shows a car without the additional pods seen on the Uber self-driving cars, so it is very likely that it was just a normal car.
It appears that this driver, though very human, was relying far too much on GPS technology rather than simply driving slowly and observing the area. My point is that there are accidents caused solely by human error, accidents caused solely by driverless technology, and accidents caused by an ill-conceived synthesis of the two.
Perhaps a valid question which we need to ask ourselves is this:
As technology and computer systems advance, which tasks should we entrust to the robotic computers and which should we keep for ourselves in the interest of retaining mental sharpness?
My sister relies on her smartphone GPS device to guide her to locations that she is unfamiliar with on the city map, whereas I enjoy the challenge of writing down the directions and following the markers to the location myself. Her device actually audibly speaks out instructions to her on which direction to turn whereas I make those decisions on my own, based on my familiarity with the spatial layout of the city itself.
Of course, were I in a strange city, I might rely on the GPS voice more than I do here, but I still like to feel as if I am choosing the directions versus having them spoken at me by that annoying Siri.

Chance as a real force is a myth. It has no basis in reality and no place in scientific inquiry. For science and philosophy to continue to advance in knowledge, chance must be demythologized once and for all. —RC Sproul
"A lie can travel half way around the world while the truth is putting on its shoes." —Mark Twain
~"If that's not sufficient for you go soak your head."~Faith
Paul was probably SO soaked in prayer nobody else has ever equaled him.~Faith

This message is a reply to:
 Message 99 by jar, posted 03-27-2018 6:59 AM jar has replied

Replies to this message:
 Message 105 by jar, posted 03-29-2018 9:15 AM Phat has replied
 Message 106 by Percy, posted 03-29-2018 10:05 AM Phat has seen this message but not replied

  
jar
Member (Idle past 414 days)
Posts: 34026
From: Texas!!
Joined: 04-20-2004


Message 105 of 142 (830429)
03-29-2018 9:15 AM
Reply to: Message 104 by Phat
03-29-2018 9:02 AM


Re: Uber’s no good very bad week
The real question to me is whether even imperfect computers are better drivers than the average human.
The automated system does not have to be perfect or even near perfect, it just has to be better than humans. The bar favoring automated driving systems over humans is really, really low.


This message is a reply to:
 Message 104 by Phat, posted 03-29-2018 9:02 AM Phat has replied

Replies to this message:
 Message 107 by Phat, posted 03-29-2018 10:33 AM jar has replied

  
Copyright 2001-2023 by EvC Forum, All Rights Reserved
