If I'm driving the car myself this isn't a problem since I have plenty of time to signal and make a safe turn into the grocery store parking lot, but if I'm in a self-driving car I assume I'll be way past the grocery store by the time I even call up the screen to enter my new destination. What is needed is a quick way to instruct the car to "turn right here."
Not necessary. The "experience" modules will have already pre-planned the safest spur-of-the-moment routes to your grocery store on the right, your liquor store on the left and that Burger King you went to once that you just passed.
"Hey, Lexus2! Take me to that Burger King we just passed." Have it your way.
In the "Residential Experience" modules the car will not just watch the road and cars around it but will be looking for the ball bouncing across the yard with a kid chasing it, will be anticipating the "maybe's" of that dog chasing the frisbee two doors up and will prepare for the possibility that the car just parked up ahead will have its driver(less)-side door suddenly pop open.
All these situations we humans know can happen but never seem to think about until they're right up in our faces will have been anticipated by your Lexus2 with full action plans ready to execute. Even in those unique events that could not be anticipated (a kid pops out from behind a parked car with a van coming head-on the other way), the swerve into the van's lane would be such as to effect a glancing blow rather than a head-on strike, something that may not be achieved by a human in the midst of the panic.
There will always be something unanticipated, and some may die as a result, but when these things come rolling out to the general public I think we will be surprised at just how robust an experience set these neural nets will bring with them. Then, of course, there is the subscription for the daily updates, which will be a requirement of your insurance, probably rolled right into the premium. Plus, when there are accidents, the "black box" recordings from not just the cars involved but every vehicle within sight line will make analysis stronger.
Sit back, relax, make whoopee with the wife, it's all good.
I should hope Google is serious about the 25 mph. California law limits their vehicles to that speed and it certainly wouldn't do Google any good to be caught speeding.
These are R&D test vehicles. Are you assuming that 5 - 10 years from now their production units will be so limited? I wouldn't think so.
Ahead I see some neighbors on a walk and I want the car to stop next to them. Will it be able to do this? Yes? No? Sometimes? Sort of?
That's really not important in an R&D test vehicle. Google has a lot of more important things to do with the car right now. Wait for the production models and ask then. And even if the eventual answer is no, is that going to control your buying decision? I would think $150,000 in computer and LIDAR costs above the cost of the car would be more of a concern. Definitely gotta have wifi ... and a 16" monitor. How about AC and the power package? Heated seats?
I hadn't heard this before and wasn't able to confirm it.
In your link to the story there is a further link to the police department's blog which contains yet another link to the applicable law Google has to operate under. Neighborhood Electric Vehicle Definition per 385.5 of the California Vehicle Code.
Why California stuck Google under the Golf Cart law, and why Google accepted this, is outside my willingness to search, but there it is.
I doubt a driverless vehicle has the capacity to make extreme evasive maneuvers to avoid something like a deer darting out in the road at night as effectively as a human brain can process and react at this point in time.
My view is quite different. Even today, while the human is wont to slam on the brakes, most often leading to the dreaded brake-lock disaster scenario, anti-lock brakes take over to prevent it.
I see the self-driving car as having reaction times and reaction protocols much faster and safer than any human could follow. The self-driving car is constantly scanning for potential threats in a 360 pattern while the "driver" is distracted with pushing buttons trying to find the right tune on their 8-track tape deck. Yes, the return of the 8-track: Abba, Wayne Newton, Air Supply. The car would see the deer tracking in from the left-ahead long before, while your eyes are looking to the right at the center console, and would have evasive maneuvers planned long before you found "Danke Schoen."
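To put some rough numbers on the reaction-time point, here is a toy calculation. The figures are assumptions for illustration only: something like 1.5 seconds for a distracted human driver to notice and react, versus perhaps 0.1 seconds for a machine's perception-to-actuation loop.

```python
# Illustrative sketch: distance covered during reaction time,
# before braking or swerving even begins. The reaction times
# (1.5 s human, 0.1 s machine) are assumed, not measured values.
def reaction_distance_m(speed_kmh: float, reaction_s: float) -> float:
    """Meters traveled during the reaction delay at a given speed."""
    return speed_kmh / 3.6 * reaction_s

human_m = reaction_distance_m(50, 1.5)    # roughly 21 m of travel
machine_m = reaction_distance_m(50, 0.1)  # roughly 1.4 m
```

Even with generous assumptions for the human, the car that reacts in a tenth of a second has begun its maneuver while the deer is still many car lengths away.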
I agree with the timing you stated, however. Production models of the autonomous car (the auto-car?) are going to be quite expensive (with a capital X) for some time to come. Further, with the amount of stupid out there, even though they will be shown to be considerably safer, the Auto-Car™ will suffer image problems with every fender bender. This will delay the trust factor in accepting the technology.
The sensor only sees what is actually happening and cannot anticipate.
Quite the contrary, with learning algorithms AI systems do not just react but are being "taught" to anticipate.
In the scenario you describe I can imagine the car's AI seeing the truck up ahead with a load aboard. Loads sometimes spill. Before even approaching the truck, the car has queried a central database for similar situations and has downloaded some action responses. One, of course, would be to keep well back and avoid any potential spill path. There will always be those instances where the load breaks loose just as the car begins passing the slower truck, but I submit that the reaction timing, steering and braking, would be considerably faster and more effective than any startled human in such a situation could manage.
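A minimal sketch of what that "query for similar situations, download action responses" idea might look like. Everything here is hypothetical: the situation names, the responses, and the flat dictionary standing in for a central database that a real system would match against using rich sensor features.

```python
# Toy stand-in for a central database of known situations and
# pre-planned responses. All entries are made up for illustration.
SITUATION_DB = {
    "truck_with_visible_load": [
        "increase_following_distance",
        "avoid_potential_spill_path_when_passing",
        "pre_stage_hard_brake_profile",
    ],
}

def plan_for(situation: str) -> list:
    """Return downloaded action responses, most conservative first,
    with a generic fallback for situations not in the database."""
    return SITUATION_DB.get(situation, ["maintain_safe_following_distance"])
```

The point of the sketch is only the shape of the idea: the conservative responses are fetched and staged *before* the car ever commits to the pass, so the reaction is a lookup, not a deliberation.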
Sometimes a situation develops where the AI and the human end up just as dead regardless. But I am confident that the probabilities of survival can be enhanced by the much faster, dispassionate reactions of a properly developed AI.
In the UK anyone using a car has to be insured by law, at least to the level of compensating third parties affected by any accidents you might cause. Is this not the case in the US?
Yes. Most states set minimums for the necessary liability coverage, usually required in two forms: one for property liability, one for medical liability. The insurance company must issue an insurance certificate attesting that you have at least both state minimums and showing the valid coverage dates. This proof of insurance must be shown to the state's motor vehicle department in order to register the car and get your plates.
Also, when the cops stop you for any reason in your car they will ask, politely at first, usually, for your valid driver's license, your valid state registration on the vehicle and your valid proof of insurance card. If you do not have each of these you are subject to citation and/or fine, unless you're black in which case you get the citation, and the fine, and you get shot.
If the human elected to pass, then any negligence involved in his decision might be figured into the liability involved if a spill happens and some disaster results.
I don't see it. If I decide to pass a truck (assuming the pass attempt is legal) and the truck's load comes undone causing damage to me and mine, where is my liability? Please explain.
If my pass attempt was somehow contributory to the load coming undone (because I'm a lousy driver and hit the truck in the rear tire, causing the trucker to swerve, brake, or jackknife, or my dog is hanging out the window chewing on the truck's straps as we go by) then, yes, I hold, or share, responsibility.
But even if I am one of those most stupid of drivers who can see the straps beginning to come loose and the load begin to shift, and I decide to stomp on it and attempt to pass before it all breaks loose, but miss the opportunity window and get caught in the disaster, I still do not bear any responsibility, since I did not contribute to the precipitating negligence.
As for the damages exceeding the insurance coverage? The courts are full of tort claims right now because of this. Nothing changes.
I'm not sure about the human's liability in an autonomous vehicle. I might try saying that the AI is the "driver" and the person is a passenger; any attached liability is on the AI, thus the need for greater insurance limits. On the other side, I might try saying the AI is the authorized agent of the primary human as "designated operator" of the vehicle, thus liability attaches to the human. Interesting.
Hope it doesn't happen in any self driven car I'm a passenger in...
It will. Hopefully not you, but people are going to get hurt, people are going to die, from these things. Tort and insurance legislation will be used to absorb that risk temporarily while the smart guys figure out how to make these things safer.
... how will they feel about having their degree of safety taken out of their hands?
Well, I would guess they/we would feel pretty much the way we feel today when we give over our safety to airplanes, trains, ships and such. The first few experiences with the technology could be concerning/exciting to the novice, but by around the 100th, just like the road warriors today living between airports and hotels, it could very well become ... mundane, boring, just there.
If you drive yourself, or take a common carrier of any sort, you already participate in a risk pool with significant consequences. Just add autonomous cars to the list. Right now the risk is not acceptable without an attending driver monitoring the show. So, as of now, "autonomous cars" means they come with attending drivers. But, as NoseyNed conjectured upthread, this will not hold for too much longer.