Understanding through Discussion


Author Topic:   Self-Driving Cars
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


Message 23 of 142 (767664)
08-31-2015 2:30 PM
Reply to: Message 22 by Percy
08-31-2015 10:20 AM


Re: Idiots
I am a huge fan of self-driving cars. I think they're cool.
Percy writes:
Weather. Conditions that a human might manage to navigate could be too much for a self-driving car.
I doubt it.
Now, there may be conditions that go beyond the capabilities of a self-driving car and cause it to stop/pull over... but those would be well beyond a human anyway.
Or would the vision systems of self-driving cars, be they lasers or radar or infrared or ultrasonic or a combination or whatever, be able to see through the weather far better than humans?
It will be a combination.
Lasers, and radar, and temperature, and ultraviolet, and infrared and vision (cameras)... all working together, all confirming and co-ordinating.
And yes, it will be far better "sensing" than any human's "sensing."
Or, at least, it would be if I programmed it.
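The "all confirming and co-ordinating" idea can be sketched as a simple confirmation vote across independent sensors. The sensor names, readings, and the 2-of-3 threshold here are all invented for illustration:

```python
# Minimal sketch of multi-sensor confirmation: an obstacle is accepted
# only when enough independent sensors agree. Sensor names, readings,
# and the 2-of-3 threshold are made up for illustration.

def confirmed_obstacle(detections: dict, min_votes: int = 2) -> bool:
    """Return True when at least min_votes sensors report an obstacle."""
    return sum(detections.values()) >= min_votes

# Camera blinded by glare, but lidar and radar still agree.
readings = {"lidar": True, "radar": True, "camera": False}
print(confirmed_obstacle(readings))  # True: two independent sensors agree
```

A single failed or fooled sensor then never decides the outcome on its own, which is the whole point of combining modalities.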
If a self-driving car provides the driver the option of asserting control, who is responsible in the event of an accident? Presumably the data history would reveal whether the car or the driver was in control at the time of an accident, so maybe it's not an issue.
If such an option exists, the data history would definitely identify it.
"In addition, some people who enjoy driving and do not want control to be taken from them may resist the move to complete automation."
I don't see a problem with having self-driving cars and human-driving cars on the same road.
Ethical issues. An example from How to Help Self-Driving Cars Make Ethical Decisions: "A child suddenly dashing into the road, forcing the self-driving car to choose between hitting the child or swerving into an oncoming van."
Extremely interesting.
Don't such ethical issues exist right now?
Is there even a right answer?
Unknown occupants in the oncoming van vs. known child.
Van's doing what it's supposed to be doing vs. child not understanding what they're "supposed" to be doing.
Imagine the liability if the car's autonomous systems directed the car into the path of an oncoming van to avoid a beach ball. Just how good will the car's recognition systems be?
Have to be better than that, or I'm not driving one.
The biggest ethical question is how quickly we move. We have a technology that potentially could save a lot of people, but is going to be imperfect and is going to kill.
A good question for debate.
I know what side I'm on - self-driving cars. For the same reason I think it's ethical to wear a seatbelt - there are other people on the road besides yourself.
Given that they can hit a beach ball instead of a van, that is.

This message is a reply to:
 Message 22 by Percy, posted 08-31-2015 10:20 AM Percy has seen this message but not replied

Replies to this message:
 Message 24 by xongsmith, posted 08-31-2015 3:57 PM Stile has seen this message but not replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


Message 70 of 142 (830036)
03-20-2018 11:31 AM
Reply to: Message 68 by Percy
03-19-2018 10:12 PM


Re: Driverless Car Causes First Fatality
Percy writes:
Self-driving Uber vehicle strikes and kills pedestrian
I've looked up a few articles, but I can't seem to find any specifics.
What I'm interested in is the data the car was acting on.
Did the car see nothing?
Did the car see *something* and decide it wasn't anything to worry about?
Did the car see *something* and screw up deciding which direction that something would go?
Did the car's normal programming account for such situations perfectly... but the programming was messed up/corrupted for some reason?
Or maybe only input sensors were malfunctioning?
Perhaps output control did not respond correctly (AI told brakes to come on... but they just physically did not)?
Did many issues happen or just one?
I can't seem to find answers to specific questions like that.
But, perhaps, it's a matter of time and it will be a bit longer before such details are available.
I would also be very interested in some over-arching statistics.
For example: I would happily trade a 10% human death rate from human-controlled vehicles for a 5% human death rate from AI-controlled vehicles, in most situations I can think of, anyway.
What will a self-driving car do when it encounters a policeman directing traffic?
That's a really good question.
I wonder if the programmers have attempted to tackle that sort of problem yet?

This message is a reply to:
 Message 68 by Percy, posted 03-19-2018 10:12 PM Percy has replied

Replies to this message:
 Message 71 by Percy, posted 03-20-2018 1:04 PM Stile has seen this message but not replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


(1)
Message 78 of 142 (830136)
03-22-2018 10:17 AM
Reply to: Message 77 by Percy
03-21-2018 10:09 PM


Re: Video of the Pedestrian Collision
Percy writes:
It looks to me like the vehicle failed to detect the pedestrian.
Yes, I agree.
Did we learn of any specifics along the way?
I seem to remember that the vehicle did not slow down at all before the collision... does anyone else remember that or am I making that up?
If so, this seems like a failure on the AI that should be corrected, to me.
That is, I can understand why a human may not have been able to stop (or slow down) at all in that allotted time frame.
But a computer? A computer should at least have started hitting the brakes before the pedestrian was hit in this scenario, if you ask me. Otherwise, something is wrong: the object wasn't detected, or no decision was made (a bad threshold for what counts as an "object"? program corruption or error?), or the output (the connection to the brakes) malfunctioned.
Could an AI have completely stopped the vehicle? I'm not sure. Depends on how good detection can/should be during night conditions.
But begin to stop? Absolutely. And a big problem if it did not.
1 - Possible issues in low-light (night) conditions. Perhaps a monitor of how bright the headlights are needs to be implemented in order to verify that the AI is safe to be in control in low-light environments.
2 - Perhaps other ways to detect pedestrians/objects need to be implemented. Not just "visual" camera-type monitoring... but perhaps radio-like density checks, or radar-like bounce-back signals. Multiple means of detection would let you set up the computer with overlapping verification for object detection.
Some will work better than others in different situations. Optical cameras may work best in daytime, nice-weather conditions. Perhaps radar-like technology would work better at night. Or maybe infrared cameras or even heat detection.
I still like the direction of AI vehicles.
But I think this should be taken as a serious issue that requires serious re-programming or tweaking of the system as a whole before continuing.
"Low visibility at night" isn't exactly a new problem for driving. I would have suspected that this would have been accounted for before allowing AI vehicles to operate at night.
On the face of it, this problem (not slowing down at all in this scenario) seems to be something that should have been solved "in the lab" before allowing the vehicle out on actual roads with actual people around.
The same goes for moving AI vehicles into heavy rain or snowy weather.
I like to assume that the solution isn't "well, it works good when it's sunny... let's send it out on busy streets and see what happens in a storm!"
Such things need to be tested in a lab-setting to at least reach certain acceptable thresholds before releasing them into live situations where people can get hurt.
Seems like basic due-diligence.
That is, with the information we have currently available on what happened, anyway.
Maybe we'll learn more soon.

This message is a reply to:
 Message 77 by Percy, posted 03-21-2018 10:09 PM Percy has replied

Replies to this message:
 Message 79 by Percy, posted 03-22-2018 11:47 AM Stile has seen this message but not replied
 Message 80 by caffeine, posted 03-22-2018 2:53 PM Stile has replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


(1)
Message 81 of 142 (830151)
03-22-2018 3:27 PM
Reply to: Message 80 by caffeine
03-22-2018 2:53 PM


Re: Video of the Pedestrian Collision
caffeine writes:
If the dataset's big enough or the cross-referencing complicated enough I have to sit there for a few minutes while my screen goes grey, the little blue circle wiggles around and everything becomes unresponsive. Sometimes this lasts a looooong time.
A normal computer does many things.
Even if you've only requested it do one thing... Windows (or whatever operating system, some are better, some are worse) is always doing many things in the background simply sustaining the system and all the other things it's capable of.
This is very different from an industrial computer.
Or, really, a computer that's specifically programmed/constructed to read inputs, make decisions, and then write outputs.
It doesn't do anything else, and it's very good at what it does.
A typical scan time (reading inputs, doing the decisions, and then writing outputs) for a very large system (say, controlling 40 robots and 300 motions in order to build a side rail for a vehicle) would be less than 100ms. Sometimes as quick as 10ms or even faster.
This is a "typical" system that is not meant for high-speed applications.
A system built for high-speed applications (think of fast conveyor belts flying by very quickly) can be designed using interrupts (specifically run a motion due to an input, regardless of the rest of the program) that can be 100 to 1,000 times faster than that (maybe even more?), depending on how much money you'd like to put in.
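The scan cycle described above (read inputs, solve logic, write outputs, repeat) can be imitated in a few lines. Real industrial controllers do this in dedicated firmware; the sensor and actuator functions here are placeholders, and the timing only shows how cheap one pass through the loop is:

```python
import time

# Toy version of a PLC-style scan cycle: read inputs, evaluate logic,
# write outputs, repeat. The I/O functions are placeholders; a real
# controller would talk to physical sensors and actuators.

def read_inputs():
    return {"part_present": True}

def solve_logic(inputs):
    return {"clamp": inputs["part_present"]}

def write_outputs(outputs):
    pass  # would drive physical outputs here

start = time.perf_counter()
scans = 1000
for _ in range(scans):
    write_outputs(solve_logic(read_inputs()))
elapsed_ms = (time.perf_counter() - start) * 1000 / scans
print(f"average scan time: {elapsed_ms:.4f} ms per scan")
```

Even in an interpreted language with none of the care a real controls program gets, each scan is a tiny fraction of the 100 ms figure quoted above.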
We are in an age where a "normal computer" can be used as an "industrial computer" when designed and programmed correctly as well.
I'd hope the computer driving me around town is more powerful than my work laptop of course, but what sort of processing power and software is necessary to do all this magical tracking of everything that's happening around us while simultaneously downloading GPS satellite data and communicating with other smart cars
Surprisingly little.
The bulk of your computer's processing is running stuff it doesn't really need to run... just stuff you want it to be running so that it's ready for your 'day-to-day' stuffs at any time.
Strip all that away, design and program it specifically for the task at hand and nothing more, and things get much faster. A lot faster.
The typical 40 robot, 300 motion side-rail example I mentioned above would run off a program taking up less than 1 MB of file size.
The maximum storage of the entire industrial computer would even be in the 2-to-4 MB range.
It's not a very "powerful" computer, it's simply a specifically, efficiently designed one used for the tasks it's programmed for and nothing else.
Do we have consumer computers that can fit in cars and do that reliably?
No. Not a consumer computer.
But industrial computers can do it very reliably and very quickly.
My personal experience:
I've worked as what's called a "Controls Specialist" for the last 17 years.
I design, create, develop, commission, upgrade or fix industrial computers.
I've worked in:
-the auto industry (robots, welding, pneumatic motions and basic sensors).
-the packaging industry (robots, high-speed movement, labelling, vision-detection/sensing).
-the food and beverage industry (basic process control - temperature, pressure, fluid levels..., government verification of certain procedures such as milk pasteurization)
-the pharmaceutical industry (robots, high-speed rotating tablet presses, granulation, process control)
Computers are not magic.
But taking inputs, making decisions, and setting outputs can be extremely fast. On the order of microseconds.
I've seen systems react to things much, much faster than that person showing up on the street in front of the car.
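To put those reaction times in perspective, here is the distance a vehicle covers during the detection-to-braking delay. The speed and latency figures are illustrative, not taken from the incident report:

```python
# How far does a car travel while its control system is "thinking"?
# Speed and latency figures are illustrative, not from the incident report.

def distance_m(speed_kmh: float, latency_s: float) -> float:
    """Distance travelled (metres) during a given reaction latency."""
    return speed_kmh / 3.6 * latency_s

# Human reaction ~1.5 s, a slow 100 ms scan cycle, a ~1 ms interrupt.
for latency in (1.5, 0.1, 0.001):
    print(f"{latency * 1000:>7.1f} ms latency -> "
          f"{distance_m(63, latency):.2f} m at 63 km/h")
```

At city speeds, the difference between human latency and machine latency is tens of metres versus centimetres, which is why "did not slow down at all" points to a fault rather than a speed limitation of the computer.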
Perhaps they're not using industrial computers (or something similar) to run AI in cars.
Maybe there is a bunch of bloat in it that will cause the types of delays you're talking about.
I've never worked on AI before.
But I don't think so. The industry of AI is so big and has so much potential... they have to have experts doing these sorts of things who would have to be aware of what computers are capable of when used correctly and specifically.

This message is a reply to:
 Message 80 by caffeine, posted 03-22-2018 2:53 PM caffeine has replied

Replies to this message:
 Message 82 by Percy, posted 03-24-2018 12:23 PM Stile has seen this message but not replied
 Message 94 by caffeine, posted 03-26-2018 4:34 PM Stile has replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


(1)
Message 88 of 142 (830288)
03-26-2018 12:22 PM
Reply to: Message 84 by Minnemooseus
03-24-2018 8:10 PM


Re: Video of the Pedestrian Collision
Minnemooseus writes:
It also looks like the pedestrian failed to detect the vehicle. Why would someone walk out in front of a moving vehicle like that??? Busy texting???
I completely agree that, from the video anyway, it seems the lady could have very easily saved her own life.
Of course, though, the vehicle would be "at fault" in this vehicle-pedestrian collision as pedestrians always have the right-of-way (as far as I'm aware?).
As my Dad told me when talking about the right/wrong and traffic legalities: "You don't want to be dead right."

This message is a reply to:
 Message 84 by Minnemooseus, posted 03-24-2018 8:10 PM Minnemooseus has seen this message but not replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


Message 89 of 142 (830290)
03-26-2018 12:36 PM
Reply to: Message 83 by Percy
03-24-2018 2:04 PM


Re: The Downside of Human Instructors
I would be aiming for something like this:
A level of autonomy where the user is not legally obligated to control the vehicle.
You get in, tell it where you want to go, and it takes you there.
Now, that said, the vehicle will still have the ability for manual driving. Although perhaps wheel/pedals can be stored away and become available when required.
Vehicle identifies need for manual? Pull over and stop as safely as possible. If manual is activated before then... well, fine. If not, vehicle sits at side of road until manual is engaged and problem is dealt with.
This situation in question with the lady being hit - would be under the control of the programming. I think technology/programming can deal with this situation.
Perhaps I'm wrong, but I certainly think it's a goal we can aim for in the very short term.
Construction?
Person directing traffic?
Maybe autonomy only works on highways and exit is coming up?
-Any time you can identify that the car doesn't "know" exactly what it's doing... swap to manual by slowing down and pulling over and waiting for person to take over.
How does the car "know" exactly what it's doing?
-program parameters to verify vision system is working well enough (lighting? identification of horizon? identification of current lane and surrounding objects?)
-program parameters to verify inputs are working (redundant cameras/sensors... compare input and if they are off by too much... not good enough to continue)
-program parameters to verify outputs are functioning (when used, monitor state of output and verify it's within acceptable parameters)
-program parameters to verify code is not corrupted (run two identical logic routines... stored in different locations, this way system can verify it's running itself correctly... like a redundant system... this is how "safety computers" work in industrial automation).
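The redundant-input check in that list can be sketched very simply: two independent sensors measure the same quantity, and disagreement beyond a tolerance means the system is no longer trusted to drive. The sensor values and the 2 m tolerance are invented for illustration:

```python
# Sketch of the "redundant inputs" verification described above.
# Two independent sensors measure the same range; if they disagree
# by more than a tolerance, initiate the pull-over handover.
# Values and the 2 m tolerance are invented for illustration.

def inputs_agree(sensor_a: float, sensor_b: float, tolerance: float) -> bool:
    return abs(sensor_a - sensor_b) <= tolerance

def safe_to_continue(range_lidar_m: float, range_radar_m: float) -> bool:
    """Disagreement beyond 2 m => pull over and wait for manual control."""
    return inputs_agree(range_lidar_m, range_radar_m, tolerance=2.0)

print(safe_to_continue(35.1, 34.6))  # True  - sensors agree
print(safe_to_continue(35.1, 12.0))  # False - fault: initiate pull-over
```

This is the same pattern industrial safety controllers use: the system doesn't need to know *which* sensor is lying, only that they no longer agree.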
Can't program your vehicle so that it can safely pull over and stop?
-Then you should not be making an autonomous vehicle in the first place
-This should be highest/top priority to figure out. If you can't slow, pull over and stop under various/unknown conditions... how do you expect to go 200 mph and make reliable decisions?
That would be my very zoomed-out scope and goals, anyway.
What level of autonomy would that be? Level 3? Level 4?

This message is a reply to:
 Message 83 by Percy, posted 03-24-2018 2:04 PM Percy has replied

Replies to this message:
 Message 92 by Percy, posted 03-26-2018 3:16 PM Stile has replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


Message 90 of 142 (830291)
03-26-2018 12:44 PM
Reply to: Message 86 by Percy
03-24-2018 8:53 PM


Re: Video of the Pedestrian Collision
Percy writes:
Seems like the headlights on the vehicle had a very short range.
Just a few quick notes on this:
The video may or may not be what "the program" is seeing.
The video may be stored in a lesser quality so that review can be possible without overloading the system with too much data storage.
While a full "hi-res" version might be used in the program.
Ever notice that sometimes things look darker/lighter in pictures you take than in reality?
-Something like this may be going on.
-Perhaps the headlights do light up a lot more, if you were in the car and looking out the windshield, but maybe this is all the camera was able to pick up
We still have so little information on the pedestrian beyond her name. I saw some speculation that she was homeless. Maybe more information will become available soon.
I find it strange that she doesn't seem to be aware of the vehicle at all. I don't see her turn to look at the camera/car before it hits her. Who wouldn't notice bright lights and big noise bearing down on you? Or is this one of those more-silent vehicles? Not that this moves the legal responsibility off the vehicle... but it does seem that "something" isn't right with this pedestrian. Perhaps a mental issue? Perhaps headphones? Perhaps an indignant "vehicles stop for me!" attitude? Perhaps a family emergency or other super-important issue is on this lady's mind at the moment? Something else?

This message is a reply to:
 Message 86 by Percy, posted 03-24-2018 8:53 PM Percy has seen this message but not replied

Replies to this message:
 Message 91 by ringo, posted 03-26-2018 1:04 PM Stile has seen this message but not replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


Message 93 of 142 (830310)
03-26-2018 3:30 PM
Reply to: Message 92 by Percy
03-26-2018 3:16 PM


Re: The Downside of Human Instructors
Percy writes:
The approach where when the car doesn't know what to do it pulls over and stops so the human can take over would be Level 3.
That seems to be a rather simple solution to "the handover problem," no?
That way you don't have to worry about forcing it to happen while the car is moving. *If* it happens while the car is slowing down and pulling over... then all the more efficiency for the driver. If the driver doesn't react fast enough... then the vehicle just pulls over and stops.
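That fallback behaviour amounts to a tiny state machine. The state names and triggers here are hypothetical, just to make the handover sequence concrete:

```python
# Minimal sketch of the Level-3 fallback described above, as a tiny
# state machine. State names and triggers are hypothetical.

def next_state(state: str, fault: bool, driver_takes_over: bool) -> str:
    if state == "AUTONOMOUS":
        return "PULLING_OVER" if fault else "AUTONOMOUS"
    if state == "PULLING_OVER":
        return "MANUAL" if driver_takes_over else "STOPPED_WAITING"
    if state == "STOPPED_WAITING":
        return "MANUAL" if driver_takes_over else "STOPPED_WAITING"
    return state  # MANUAL: stays manual until re-engaged

state = "AUTONOMOUS"
state = next_state(state, fault=True, driver_takes_over=False)  # pulling over
state = next_state(state, fault=True, driver_takes_over=False)  # stopped, waiting
state = next_state(state, fault=True, driver_takes_over=True)   # driver takes over
print(state)  # MANUAL
```

Note there is no path where the car keeps driving autonomously after a fault: the only exits are a safe stop or a confirmed human takeover.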
Or maybe I'm missing something that makes this problem harder?
The worst issue with this is that if there are a lot of problems up ahead, then a lot of vehicles could "slow down and stop and wait for the driver to take over." Which, possibly, could cause increased traffic issues.
However, I think the benefit of vehicles being automated would alleviate many other traffic issues. I think it would be a pretty decent trade-off.
As well, each time the vehicle shuts down and pulls over... the data causing it to do so could be sent back to head-office so it could be analyzed for possible situational improvement of the vehicle-automation-system. No more waiting for tired people to "teach" the vehicle.
I suppose, though, that the beginning would be the worst.
And who wants to be the first company to have everyone swearing at them: "Damnit, another one of those shitty [insert company name here] cars slowing everything down!!"
Perhaps the industry is simply waiting for some company to feel risky enough to take that step.

This message is a reply to:
 Message 92 by Percy, posted 03-26-2018 3:16 PM Percy has seen this message but not replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


Message 100 of 142 (830333)
03-27-2018 9:15 AM
Reply to: Message 94 by caffeine
03-26-2018 4:34 PM


Re: Video of the Pedestrian Collision
caffeine writes:
But this could be avoided by having two separate systems - one for the user interface and one to actually drive the car.
Exactly.
The requirement for this would be considered by those who create the system (I hope).
If one computer can do both while keeping all required reaction times within acceptable parameters... then they'll use only 1.
If two are needed, then they'll separate and use 2.
Or, at least I hope they would
It strikes me that navigating a city while reacting to the natural flow of the world around you is a computational task several orders of magnitude more intensive (than industrial robots in a controlled environment). Or am I overestimating this or underestimating what industrial robots do?
You're absolutely right.
Part of my post was dedicated to showing that equipment exists that *is* "orders of magnitude" faster/stronger than the computers used to run robots in controlled environments.
I've never used it much (I tend to work with systems that need to react on the order of seconds... not microseconds or nanoseconds). So my experience there is lacking.
But I have witnessed it, and I know it exists.
You could google things such as "high speed manufacturing" for some examples.
Things like modern bottling plants - bottles whip by at a pace of 50 or 60 per second near the end, by the packaging portion. Each one has its picture taken by a camera that doesn't care about the orientation of the bottle (because it could be spun in any direction), but it does monitor all of it and checks the label for accuracy or any deficiencies.
-This would not be the "fastest of the fastest" of computing power, that would likely be in very niche-businesses.
-However, this is the basics of "high-speed capabilities" of computers
-I wouldn't be surprised if certain computers could go another order of magnitude above such things (things like high-speed bottling plants have been around for 20-30 years)
Again, I'm not saying I know that the vehicle could have totally avoided this accident.
But I am saying that, if everything worked as intended, it should have identified "an object" and begun to take countermeasures (swerving? braking?).
Whether or not it could do everything fast enough to avoid the accident entirely... I'm not sure.
But it definitely should be good enough to detect the object and start doing something.
I believe the news report is that the vehicle did not react or even slow down at all before the collision.
This leads me to believe that something didn't work as intended.
-maybe object was not detected - if so, obviously need some better work on detecting pedestrians
-maybe decision making was flawed - if so, obviously need better programmers
-maybe reaction did not function as desired - if so, might need better hardware and physical equipment

This message is a reply to:
 Message 94 by caffeine, posted 03-26-2018 4:34 PM caffeine has not replied

Replies to this message:
 Message 101 by Percy, posted 03-27-2018 2:39 PM Stile has replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


Message 102 of 142 (830351)
03-27-2018 3:03 PM
Reply to: Message 101 by Percy
03-27-2018 2:39 PM


Re: Video of the Pedestrian Collision
Wow. Thanks for all the cool information.
"And when multiple autonomous vehicles drive the same road, their lidar signals can interfere with one another." Well, that's very disturbing. If true then that problem has to be fixed before autonomous vehicles can become a reality.
Seems rather important
Naturally rain and mud and snow also interfere with visual cameras. If mud or slush or snow splashes up your visual camera, what then?
Concerns like this go away if a company decides to stick at a Level 3-ish area.
What then? Swap to manual. Slow down, pull over. Same conclusion for every detection of "not operating within valid parameters."
Then the data can be analyzed in the shop under completely safe VM environmental conditions, and maybe the problems will get solved.
Once they're all solved (or some threshold of reduced problems over time is reached) - move to Level 5-ish.
If they never get solved, then I agree that Level 5-ish isn't something to aim for with so many questions still floating about.
Anyway, bottom line about pedestrians, there seems little reason lidar wouldn't be able to detect a pedestrian on a clear night as much as 1000 feet away, particularly the one struck by the Uber vehicle. It was a software problem, possibly in the GPU, but most likely in the computer software dedicated to scene analysis.
If anyone's wondering, a quick google-search tells me that lidar works just as well at night as it does during the day. Night or day doesn't make a difference for lidar. All that matters is that there are physical things it can bounce off of and return to the emitter.
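A quick back-of-the-envelope round trip shows why ambient light is irrelevant: lidar supplies its own light, and the echo from Percy's 1000-foot figure returns almost instantly:

```python
# Round-trip time for a lidar pulse to an object 1000 ft (~305 m) away.
c = 299_792_458              # speed of light, m/s
distance_m = 1000 * 0.3048   # 1000 ft in metres
round_trip_s = 2 * distance_m / c
print(f"{round_trip_s * 1e6:.2f} microseconds")  # ~2 microseconds
```

Day or night, the pulse is back in about two microseconds, so the limiting factors are reflectivity and the scene-analysis software, not darkness.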
Concluding a software problem seems like a decent assumption to me.
Which also assumes that the lidar system was working properly itself and didn't get damaged 10 sec. before the accident...
Or, again, perhaps the software worked correctly but the brake line was cut so the system wanted to hit the brakes, but they just didn't work.
I agree, however, that "damage to the input (lidar/other) or output (brakes/other) systems" is less likely than the idea that the programming just messed up (or wasn't built with this idea in mind... or something like that).
My point is that "most likely" a software/decision-making issue doesn't mean "definitely" a software/decision-making issue.
Without further information, anyway.

This message is a reply to:
 Message 101 by Percy, posted 03-27-2018 2:39 PM Percy has seen this message but not replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


Message 110 of 142 (832746)
05-09-2018 11:15 AM
Reply to: Message 109 by Percy
05-07-2018 10:09 PM


Re: Video of the Pedestrian Collision
The plot thickens!
Percy's Article writes:
The sources cited by The Information say that Uber has determined B was the problem. Specifically, it was that the system was set up to ignore objects that it should have attended to; Herzberg seems to have been detected but considered a false positive.
I wonder where the issue lies here.
Options I can think of (can certainly be more than 1 going on at a time):
1 - Hardware (radar, lidar, visions system, sensors...) was not purchased at a level it should have been for the application.
That is, the company saved money on "cheaper equipment" that could only be right most-of-the-time instead of all-the-time for this scenario.
-fault is on designers
2 - Programmers were not very good. Bad programmers = bad programming = they simply "didn't think" that this scenario would come up.
-fault is on programmers
3 - Programmers were good, but not given enough time to setup the system to the levels the equipment is capable of... they were pushed to get something out that was "good enough" even though it could have been better given more time/money.
-fault is on leaders (owners/managers...)
Taking the quote literally "the system was set up to ignore objects that it should have attended to" implies to me that it's more on the programmers and/or leaders. But it's possible this wording is not meant to be taken that literally and it's still a design problem.

This message is a reply to:
 Message 109 by Percy, posted 05-07-2018 10:09 PM Percy has replied

Replies to this message:
 Message 111 by Percy, posted 05-10-2018 9:55 AM Stile has seen this message but not replied
 Message 115 by Phat, posted 12-12-2018 9:53 AM Stile has seen this message but not replied

  
Stile
Member
Posts: 4295
From: Ontario, Canada
Joined: 12-02-2004


Message 122 of 142 (845131)
12-12-2018 12:14 PM
Reply to: Message 113 by Percy
12-12-2018 9:10 AM


Crickets of agreement
I simply still agree with you when we talked about this earlier:
Stile writes:
Percy writes:
Naturally rain and mud and snow also interfere with visual cameras. If mud or slush or snow splashes up your visual camera, what then?
Concerns like this go away if a company decides to stick at a Level 3-ish area.
What then? Swap to manual. Slow down, pull over. Same conclusion for every detection of "not operating within valid parameters."
Then the data can be analyzed in the shop under completely safe VM environmental conditions, and maybe the problems will get solved.
Once they're all solved (or some threshold of reduced problems over time is reached) - move to Level 5-ish.
If they never get solved, then I agree that Level 5-ish isn't something to aim for with so many questions still floating about.
I don't think a time-based goal for going to Level 5 is reasonable.
I think the goal should be "meeting some threshold of reduced problems over time."
There's nothing wrong with getting to such a goal as-fast-as-possible... but the goal for moving to fully automatic vehicles should be measured in "safer than human drivers by x amount" terms... not "within x years" terms.
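A "safer than human drivers by x amount" criterion can be stated as a one-line check. All the rates below (per 100 million miles) and the 2x improvement factor are invented for illustration:

```python
# Sketch of a "safer than human drivers by x amount" release criterion.
# All rates (per 100 million miles) and the 2x factor are invented
# for illustration.

def ready_for_level_5(av_fatality_rate: float,
                      human_fatality_rate: float,
                      required_improvement: float = 2.0) -> bool:
    """True when the AV rate beats the human rate by the required factor."""
    return av_fatality_rate * required_improvement <= human_fatality_rate

print(ready_for_level_5(0.5, 1.1))  # True: at least twice as safe
print(ready_for_level_5(0.9, 1.1))  # False: not yet a 2x improvement
```

The point of phrasing the goal this way is that the calendar never appears in it: only measured safety relative to human drivers does.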

This message is a reply to:
 Message 113 by Percy, posted 12-12-2018 9:10 AM Percy has seen this message but not replied

  
Copyright 2001-2023 by EvC Forum, All Rights Reserved
