Suppose there exists some algorithm that underlies the entirety of the human propensity to acquire mathematical beliefs on the basis of proof. That is, suppose the entire notion of proof has been formalized into a single, definite, fully specified algorithm. Given any mathematical formula that codes an arithmetical proposition, this algorithm outputs a 1 (humanly believable on the basis of proof) or a 0 (not humanly believable) in a finite time. Call this algorithm A.
Since this algorithm must be able to model elementary arithmetic, we can construct a Gödel proposition for it: a statement which, so long as A is consistent, A can never answer with a 1. Yet we as humans can see that very Gödel statement to be true, which is to say it is humanly believable on the basis of proof, so A gives the wrong answer for it. From this contradiction we can conclude that A does NOT in fact entirely underlie human mathematical reasoning, and that no such algorithm can.
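To see the shape of the Gödel construction at work here, here is a minimal sketch in the usual diagonal-lemma form; the name G_A, and the reading of "consistent" as "never outputs 1 for a false proposition," are my glosses on the synopsis above rather than notation taken from Lucas, Penrose, or Seager.

\[
  G_A \;\leftrightarrow\; A(G_A) \neq 1
\]

Because A can model elementary arithmetic, the diagonal lemma yields a proposition G_A that in effect asserts "A does not output 1 for me." If A never endorses a falsehood, it cannot output 1 for G_A (that output would make G_A false and A wrong about it), so G_A is true. But that chain of reasoning is itself a proof on whose basis we come to believe G_A, so A should have output 1 for it after all -- and that is the contradiction.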
That is a brief synopsis of J.R. Lucas' (and more recently, Roger Penrose's) argument against the computability of human reasoning. It rides upon three assumptions that cannot all be simultaneously true. Namely:

1. Human mathematical reasoning, the propensity to believe on the basis of proof, is entirely underlain by some algorithm A.
2. A is consistent.
3. We humans can know the Gödel proposition of A to be true.

Pick at least one of these to be false, because at least one of them must be.
This post is largely excerpted from bits and pieces of
Yesterday's Algorithm by William Seager, and I highly recommend it. In that paper the author covers the vulnerabilities of this argument a bit more thoroughly. Basically, the argument doesn't show that human A.I. is strictly impossible, but that it is impossible to know if it has ever truly been achieved. In other words, we could try and try to build an artificial human intelligence -- and we might even succeed -- but we would never really know if we had actually accomplished it.