I would say yes; or rather, that they will be able to simulate sentience so accurately that the result is arguably indistinguishable from whatever metaphysical state some might consider "real" sentience to be.
The human brain has about 100 billion neurons. The computer I am posting from has 3.28 billion transistors in its CPU, and it cycles those transistors at 2.83 GHz, while the human brain runs on analog chemical reactions. Even so, the human brain's processing power still beats the pants off my CPU.
On the other hand, a computer doesn't have to run the simulation in real time. The reactions that take place in the human brain over a period of seconds could perhaps be modeled over weeks; and since we control the speed of sensory input to the neural simulation, the simulated brain would have no way of telling the difference.
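As a rough illustration of that last point, here's a minimal sketch (in Python, with entirely made-up names, step sizes, and numbers) of how sensory input can be indexed by simulated time rather than wall-clock time, so the simulation has no internal reference for how slowly it is actually running:

```python
# Hypothetical sketch: a neural simulation running far slower than real time can
# still be fed sensory input indexed by *simulated* time, so from the inside
# nothing appears slowed down. The step size, function names, and numbers are
# all illustrative assumptions.

SIM_STEP = 0.001  # seconds of simulated brain time advanced per step

def sensory_sample(sim_time):
    """Return the stimulus the simulated brain should receive at sim_time."""
    # e.g. look up a pre-recorded input stream indexed by simulated time
    return {"t": sim_time, "stimulus": 0.0}

def step_neurons(sample):
    """Advance the neural model by SIM_STEP given the current input."""
    pass  # placeholder for the (very expensive) neural model update

def run(sim_seconds):
    sim_time = 0.0
    while sim_time < sim_seconds:
        step_neurons(sensory_sample(sim_time))  # input keyed to simulated time
        sim_time += SIM_STEP
    # The wall-clock cost might be weeks, but the simulated brain only ever
    # "experiences" sim_seconds of input.

run(5.0)  # five simulated seconds, however long that takes in real time
```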
Personally, though, I think our most likely and most beneficial path forward in this area is to build AI from the ground up. For most applications where an AI would be desirable, a simulated human brain is not at all what we want: humans forget things, they think inefficiently and sloppily, and they fairly often don't do what they are told. Instead, we want an AI that can respond appropriately and creatively to unexpected situations, yet will follow its guidelines without question.
The military applications of AI are clear: once the decision is made to attack a target, there is a plethora of decisions involved in attaining that goal that can be automated using fairly simple criteria. Steering a bomb onto a laser dot, for instance, is already handled by computers; an extension of this would be driving a vehicle through an environment to a destination without striking obstacles. I don't see any clear dividing line between automation and AI, as long as the AI never departs from its programmed rule set.
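To make that concrete, here is a hypothetical sketch of the kind of rule-based controller I have in mind: turn toward a target, veer away from nearby obstacles, and never exceed a fixed turn rate. All of the names, thresholds, and geometry are illustrative assumptions, not any real guidance system:

```python
import math

def steering_command(position, heading, target, obstacles,
                     avoid_radius=5.0, max_turn=0.3):
    """Return a bounded turn rate (radians/step) from a small fixed rule set."""
    # Rule 1: by default, turn toward the target.
    desired = math.atan2(target[1] - position[1], target[0] - position[0])
    turn = desired - heading

    # Rule 2: if any obstacle is within avoid_radius, turn away from the nearest one.
    near = [(math.hypot(o[0] - position[0], o[1] - position[1]), o) for o in obstacles]
    near = [n for n in near if n[0] < avoid_radius]
    if near:
        _, obstacle = min(near)
        away = math.atan2(position[1] - obstacle[1], position[0] - obstacle[0])
        turn = away - heading

    # Rule 3: never exceed the allowed turn rate; the controller cannot
    # "decide" to break this constraint.
    turn = (turn + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    return max(-max_turn, min(max_turn, turn))

# Example: vehicle at the origin heading east, target at (10, 10), one obstacle nearby.
print(steering_command((0.0, 0.0), 0.0, (10.0, 10.0), [(3.0, 1.0)]))
```

The point of the sketch is that every action is traceable to one of the fixed rules; whether we call that automation or AI is, to my mind, just a matter of how elaborate the rule set gets.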
I would argue that the ability of today's computers to multi-task is roughly analogous to being self-aware. So as to the question of "when": I would say it has already happened.