Tagged: evolution darwin
This topic contains 1 reply, has 2 voices, and was last updated by Ashish August 12, 2019 at 3:59 pm.
- August 12, 2019 at 1:00 pm #7448 Ankur (Participant)
- August 12, 2019 at 3:59 pm #7449 Ashish (Participant)
This particular argument has been made many times before, but it always sort of dies down. Richard Thompson also wrote a paper about it in the 80s, I think. It basically argued that generating the total information required for biological complexity through random mutations needs an amount of time that hasn't yet elapsed. But the problem is this: suppose the earth has existed for 50 trillion years (as we say in Vedic cosmology), and the mutations were happening much faster during some specific period (for whatever reason we can speculate); then it would be possible to generate that complexity in a short span of time. It is like saying that if monkeys started typing much faster, they could produce the collected works of Shakespeare. Or if the monkeys had a much longer time, it would eventually happen.
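To see why typing speed hardly matters against an exponential, here is a back-of-envelope sketch (the alphabet size, text length, and typing rate are all arbitrary illustrative numbers):

```python
def expected_attempts(target_len, alphabet_size):
    """A uniformly random string of length n over k symbols matches a
    fixed target with probability (1/k)^n, so the expected number of
    independent attempts is k^n."""
    return alphabet_size ** target_len

# A 30-character fragment over a 27-symbol alphabet (a-z plus space)
attempts = expected_attempts(30, 27)
# At 10 keystrokes/second, one 30-character attempt takes 3 seconds
years = attempts * 3 / (3600 * 24 * 365)
print(f"{attempts:.3e} attempts, roughly {years:.3e} years")
# A monkey typing 1000x faster only divides 'years' by 1000;
# the exponent k^n dominates any linear speed-up or longer timescale.
```

The point cuts both ways: a faster rate or a longer timescale only rescales the estimate linearly, which is exactly why the statistical form of the argument is sensitive to its assumptions.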
These are not 'mathematical' arguments. They are statistical arguments: extrapolating the current rate of mutations back into the past, assuming a certain lifespan of the earth, and using carbon dating to determine ages. All of these methods have problems. For example, if the rate of mutation changes with time, how would that change the argument? If the earth has existed for much longer than we think, will that validate the argument? Or, if the rate of decay of C-14 changes with time, will it alter the fossil ages we rely on?
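The sensitivity to the decay-rate assumption is easy to make concrete. The standard radiocarbon formula infers age as t = (t_half / ln 2) · ln(N0/N), assuming the decay constant has always been what we measure today (a rough sketch; the sample fraction and the 10% perturbation are made-up numbers):

```python
import math

T_HALF = 5730.0  # conventional C-14 half-life, in years

def radiocarbon_age(fraction_remaining, t_half=T_HALF):
    """Infer age from the fraction of C-14 left, assuming a constant decay rate."""
    decay_const = math.log(2) / t_half
    return math.log(1.0 / fraction_remaining) / decay_const

sample = 0.25  # a quarter of the original C-14 remains
print(radiocarbon_age(sample))   # two half-lives: 11460 years
# If the decay constant had been, say, 10% larger in the past,
# the very same sample would imply a proportionally younger age:
print(radiocarbon_age(sample, t_half=T_HALF / 1.1))
```

The inferred age is directly proportional to the assumed half-life, so any past variation in the decay rate shifts every fossil age by the same factor.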
In Signs of Life I have presented many novel arguments, which I think are better than these (sorry for tooting my own horn). For example, I have shown that randomly generated code sequences constitute programs that will never halt, and there are now theorems proving that most programs either halt very quickly or never halt. That amounts to saying that randomly generated sequences would produce eternally living organisms, but we don't find any.
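This halt-quickly-or-never dichotomy can be illustrated with a toy machine (entirely made up for this sketch; it is not the theorems themselves): a program is a list of random jump targets, and because the machine is deterministic with finitely many states, every run either halts or revisits a state and provably loops forever.

```python
import random

def run(prog):
    """Toy machine: pc jumps to prog[pc]; halts when pc lands outside the
    program. Deterministic and finite-state, so a revisited pc means an
    infinite loop."""
    pc, seen, steps = 0, set(), 0
    while 0 <= pc < len(prog):
        if pc in seen:
            return ("loops forever", steps)
        seen.add(pc)
        pc = prog[pc]
        steps += 1
    return ("halts", steps)

random.seed(0)
n = 20
results = [run([random.randrange(2 * n) for _ in range(n)])
           for _ in range(10000)]
halting = [s for kind, s in results if kind == "halts"]
looping = sum(1 for kind, _ in results if kind == "loops forever")
print(f"{len(halting)} halt (avg {sum(halting)/len(halting):.1f} steps), "
      f"{looping} loop forever")
```

With jump targets drawn from twice the program length, roughly half the steps exit immediately, so the halting runs finish in a couple of steps on average and the rest cycle forever; there is no middle ground of very long finite runs.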
There are other arguments from complex systems theory: a complex system has natural stability points, called 'attractors', and the system always settles into one of them. If you move the system slightly away from the attractor, it will revert back to it. This means that you cannot slowly and incrementally evolve the system to a new state, because the system will always revert to the attractor. To make a change, you have to move the entire system — e.g. a full ecosystem — from one state to another, and that requires a concerted, huge change in the full ecosystem rather than an individual mutation in a species.
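The snap-back behaviour near an attractor can be seen in the simplest textbook example, the logistic map (purely illustrative; a real ecosystem is vastly more complicated than a one-variable map):

```python
def logistic_step(x, r=2.8):
    """One step of the logistic map x -> r*x*(1-x)."""
    return r * x * (1 - x)

# For r = 2.8 the map has a stable fixed point (attractor) at x* = 1 - 1/r
x = 0.2
for _ in range(200):
    x = logistic_step(x)
print(round(x, 4))   # settles near 0.6429

# Nudge the system slightly away from the attractor...
x += 0.05
for _ in range(200):
    x = logistic_step(x)
print(round(x, 4))   # ...and it relaxes right back to 0.6429
```

Small perturbations decay geometrically, which is the formal counterpart of the claim that incremental changes cannot carry the system to a genuinely new state.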
Then there is an argument from general relativity, where the universe is consistent with many matter redistributions and doesn't pick out one of them. This corresponds to the problem that there are infinite ways in which a peg-hole consistency can be built — the type that we suppose natural selection achieves — and there is no way you can pick one of them. So, natural selection is not going to fix the course of evolution, because there are infinite combinations of species (the peg) and their environment (the hole) that will be mutually adapted. How do you single out one of them?
Then there is the argument against random mutation itself. It traces the problem back to the issue of probability in atomic theory, which seems random. But if that description is correct, then there could never be stable objects, which constitutes the problem of measurement. So, pending the resolution of the measurement problem, there is no stable classical world, which includes life. There are issues in measurement whereby you can change the basis of an ensemble, by which all the atomic states change. This means that there isn't even a fixed set of molecules until you make a measurement. So what are we going to mutate if there isn't a fixed set of molecules? Chemistry assumes that the measurement problem is solved, but it isn't.
Then there is the issue of language. We are capable of using multiple modes in language such as names, concepts, things, etc. and the same number can represent any of these modes. Ordinary semantics relies on these modes, but there cannot be a mechanical system or computer that can work with these modes. The problem is with mathematics itself and hence in all physical theories that use mathematics. So, if mathematics is incapable of semantics, then all physical theories are incapable of it. Then, how do you explain the existence of thinking and language? At what point in evolution did this ability to think — which violates number theory — arise, and how?
There is another argument from game theory regarding the issue of altruism in species. Game theory shows that the best strategies for winning iterated games are tit-for-tat rather than altruism. If we model an ecosystem according to game theory, there can never be altruism in species: every organism will respond favorably only if it has previously received a favorable response. That becomes a chicken-and-egg problem of how altruism can ever emerge in nature. In fact, if by random chance someone acts altruistically, others who haven't mutated in this way will exploit that altruism for their selfish ends, so the altruistic mutation will not survive.
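The exploitation of the unconditional altruist can be shown in a small simulation of the iterated prisoner's dilemma (a rough sketch using the standard textbook payoffs 3/0/5/1; strategy names are my own):

```python
# Payoff table: (my move, their move) -> my score; C = cooperate, D = defect
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opp_history):
    # cooperate first, then copy the opponent's last move
    return "C" if not opp_history else opp_history[-1]

def always_cooperate(opp_history):
    return "C"   # the unconditional altruist

def always_defect(opp_history):
    return "D"   # the pure exploiter

def play(a, b, rounds=50):
    ha, hb = [], []          # each player's own moves so far
    sa = sb = 0
    for _ in range(rounds):
        ma, mb = a(hb), b(ha)    # each strategy sees the opponent's history
        sa += PAYOFF[(ma, mb)]
        sb += PAYOFF[(mb, ma)]
        ha.append(ma)
        hb.append(mb)
    return sa, sb

print(play(always_cooperate, always_defect))  # altruist is wiped out: (0, 250)
print(play(tit_for_tat, always_defect))       # reciprocity limits the damage
print(play(tit_for_tat, tit_for_tat))         # mutual reciprocity: (150, 150)
```

The unconditional altruist scores zero against a defector, while tit-for-tat survives precisely because it is conditional, which is the chicken-and-egg point: the strategy that pays off presupposes a prior favorable response.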
There is an argument about thermodynamic irreversibility: you cannot obtain this irreversibility from atomic theory, because conformational changes to molecules are reversible. The issue arises from the fact that atomic theory (like classical mechanics) is reversible. Classical statistical mechanics is irreversible because we assume that the system is in all possible states at once. But that assumption is already built into current atomic theory, since the system is always in a state of possibility, and only measurement creates irreversibility. So, to create irreversibility, you always have to perform a measurement, because nature is otherwise reversible. In short, even to create the effect of passing time, you must first solve the problem of observation.
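A toy picture of what reversibility means here (purely illustrative: integer 'free motion' on a ring, not real atomic theory): when every forward step has an exact inverse, the dynamics alone never forgets its initial state, so no arrow of time emerges from it.

```python
# Reversible toy dynamics on integer states: every step has an exact inverse.
N = 101  # size of the periodic state space

def step(x, v):
    return ((x + v) % N, v)      # 'free motion' on a ring

def step_back(x, v):
    return ((x - v) % N, v)      # the exact inverse of step

x, v = 7, 13
for _ in range(1000):
    x, v = step(x, v)
for _ in range(1000):
    x, v = step_back(x, v)
print(x, v)   # back to (7, 13): reversible dynamics loses no information
```

Irreversibility would require an operation with no inverse, i.e. one that discards information, which in the argument above is exactly the role played by measurement.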
IMHO, these are far more solid 'mathematical' arguments than one relying on statistical probabilities. The argument from design fails rather spectacularly when you ask: if intelligent design was designing things so well, why did it not design things even better, such that we would not fall sick or do stupid things like wars? They bring up this issue quite well in the argument, and Stephen Meyer falls back on saying that he needs a theological perspective to address this problem — accepting that while there is design, there is also chaos, and somehow these two forces of design and chaos are competing with each other. So, at best, ID identifies one aspect — i.e. design — while neglecting the other — i.e. chaos. Now, if you include both, the claim of ID falls apart, because how do you reconcile this contradiction?
Interesting video, but all in all, the argument is very old and subject to many holes. ID is not a real alternative either. We need something different, and I believe that the theory of Punctuated Equilibrium comes close to at least identifying, at the level of phenomena, what is happening. They still haven't got any mechanism for it right. I still think (not to be proud about it) that Signs of Life is a much more thoroughly argued set of mathematical arguments.