The article addresses the issue of increasing complexity
within genomes.
It first defines complexity as the amount of information the
genome contains, where 'information' means the functional
portions of the genome. The authors note that this can only
be an approximate measure, since we cannot be 100% certain
that any given segment lacks function.
The segments that do not contain 'information' in this sense
are referred to as 'entropy' (from Shannon information
theory).
The 'entropy' is considered 'blank tape' upon which new function
can be recorded by random mutation + selection.
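To make the definition concrete, here is a rough sketch of my own (not code from the article): per-site Shannon entropy is measured across a set of aligned genomes, and 'information' is the maximum possible entropy minus the observed entropy. The toy population and the assumption of 4 symbols (DNA bases) are mine.

```python
import math
from collections import Counter

def per_site_entropy(population):
    """Shannon entropy (in bits) at each site across aligned genomes."""
    length = len(population[0])
    entropies = []
    for i in range(length):
        counts = Counter(seq[i] for seq in population)
        total = sum(counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in counts.values())
        entropies.append(h)
    return entropies

def complexity(population, symbols=4):
    """Approximate 'information': maximum entropy minus observed entropy."""
    h_max = math.log2(symbols)  # 2 bits per site for DNA's 4 bases
    return sum(h_max - h for h in per_site_entropy(population))

# hypothetical population: sites 0-3 are identical (fully 'informative'),
# sites 4 and 5 vary across individuals (higher entropy, less information)
pop = ["ACGTAC", "ACGTAA", "ACGTGC", "ACGTTC"]
print(complexity(pop))
```

Sites where selection has fixed a single base contribute the full 2 bits each; variable sites contribute less, which is the sense in which 'entropy' is blank tape.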
Selection acts as a 'measure' that filters changes in the
genome: any increase in entropy (i.e. corruption of
information) is filtered out because of the resulting loss
of fitness.
Since only changes that decrease (or at least do not
increase) the entropy pass the filter, complexity (in the
sense defined in the article) is forced to increase.
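The filter mechanism can be shown with a toy model (my own illustration, not the article's simulation): a bit-string genome mutates one site at a time, a hidden target stands in for 'function', and a mutation survives only if it does not corrupt information. All names and parameters here are hypothetical.

```python
import random

random.seed(1)

LENGTH = 64
target = [random.randint(0, 1) for _ in range(LENGTH)]  # stand-in for 'functional' sequence
genome = [random.randint(0, 1) for _ in range(LENGTH)]  # random starting 'blank tape'

def fitness(g):
    """Bits matching the target stand in for functional information."""
    return sum(a == b for a, b in zip(g, target))

start = fitness(genome)
for _ in range(2000):
    i = random.randrange(LENGTH)
    mutant = genome[:]
    mutant[i] ^= 1  # point mutation
    # the selection 'filter': reject any change that corrupts information
    if fitness(mutant) >= fitness(genome):
        genome = mutant

print(start, fitness(genome))
```

Because corrupting mutations are rejected and improving ones are kept, the stored information can only ratchet upward, which is the article's argument in miniature.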
As to research approach ... the authors appear to have applied
mathematical methods from information theory to define
complexity within the genome.
The major assumption they appear to have made is that the
'entropy' is indeed non-functional (an assumption that PB
has also made here).
That's how I read the article, and I would tend to agree:
with selection biased towards keeping 'functional' DNA, any
mutation that 'broke' functional DNA would be removed
(probably by death), while any mutation that made a
non-functional section functional would likely be retained,
provided it made the organism more fit in the context of
its environment.
Note that the article makes no claim that genomic complexity
is related to structural complexity.