The wiki seems a little light on the subject, but in a nutshell: a cycle can be defined as the unit of time it takes to perform one basic elemental operation, i.e. one rise and fall of the CPU clock.
Earlier cores took 4 or more cycles to perform one operation. Even the simplest, like incrementing a register, went through fetch, a load or wait state, execute, then bumping the program counter. Later this was reduced to 2 cycles by adding prefetch (hiding the fetch latency) and bumping the program counter during the execute phase. Current processors go further and run multiple operations side by side in the same cycle. Since all CISC instructions are now converted to micro-ops, the whole paradigm changes: it is no longer useful to time events by counting executed instructions, because you no longer have an exact count of cycles that will occur in sequence. The published per-instruction cycle counts are still useful as a reference for deciding which instructions are more efficient.

Microcontrollers, on the other hand, do still execute their instructions directly (RISC style), are single-process, and give you full control of when task switching occurs. Only interrupts can affect your timebase.
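To make the deterministic case concrete, here is a rough sketch of how you can predict the duration of a busy-wait loop on a simple non-pipelined core by summing per-instruction cycle counts. All numbers here (the 4-cycle instructions, the 1 MHz clock, the three-instruction loop body) are hypothetical for illustration, not taken from any specific CPU:

```python
# Hypothetical cycle counts for a simple non-pipelined core,
# where each instruction spends fetch, load/wait, execute,
# and program-counter-bump cycles (4 total).
CYCLES = {
    "INC": 4,  # increment register
    "CMP": 4,  # compare against loop limit
    "JNZ": 4,  # conditional jump back to loop start
}

CLOCK_HZ = 1_000_000  # assume a 1 MHz clock for this example


def loop_time_seconds(iterations):
    """Predicted wall time of a busy-wait loop whose body is INC/CMP/JNZ."""
    cycles_per_iteration = CYCLES["INC"] + CYCLES["CMP"] + CYCLES["JNZ"]
    total_cycles = cycles_per_iteration * iterations
    return total_cycles / CLOCK_HZ


# 1000 iterations -> 12 cycles each -> 12,000 cycles -> 0.012 s at 1 MHz
print(loop_time_seconds(1000))
```

On a pipelined, out-of-order CPU this arithmetic breaks down, because the actual cycle count depends on caching, micro-op scheduling, and what else is in flight; there the counts only rank instructions by relative cost.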