Casinos go through extraordinary effort to ensure randomness. Failure to do so would cost them dearly. Humans looking for patterns will nearly always find patterns that don't exist.
Some scientific studies were seriously flawed because of poor PRNGs.
Casinos (and game designers) often fail in their efforts. APs (advantage players) often try to find and exploit these failures.
Recently one casino host told me about a mis-set slot machine where a pair of people made six figures in one day before the problem was spotted.
I've heard of other mis-set slot machines which have cost the VP of Slots his job...because of six-figure losses.
It doesn't matter that "casinos go through extraordinary effort to ensure randomness", it's that players may need to protect themselves.
Most "must-hit" progressive slot machines in the past decade have had a uniform distribution of progressive drop points (e.g. a $50 progressive starting at $25 has an equal chance of dropping anywhere between 25 and 50).
However, CJ has released very popular progressives which are programmed to fall very close to the must-hit point (perhaps 92-96% of the way to the top).
So in this case, it's not a question of "random", but "WHICH KIND OF RANDOM?"
e.g. what is the distribution?
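The difference between those two designs can be sketched numerically. This is a toy model, not any manufacturer's actual code: the function names are invented, the $25/$50 figures come from the example above, and the 92-96% band is the hypothetical top-weighted design just described.

```python
import random

def uniform_drop(reset=25.0, must_hit=50.0):
    # Classic design: the hit point is equally likely anywhere in the range.
    return random.uniform(reset, must_hit)

def top_weighted_drop(reset=25.0, must_hit=50.0, low=0.92, high=0.96):
    # Hypothetical top-weighted design: the hit point lands 92-96% of the
    # way from the reset value to the must-hit ceiling.
    frac = random.uniform(low, high)
    return reset + frac * (must_hit - reset)

random.seed(1)
n = 100_000
uniform_avg = sum(uniform_drop() for _ in range(n)) / n
top_avg = sum(top_weighted_drop() for _ in range(n)) / n
print(f"uniform design, average hit point:      ${uniform_avg:.2f}")
print(f"top-weighted design, average hit point: ${top_avg:.2f}")
```

Both designs are "random", but the distributions differ enormously: the uniform design averages around $37.50 while the top-weighted one averages around $48.50, which is exactly the kind of difference a player counting on the old behavior would need to know about.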
A numeric sequence is said to be statistically random when it contains no recognizable patterns or regularities; sequences such as the results of an ideal dice roll, or the digits of π, exhibit statistical randomness.
Statistical randomness does not necessarily imply "true" randomness, i.e., objective unpredictability. Pseudorandomness is sufficient for many uses, such as statistics, hence the name statistical randomness.
Legislation concerning gambling imposes certain standards of statistical randomness on slot machines.
Global randomness and local randomness are different. Most philosophical conceptions of randomness are global—because they are based on the idea that "in the long run" a sequence looks truly random, even if certain sub-sequences would not look random. In a "truly" random sequence of numbers of sufficient length, for example, it is probable there would be long sequences of nothing but repeating numbers, though on the whole the sequence might be random.
Local randomness refers to the idea that there can be minimum sequence lengths in which random distributions are approximated. Long stretches of the same numbers, even those generated by "truly" random processes, would diminish the "local randomness" of a sample (it might only be locally random for sequences of 10,000 numbers; taking sequences of less than 1,000 might not appear random at all, for example).
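A quick simulation illustrates both points. This is a sketch that uses Python's stdlib PRNG as a stand-in for a "truly" random source: even a long, globally balanced bit sequence contains surprisingly long runs of identical values, and short windows of it can look badly unbalanced.

```python
import random

random.seed(0)
bits = [random.randint(0, 1) for _ in range(100_000)]

# Longest run of identical bits in the whole sequence; for a fair source
# this grows like log2(n), so long runs are expected, not suspicious.
longest = run = 1
for prev, cur in zip(bits, bits[1:]):
    run = run + 1 if cur == prev else 1
    longest = max(longest, run)
print("longest run of identical bits:", longest)

# "Local randomness": the fraction of ones in small windows swings widely,
# even though the full sequence is close to 50/50 overall.
window = 20
props = [sum(bits[i:i + window]) / window for i in range(0, len(bits), window)]
print("overall ones-fraction:", sum(bits) / len(bits))
print("min/max ones-fraction over 20-bit windows:", min(props), max(props))
```

So a sample that "fails" a test over a 20-number window may still come from a perfectly random process; only sufficiently long stretches can be expected to look random.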
Over the history of random number generation, many sources of numbers thought to appear "random" under testing have later been discovered to be very non-random when subjected to certain types of tests. The notion of quasi-random numbers was developed to circumvent some of these problems, though pseudorandom number generators are still extensively used in many applications (even ones known to be extremely "non-random"), as they are "good enough" for most applications.
https://en.wikipedia.org/wiki/Randomness_tests
The use of an ill-conceived random number generator can put the validity of an experiment in doubt by violating statistical assumptions. There are commonly used statistical testing techniques, such as the NIST standards, but Yongge Wang showed that the NIST standards are not sufficient, and he went on to design statistical-distance-based and law-of-the-iterated-logarithm-based testing techniques. Using these techniques, Yongge Wang and Tony Nicol detected a weakness in commonly used pseudorandom generators, such as the well-known Debian version of the OpenSSL pseudorandom generator, which was fixed in 2008.
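To give a concrete sense of what such tests look like, here is a sketch of the simplest test in the NIST SP 800-22 suite, the frequency (monobit) test. The 3%-biased generator below is a made-up stand-in for a weak PRNG, not any of the specific generators mentioned above.

```python
import math
import random

def monobit_test(bits):
    # NIST SP 800-22 frequency (monobit) test: convert bits to +/-1,
    # sum them, and normalize by sqrt(n). For a fair source the sum is
    # approximately normal, so erfc gives a p-value; by convention,
    # p-values below 0.01 mean the sequence is rejected as non-random.
    n = len(bits)
    s_obs = abs(sum(1 if b else -1 for b in bits)) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

random.seed(42)
fair = [random.randint(0, 1) for _ in range(10_000)]
biased = [1 if random.random() < 0.53 else 0 for _ in range(10_000)]  # 3% bias

print("fair generator p-value:  ", monobit_test(fair))
print("biased generator p-value:", monobit_test(biased))
```

Even a bias this small, far too subtle to notice by eye, drives the p-value of the biased generator essentially to zero over 10,000 samples, while the fair generator passes comfortably. Real test suites run a battery of such tests, since a generator can pass one and fail another.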