I can't post a link to the method, but it's from the website "Fermat's Library."
Say we want to check if "12345" is an anagram of "54321":
1. Map each possible character to a prime number: 1->2, 2->3, 3->5, 4->7, 5->11.
2. Take the product of the primes for all the characters in each string: "12345" -> 2*3*5*7*11, "54321" -> 11*7*5*3*2.
3. If the products are equal, the two strings are anagrams.
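Here's a minimal sketch of the idea in Python (an illustrative sketch only; the prime mapping below covers just the digits from the example and would need extending for a full character set):

```python
from functools import reduce

# Map each character to a distinct prime (here only the digits '1'-'5', as in the example).
PRIMES = {'1': 2, '2': 3, '3': 5, '4': 7, '5': 11}

def signature(s):
    """Product of the primes mapped to s's characters; order-independent by construction."""
    return reduce(lambda acc, ch: acc * PRIMES[ch], s, 1)

def is_anagram(a, b):
    # Equal products imply equal multisets of characters, by unique prime factorization.
    return signature(a) == signature(b)

print(is_anagram("12345", "54321"))  # True
print(is_anagram("12345", "12344"))  # False
```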
Very, very neat.

Quote: kuroshivo
I just read today's newsletter from the Wizard. He says he wants to know if a string is an anagram of another.
I can't post a link to the method, but it's from the website "Fermat's Library."
Say we want to check if "12345" is an anagram of "54321":
1. Map each possible character to a prime number: 1->2, 2->3, 3->5, 4->7, 5->11.
2. Take the product of the primes for all the characters in each string: "12345" -> 2*3*5*7*11, "54321" -> 11*7*5*3*2.
3. If the products are equal, the two strings are anagrams.
That is super cool in my book! Thank you.
Here is a related question.
Suppose you have two decks of cards, each with an ace to 10. You shuffle both decks. What is the probability that there are no positional matches? For example:
0-1-2-3-4-5-6-7-8-9
2-4-6-8-0-1-3-5-9-7
Please don't say (9/10)^10. There are positional correlations.
I think the probability is .367879.
Assume the order of the first deck is 0-1-2-3-4-5-6-7-8-9.
There are 3,628,800 possible arrangements in Deck #2 and exactly 1,334,961 of them contain no positional matches with that particular order. 1,334,961 / 3,628,800 = .367879.
What I found interesting is that this seems to be roughly the answer for two decks of n cards each, for just about any n:
Two decks of 4 cards; 24 possible arrangements and 9 of them contain no positional matches. 9 / 24 = .375
Two decks of 5 cards; 120 possible arrangements and 44 of them contain no positional matches. 44 / 120 = .3666
Two decks of 6 cards; 720 possible arrangements and 265 of them contain no positional matches. 265 / 720 = .3680
Two decks of 7 cards; 5,040 possible arrangements and 1,854 of them contain no positional matches. 1,854 / 5,040 = .36785
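For reference, a small brute-force sketch (an illustration of the looping approach, not necessarily the code used above) reproduces these counts:

```python
from itertools import permutations
from math import factorial

def count_no_matches(n):
    """Count orderings of the second deck with no positional match against 0..n-1."""
    deck1 = tuple(range(n))
    return sum(
        all(a != b for a, b in zip(deck1, deck2))
        for deck2 in permutations(deck1)
    )

# n = 10 checks all 3,628,800 permutations, so it takes a few seconds.
for n in (4, 5, 6, 7, 10):
    hits, total = count_no_matches(n), factorial(n)
    print(n, hits, total, round(hits / total, 6))
```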
Quote: EdCollins
I think the probability is .367879.
Assume the order of the first deck is 0-1-2-3-4-5-6-7-8-9.
There are 3,628,800 possible arrangements in Deck #2 and exactly 1,334,961 of them contain no positional matches with that particular order. 1,334,961 / 3,628,800 = .367879.
Thank you! Can you show or explain a solution other than brute force looping?
Quote: https://en.wikipedia.org/wiki/Derangement
Derangements
Another application of e, also discovered in part by Jacob Bernoulli along with Pierre Remond de Montmort, is in the problem of derangements, also known as the hat check problem:[18] n guests are invited to a party, and at the door, the guests all check their hats with the butler, who in turn places the hats into n boxes, each labelled with the name of one guest. But the butler has not asked the identities of the guests, and so he puts the hats into boxes selected at random. The problem of de Montmort is to find the probability that none of the hats gets put into the right box. This probability, denoted by p_n, is:
$$p_n = 1 - \frac{1}{1!} + \frac{1}{2!} - \frac{1}{3!} + \cdots + \frac{(-1)^n}{n!} = \sum_{k=0}^{n} \frac{(-1)^k}{k!}.$$
As the number n of guests tends to infinity, p_n approaches 1/e. Furthermore, the number of ways the hats can be placed into the boxes so that none of the hats are in the right box is n!/e (rounded to the nearest integer for every positive n).[19]
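That formula answers the question without brute-force looping. Here's a short sketch of it in Python (an illustration, not from the quoted article):

```python
from math import factorial, e

def derangements(n):
    """Count permutations of n items with no fixed point, via the inclusion-exclusion sum."""
    return sum((-1) ** k * factorial(n) // factorial(k) for k in range(n + 1))

n = 10
print(derangements(n))                  # 1334961
print(round(factorial(n) / e))          # 1334961 -- the n!/e shortcut
print(derangements(n) / factorial(n))   # 0.36787946..., already close to 1/e
```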
Quote: charliepatrick
It approaches 1/e as n gets larger and is also known as the Hat Check Problem. link
Cool! Here's another one related to e:
Prove this: select numbers uniformly at random between 0 and 1 until their sum exceeds 1. The expected number of selections needed is equal to e.
Quote: kuroshivo
Prove this: select numbers uniformly at random between 0 and 1 until their sum exceeds 1. The expected number of selections needed is equal to e.
I know this one, but I don't have time at the moment to go through it. I'll leave this hint, which is an infinite series needed for my proof.
I have a feeling there may also be a calculus proof, but I don't know it. Don or Ace?
In other words, summing n/n! over n = 1, 2, 3, ...: 1/1 + 2/2 + 3/6 + 4/24 + 5/120 + 6/720 + ... = e
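A quick numerical sanity check of both the claim and the hint (a sketch, not the promised proof):

```python
import random
from math import factorial, e

def draws_until_sum_exceeds_one():
    """Draw uniform(0, 1) numbers until the running sum exceeds 1; return how many were needed."""
    total, count = 0.0, 0
    while total <= 1.0:
        total += random.random()
        count += 1
    return count

trials = 1_000_000
average = sum(draws_until_sum_exceeds_one() for _ in range(trials)) / trials
hint_series = sum(n / factorial(n) for n in range(1, 20))  # 1/1 + 2/2 + 3/6 + ...

print(average)      # ~2.718 for large trial counts
print(hint_series)  # 2.7182818...
print(e)            # 2.718281828459045
```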
Quote: ThatDonGuy
The use of heap sort in the title leads me to ask: is heap sort still the fastest method?
Well, after reading Donald Knuth's "The Art of Computer Programming" volume 3 on sorting and searching (more than 10 years ago) I have the feeling that there is no one-size-fits-all answer. I think there are some shallow discussions on Wikipedia, but the actual meat is in Knuth's TAOCP.
There are many different websites that help to visualize how each algorithm works. Here's just one of many:
https://www.toptal.com/developers/sorting-algorithms
And yes, there is no one-size-fits-all answer as to which algorithm is best/fastest. A lot of it has to do with what your data looks like before it is sorted. Is it completely random? Is it nearly sorted already? Is it completely reversed? Etc.
Quote: kuroshivo
Well, after reading Donald Knuth's "The Art of Computer Programming" volume 3 on sorting and searching (more than 10 years ago) I have the feeling that there is no one-size-fits-all answer. I think there are some shallow discussions on Wikipedia, but the actual meat is in Knuth's TAOCP.
Speaking of Knuth, Volume 4A has a few pages on generating permutations, which was the original question in this thread - and of course my copy is back at my office while I have been teleworking for over a year now.
Bin sort would be even faster for this problem. Just create an incidence array of size 10 and count how many of each digit. It works for arbitrarily long sequences, whereas the product in the prime algorithm will quickly overflow a fixed-size integer. Heap sort is O(n log n); bin sort is O(n).

Quote: ThatDonGuy
The use of heap sort in the title leads me to ask: is heap sort still the fastest method?

40 years ago, it was, but I have done some comparisons since then, and I have found that, with the vast increases in processing speed, quicksort seems to be faster now. (Well, what I call "quicksort-N", where N is usually 8 or 9: if there are N or more values in the current comparison, do a quicksort, but if there are fewer, do a "wimpysort" like bubble or shaker.)
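For completeness, here's a minimal sketch of that incidence-array check for digit strings (an illustration; the names are my own, and collections.Counter generalizes it to arbitrary characters):

```python
def is_digit_anagram(a: str, b: str) -> bool:
    """Incidence-array anagram check for digit strings: O(n) time, O(1) extra space."""
    if len(a) != len(b):
        return False
    counts = [0] * 10          # one slot per digit 0-9
    for ch in a:
        counts[int(ch)] += 1
    for ch in b:
        counts[int(ch)] -= 1
    return all(c == 0 for c in counts)

print(is_digit_anagram("12345", "54321"))  # True
print(is_digit_anagram("12345", "12344"))  # False
```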