Two Open Problems for the Subgroup-Reduction based Dedekindian HSP Algorithm
One of my main ideas, after having studied the Hidden Subgroup Problem, is that subgroup reduction is likely to play an important role. Basically, rather than directly finding the hidden subgroup $H$ by working on the whole initial group $G$, we first only try to get partial information on $H$. This information allows us to move to a simpler HSP instance, and we can iterate this process until $H$ is completely determined. Several reductions of this kind exist, and I think the sophisticated solution to the HSP over 2-nil groups illustrates well how efficient this technique can be.
Using only subgroup reduction, I've been able to design an alternative algorithm for the Dedekindian HSP, i.e. over groups that have only normal subgroups. Recall that the standard Dedekindian HSP algorithm is to use Weak Fourier Sampling, measure a polynomial number of representations $\rho_1, \ldots, \rho_m$ and then get, with high probability, the hidden subgroup as the intersection of kernels $H = \bigcap_{i=1}^m \ker \rho_i$. When the group is Dedekindian, we always have $H \subseteq \ker \rho_i$ for every measured representation. Hence my alternative algorithm is rather to start by measuring one representation $\rho$, move the problem to the HSP over $\ker \rho$ and iterate this procedure. I've been able to show that we reach the group $H$ after a polynomial number of steps, the idea being that when we measure a non-trivial representation, the size of the underlying group becomes at least twice smaller. One difficulty of this approach is to determine the structure of the new group $\ker \rho$ so that we can work on it. However, in the cyclic case this determination is trivial, and in the abelian case I've used the group decomposition algorithm, which is itself based on the cyclic HSP (a classical simulation of this reduction loop in the cyclic case is sketched after the two questions below). Finally, I have two open questions:
- Can my algorithm work for the Hamiltonian HSP, i.e. over non-abelian Dedekindian groups?
- Is my algorithm more efficient than the standard Dedekindian HSP algorithm?
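To make the reduction loop concrete, here is a small classical simulation of the cyclic case $G = \mathbb{Z}_N$ with $H = \langle d \rangle$. The quantum part, weak Fourier sampling over the current group, is replaced by drawing a uniformly random character that is trivial on the hidden subgroup, which is the outcome distribution the measurement would produce; the function names and the encoding of each subgroup of $\mathbb{Z}_N$ by a generator are my own, so this is only a sketch of the loop, not the actual quantum algorithm.

```python
import random
from math import gcd

def weak_fourier_sample(M, h):
    """Simulate weak Fourier sampling over Z_M with hidden subgroup <h> (h | M).

    Only the characters chi_k that are trivial on <h> can be observed, i.e. k a
    multiple of M/h, and they all occur with the same probability 1/h.
    Returns the label k of the measured character."""
    return (M // h) * random.randrange(h)   # h = [Z_M : <h>] possible outcomes

def subgroup_reduction_cyclic(N, d):
    """Alternative Dedekindian HSP loop for G = Z_N, H = <d> (d | N).

    The current group K is always of the form <g> with g | d, hence H <= K,
    and K is identified with Z_{N/g}.  Each iteration measures one character
    of K trivial on H, replaces K by its kernel, and stops once K = H.
    Returns the order of the recovered subgroup and the number of samples."""
    g, samples = 1, 0                       # start with K = <1> = G
    while g != d:                           # i.e. while K is still bigger than H
        M, h = N // g, d // g               # K ~ Z_M, and H corresponds to <h>
        k = weak_fourier_sample(M, h)
        samples += 1
        ker_gen = M // gcd(k, M)            # ker(chi_k) = <ker_gen> inside Z_M
        g *= ker_gen                        # pull the kernel back to Z_N
    return N // g, samples

if __name__ == "__main__":
    order, samples = subgroup_reduction_cyclic(N=12, d=6)  # H = {0, 6}
    print(f"recovered a subgroup of order {order} with {samples} samples")
```

Note that the simulation uses its knowledge of $d$ only to detect that $H$ has been reached and to count the samples; the algorithm itself cannot test this and would simply stop after polynomially many iterations.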
For the first question, I'm pretty sure that the answer is positive, but I admit that I haven't really thought about it yet. For the second one, it depends on what we mean by more efficient. The decomposition of the kernel seems to increase the time complexity, but since we are working on smaller and smaller groups, we decrease the space complexity. However, if we only consider the number of samples, my conjecture is that both algorithms have the same complexity and, more precisely, yield the same Markov process. In order to illustrate this, let's consider the cyclic case $G = \mathbb{Z}_{12}$ and $H = \{0, 6\} \simeq \mathbb{Z}_2$. The Markov chain of my alternative algorithm is given by the graph below, where the edge labels are of course probabilities and the node labels are the underlying groups. We start at $\mathbb{Z}_{12}$ and want to reach $\mathbb{Z}_2$.
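The edge labels of this chain can be recomputed with a short script. The sketch below identifies each state $K$ with its index $j = [K : H]$ over the hidden subgroup (this bookkeeping and the names in the code are mine, assuming the example $G = \mathbb{Z}_{12}$, $H = \{0, 6\}$): measuring one character of $K$ trivial on $H$ amounts to drawing $k$ uniformly modulo $j$, and the chain moves to its kernel, which has index $\gcd(k, j)$ over $H$.

```python
from fractions import Fraction
from math import gcd

# States for G = Z_12 and H = {0, 6}: the subgroups of Z_12 containing H,
# labelled here by their index over H (the naming is mine, for illustration).
STATE_NAME = {6: "Z_12", 3: "Z_6", 2: "Z_4", 1: "Z_2 = H"}

def step_distribution(j):
    """Transition probabilities from a state K with [K : H] = j.

    Measuring one character of K trivial on H amounts to drawing k uniformly
    in {0, ..., j-1}; the kernel of the measured character has index gcd(k, j)
    over H (gcd(0, j) = j: the trivial character leaves the chain in place)."""
    row = {}
    for k in range(j):
        new = gcd(k, j)                     # index of ker(chi_k) over H
        row[new] = row.get(new, Fraction(0)) + Fraction(1, j)
    return row

if __name__ == "__main__":
    for j in (6, 3, 2):                     # states Z_12, Z_6, Z_4
        edges = ", ".join(f"-> {STATE_NAME[i]}: {p}"
                          for i, p in sorted(step_distribution(j).items(), reverse=True))
        print(f"{STATE_NAME[j]}: {edges}")
```

For instance it prints that from $\mathbb{Z}_{12}$ the chain stays in place with probability $1/6$ and reaches $\mathbb{Z}_2$ directly with probability $1/3$.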
One might think that moving to smaller and smaller subgroups will be faster than the standard algorithm, which always works on the whole group $\mathbb{Z}_{12}$. However, it turns out that the Markov chain of the standard algorithm is exactly the same. The point is that while it is true that working over $\mathbb{Z}_6$ or $\mathbb{Z}_4$ provides fewer possible samples (3 and 2 respectively, instead of 6 for $\mathbb{Z}_{12}$), the proportion of "good" and "bad" samples is the same, and thus we get the same transition probabilities. I guess it would be possible to formalize this for a general cyclic group. The general abelian case seems more difficult, but I'm sure that the same phenomenon can be observed.
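In the cyclic case this is easy to check mechanically. With the same bookkeeping as above (states identified with their index $j = [K : H]$, which is my own encoding), one step of the subgroup-reduction algorithm draws $k$ uniformly modulo $j$ and moves to index $\gcd(k, j)$, while one step of the standard algorithm draws $t$ uniformly modulo $j_0 = [G : H]$ and moves to index $\gcd(j, \gcd(t, j_0))$; since $j$ divides $j_0$, the two distributions coincide. The sketch below verifies this for every divisor state, for the $\mathbb{Z}_{12}$ example and for a range of other indices.

```python
from fractions import Fraction
from math import gcd

def reduction_row(j):
    """Subgroup-reduction step from a state K with [K : H] = j: measure one
    character of K trivial on H (k uniform mod j); the new index is gcd(k, j)."""
    row = {}
    for k in range(j):
        new = gcd(k, j)
        row[new] = row.get(new, Fraction(0)) + Fraction(1, j)
    return row

def standard_row(j, j0):
    """Standard Weak Fourier Sampling step from the same state, inside a group G
    with [G : H] = j0 and j | j0: measure one character of G trivial on H
    (t uniform mod j0) and intersect its kernel with the current intersection."""
    row = {}
    for t in range(j0):
        new = gcd(j, gcd(t, j0))            # index of the new intersection over H
        row[new] = row.get(new, Fraction(0)) + Fraction(1, j0)
    return row

def same_markov_chain(j0):
    """Both algorithms give identical transition rows from every reachable state,
    i.e. from every divisor j of j0 = [G : H]."""
    return all(reduction_row(j) == standard_row(j, j0)
               for j in range(1, j0 + 1) if j0 % j == 0)

if __name__ == "__main__":
    # j0 = 6 is the Z_12 example; the equality holds for every cyclic index tried.
    print(same_markov_chain(6), all(same_markov_chain(j0) for j0 in range(1, 101)))
```

Both checks print True, which is the "same Markov process" claim restricted to cyclic groups; the abelian case would need a similar argument on each factor of the decomposition.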