Two Open Problems for the Subgroup-Reduction based Dedekindian HSP Algorithm

One of my main ideas after having studied the Hidden Subgroup Problem is that subgroup reduction is likely to play an important role. Basically, rather than directly finding the hidden subgroup $H$ by working on the whole initial group $G$, we first try to get only partial information on $H$. This information allows us to move to a simpler HSP instance, and we can iterate this process until $H$ is completely determined. Several reductions of this kind exist, and I think the sophisticated solution to the HSP over 2-nil groups illustrates well how efficient this technique can be.

Using only subgroup reduction, I've been able to design an alternative algorithm for the Dedekindian HSP, i.e. over groups that have only normal subgroups. Recall that the standard Dedekindian HSP algorithm is to use Weak Fourier Sampling, measure a polynomial number of representations $\rho_1, \ldots, \rho_m$ and then get, with high probability, the hidden subgroup as the intersection of kernels $H = \bigcap_i \operatorname{Ker} \rho_i$. When the group is Dedekindian, we always have $H \subseteq \operatorname{Ker} \rho_i$. Hence my alternative algorithm is rather to start by measuring one representation $\rho_1$, move the problem to the HSP over $\operatorname{Ker} \rho_1$ and iterate this procedure. I've been able to show that we reach the group $H$ after a polynomial number of steps, the idea being that when we measure a non-trivial representation the size of the underlying group is at least halved. One difficulty of this approach is to determine the structure of the new group $\operatorname{Ker} \rho_i$ so that we can work on it. However, in the cyclic case this determination is trivial, and in the abelian case I've used the group decomposition algorithm, which is based on the cyclic HSP. A small classical sketch of this kernel-reduction procedure in the cyclic case is given just after the list below. Finally, I have two open questions:

  1. Can my algorithm work for the Hamiltonian HSP, i.e. over non-abelian Dedekindian groups?
  2. Is my algorithm more efficient than the standard Dedekindian HSP algorithm?
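
To make the discussion of the second question more concrete, here is the small sketch mentioned above. It only simulates the sampling distribution classically: in the abelian case, Weak Fourier Sampling returns a character trivial on $H$ uniformly at random, and the algorithm then moves to its kernel. It is of course not a quantum implementation, and the function name and parameters are just for illustration.

```python
import random
from math import gcd

def alternative_cyclic_hsp(n, h, rng=random):
    """Simulate the kernel-reduction algorithm for G = Z_n with a hidden
    subgroup H of order h (h divides n).  The state is the order of the
    current cyclic group K containing H; each step draws a character of K
    trivial on H uniformly at random and replaces K by its kernel.
    Returns the number of samples used until K = H."""
    samples = 0
    while n > h:
        # The characters of Z_n trivial on H = (n/h)Z_n are the chi_k with h | k.
        k = rng.randrange(0, n, h)
        # Ker chi_k is the subgroup of Z_n of order gcd(k, n); it contains H.
        n = gcd(k, n)
        samples += 1
    return samples

# Average number of samples for G = Z_18 and H = 6Z_18 (so |H| = 3).
print(sum(alternative_cyclic_hsp(18, 3) for _ in range(100000)) / 100000)
```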

For the first question, I'm pretty sure that the answer is positive, but I admit that I've not really thought about it. For the second one, it depends on what we mean by more efficient. The decomposition of the kernel seems to increase the time complexity, but since we are working on smaller and smaller groups, we decrease the space complexity. However, if we only count the number of samples, my conjecture is that both algorithms have the same complexity and, more precisely, yield the same Markov process. To illustrate this, let's consider the cyclic case $G = \mathbb{Z}_{18}$ and $H = 6\mathbb{Z}_{18}$. The Markov chain of my alternative algorithm is given by the graph below, where the edge labels are of course probabilities and the node labels are the underlying groups. We start at $G = \mathbb{Z}_{18}$ and want to reach $H \simeq \mathbb{Z}_3$.

[Markov chain of the alternative algorithm: from $\mathbb{Z}_{18}$, stay at $\mathbb{Z}_{18}$ with probability 1/6, move to $\mathbb{Z}_9$ with probability 1/6, to $\mathbb{Z}_6$ with probability 1/3, or to $\mathbb{Z}_3$ with probability 1/3; from $\mathbb{Z}_9$, stay with probability 1/3 or move to $\mathbb{Z}_3$ with probability 2/3; from $\mathbb{Z}_6$, stay with probability 1/2 or move to $\mathbb{Z}_3$ with probability 1/2; $\mathbb{Z}_3$ is absorbing (probability 1).]
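
These edge labels can be recovered by a direct computation: from $\mathbb{Z}_n$ containing $H \simeq \mathbb{Z}_3$, the sampled kernel has order $\gcd(k, n)$, where $k$ is drawn uniformly among the multiples of 3 below $n$. Here is a quick sketch of that computation, under the same classical model as above (the function name is mine):

```python
from collections import Counter
from fractions import Fraction
from math import gcd

def kernel_reduction_step(n, h=3):
    """Exact distribution of the next group in the kernel-reduction chain,
    starting from Z_n with H of order h: a character chi_k trivial on H is
    drawn uniformly (h | k) and we move to Ker chi_k, of order gcd(k, n)."""
    samples = range(0, n, h)
    counts = Counter(gcd(k, n) for k in samples)
    return {m: Fraction(c, len(samples)) for m, c in sorted(counts.items())}

for n in (18, 9, 6, 3):
    print(f"from Z{n}:", {f"Z{m}": str(p) for m, p in kernel_reduction_step(n).items()})
```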

One might think that moving to smaller and smaller subgroups will be faster than the standard algorithm, which always works over $\mathbb{Z}_{18}$. However, it turns out that the Markov chain of the standard algorithm is exactly the same. The point is that while it is true that working over $\mathbb{Z}_9$ or $\mathbb{Z}_6$ provides fewer possible samples (3 and 2 respectively, instead of 6 for $\mathbb{Z}_{18}$), the proportion of "good" and "bad" samples is the same, and thus we get the same transition probabilities. I guess it would be possible to formalize this for a general cyclic group. The general abelian case seems more difficult, but I'm sure that the same phenomenon can be observed.
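
To check this claim on the example, one can compute the one-step transition probabilities of both chains and compare them. For the standard algorithm, the state is the intersection of the kernels measured so far, a subgroup of $\mathbb{Z}_{18}$ containing $H$, and intersecting subgroups of $\mathbb{Z}_{18}$ of orders $m$ and $m'$ gives the subgroup of order $\gcd(m, m')$. This is only a verification on $\mathbb{Z}_{18}$, not a general proof, and the function names are again mine:

```python
from collections import Counter
from fractions import Fraction
from math import gcd

N, H_ORDER = 18, 3  # G = Z_18, H = 6Z_18 (order 3)

def standard_step(m):
    """Standard algorithm: the state is the order m of the current intersection
    of kernels; a new character of Z_18 trivial on H is sampled and its kernel
    (of order gcd(k, 18)) is intersected with the current subgroup."""
    kernels = [gcd(k, N) for k in range(0, N, H_ORDER)]
    counts = Counter(gcd(m, order) for order in kernels)
    return {m2: Fraction(c, len(kernels)) for m2, c in sorted(counts.items())}

def alternative_step(m):
    """Alternative algorithm: sample a character of the current group Z_m
    trivial on H and move to its kernel, as in the previous sketches."""
    counts = Counter(gcd(k, m) for k in range(0, m, H_ORDER))
    return {m2: Fraction(c, m // H_ORDER) for m2, c in sorted(counts.items())}

for m in (18, 9, 6, 3):
    assert standard_step(m) == alternative_step(m)
print("both chains have identical transition probabilities")
```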