I'm skeptical that convex hull problems in dimensions higher than 8 have great significance in practice, compared to those in lower dimensions; this may be, of course, because effective algorithms weren't available for the higher-dimensional problems. However, most of those who have expressed interest in, or used, my hull code have wanted to compute Delaunay triangulations in two or three dimensions. I think this is true also for the very much larger group of users of qhull. The question here is: in what dimensional regime do "important" convex hull problems lie? It would be helpful to have a collection of convex hull problem instances, arising in practice, with which to compare algorithms and to help answer these questions.
Avis and Fukuda contrast asymptotic worst-case complexity with output-sensitive bounds, and note that the worst-case optimal complexity of O(n^{\lfloor d/2\rfloor}) held by some convex hull algorithms is so much higher than the output size for practical problems as to be irrelevant. I think it's worth mentioning here a long-known bound for some randomized algorithms that is not output-sensitive, but may have some relation to practice. Some randomized convex hull algorithms take time on the order of
\[n \sum_{1\leq r\leq n} \frac{d^2 f_r}{r^2},\]
where f_r is the expected complexity of the convex hull of a random subset of size r. (More precisely, f_r bounds the complexity of a triangulation of the convex hull, cf. Avis and Fukuda's problem CH4.) I believe that for many convex hull problems arising in practice, f_r is O(r) or even o(r), giving an expected running time of O(n log n). While it is quite easy to concoct problem instances where f_r is Omega(r) but the output size is O(1), I think that nature is not usually as perverse as that. Again, however, I really don't know.
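To see where the O(n log n) claim comes from: if f_r \leq c\,r for some constant c (treating the dimension d as fixed), the bound above becomes
\[n \sum_{1\leq r\leq n} \frac{d^2 f_r}{r^2} \;\leq\; c\,d^2\, n \sum_{1\leq r\leq n} \frac{1}{r} \;=\; O(d^2\, n \log n),\]
since the harmonic sum is O(\log n).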
I agree that in very high dimensions, such randomized algorithms based on beneath/beyond methods may not be competitive with output-sensitive algorithms. I would also not be surprised to find that parts of these randomized algorithms could be replaced by heuristic procedures such as bucketing, and be somewhat faster. However, the relationship between theory and practice for these algorithms is much less problematic than Avis and Fukuda suggest. The algorithms are simple and are not significantly slower than more complicated algorithms that use heuristics, even for problem instances for which the heuristics work well.
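To make the claim about f_r concrete in the plane, here is a small experiment I would suggest (an illustrative sketch of my own, not part of my hull code or qhull): it computes hulls of random point sets with Andrew's monotone chain algorithm, and shows that for points uniform in a square the average hull size — a rough stand-in for f_r — grows far more slowly than r.

```python
import random

def cross(o, a, b):
    """Cross product of vectors o->a and o->b; positive means a left turn."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(pts):
    """Andrew's monotone chain: hull vertices in counterclockwise order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half_hull(points):
        h = []
        for p in points:
            # Pop points that make a non-left turn; keeps the chain convex.
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower = half_hull(pts)
    upper = half_hull(pts[::-1])
    return lower[:-1] + upper[:-1]

def estimate_f(r, trials=20, rng=None):
    """Average hull size of r points uniform in the unit square:
    a rough empirical stand-in for the quantity f_r discussed above."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(r)]
        total += len(convex_hull(pts))
    return total / trials

if __name__ == "__main__":
    for r in (50, 200, 800):
        # For uniform points in a square, the expected hull size
        # grows roughly like log r -- far slower than r.
        print(r, estimate_f(r))
```

For uniform points in a square the expected hull size is known to be O(log r), so f_r here is very much o(r); distributions arising in practice may of course behave differently, which is exactly why a library of real instances would help.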
Finally, I see no reason to consider different algorithms, for different dimensional regimes, to be in competition.
The bound
\[O(n d^2)+(\log n)\,O(d)^{d/2+O(1)}+O(d^4\sqrt{n}\log n)\]
gives just a little bit too much detail on the dependence on the dimension d. Note that the dependence on d here is not huge, and that the exponential dependence on d follows from a worst-case bound for the simplex method; in practice, this bound can be replaced by a low-degree polynomial.
Avis and Fukuda suggest that, in contrast to work in computational geometry, most researchers in linear programming are more interested in a greater understanding of the interior-point and simplex methods. It's worth mentioning that at least three algorithms from CG can be viewed as versions of the simplex algorithm with particular pivoting rules: a recursive algorithm of mine, and the algorithms of Kalai and of Matousek, Sharir, and Welzl; the latter amazing algorithms are the only provably subexponential simplex algorithms known. Note that this work comes out of the discrete and computational geometry communities, and is the only progress so far toward a provably polynomial-time simplex algorithm.
I agree that low-dimensional linear programming is much less important than the general case. However, the above algorithms, and many others proposed in the CG literature, are applicable in the general case. They have the added "insurance" that they are provably fast when n>>d. It would be valuable to have a collection of problem instances for low-dimensional linear programming that have arisen in practice; if it's hard to come up with such a set, that would simply be evidence against the practical importance of the problem. It would certainly be of interest to determine the relative speed of CG algorithms and simplex algorithms with various pivoting rules. My own limited experience suggests that modest speedups are possible with randomized algorithms; the programming overhead is quite small.
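To give a concrete sense of how simple such randomized CG-style LP algorithms are, here is a minimal Seidel-style sketch for d = 2 (my own illustrative code, not any of the published implementations; the bounding box is a hypothetical device to sidestep unboundedness). Constraints are inserted in random order; whenever the current optimum violates a new constraint, the new optimum is found by a one-dimensional LP along that constraint's boundary.

```python
import random

def solve_lp_2d(constraints, c, box=1e6, rng=None):
    """Maximize c.x subject to a.x <= b for each (a, b) in constraints,
    within the square |x_i| <= box.  Returns an optimum, or None if
    infeasible.  Randomized incremental, in the style of Seidel's algorithm."""
    rng = rng or random.Random(0)
    cons = list(constraints)
    rng.shuffle(cons)  # random insertion order gives expected O(n) time
    # Bounding-box constraints keep every subproblem bounded.
    boxc = [((1, 0), box), ((-1, 0), box), ((0, 1), box), ((0, -1), box)]
    # Optimum over the box alone: the corner in the direction of c.
    x = (box if c[0] >= 0 else -box, box if c[1] >= 0 else -box)
    eps = 1e-9
    for i, (a, b) in enumerate(cons):
        if a[0]*x[0] + a[1]*x[1] <= b + eps:
            continue  # current optimum still feasible; keep it
        # Otherwise the new optimum lies on the line a.x = b: solve a
        # 1D LP over that line against the earlier constraints.
        x = _solve_on_line(a, b, boxc + cons[:i], c)
        if x is None:
            return None
    return x

def _solve_on_line(a, b, others, c):
    """Maximize c.x on the line a.x = b, subject to the other constraints."""
    n2 = a[0]*a[0] + a[1]*a[1]
    p = (a[0]*b/n2, a[1]*b/n2)   # a point on the line
    d = (-a[1], a[0])            # direction along the line
    lo, hi = float('-inf'), float('inf')
    eps = 1e-9
    for (a2, b2) in others:
        ad = a2[0]*d[0] + a2[1]*d[1]
        ap = b2 - (a2[0]*p[0] + a2[1]*p[1])
        if abs(ad) < eps:
            if ap < -eps:
                return None      # parallel constraint, violated everywhere
        elif ad > 0:
            hi = min(hi, ap/ad)
        else:
            lo = max(lo, ap/ad)
    if lo > hi + eps:
        return None              # the feasible interval is empty
    cd = c[0]*d[0] + c[1]*d[1]
    t = hi if cd > 0 else lo
    return (p[0] + t*d[0], p[1] + t*d[1])
```

For example, maximizing x + y subject to x <= 1 and y <= 1 returns (1, 1). The programming overhead really is small: the whole algorithm is the two short functions above.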
As I mentioned twice above, a library of problem instances would be useful. If I feel foolish enough, I'll volunteer to maintain such a library; I'm sure that several groups have some good instances already.
-Ken