L-notation is an asymptotic notation analogous to big-O notation, denoted as $L_n[\alpha, c]$ for a bound variable $n$ tending to infinity. Like big-O notation, it is usually used to roughly convey the computational complexity of a particular algorithm.
It is defined as
$$L_n[\alpha, c] = e^{(c + o(1))(\ln n)^{\alpha}(\ln \ln n)^{1 - \alpha}},$$
where $c$ is a positive constant, and $\alpha$ is a constant with $0 \le \alpha \le 1$.
L-notation is used mostly in computational number theory, to express the complexity of algorithms for difficult number theory problems, e.g. sieves for integer factorization and methods for solving discrete logarithms. The benefit of this notation is that it simplifies the analysis of these algorithms. The $e^{c(\ln n)^{\alpha}(\ln \ln n)^{1 - \alpha}}$ expresses the dominant term, and the $e^{o(1)(\ln n)^{\alpha}(\ln \ln n)^{1 - \alpha}}$ takes care of everything smaller.
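As a concrete illustration, the following Python sketch evaluates the dominant term of $L_n[\alpha, c]$, dropping the unspecified $o(1)$ contribution; the choice of a 1024-bit $n$ and the general number field sieve parameters $L_n[1/3, (64/9)^{1/3}]$ are assumptions made only for this example.

```python
import math

def L(n, alpha, c):
    """Dominant term of L_n[alpha, c]; the o(1) contribution is dropped."""
    ln_n = math.log(n)
    return math.exp(c * ln_n**alpha * math.log(ln_n)**(1 - alpha))

# Heuristic complexity of the general number field sieve,
# L_n[1/3, (64/9)^(1/3)], evaluated for a 1024-bit modulus.
n = 2**1024
print(L(n, 1/3, (64 / 9)**(1 / 3)))  # roughly 1e26
```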
When $\alpha$ is 0, then
$$L_n[\alpha, c] = L_n[0, c] = e^{(c + o(1)) \ln \ln n} = (\ln n)^{c + o(1)}$$
is a polynomial function of $\ln n$; when $\alpha$ is 1, then
$$L_n[\alpha, c] = L_n[1, c] = e^{(c + o(1)) \ln n} = n^{c + o(1)}$$
is a fully exponential function of $\ln n$ (and thereby polynomial in $n$).
If $\alpha$ is between 0 and 1, the function is subexponential in $\ln n$ (and superpolynomial).
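The three regimes can be checked numerically. The sketch below fixes $c = 1$ purely for illustration and again drops the $o(1)$ term: the $\alpha = 0$ values grow like $\ln n$, the $\alpha = 1$ values grow like $n$ itself, and $\alpha = 1/2$ falls strictly in between.

```python
import math

# Dominant term of L_n[alpha, 1] (o(1) dropped) for increasing n.
for bits in (64, 128, 256, 512):
    ln_n = math.log(2**bits)
    for alpha in (0.0, 0.5, 1.0):
        value = math.exp(ln_n**alpha * math.log(ln_n)**(1 - alpha))
        print(f"n = 2^{bits}, alpha = {alpha}: {value:.3g}")
```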