Frequently Asked Questions
about the SGI Standard Template Library
Is the STL Y2K compliant?
Yes. The STL does not store or manipulate dates in any way, so there are no year 2000 issues.
Which compilers are supported?
The STL has been tested on these compilers: SGI MIPSpro 7.1 and later, or 7.0 with the -n32 or -64 flag; gcc 2.8 or egcs 1.x; Microsoft Visual C++ 5.0 and later. (But see below.) Boris Fomitchev distributes a port for some other compilers.
If you succeed in using the SGI STL with some other compiler, please let us know, and please tell us what modifications (if any) you had to make. We expect that most of the changes will be restricted to the <stl_config.h> header.
What about older SGI compilers?
Given the rate of improvement in C++ implementations, SGI strongly recommends that you upgrade your compiler. If this is not possible, you might try the version of the STL for older Borland and Microsoft compilers (see the Download the STL page), or Boris Fomitchev's port. Neither of these is supported.
How do I install the SGI STL?
You should unpack the STL include files in a new directory, and then use the -I (or /I) option to direct the compiler to look there first. We don't recommend overwriting the vendor's include files.
At present the SGI STL consists entirely of header files. You don't have to build or link in any additional runtime libraries.
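For example, if you unpacked the headers into a directory of your own (the paths below are only placeholders), the compile line might look like this:
CC -n32 -I/home/mydir/stl myprog.cpp      (SGI MIPSpro)
g++ -I/home/mydir/stl myprog.cpp          (gcc or egcs)
cl /I C:\mydir\stl myprog.cpp             (Visual C++)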
Are there any compatibility issues with Visual C++?
Visual C++ provides its own STL implementation, and some of the other Microsoft C++ library headers may rely on that implementation. In particular, the SGI STL has not been tested in combination with Microsoft's new <iostream> header. It has been used successfully with the older <iostream.h> header.
Is the SGI STL thread safe?
Yes. However, you should be aware that not everyone uses the phrase "thread safe" the same way. See our notes on thread safety for our design goals.
Are hash tables part of the C++ standard?
No. The hash table classes (hash_set, hash_map, hash_multiset, hash_multimap, and hash) are an extension. They may be added to a future revision of the C++ standard.
The rope and slist classes are also extensions.
Why is list<>::size() linear time?
The size() member function, for list and slist, takes time proportional to the number of elements in the list. This was a deliberate tradeoff. The only way to get a constant-time size() for linked lists would be to maintain an extra member variable containing the list's size. This would require taking extra time to update that variable (it would make splice() a linear time operation, for example), and it would also make the list larger. Many list algorithms don't require that extra word (algorithms that do require it might do better with vectors than with lists), and, when it is necessary to maintain an explicit size count, it's something that users can do themselves.
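For instance, a user who does need a constant-time count can keep one alongside the list; a minimal sketch (the wrapper and its name are ours, not part of the library):
#include <list>

// Hypothetical wrapper: the user updates the count whenever the list
// changes, so a constant-time count is available without changing the
// list class itself.
struct counted_list {
    std::list<int> items;
    std::list<int>::size_type count;
    counted_list() : count(0) {}
    void push_back(int x) { items.push_back(x); ++count; }
    void pop_front()      { items.pop_front(); --count; }
};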
This choice is permitted by the C++ standard. The standard says that size() "should" be constant time, and "should" does not mean the same thing as "shall". This is the officially recommended ISO wording for saying that an implementation is supposed to do something unless there is a good reason not to.
One implication of linear time size(): to test whether a list is empty, you should never write
  if (L.size() == 0) ...
Instead, you should write
  if (L.empty()) ...
Why doesn't map's operator< use the map's comparison function?
A map has a notion of comparison, since one of its template parameters is a comparison function. However, operator< for maps uses the elements' operator< rather than that comparison function. This appears odd, but it is deliberate and we believe that it is correct.
At the most trivial level, this isn't a bug in our implementation because it's what's mandated by the C++ standard. (The behavior of operator< is described in Table 65, in section 23.1.)
A more interesting question: is the requirement in the standard correct, or is there actually a bug in the standard?
We believe that the requirements in the standard are correct.
First, there's a consistency argument: operator< for a vector (or deque, or list) uses the elements' operator<. Should map's operator< do something else, just because there is another plausible way to compare objects? It's reasonable to say, for all containers, that operator< always means the elements' operator<, and that if you need a different kind of comparison you can explicitly use lexicographical_compare.
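For instance, if you do want to order two containers by a comparison of your own, you can say so explicitly; a minimal sketch (the helper and its name are ours):
#include <algorithm>

// Hypothetical helper: order two containers by a caller-supplied
// comparison instead of the elements' operator<.
template <class Container, class Compare>
bool less_by(const Container& x, const Container& y, Compare comp)
{
    return std::lexicographical_compare(x.begin(), x.end(),
                                        y.begin(), y.end(),
                                        comp);
}
Note that for a map the comparison would have to accept the map's value_type (a key/value pair), which makes the choice of comparison explicit rather than implicit.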
Second, if we did use the map's comparison function, there would be a problem: which one do we use? There are two map arguments, and, while we know that their comparison functions have the same type, we don't know that they have the same behavior. The comparison function, after all, is a function object, and it might have internal state that affects the comparison. (You might have a function object to compare strings, for example, with a boolean flag that determines whether the comparison is case-sensitive.)
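A sketch of such a function object (the type and its name are ours, not part of the library):
#include <string>
#include <cctype>

// A comparison whose behavior depends on run-time state: two objects of
// this type have the same type but need not order strings the same way.
struct string_less {
    bool case_sensitive;
    explicit string_less(bool cs) : case_sensitive(cs) {}
    bool operator()(const std::string& a, const std::string& b) const {
        if (case_sensitive)
            return a < b;
        for (std::string::size_type i = 0; i < a.size() && i < b.size(); ++i) {
            int ca = std::tolower((unsigned char) a[i]);
            int cb = std::tolower((unsigned char) b[i]);
            if (ca != cb)
                return ca < cb;
        }
        return a.size() < b.size();
    }
};
Two maps declared as map<string, int, string_less>, one constructed with string_less(true) and the other with string_less(false), have exactly the same type, so operator< would have no way of knowing which comparison object to trust.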
There's also a related question, incidentally: how should operator== behave for sets? A set's comparison function induces an equivalence relation, so, just as you can use the set's comparison function for lexicographical ordering, you could also use it for a version of equality. Again, though, we define operator==(const set&, const set&) so that it just calls the elements' operator==.
Why does a vector expand its storage by a factor of two when it performs a reallocation?
Expanding a vector by a factor of two is a time-space tradeoff; it means that each element will (on average) be copied twice when you're building a vector one element at a time, and that the ratio of wasted to used space is at most 1. (In general, if the growth factor is r, the worst-case wasted/used ratio is r - 1 and the number of times an element is copied approaches r/(r - 1). If r = 1.25, for example, then elements are copied five times instead of twice.)
If you need to control vector's memory usage more finely, you can use the member functions capacity() and reserve() instead of relying on automatic reallocation.
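For example, if you know in advance roughly how many elements you will insert, a single reserve() call avoids the intermediate reallocations (the count of 1000 is an arbitrary illustration):
#include <vector>

void fill(std::vector<int>& v)
{
    v.reserve(1000);            // one allocation up front
    for (int i = 0; i < 1000; ++i)
        v.push_back(i);         // no reallocation until size() would exceed capacity()
}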
Why do the pop member functions return void?
All of the STL's pop member functions (pop_back in vector, list, and deque; pop_front in list, slist, and deque; pop in stack, queue, and priority_queue) return void, rather than returning the element that was removed. This is for the sake of efficiency.
If the pop member functions were to return the element that was removed then they would have to return it by value rather than by reference. (The element is being removed, so there wouldn't be anything for a reference to point to.) Return by value, however, would be inefficient; it would involve at least one unnecessary copy constructor invocation. The pop member functions return nothing because it is impossible for them to return a value in a way that is both correct and efficient.
If you need to retrieve the value and then remove it, you can perform the two operations explicitly. For example:
std::stack<T> s;
...
T old_value = s.top();
s.pop();
How do I sort a range in descending order instead of ascending?
sort(first, last, greater<T>());
(Note that it must be greater, not greater_equal. The comparison function f must be one such that f(x, x) is false for every x.)
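Put together, a complete call might look like this:
#include <algorithm>
#include <functional>
#include <vector>

// Sort a vector of ints into descending order.
void sort_descending(std::vector<int>& v)
{
    std::sort(v.begin(), v.end(), std::greater<int>());
}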
Why am I getting uninitialized memory reads from Purify™?
We believe that the uninitialized memory read (UMR) messages in STL data structures are artifacts and can be ignored.
There are a number of reasons the compiler might generate reads from uninitialized memory (e.g. structure padding, inheritance from empty base classes, which still have nonzero size). Purify tries to deal with this by distinguishing between uninitialized memory reads (UMR) and uninitialized memory copies (UMC). The latter are not displayed by default.
The distinction between the two isn't completely clear, but appears to be somewhat heuristic. The validity of the heuristic seems to depend on compiler optimizations, etc. As a result, some perfectly legitimate code generates UMR messages. It's unfortunately often hard to tell whether a UMR message represents a genuine problem or just an artifact.
Why does Bounds Checker™ say that I have memory leaks?
This is not an STL bug. It is an artifact of certain kinds of leak detectors.
In the default STL allocator, memory allocated for blocks of small objects is not returned to malloc. It can only be reused by subsequent allocate requests of (approximately) the same size. Thus programs that use the default allocator may appear to leak memory when monitored by certain kinds of simple leak detectors. This is intentional: such "leaks" do not accumulate over time, and they are not reported by garbage-collector-like leak detectors.
The primary design criterion for the default STL allocator was to make it no slower than the HP STL per-class allocators, but potentially thread-safe, and significantly less prone to fragmentation. Like the HP allocators, it does not maintain the necessary data structures to free entire chunks of small objects when none of the contained small objects are in use. This is an intentional choice of execution time over space use. It may not be appropriate for all programs. On many systems malloc_alloc may be more space efficient, and can be used when that is crucial.
The HP allocator design returned entire memory pools to malloc when the entire allocator was no longer needed; to allow this, it maintained a count of containers using a particular allocator. With the SGI design, memory would be returned only when the last container disappears, which is typically just before program exit. In most environments this would be highly counterproductive; free would typically have to touch many long-unreferenced pages just before the operating system reclaims them anyway. It would often introduce a significant delay on program exit, and would possibly page out large portions of other applications. There is nothing to be gained by this action, since the OS reclaims memory on program exit anyway, and it should do so without touching that memory.
In general, we recommend that leak detection tests be run with malloc_alloc. This yields more precise results with GC-based detectors (e.g. Pure Atria's Purify™), and it provides useful results even with detectors that simply count allocations and deallocations.
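A minimal sketch of such a test build, assuming the SGI allocator names (malloc_alloc is declared in <stl_alloc.h>; check your own copy for the exact spelling and namespace):
#include <vector>
using namespace std;    // the SGI STL places its names in std when namespaces are enabled

// Every allocation for this container goes straight to malloc and free,
// so allocation-counting leak detectors see each request individually.
vector<int, malloc_alloc> leak_checked;
Some versions also let you define __USE_MALLOC before including any STL header, which makes malloc_alloc the default allocator for the whole program; treat that macro as an assumption and verify it against <stl_alloc.h> in your version.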