As an industry we became enamored of type safety in the early '80s. Many of us had been badly burned by C or other type-unsafe languages. When type-safe languages like C++, Pascal, and Ada came to the fore, we found that whole classes of errors were eliminated by the compiler.
This safety came at a price. Every variable had to be declared before it was used. Every usage had to be consistent with its declaration. In essence, a kind of "dual-entry bookkeeping" was established for these languages. If you wanted to use a variable (first entry) you had to declare it (second entry). This double-checking gave the compiler vast power to detect inconsistency and error on the part of the programmer, but at the cost of the double entries, and of making sure that the compiler had access to the declarations.
With the advent of agile processes like extreme programming (XP), we have come to find that unit testing is far more important than we had at first expected. In XP, we write unit tests for absolutely everything. Indeed, we write them before we write the production code that passes them. This, too, is a kind of dual-entry bookkeeping. But instead of the two entries being a declaration and a usage, the two entries are a test and the code that makes it pass.
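The test-first "bookkeeping" can be sketched in a few lines. This is a minimal illustration, not any particular team's practice; the function name and the leap-year rule are my own choices, and Python's built-in `unittest` stands in for whatever framework an XP team might use.

```python
import unittest

def leap_year(year):
    # The "second entry": production code written to make the
    # tests below pass.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    # The "first entry": tests written before the production code.
    def test_ordinary_years(self):
        self.assertTrue(leap_year(1996))
        self.assertFalse(leap_year(1999))

    def test_century_years(self):
        # Century years are leap years only when divisible by 400.
        self.assertFalse(leap_year(1900))
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```

Each test documents an intended behavior, and the production code exists only to satisfy it, just as a usage exists only under a matching declaration.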
Again, this double-checking eliminates lots of errors, including the errors that a type-safe compiler finds. Thus, if we write unit tests in the XP way, we don't need type safety. And if we don't need type safety, its costs are no longer worth paying. It is much easier to change a program written in a dynamically typed language than it is to change a program written in a type-safe language. That cost of change becomes a great liability once type safety isn't needed.
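To see how a unit test can catch the kind of error a compiler would, consider this hypothetical sketch in a dynamically typed language. The function and its misuse are invented for illustration; the point is that the first test run flushes out the type mistake.

```python
def total_price(quantity, unit_price):
    # No declared types; any inconsistency shows up at run time.
    return quantity * unit_price

def run_type_check():
    # Passing a string where a number belongs. A C++ compiler would
    # reject this call at compile time; here, the very first test
    # that exercises it raises a TypeError instead.
    try:
        total_price("3", 2.5)  # wrong type for quantity
        return "no error"
    except TypeError:
        return "TypeError caught"
```

A test suite that exercises every code path, as XP demands, gives the same coverage of type mistakes that the compiler's declaration checking would, which is why the double entries of static typing become redundant.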
But there is another cost: the cost of compilation and deployment. If you want to compile a program in C++, you must give the compiler access to all the declarations it needs. These declarations are typically held in header files. Each header file containing a declaration used by a module must be read and parsed when that module is compiled. If you have a program with N modules, then to compile just one of them you may have to read in all N header files; to build all N modules, the compiler reads roughly N × N headers. With a little thought you'll realize that this means that compile time goes up with the square of the number of modules.
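The arithmetic behind that claim can be sketched directly, assuming the worst case where each of the N modules includes all N headers:

```python
def headers_read(n_modules):
    # Compiling one module reads n_modules headers; a full build
    # does that once per module, so the total header-reading work
    # is n_modules squared.
    return n_modules * n_modules

# Doubling the module count quadruples the header-reading work:
# headers_read(100) is 10,000, but headers_read(200) is 40,000.
```

Real projects mitigate this with forward declarations, precompiled headers, and careful include hygiene, but the quadratic pressure is always there.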
As code size increases, compile time rises in a distinctly nonlinear manner. For C++, the knee of the curve comes at about half a million lines. Prior to that point, compiles are pretty short. After that point compile times start to stretch and can get absurdly long. I know of one company that compiles 14.5 million lines of code using 50 SPARC-20s overnight.