Assessing large software systems is traditionally tackled with off-the-shelf tools that produce static reports. However, complex software systems exhibit particular problems that require dedicated strategies to understand and solve. This talk argues that software assessment should embrace these peculiarities: instead of generic, hardcoded tools, it should rely on dedicated exploratory tools that answer specific questions. The message is exemplified through demos of the Glamour and Mondrian scripting engines contained in the Moose analysis platform.