What is assessment? The process of understanding a given situation to support decision making. During software development, engineers spend as much as 50% of the overall effort on doing precisely that: they try to understand the current status of the system to know what to do next. In other words, assessing the current system accounts for half of the development budget. These are just the direct costs. The indirect costs can be seen in the quality of the decisions made as a result. That is why you should care about it.
What can you do about it?
"Technical debt" is a successful metaphor that exposes software engineers to economics, and managers to a significant technical problem. But, "technical debt" is a negative metaphor. Internal structure is too important to only have it described in negative terms. We need a positive focus.
Assessment is an inherently human activity. When assessing large data sets or complex software systems, tools are indeed a prerequisite because of the sheer amount of detail involved, but ultimately it is the human who has to understand and make decisions.
Traditionally, the process of assessment is dominated by the use of standard reporting tools. However, these tools are often not useful out of the box, because problems tend not to be standard. I argue for an assessment process that is centered on humans rather than on tools, and I propose a new, humane approach through which custom tools are crafted to meet custom needs.
"Architect without architects. Emerge your architecture" goes the agile mantra. That’s great. Developers get empowered and fluffy papers make room for real code structure. But, how do you ensure the cohesiveness of the result?
One key aspect of dealing with the Work in Progress is to visualize the queues. We have come a long way with dealing with explicit requests that come from outside. But, how do we deal with technical problems that come from within?
As part of the humane assessment (http://humane-assessment.com) philosophy, I introduce the daily assessment, a simple technique based on a short daily routine for dealing with such problems.
We cannot continue to let systems loose in the wild without any concern for how we will deal with them at a later time. Two decades ago, Richard Gabriel coined the idea of software habitability. Indeed, given that engineers spend a significant part of their active life inside software systems, it is desirable for those systems to be suitable for humans to live in. We go further and introduce the concept of software environmentalism as a systematic discipline to pursue and achieve habitability.
Developers are data scientists. Or at least they should be.
As programmers, we spend most of our time reading code, yet we never talk about it. Because we deserve better than boring, in this talk we start by looking at why we read code and then explore alternatives. If we look closely, it turns out that even though reading is the most pervasive way to assess a system, it is not at all the only way. There are cooler options out there. Much cooler.
In this talk, we show live examples of how software engineering decisions can be made quickly and accurately by building custom analysis tools that enable browsing, visualizing or measuring code and data. All shown examples make use of the Moose analysis platform.
In this hands-on session, you will have the chance to see Moose up close, in action on concrete Java projects.
Moose is an extensive platform conceived precisely to ease the building of customized analysis tools.
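As a taste of what such a custom tool can look like, here is a minimal sketch of a Moose query that singles out suspiciously large classes. It assumes a variable model already holds an imported FAMIX model; the metric selector names follow typical FAMIX properties and may differ between Moose versions.

    "list classes with many methods, together with their method count"
    model allModelClasses
        select: [ :each | each numberOfMethods > 50 ]
        thenCollect: [ :each | each name -> each numberOfMethods ]

The point is not this particular query, but that such one-off analyses can be expressed in a few lines and thrown away once the question is answered.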
Pharo is the cool new kid in the object-oriented language arena. It is Smalltalk-inspired. It is dynamic. It comes with a live programming environment in which objects are at the center. And, it is tiny.
But, most of all, it makes serious programming fun by challenging almost everything that got to be popular. For example, imagine an environment in which you can extend Object, modify the compiler, customize the object inspector, or even build your own domain-specific debugger. And you do not even have to stop the system while doing that.
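To give a flavor of this liveness, here is a tiny sketch, assuming a standard Pharo image: it compiles a new method straight into Object while the system keeps running (the method name is made up for illustration).

    "compile a new extension method into Object, live"
    Object compile: 'explain
        "Answer a short description of the receiver."
        ^ self printString , '' is a '' , self class name'.

    42 explain. "==> '42 is a SmallInteger'"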
Pharo offers a unique combination of concepts that enables the creation of new tools and hence new kinds of development. In this session we focus on the Glamorous Toolkit (http://gtoolkit.org), the IDE of Pharo.
Designing an interface starts from understanding the needs. In this talk, we take a systematic look at what a developer experience could look like and what an environment for developers could be made of.
An agile process replaces upfront design with the ability to react and adapt to the current situation. To make the right decision, we need to be able to assess the situation accurately, and given the short iteration cycle, we need to do it in a timely fashion, too.
This talk proposes a fresh perspective on software assessment. First, we assert that software assessment is critical for making accurate decisions that involve technical aspects. Second, we provide an overview of software assessment tools and techniques that can scale to large amounts of information. And third, we discuss the implications of integrating these tools and techniques into an agile development process.
Simply saying "Inspect and Adapt" will not make it happen. In this talk we look at Inspection and Adaptation and construct an underlying theory of reflective thinking to help organizations practice these activities. We draw inspiration from the field of system reflection, and show that when used right, reflection is a versatile tool that should be used both for designing systems and for designing organizations.
A couple of years ago I initiated an unlikely project. No backlog. No meetings. No manager. Yet, we have already delivered two significant versions and earned international recognition. This talk tells the story of this project and explores the conditions for making something like this possible.
We need a radical change in the way we approach software assessment, both in practice and in research. Assessment is a critical software engineering activity, often accounting for as much as 50% of the overall development effort. However, in practice this activity is regarded as secondary and is dealt with in an ad-hoc way. This does us a disservice. We should recognize it explicitly and approach it holistically as a discipline.
Assessing large software systems is traditionally tackled with off-the-shelf tools that offer static reports. However, complex software systems exhibit specific problems that require specific strategies to understand and solve. This talk argues that software assessment should embrace these peculiarities and, instead of generic hardcoded tools, rely on dedicated exploratory tools to address specific problems. The message is exemplified through demos of the Glamour and Mondrian scripting engines contained in the Moose analysis platform.
To control the development of software systems effectively, we need to be able to assess their status. As modern systems are large and complex, we must go beyond reading raw data and rely on a combination of tools and techniques. Furthermore, as systems have particularities we need to customize the analysis to match the situation.
To manage software systems, we need to be able to assess their quality. However, software systems are large and complex, and documentation is often not reliable. To handle this situation we need assessment techniques and tools that provide an accurate overview. Metrics are such tools.
Understanding software systems is hampered by their sheer size. Software visualization encodes the data found in these systems into pictures and enables the human eye to interpret it. In this presentation we place software visualization in the context of reverse engineering and present several examples of how it can help in understanding software systems. We also go behind the scenes and discuss the principles that make for a good visualization.
Meeting real deadlines is a hard and stressful job. It typically eats all available resources, because when we know exactly what the best way is, we want to go full steam ahead. After all, we want to utilize our productivity to the maximum. Except that we typically do not know the best way. We know just a way, and we get comfortable with it.
Feedback is the central source of agile value. The most effective way to obtain feedback from stakeholders is a demo. That is what reviews are about. If a demo is the means to value, shouldn't preparing the demo be a significant concern? Shouldn't we stop leaving demo preparation to the last minute? Shouldn't it be part of the definition of done?
The technical world is governed by facts. In this world Excel files, specifications and technical details are everywhere. Yet, too often, this way of looking at the world makes us forget that the goal of our job is not to fulfill technical specifications, but to produce value.
Research is less about discovering the fantastic than it is about revealing the obvious. The obvious is always there. It needs no discovery. It only needs us to take a look from a different perspective to see it. Thus, the most important challenge is not the fight against nature, but against our own assumptions. One way of fighting our own assumptions is to expose them to other people. That is why I advocate and practice what I call demo-driven research, a way of doing research that puts emphasis on presenting the state of research at any given chance and to any audience willing to listen.
Browsers are crucial to make software models accessible. Problem domains often require multiple views to access, interpret and edit the underlying elements. However, browsers are expensive to create and burdensome to maintain.
Glamour is a platform dedicated to building such browsers. Glamour is built in Smalltalk (both in VisualWorks and in Pharo) and comes with renderers for Widgetry, Morphic and Seaside. It uses a components-and-connectors architecture and comes with an embedded domain-specific language that allows the user to build dedicated browsers quickly. It accommodates any kind of domain model via on-the-fly transformations, and it enforces a strict and explicit separation between the presentation of the data and the navigation flow between different entities.
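As an illustration, here is a minimal Glamour script in the spirit of the published examples, written with the GLM selectors as I recall them (GLMTabulator, transmit, andShow:); treat it as a sketch rather than a verbatim recipe. It builds a two-pane browser that lists classes on the left and the methods of the selected class on the right.

    browser := GLMTabulator new.
    browser column: #classes; column: #methods.
    browser transmit to: #classes; andShow: [ :a |
        a list
            title: 'Classes';
            display: [ :classes | classes ] ].
    browser transmit from: #classes; to: #methods; andShow: [ :a |
        a list
            title: 'Methods';
            display: [ :class | class selectors asSortedCollection ] ].
    browser openOn: Collection withAllSubclasses.

The transmissions make the navigation flow explicit and separate from the individual presentations, which is exactly the separation the architecture enforces.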
To analyze complex data we need to visualize it and to interact with it. Mondrian is a novel information visualization engine that lets the visualization and the interaction be specified via a script. Rather than featuring a dedicated scripting language, we designed Mondrian to use the host language for scripting. Our original implementation is in Smalltalk and makes use of the dynamic nature of the language to provide an expressive scripting language. Several other implementations of the concept have been built in other languages as well. Mondrian is based on a graph model and works directly with the objects to be represented.
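The canonical example is a class hierarchy view expressed as a short script along these lines, assuming view is a fresh Mondrian view and model an already imported FAMIX model; the exact selectors changed across Mondrian versions, so read it as an illustration of the style rather than a definitive recipe.

    "map metrics onto node size, then draw the class hierarchy as a tree"
    view shape rectangle
        width: [ :each | each numberOfAttributes ];
        height: [ :each | each numberOfMethods ].
    view nodes: model allModelClasses.
    view edgesFrom: #superclass.
    view treeLayout.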
Modern software systems are large and complex, and to understand them effectively we need to employ a combination of analysis techniques. Moose is a reengineering environment that allows for tool integration by making a clear distinction between the meta-model and the analysis techniques. In this presentation we describe the philosophy of Moose and demonstrate how it enables an integrated, yet agile approach to reverse engineering. Furthermore, as Moose is the result of 10 years of research, we also touch on the symbiotic relationship between the implementation effort and the basic research.
Over the past three decades, more and more research effort has been devoted to understanding software evolution. However, the approaches developed so far rely on ad-hoc models or on overly specific meta-models, and thus it is difficult to reuse or compare their results. We argue for the need for an explicit and generic meta-model that recognizes evolution as an explicit phenomenon and models it as a first-class entity. Our solution is to encapsulate evolution in the explicit notion of history as a sequence of versions, and to build a meta-model, called Hismo, around these notions. To show the usefulness of our meta-model, we exercise its different characteristics by building several reverse engineering applications.
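For intuition, here is a minimal sketch of the core idea; the class and selector names are illustrative and are not the actual Hismo API. A history is little more than an ordered sequence of versions, and evolution measurements are derived by comparing a version-level property across that sequence.

    "a history wraps an ordered sequence of versions of one entity"
    Object subclass: #ClassHistory
        instanceVariableNames: 'versions'
        classVariableNames: ''
        package: 'Hismo-Sketch'.

    "an evolution measurement, e.g. (aClassHistory evolutionOf: #numberOfMethods),
    sums the absolute change of a property between successive versions"
    ClassHistory >> evolutionOf: aSelector
        ^ (2 to: versions size) inject: 0 into: [ :sum :i |
            sum + (((versions at: i) perform: aSelector)
                    - ((versions at: i - 1) perform: aSelector)) abs ]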
These days, preparing a presentation is synonymous with writing bullets on slides, and presenting is synonymous with delivering the pack of slides. While this approach might seem straightforward, it is ineffective, as it fails to animate the audience and transmit the message. In this presentation we strive to give another perspective on what presenting can be.
The last snapshot from the versioning system can tell us what the software system looks like, but it does not tell us how and why it got into this state. This talk argues that software history needs to be taken into account during the software assessment process, and it provides several examples of how history can be measured, predicted and visualized, and how it can reveal behavior patterns of developers.
Research is the game of finding new points of view. Only through play do we relax enough to find new points of view. But what should this game look like? What are the rules? What are the good practices? In this talk I present the game I play.
This lecture introduces the problem of maintaining and evolving software systems, presents various approaches to understanding them, and offers an outlook on possible reengineering techniques.
While the status quo can be comfortable, it is certainly not perfect. There is always something to improve. However, when entrenched in a routine, we typically have no clue what that something is or how to improve it. How can we find that something, and how can we improve it?